[GH-ISSUE #213] Bad request for uploads of >4kb recordings under CentOS (Python 3.4) #769

Closed
opened 2026-03-15 10:15:37 +03:00 by kerem · 58 comments
Owner

Originally created by @andyone on GitHub (Jun 7, 2017).
Original GitHub issue: https://github.com/asciinema/asciinema/issues/213

Bug report

System info:

  • Version used: 1.4.0 (1.1.1 also has the same issue)
  • OS: CentOS Linux release 7.3.1611
  • Python version: Python 3.4.5
  • Install tools: yum (from EPEL repository)

Steps to reproduce:

  1. asciinema upload asciicast.json

Expected behavior:

File uploaded to asciinema.org

Actual behavior:

The client prints an error message:

Error: Invalid request: <html><body><h1>400 Bad request</h1>
Your browser sent an invalid request.
</body></html>

Additional info:

The client creates a broken recording when zsh (4.3.11 (x86_64-redhat-linux-gnu) in my case) is used and oh-my-zsh is installed. If oh-my-zsh is disabled or bash is used as the shell, the client creates and uploads the recording without any problems.

Recording JSON: https://gist.github.com/andyone/b2a883e8c3795a6ad393a715ff7a41df


@ThiefMaster commented on GitHub (Jun 7, 2017):

Happens for me too. Using ZSH but not OMZ.

$ zsh --version
zsh 5.3.1 (x86_64-pc-linux-gnu)
$ asciinema --version
asciinema 1.4.0

tmpw6byrbv8-asciinema.json: https://github.com/asciinema/asciinema/files/1058570/tmpw6byrbv8-asciinema.json.txt


@andyone commented on GitHub (Jun 7, 2017):

I found that if I change the API URL from HTTPS to HTTP, everything works fine.
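For reference, a sketch of how the endpoint can be switched for testing. This is from memory of asciinema 1.x and may vary by version (the config file lived at `~/.config/asciinema/config`, older releases used `~/.asciinema/config`); check the README for your release:

```
# ~/.config/asciinema/config  (location may vary by asciinema version)
[api]
url = http://asciinema.org
```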


@ku1ik commented on GitHub (Jun 8, 2017):

I've changed load balancer configuration yesterday so this may be related.


@ku1ik commented on GitHub (Jun 8, 2017):

I was able to reproduce this in a CentOS 7 Vagrant VM. I think this has something to do with the Brightbox load balancer (with SSL termination and an automatic Let's Encrypt certificate), which we have used since yesterday.


@ku1ik commented on GitHub (Jun 8, 2017):

@andyone @ThiefMaster can you try now? I may have solved it.


@ThiefMaster commented on GitHub (Jun 8, 2017):

still getting a 400


@andyone commented on GitHub (Jun 8, 2017):

I think it is an OpenSSL-related issue. Sending data with curl is fine because curl uses NSS (Network Security Services) for SSL/TLS.

> with Brightbox load balancer

Is it an nginx-based solution?


@ku1ik commented on GitHub (Jun 8, 2017):

@andyone I think the Brightbox load balancer uses HAProxy.


@ku1ik commented on GitHub (Jun 8, 2017):

I can consistently reproduce this. I created a Vagrantfile and instructions: https://github.com/sickill/bb-lb-400


@ku1ik commented on GitHub (Jun 8, 2017):

@andyone the problem doesn't seem to be any specific line in your recording, but the overall size of the uploaded JSON file.


@andyone commented on GitHub (Jun 8, 2017):

I created a proxy, https://ascii.kaos.io, based on webkaos (https://github.com/essentialkaos/webkaos — an improved nginx built with BoringSSL), using this config: https://gist.github.com/andyone/63233754d4bddd6abaded9cdb7e56687. My and @ThiefMaster's recordings uploaded successfully through this proxy.


@ku1ik commented on GitHub (Jun 8, 2017):

Here's what I know so far:

HTTP requests go through the Brightbox load balancer fine, but HTTPS requests get a 400 Bad Request when the request body is larger than about 4 KB.

The interesting thing is that we're getting the 400 over HTTPS under CentOS, while HTTPS under macOS works fine (HTTP works fine everywhere).

I dug deeper to find the difference. I used tcpdump to inspect the requests on both CentOS and macOS (over HTTP, assuming the request itself is formatted the same as under HTTPS).

The only difference seems to be 2 empty lines before the body on macOS vs. 1 empty line on CentOS (probably due to the slightly different versions of urllib that ship with Python 3 on these OSes):

CentOS:

POST /api/asciicasts HTTP/1.1
Accept-Encoding: identity
User-Agent: asciinema/1.4.0 CPython/3.4.5 Linux/3.10.0-514.16.1.el7.x86_64-x86_64-with-centos-7.3.1611-Core
Authorization: Basic <61 bytes of base64 encoded credentials>
Content-Length: 13582
Content-Type: multipart/form-data; boundary=c3f4e35afa4a4ce6b65b6420da09b46e
Connection: close
Host: asciinema.org

--c3f4e35afa4a4ce6b65b6420da09b46e
Content-Disposition: form-data; name="asciicast"; filename="asciicast.json"
Content-Type: application/json

<about 13 kb of json>

macOS:

POST /api/asciicasts HTTP/1.1
Accept-Encoding: identity
Content-Length: 13582
Host: asciinema.org
User-Agent: asciinema/1.4.0 CPython/3.6.1 Darwin/16.5.0-x86_64-i386-64bit
Content-Type: multipart/form-data; boundary=71d5b757e9d1451b9540dc286f74207d
Authorization: Basic <61 bytes of base64 encoded credentials>
Connection: close


--71d5b757e9d1451b9540dc286f74207d
Content-Disposition: form-data; name="asciicast"; filename="asciicast.json"
Content-Type: application/json

<about 13 kb of json>

To see how it affects things I temporarily changed "Request buffer size" on LB from 4096 (default) to 8192 (max) and it suddenly started working fine everywhere (all OSes, HTTPS), yay!

I'm not super confident this is the ultimate solution because with buffer size of 4096 this is true:

  • I am able to make a POST request with a 3 MB body with no problem over HTTPS on macOS
  • Thus I assumed this buffer size applies to the headers and not the request body (this was confirmed by John from Brightbox)
  • I am able to make a POST request with a <4 KB body with no problem over HTTPS on CentOS
  • I am NOT able to make a POST request with a >4 KB body over HTTPS on CentOS
  • The above contradicts my assumption about the buffer applying only to headers...
  • Request headers are small (~330 bytes) in all cases

When I bump the "request buffer size" to 8192, the body size and protocol no longer matter and everything works fine. I wonder, though, whether bumping it to 8192 only buys time (makes fewer people affected) or solves the problem completely (and if so, why?).

I contacted Brightbox about this, hopefully they can explain what's going on.
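To pin down the exact cutoff rather than eyeballing "about 4 KB", a bisection over the body size would work. This is a hypothetical sketch: `upload_ok` stands for "POST a body of this many bytes and report whether the server answered 2xx" (not an asciinema API; you'd plug in your own probe):

```python
def find_threshold(upload_ok, lo=1, hi=1 << 20):
    """Return the largest body size for which upload_ok(size) is True,
    assuming uploads succeed below some cutoff and fail above it."""
    assert upload_ok(lo) and not upload_ok(hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if upload_ok(mid):
            lo = mid   # still succeeds: the cutoff is above mid
        else:
            hi = mid   # fails: the cutoff is at or below mid
    return lo

# With a fake probe that fails above 4096 bytes, the cutoff is found exactly:
print(find_threshold(lambda n: n <= 4096))  # 4096
```

About 20 probe requests suffice to locate the threshold in a 1 MB search range.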


@ku1ik commented on GitHub (Jun 8, 2017):

Update re the 8192 buffer size on the Brightbox side: with this value it works for me under CentOS, but it still doesn't work for @ThiefMaster.


@andyone commented on GitHub (Jun 8, 2017):

Oops, sorry.


@ku1ik commented on GitHub (Jun 8, 2017):

Before I put the traffic through the Brightbox LB, I terminated SSL in nginx and everything had been working fine for years. If it works with @andyone's nginx-based proxy, that may suggest nginx is more "forgiving" about request formatting while HAProxy is stricter, and that the asciinema client formats the request incorrectly (by HAProxy's standards) under Python 3.4 (whose urllib is older than the 3.6.1 I use on the Mac).


@andyone commented on GitHub (Jun 8, 2017):

I can check it later with HAProxy, but my build uses LibreSSL instead of OpenSSL.


@ku1ik commented on GitHub (Jun 8, 2017):

My current theory is this:

The single newline between the headers and body is not enough for the LB to finish reading headers (it expects two newlines), so it keeps reading all the data below as headers, counting bytes, which eventually exceeds the max header size. If the LB has some variable like bytes_read (bytes read from the socket), checked after finishing the headers and again after reading the body, then uploading a <4 KB file never crosses the 4 KB header limit, while a >4 KB file exceeds it.
(And this only happens under HTTPS.)

No idea if that's the case, just thinking out loud 😀
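A toy model of this theory (emphatically not HAProxy's actual implementation) — a parser that only treats CRLFCRLF as the end of the header section, so with a malformed separator the body bytes keep counting against the header buffer:

```python
HEADER_BUFSIZE = 4096  # like the LB's default "request buffer size"

def lb_status(raw: bytes) -> int:
    """Return 200 if the header section fits in the buffer, else 400."""
    end = raw.find(b"\r\n\r\n")
    # No separator found: everything so far still counts as headers.
    header_bytes = len(raw) if end == -1 else end + 4
    return 400 if header_bytes > HEADER_BUFSIZE else 200

headers = b"POST /api/asciicasts HTTP/1.1\r\nHost: example.org\r\n"

print(lb_status(headers + b"\r\n" + b"x" * 8000))  # proper separator -> 200
print(lb_status(headers + b"x" * 1000))            # malformed, <4 KB -> 200
print(lb_status(headers + b"x" * 8000))            # malformed, >4 KB -> 400
```

This reproduces the observed pattern: small bodies sneak under the header limit even with bad framing, large ones don't.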


@ku1ik commented on GitHub (Jun 8, 2017):

I updated the source code so it adds an extra newline and checked under CentOS: it still fails. So the above theory is wrong.


@ku1ik commented on GitHub (Jun 8, 2017):

This works under CentOS with HTTPS:

curl -v -X POST -u $USER:api-token https://asciinema.org/api/asciicasts -F asciicast=@over-4k.json

* About to connect() to asciinema.org port 443 (#0)
*   Trying 109.107.38.233...
* Connected to asciinema.org (109.107.38.233) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* 	subject: CN=asciinema.org
* 	start date: Jun 07 09:12:00 2017 GMT
* 	expire date: Sep 05 09:12:00 2017 GMT
* 	common name: asciinema.org
* 	issuer: CN=Let's Encrypt Authority X3,O=Let's Encrypt,C=US
* Server auth using Basic with user 'vagrant'
> POST /api/asciicasts HTTP/1.1
> Authorization: Basic <...hidden...>
> User-Agent: curl/7.29.0
> Host: asciinema.org
> Accept: */*
> Content-Length: 5658
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=----------------------------6ca3f3de6469

So maybe the SSL library used by Python differs from curl's, and the problem lies somewhere in SSL-land?


@andyone commented on GitHub (Jun 8, 2017):

I think so. Python uses OpenSSL, curl uses NSS.
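A quick way to confirm which TLS library Python's ssl module is linked against (compare with `curl -V`, which prints curl's own TLS backend):

```python
# Prints the TLS library Python was built against; on CentOS 7 this is
# typically an OpenSSL 1.0.x build, while the system curl there uses NSS.
import ssl

print(ssl.OPENSSL_VERSION)
```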


@ku1ik commented on GitHub (Jun 8, 2017):

@andyone the certificate for ascii.kaos.io is not Let's Encrypt?


@andyone commented on GitHub (Jun 8, 2017):

RapidSSL SHA256withRSA


@ku1ik commented on GitHub (Jun 8, 2017):

Normally I would say CentOS is missing the root certificate for Let's Encrypt (or something like that 😊), but the SSL connection is established and the error is at the HTTP protocol level (400 Bad Request), so... 👐


@andyone commented on GitHub (Jun 8, 2017):

If the root certificate for Let's Encrypt were missing, it wouldn't work with curl either.


@johnl commented on GitHub (Jun 8, 2017):

Our (Brightbox) load balancer does indeed use haproxy. The HTTP RFC and the haproxy docs do state that one CRLF is required to separate the headers from the body:

https://github.com/haproxy/haproxy/blob/master/doc/internals/http-parsing.txt

Is it possible that you're only sending a CR or a LF here, rather than a full CRLF?


@andyone commented on GitHub (Jun 8, 2017):

@sickill Here's a proxy on HA-Proxy 1.7.5 (https://yum.kaos.io/7/release/x86_64/repoview/haproxy.html) with LibreSSL 2.5.0: https://ascii-ha.kaos.io. My and @ThiefMaster's recordings, as well as over-4k.json from your repository, uploaded successfully through this proxy.


@ku1ik commented on GitHub (Jun 9, 2017):

@andyone ok. So, can you change tune.bufsize (https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#3.2-tune.bufsize) to 4096?


@ku1ik commented on GitHub (Jun 9, 2017):

@johnl I checked for CRLF and all is OK here.

I tcpdumped the request on both CentOS and macOS again (over HTTP, again, assuming the HTTP payload is the same for HTTPS).

dump-centos.pcap.txt and dump-mac.pcap.txt contain tcpdump capture (tcpdump -s 0 dst port 80 -w dump-centos.pcap.txt).
dump-centos-hex.txt and dump-mac-hex.txt contain hex formatted dumps (via hexdump -C).

dump-centos-hex.txt: https://github.com/asciinema/asciinema/files/1063283/dump-centos-hex.txt
dump-centos.pcap.txt: https://github.com/asciinema/asciinema/files/1063284/dump-centos.pcap.txt
dump-mac-hex.txt: https://github.com/asciinema/asciinema/files/1063285/dump-mac-hex.txt
dump-mac.pcap.txt: https://github.com/asciinema/asciinema/files/1063286/dump-mac.pcap.txt

It seems both OSes use CRLF for newlines, and there's one blank line between the headers and the body.
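The framing check can be sketched like this: given the raw request bytes (e.g. carved out of one of the pcap files above), verify that a single CRLFCRLF separates headers from body and that no bare CR or LF appears inside the header lines:

```python
def check_framing(raw: bytes) -> bool:
    """True if headers end with a CRLFCRLF separator and every header
    line is terminated by a full CRLF (no bare CR or LF)."""
    head, sep, _body = raw.partition(b"\r\n\r\n")
    if not sep:
        return False  # no header/body separator at all
    # After splitting on CRLF, no stray CR or LF may remain in header lines.
    return all(b"\r" not in line and b"\n" not in line
               for line in head.split(b"\r\n"))

good = b"POST /api/asciicasts HTTP/1.1\r\nHost: asciinema.org\r\n\r\n{json}"
bad = b"POST /api/asciicasts HTTP/1.1\nHost: asciinema.org\n\n{json}"  # bare LFs
print(check_framing(good), check_framing(bad))  # True False
```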


@ku1ik commented on GitHub (Jun 9, 2017):

On the left CentOS, on the right macOS:

centos-mac-comparison (screenshot): https://user-images.githubusercontent.com/17589/26966357-02ccd71a-4cfa-11e7-9b48-d2bdacb14249.png

@andyone commented on GitHub (Jun 9, 2017):

@sickill Config updated. over-4k.json uploaded as well.


@ku1ik commented on GitHub (Jun 9, 2017):

@andyone thanks for the update. It seems it doesn't add the X-Forwarded-Proto header (because the returned recording URL is http://). Can you add http-request set-header X-Forwarded-Proto https if { ssl_fc }?


@andyone commented on GitHub (Jun 9, 2017):

This is my config:

frontend www-https
    bind 207.154.241.251:443 ssl crt /etc/ssl/private/kaos.pem
    reqadd X-Forwarded-Proto:\ https
    default_backend www-backend

backend www-backend
    server asciinema-backend asciinema.org:80

Where should I add this line?


@ku1ik commented on GitHub (Jun 9, 2017):

@andyone I think it needs to go into the backend section (I'm not a haproxy expert, though).


@ku1ik commented on GitHub (Jun 9, 2017):

@andyone btw, I REALLY appreciate you helping debug this 😍 Thanks!


@johnl commented on GitHub (Jun 9, 2017):

don't forget forward-for too. This should replicate the setup pretty closely, with the ssl ciphers too:

global
    tune.bufsize 4096
    tune.ssl.default-dh-param 2048
    tune.maxrewrite 40

frontend www-https
    bind 207.154.241.251:443 ssl no-sslv3 crt /etc/ssl/private/kaos.pem ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
    reqadd X-Forwarded-Proto:\ https
    default_backend www-backend

backend www-backend
    server asciinema-backend asciinema.org:80
    mode http
    option forwardfor
    option httplog

@andyone commented on GitHub (Jun 9, 2017):

I modified config to this, but with no luck:

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  forwardfor
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend www-https
    bind 207.154.241.251:443 ssl crt /etc/ssl/private/kaos.pem
    reqadd X-Forwarded-Proto:\ https
    default_backend www-backend

backend www-backend
    http-request set-header X-Forwarded-Proto https
    server asciinema-backend asciinema.org:80

The client still returns links with http://.

I'm always happy to help improve useful services 😉.


@andyone commented on GitHub (Jun 9, 2017):

@johnl This is the full config; all required options are set in the defaults and global sections:

global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    tune.bufsize 4096

    # SSL configuration
    tune.ssl.default-dh-param 2048
    ssl-default-bind-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
    ssl-default-server-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  forwardfor
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend www-https
    bind 207.154.241.251:443 ssl crt /etc/ssl/private/kaos.pem
    reqadd X-Forwarded-Proto:\ https
    default_backend www-backend

backend www-backend
    http-request set-header X-Forwarded-Proto https
    server asciinema-backend asciinema.org:80

@ku1ik commented on GitHub (Jun 9, 2017):

If @andyone's haproxy config is now very close to BB's and we still can't reproduce the issue, does it make sense to try with a Let's Encrypt cert? This is one of the differences between https://ascii-ha.kaos.io and https://asciinema.org.


@andyone commented on GitHub (Jun 9, 2017):

> This is one of the differences between https://ascii-ha.kaos.io and https://asciinema.org.

No. The BB LB can be built with OpenSSL (I use LibreSSL).

I will try to add a Let's Encrypt certificate for https://ascii-ha.kaos.io.


@andyone commented on GitHub (Jun 9, 2017):

Done - https://ascii.kaos.re
HA-Proxy 1.7.5 (w/ LibreSSL 2.5.0) + Let's Encrypt certificate (created by Certbot)
Config:

global
    tune.bufsize 4096
    tune.ssl.default-dh-param 2048
    tune.maxrewrite 40

frontend www-https
    bind 207.154.241.251:443 ssl no-sslv3 crt /etc/ssl/private/ascii.kaos.re.pem ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
    reqadd X-Forwarded-Proto:\ https
    default_backend www-backend

backend www-backend
    server asciinema-backend asciinema.org:80
    mode http
    option forwardfor
    option httplog

@andyone commented on GitHub (Jun 9, 2017):

Looks like everything works fine. `over-4k.json` uploaded successfully.


@ku1ik commented on GitHub (Jun 12, 2017):

I have no further ideas for this. I'm considering rolling back to my own Nginx instance for load balancing and SSL termination 🤕


@johnl commented on GitHub (Jun 12, 2017):

I'm trying to whittle this down to a single curl command that can reproduce the problem, but I haven't managed it yet. Can anyone help?

I'm POSTing a 5k body, with an authentication username/password, using curl. I'm hitting a Brightbox load balancer with a netcat web server backend, so I can see the raw request text. It always goes through; I can't make it trigger a bad request response.

If this is being rejected by the load balancer, I should not need a real instance of the app on the backend, as the request should never get that far - so we should be able to reproduce this with curl and no app.

I've tried curl on Ubuntu and CentOS 7, and with OpenSSL specifically (note you can pass the --engine option to curl to choose which SSL library to use; the CentOS 7 curl binaries are built with the most options).


@ku1ik commented on GitHub (Jun 12, 2017):

@johnl thanks for looking into this.

Makes sense to use netcat as the backend for testing 👍

The curl equivalent of `asciinema upload over-4k.json` is more or less this:

curl -v -X POST -u test:uuid4 https://asciinema.org/api/asciicasts -F asciicast=@over-4k.json

(replace `uuid4` with the result of `python3 -c 'import uuid; print(uuid.uuid4())'`)

And it works with curl indeed...

I compared tcpdump of `asciinema upload` and the above curl, and there isn't anything on the HTTP protocol level that looks suspicious to me. However, some TCP frames show up in different locations (maybe more or less data fits in each TCP packet).


@ku1ik commented on GitHub (Jun 12, 2017):

I captured the HTTP request (to http://asciinema.org) with tcpflow in a CentOS 7 VM:

sudo tcpflow -p -C -i eth0 port 80 >tcpflow-req.txt

Then in another shell (in the same VM) ran:

ASCIINEMA_API_URL=http://asciinema.org asciinema upload /vagrant/over-4k.json

I cut off the response from it, leaving only the request. Here's what gets sent, byte by byte: [tcpflow-req.txt](https://github.com/asciinema/asciinema/files/1068284/tcpflow-req.txt)

I replayed this captured HTTP request against asciinema.org:80 with nc:

bash-4.4$ (cat tcpflow-req.txt; cat) | nc asciinema.org 80
HTTP/1.1 201 Created
Server: nginx
Date: Mon, 12 Jun 2017 13:30:03 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 48
Connection: close
Status: 201 Created
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Location: http://asciinema.org/a/4lgbbik7li4ywzqrfak0e7eku
ETag: "9beb7ac6bb5981f06fdc71df3947d8b0"
Cache-Control: max-age=0, private, must-revalidate
X-Request-Id: 2a8a8c75-ed06-4741-9adb-e5d276032ded
X-Runtime: 0.360858
Vary: Accept-Encoding
Strict-Transport-Security: max-age=15768000

http://asciinema.org/a/4lgbbik7li4ywzqrfak0e7eku

All good.

Now, I've sent it over SSL to asciinema.org:443:

(cat tcpflow-req.txt; cat) | openssl s_client -connect asciinema.org:443

Here's the result:

CONNECTED(00000003)
depth=1 /C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
verify error:num=20:unable to get local issuer certificate
verify return:0
---
Certificate chain
 0 s:/CN=asciinema.org
   i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
 1 s:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
   i:/O=Digital Signature Trust Co./CN=DST Root CA X3
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIFFDCCA/ygAwIBAgISBDhrp0YwV5NtleFOG+Zj61lQMA0GCSqGSIb3DQEBCwUA
MEoxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MSMwIQYDVQQD
ExpMZXQncyBFbmNyeXB0IEF1dGhvcml0eSBYMzAeFw0xNzA2MDcwOTEyMDBaFw0x
NzA5MDUwOTEyMDBaMBgxFjAUBgNVBAMTDWFzY2lpbmVtYS5vcmcwggEiMA0GCSqG
SIb3DQEBAQUAA4IBDwAwggEKAoIBAQC+/g237mVels4G9blsZlaeeiURbSp22eGO
T5OZ5As9NyuxSvRVEJrs4xk/RBEkCVgeZspSOmkRLwXG+FSMtjhbqIUt73AUKMdm
4DG+OwkVxjZatskL0wUWRcU7DmyW/Ls/OFJpPPcZ+pqu/v/ek99EiVNoAHJzXMXJ
ZsWy5KLE3fhkrlyMvdIkOkCK5zHOT95t0i8OmdaPIekPBa57VhvnDlUJsYyCF9GN
mP8Qg6OygexyULJGqBwiZ0BN2J6cYwChUlSvqFnkL4OzfixZ+mItuhl1b1vx/N5K
XMtPiM+nc/S+/liIWgtt7HIy9NmrOtSKbPTh3Bv/rfNdaiYx5CUHAgMBAAGjggIk
MIICIDAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUF
BwMCMAwGA1UdEwEB/wQCMAAwHQYDVR0OBBYEFNAMhQNNwl+/bJjml9hrrHYzBxbf
MB8GA1UdIwQYMBaAFKhKamMEfd265tE5t6ZFZe/zqOyhMG8GCCsGAQUFBwEBBGMw
YTAuBggrBgEFBQcwAYYiaHR0cDovL29jc3AuaW50LXgzLmxldHNlbmNyeXB0Lm9y
ZzAvBggrBgEFBQcwAoYjaHR0cDovL2NlcnQuaW50LXgzLmxldHNlbmNyeXB0Lm9y
Zy8wLwYDVR0RBCgwJoINYXNjaWluZW1hLm9yZ4IVc3RhZ2luZy5hc2NpaW5lbWEu
b3JnMIH+BgNVHSAEgfYwgfMwCAYGZ4EMAQIBMIHmBgsrBgEEAYLfEwEBATCB1jAm
BggrBgEFBQcCARYaaHR0cDovL2Nwcy5sZXRzZW5jcnlwdC5vcmcwgasGCCsGAQUF
BwICMIGeDIGbVGhpcyBDZXJ0aWZpY2F0ZSBtYXkgb25seSBiZSByZWxpZWQgdXBv
biBieSBSZWx5aW5nIFBhcnRpZXMgYW5kIG9ubHkgaW4gYWNjb3JkYW5jZSB3aXRo
IHRoZSBDZXJ0aWZpY2F0ZSBQb2xpY3kgZm91bmQgYXQgaHR0cHM6Ly9sZXRzZW5j
cnlwdC5vcmcvcmVwb3NpdG9yeS8wDQYJKoZIhvcNAQELBQADggEBABxmJxdQQCcy
FpCkiDrB+vonBUCLYSJtrFkmRdmj9W8/ADpC6M/EhYFOCgrO2cmhYfy1SxDAP5Hd
KIhd3p1F931MMXVcxYt2n6FiDJHN531qp6eBzjZsVIgHXS27PAV466IIMTydNQSe
reyDc9fi+q+ji1Gz89nI8lHIOlRt3dzVGT2J3oQidsm4ZuPNJFj4y8MUrbUAOOH6
YY4n395OKV7vWzl7VPKiCWx+zsv4bzr6IGUPlwqCN2e6cppPWE47ugnYsarINCHO
ie5lU4E2N0k2qVWe/+uYbwSUQ0nrEx8R078m6+6EjDkR4VLboLjuV5tGBgHsJLQB
CmLH6CmNCRE=
-----END CERTIFICATE-----
subject=/CN=asciinema.org
issuer=/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
---
No client certificate CA names sent
---
SSL handshake has read 3436 bytes and written 456 bytes
---
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES128-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : DHE-RSA-AES128-SHA
    Session-ID: AC26CBF8D3719B1DE709A9A8AEAB43D20B14C62085A74604338C512CEA4472C5
    Session-ID-ctx:
    Master-Key: 0C59B1A2B6802D35FAD26DEE139043A853F3E62787E9AA743A8CAFDA95744DB73AB42B511F37EA7D6BB398A352938551
    Key-Arg   : None
    Start Time: 1497273777
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
---
HTTP/1.0 400 Bad request
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<html><body><h1>400 Bad request</h1>
Your browser sent an invalid request.
</body></html>

/cc @johnl


@andyone commented on GitHub (Jun 12, 2017):

@sickill Can you check same request with https://ascii.kaos.re?


@ku1ik commented on GitHub (Jun 12, 2017):

@andyone just checked. Did this: `(cat tcpflow-req.txt; cat) | openssl s_client -connect ascii.kaos.re:443` - uploaded successfully.


@johnl commented on GitHub (Jun 12, 2017):

I've done more digging here. curl on CentOS 7 uses NSS but wget uses OpenSSL. I can successfully send the request with either curl or wget. I can even send it using the Python httpie tool (under Python 3).

But it fails when sent to openssl s_client via stdin.

Yet it succeeds when the request is pasted into openssl s_client, rather than piped via stdin!

I'm now pretty sure this is because something is sending requests with LF line endings rather than the required CRLF line endings, but I'm not sure quite what. I think "openssl s_client" is a bad testing tool and is making it difficult to be sure what is going on.

But I've yet to reproduce this with a proper HTTP client, whether using NSS or OpenSSL (curl on Ubuntu uses OpenSSL and works fine too, so that's double-confirmed). Has anyone else managed that?
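The LF-vs-CRLF theory can be checked directly on a captured request. A minimal sketch (a hypothetical helper of my own, not part of asciinema or any of the tools above) that flags header lines terminated by a bare LF:

```python
# Sketch: flag HTTP header lines that end in bare LF instead of the
# CRLF that the HTTP spec requires. Intended for inspecting a raw
# tcpflow/tcpdump capture such as tcpflow-req.txt.
def lf_only_lines(request: bytes) -> list:
    """Return header lines terminated by LF alone (no preceding CR)."""
    # Headers end at the first blank line, whichever style appears.
    end = request.find(b"\r\n\r\n")
    if end == -1:
        end = request.find(b"\n\n")
    head = request[:end] if end != -1 else request
    # keepends=True preserves the terminators so we can inspect them.
    return [line for line in head.splitlines(keepends=True)
            if line.endswith(b"\n") and not line.endswith(b"\r\n")]
```

Running this over a capture would show whether any header line lost its CR somewhere between the client and the load balancer.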


@benaryorg commented on GitHub (Jun 20, 2017):

I've just done some testing on my own and can confirm that this problem persists with a `Content-Length` of 4520, but not with the same request stripped by 1000 characters (`Content-Length` adjusted according to the changes made).

The CRLFs are present in all my tests, and `xxd` confirms that they are sent over the pipe.
I could also test with OpenBSD's `nc` (which supports TLS).

From the [documentation](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#3.2-tune.bufsize):

> tune.bufsize <number>
> Sets the buffer size to this size (in bytes). Lower values allow more
> sessions to coexist in the same amount of RAM, and higher values allow some
> applications with very large cookies to work. The default value is 16384 and
> can be changed at build time. It is strongly recommended not to change this
> from the default value, as very low values will break some services such as
> statistics, and values larger than default size will increase memory usage,
> possibly causing the system to run out of memory. At least the global maxconn
> parameter should be decreased by the same factor as this one is increased.
> If HTTP request is larger than (tune.bufsize - tune.maxrewrite), haproxy will
> return HTTP 400 (Bad Request) error. Similarly if an HTTP response is larger
> than this size, haproxy will return HTTP 502 (Bad Gateway).

This is as opposed to nginx, which (AFAIK) does not keep the whole request in memory but passes it on on the fly, or at the very least buffers it into a temporary file.

There is the [`no option http-buffer-request` option](https://cbonte.github.io/haproxy-dconv/configuration-1.6.html#4.2-no%20option%20http-buffer-request) which, if I got that right, disables exactly that behaviour (the documentation below is written for `option http-buffer-request`, without `no`):

> It is sometimes desirable to wait for the body of an HTTP request before
> taking a decision. This is what is being done by "balance url_param" for
> example. The first use case is to buffer requests from slow clients before
> connecting to the server. Another use case consists in taking the routing
> decision based on the request body's contents. This option placed in a
> frontend or backend forces the HTTP processing to wait until either the whole
> body is received, or the request buffer is full, or the first chunk is
> complete in case of chunked encoding. It can have undesired side effects with
> some applications abusing HTTP by expecting unbufferred transmissions between
> the frontend and the backend, so this should definitely not be used by
> default.
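The size limit quoted from the docs is simple arithmetic. A toy sketch of that documented check (my own illustration, not haproxy code; whether it matches the behaviour observed on the test LBs in this thread is exactly what's in question):

```python
# Toy model of the documented haproxy request-size check: a request
# larger than (tune.bufsize - tune.maxrewrite) is rejected with HTTP 400.
def haproxy_accepts(request_bytes: int, bufsize: int, maxrewrite: int) -> bool:
    """True if the request fits in haproxy's rewrite-reserved buffer."""
    return request_bytes <= bufsize - maxrewrite

# With tune.bufsize 4096 and tune.maxrewrite 40 (the values from the
# test config earlier in the thread), the threshold is 4056 bytes,
# so a 4520-byte request would be over the documented limit.
```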


@peterbrittain commented on GitHub (Jul 7, 2017):

I've just hit this too. It strikes me that with your testing of the same content working over HTTP but not HTTPS, it's unlikely to be the buffer sizes at fault, unless something between your client and the proxy is adding a lot of extra headers.

But maybe there is a bug in whatever is terminating your SSL connections such that it slightly corrupts the headers.

If so, there is an option that reduces the security of HAProxy, but allows less compliant HTTP traffic through. See https://stackoverflow.com/questions/39286346/extra-space-in-http-headers-gives-400-error-on-haproxy

While I don't advocate reducing security as a final fix, this might allow you to maintain the service while you're debugging it.
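For reference, the workaround discussed in that Stack Overflow answer is haproxy's `option accept-invalid-http-request`. A sketch of how it might look in a frontend like the ones shown earlier in this thread (an assumption on my part, not a config anyone here has tested):

```
frontend www-https
    # Relax HTTP parsing so slightly malformed requests are not
    # rejected with 400. This reduces strictness, so it should only
    # be enabled temporarily while debugging.
    option accept-invalid-http-request
```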


@ku1ik commented on GitHub (Jul 28, 2017):

@peterbrittain at the moment asciinema.org uses the Brightbox Cloud load balancer, so I don't control their HAProxy config. We used to terminate SSL in our own Nginx and that worked fine. Since I switched to the BB LB this problem has been occurring (for some). Are you experiencing it under CentOS, or another system?

Frankly, I never had any problems with the previous Nginx-based solution. The SSL certificate we had was expiring, so I thought I'd go with Let's Encrypt. Since LE certs are short-lived they are best managed automatically, and the Brightbox LB does that for me. I just wanted to save myself the work of setting up LE, and the BB LB seemed to be the simplest solution (since asciinema.org is sponsored by Brightbox and runs on their great infrastructure). Now I think setting up LE myself in Nginx would probably have taken 1/10 of the time I've already spent troubleshooting this issue 😞😞😞


@peterbrittain commented on GitHub (Jul 28, 2017):

Ah. I didn't spot the subtlety of who owned which bits. Have you had any luck getting diags from BB for this issue?

And in answer to your question: my box is a CentOS 6 VM.


@ThomasWaldmann commented on GitHub (Aug 14, 2017):

I also just experienced the bad request issue, using asciinema 1.2.0 (the version from Ubuntu 16.04 LTS).

The curl hack given above worked, thanks.


@benaryorg commented on GitHub (Aug 15, 2017):

I just discovered that the very same file yields a bad request on my Gentoo[1] box, but not on my OpenBSD[2] box.
The OpenBSD box uploads it just fine.
I think there should be further investigation into the difference between these clients.
The Gentoo box supports the following Python targets per ebuild:

PYTHON_TARGETS="python3_4 -python3_5"

I can't easily test Python 3.5 at the moment, but maybe this helps already.

Edit: I added the OpenSSL versions, completely forgot about those.

[1]: Gentoo GNU/Linux

  • asciinema 1.4.0
    • executed using python-exec 2.4.5
    • in turn executing Python 3.4.6
  • OpenSSL 1.0.2l 25 May 2017

[2]: OpenBSD 6.1

  • asciinema 1.3.0
    • executed using Python 3.6.0
  • LibreSSL 2.5.2

@ku1ik commented on GitHub (Aug 20, 2017):

I've just switched back to the previous config (terminating SSL in Nginx). Let me know if it works for you now @andyone @ThiefMaster @benaryorg @peterbrittain @ThomasWaldmann


@benaryorg commented on GitHub (Aug 20, 2017):

@sickill I'm only 85% sure it's the same file that failed before, but if it is, you've fixed it.


@andyone commented on GitHub (Aug 20, 2017):

@sickill Works like a charm for me now. 👍


@ThomasWaldmann commented on GitHub (Aug 21, 2017):

Yup, works for me now too (with `asciinema upload`). Thanks!
