Thanks, I pushed up the change you recommended -- reading the response body -- and indeed that fixed the test case.
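For anyone who finds this thread later, the fix boils down to draining and closing the response body after the RoundTrip. A minimal sketch (the URL is just a stand-in, and the error handling is condensed):

package main

import (
	"io"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "https://example.com/", nil)
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.DefaultTransport.RoundTrip(req)
	if err != nil {
		log.Fatal(err)
	}
	// Even when the round trip is done only for its side effects,
	// drain and close the body so the connection can be reused
	// cleanly (and HTTP/2 streams are properly finished).
	io.Copy(ioutil.Discard, resp.Body)
	resp.Body.Close()
}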
The weird TLS config thing I did was because the original problem we had (backend services, etc.) went away whenever we configured InsecureSkipVerify the usual, straightforward way. The Go stdlib treats setting a TLSClientConfig with just InsecureSkipVerify: true differently from letting the net/http library set up its own config and then modifying it. E.g., in net/http/transport.go you can see that if TLSClientConfig != nil, it skips several steps related to HTTP/2 configuration (sketched below). In the test case I tried to create, it turns out none of this has any impact, since the unread body was the real issue, as you pointed out.

Anyway, back to the drawing board to figure out the original problem. We have a backend service (Rails) running in Google's Kubernetes Engine, fronted by Google's Cloud Load Balancer. Next in line is a Go ReverseProxy-based router (also in GKE), also fronted by Google's Cloud Load Balancer, which clients talk to. So... client <-> GCLBa <-> ReverseProxy <-> GCLBb <-> Service. Of course, there are Kubernetes "things" inline as well, all of which makes debugging much more... interesting.

If a client talks HTTP/1.1 (everything here is over TLS) but our ReverseProxy talks HTTP/2 to the back side, we get very strange behavior: the response the client receives has an explicit Content-Length, but the body has obvious Transfer-Encoding: chunked markers -- though no such Transfer-Encoding header, since it had a Content-Length. If the client talks HTTP/2 instead, everything works as it should. Forcing the ReverseProxy to only talk HTTP/1.1 also fixes the issue.

Our immediate fix is to configure the ReverseProxy to never use HTTP/2 (see the sketches below), since we can't control what the clients do. But we'd rather use HTTP/2 when it's available; and if we ReverseProxy gRPC services, we'll have no choice anyway.

Thanks for your help. I hope I can create a reproduction of the issue we're seeing, or finally find our bug. I'm still pretty sure it's triggered when the client uses Expect: 100-continue, but... So many herrings...
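To illustrate the TLSClientConfig behavior I mentioned above -- this is a rough sketch, not our actual router code: handing http.Transport its own TLS config stops net/http from auto-enabling HTTP/2, and you have to opt back in yourself via golang.org/x/net/http2:

package main

import (
	"crypto/tls"
	"log"
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	// Because TLSClientConfig is non-nil, net/http is conservative
	// and will NOT automatically upgrade this transport to HTTP/2
	// (see onceSetNextProtoDefaults in net/http/transport.go).
	t1 := &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}

	// Same config, but with HTTP/2 explicitly enabled; this also
	// adds "h2" to the TLS config's NextProtos for ALPN.
	t2 := &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}
	if err := http2.ConfigureTransport(t2); err != nil {
		log.Fatal(err)
	}

	_, _ = t1, t2
}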
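And here's roughly what our immediate "never speak HTTP/2 to the backend" fix looks like -- again just a sketch, with a made-up backend URL. Setting TLSNextProto to a non-nil, empty map is the documented way to disable HTTP/2 on a client transport, so the proxy talks HTTP/1.1 to the back side no matter what the backend offers:

package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	backend, err := url.Parse("https://backend.internal") // hypothetical backend
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// A non-nil, empty TLSNextProto map disables HTTP/2 for this
	// transport, forcing HTTP/1.1 to the backend.
	proxy.Transport = &http.Transport{
		TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
	}

	log.Fatal(http.ListenAndServe(":8080", proxy))
}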
On Thursday, November 22, 2018 at 5:08:35 AM UTC-8, Peter Waller wrote:
>
> I suspect this has to do with the fact that you're doing a roundtrip for
> its side effects, but not reading the response body or closing it. If I fix
> that, everything seems to work as expected.
>
> Try configuring the TLS client with t.TLSClientConfig =
> &tls.Config{InsecureSkipVerify: true}
>
> On Mon, 19 Nov 2018 at 20:30, <greg...@unity3d.com> wrote:
>
>> Hi folks!
>>
>> Hoping somebody can help me figure out what I'm doing wrong (or what Go's
>> doing wrong in the small chance it's that).
>>
>> It _seems_ Go's reverse proxy doesn't support 100 Continue when the
>> backend is HTTP/2 (but I'm guessing).
>>
>> I put up the sample at https://github.com/gholt/proxrepro -- just `go
>> run main.go` and then look at the `trace` file that's output.
>>
>> You can see where curl sends its request headers with the Expect:
>> 100-continue but the first thing it gets back is 200 OK and then
>> "weirdness" ensues.
>>
>> -- Greg Holt