On Mon, 15 Jun 2020, WGH via curl-library wrote:

When debugging an issue regarding a possibly broken gzipped HTTP response in Go[1], I noticed that curl has no problems decompressing it and returns a successful exit code.

...

Does anyone have a clue what's going on? Does curl erroneously not report a broken file? Or is it some known workaround for broken web servers?

On the Internet, servers often send gzip-compressed content that is broken in this and other ways. Browsers, in their standard anything-goes fashion, are masters at hiding such errors and simply showing whatever contents they could extract. That gives sites basically no reason to fix their gzip issues (if the web admins even notice them).

curl often ends up between a rock and a hard place in this battle. We can be strict and error out as soon as there's a problem, but there's the other side that sees how the contents are shown fine in browsers and asks why we should report an error if the data could be extracted. It's tricky to say in absolute terms what's right and wrong in a lot of these cases.
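To make the trade-off concrete: libcurl uses zlib for gzip decoding, and the decision point is essentially what to do when the input runs out before the decoder has seen a complete stream. A rough sketch of that decision point (this is not libcurl's actual code, the function name and buffer sizes are mine):

  /* Sketch of the strict-vs-lenient decision point when inflating a
   * gzip body held in memory in data/len. Illustration only. */
  #include <string.h>
  #include <zlib.h>

  /* returns 0: complete stream, 1: truncated but data was extracted,
     -1: corrupt data or init failure */
  static int check_gzip(const unsigned char *data, size_t len)
  {
    unsigned char out[16384];
    z_stream zs;
    int ret;

    memset(&zs, 0, sizeof(zs));
    /* 15 + 32: accept gzip (or zlib) headers automatically */
    if(inflateInit2(&zs, 15 + 32) != Z_OK)
      return -1;

    zs.next_in = (unsigned char *)data;
    zs.avail_in = (uInt)len;

    do {
      zs.next_out = out;
      zs.avail_out = sizeof(out);
      ret = inflate(&zs, Z_NO_FLUSH);
      if(ret != Z_OK && ret != Z_STREAM_END && ret != Z_BUF_ERROR) {
        inflateEnd(&zs);
        return -1; /* broken beyond what even a lenient decoder accepts */
      }
      /* (sizeof(out) - zs.avail_out) bytes of output are available here */
    } while(ret == Z_OK && (zs.avail_in || !zs.avail_out));

    inflateEnd(&zs);

    if(ret == Z_STREAM_END)
      return 0; /* the gzip trailer was seen and verified */

    /* input ran out before the stream ended: this is where a strict
       decoder fails the transfer and a lenient one keeps the output */
    return 1;
  }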

Right now, I think we're in an in-between state where we allow a certain leeway, but we still regularly get reports of cases where browsers accept even more "rubbish" than we do. There's for example this (closed and never merged) PR, which was a take on making libcurl even more liberal in what it accepts: https://github.com/curl/curl/pull/3825
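For anyone who wants to check what their libcurl build does with a given response, here's a minimal sketch (the URL is obviously a placeholder). If libcurl decides the decoding is broken, it shows up as a CURLE_BAD_CONTENT_ENCODING return code from the transfer:

  #include <stdio.h>
  #include <curl/curl.h>

  static size_t discard(char *ptr, size_t size, size_t nmemb, void *userdata)
  {
    (void)ptr;
    (void)userdata;
    return size * nmemb; /* pretend we consumed the body */
  }

  int main(void)
  {
    CURLcode res;
    CURL *curl;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if(!curl)
      return 1;

    /* placeholder URL standing in for the server from the report */
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/suspect.gz");
    /* "" asks libcurl to negotiate and decode all encodings it supports */
    curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard);

    res = curl_easy_perform(curl);
    if(res == CURLE_BAD_CONTENT_ENCODING)
      printf("libcurl considered the encoded body broken\n");
    else if(res)
      printf("transfer failed: %s\n", curl_easy_strerror(res));
    else
      printf("transfer completed without error\n");

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return (int)res;
  }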

--

 / daniel.haxx.se | Commercial curl support up to 24x7 is available!
                  | Private help, bug fixes, support, ports, new features
                  | https://www.wolfssl.com/contact/