On Sat, 26 Dec 2020, XSLT2.0 via curl-library wrote:

> Fortunately http/1.1 is simple enough, especially when downloading the "body" of a file (with Connection: close), you must admit that redoing what libcurl does here amounts to nothing.

This could be one of the biggest understatements I've read in a while:

 "redoing what libcurl does here amounts to nothing"

There's so much to say about that short little sentence.

You come here and discuss increasing performance in percentages, and then you blurt out something like "with Connection: close". That's a super big blow to performance for consecutive transfers, often much more important than fewer copies. Also, suddenly compression doesn't seem to mean anything, which can also give a significant performance boost (if we count total time from start until the content is in the client's memory).
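
To put some concreteness behind that, here is a rough sketch (placeholder URLs, no error checking) of what connection reuse and compression look like from an application's point of view:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* "" asks for all compression methods this libcurl build supports */
    curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "");

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/one");
    curl_easy_perform(curl);

    /* the second transfer on the same handle can reuse the connection
       instead of paying for a new TCP + TLS handshake */
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/two");
    curl_easy_perform(curl);

    curl_easy_cleanup(curl);
  }
  return 0;
}

Doing raw "Connection: close" transfers gives up both of those.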

Then you repeat the myth that HTTP/1.1 is "simple". I've worked with HTTP transfers long enough to commonly use that phrase as a joke. HTTP is far from simple, and redoing what libcurl does takes a long time if you want rock-solid behavior over the Internet against a wide assortment of servers you don't control.

In addition to that, what libcurl brings to the world is only partially the actual transfer tech. What it does even more is provide a solid, mature, tested and well-documented API for applications. A single stable API on 80 operating systems, for decades. That's what libcurl is, and that is far more valuable than shaving a few percent off a CPU-bound transfer.

Also, libcurl provides an API that is designed to work with a lot of different transfer protocols and versions. You say HTTP/2 doesn't offer anything when it comes to file downloads, but that's not necessarily true. Reusing an existing connection to do multiple streams can be a huge benefit, depending on the conditions.
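
As a rough illustration (placeholder URLs, error checking trimmed), two downloads sharing one HTTP/2 connection with the multi interface:

#include <curl/curl.h>

static CURL *add_download(CURLM *multi, const char *url)
{
  CURL *e = curl_easy_init();
  curl_easy_setopt(e, CURLOPT_URL, url);
  curl_easy_setopt(e, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_2TLS);
  /* wait for an existing connection to multiplex on rather than
     opening another one */
  curl_easy_setopt(e, CURLOPT_PIPEWAIT, 1L);
  curl_multi_add_handle(multi, e);
  return e;
}

int main(void)
{
  CURLM *multi = curl_multi_init();
  curl_multi_setopt(multi, CURLMOPT_PIPELINING, (long)CURLPIPE_MULTIPLEX);

  CURL *a = add_download(multi, "https://example.com/a");
  CURL *b = add_download(multi, "https://example.com/b");

  int running;
  do {
    curl_multi_perform(multi, &running);
    if(running)
      curl_multi_poll(multi, NULL, 0, 1000, NULL);
  } while(running);

  curl_multi_remove_handle(multi, a);
  curl_multi_remove_handle(multi, b);
  curl_easy_cleanup(a);
  curl_easy_cleanup(b);
  curl_multi_cleanup(multi);
  return 0;
}

The application code stays the same if the server only speaks HTTP/1.1; libcurl negotiates whatever the server offers.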

> Also, as demonstrated by the lack of interest in fcurl, writing a FUSE driver (filesystem) might be one of those extremely rare occasions where a good working read-like interface makes a huge difference: efficiency + simpler code.

I think you overstate the "uniqueness" of your use case. I think most users want curl to perform as well as possible.
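
And for what it's worth, a read-like layer can be built entirely on top of the existing libcurl API. A rough sketch of the idea (the rcurl_* names are made up for illustration, this is not fcurl's actual API, and most error handling is skipped):

#include <curl/curl.h>
#include <stdlib.h>
#include <string.h>

struct rcurl {
  CURL *easy;
  CURLM *multi;
  char *buf;     /* received data not yet handed to the caller */
  size_t len;
};

static size_t on_data(char *ptr, size_t size, size_t nmemb, void *userp)
{
  struct rcurl *r = userp;
  size_t n = size * nmemb;
  char *p = realloc(r->buf, r->len + n);
  if(!p)
    return 0;                /* abort the transfer on allocation failure */
  memcpy(p + r->len, ptr, n);
  r->buf = p;
  r->len += n;
  return n;
}

struct rcurl *rcurl_open(const char *url)
{
  struct rcurl *r = calloc(1, sizeof(*r));
  r->easy = curl_easy_init();
  r->multi = curl_multi_init();
  curl_easy_setopt(r->easy, CURLOPT_URL, url);
  curl_easy_setopt(r->easy, CURLOPT_WRITEFUNCTION, on_data);
  curl_easy_setopt(r->easy, CURLOPT_WRITEDATA, r);
  curl_multi_add_handle(r->multi, r->easy);
  return r;
}

/* drive the transfer until 'want' bytes are buffered or it ends, then
   hand out at most 'want' bytes */
size_t rcurl_read(struct rcurl *r, void *dst, size_t want)
{
  int running = 1;
  while(r->len < want && running) {
    curl_multi_perform(r->multi, &running);
    if(r->len < want && running)
      curl_multi_poll(r->multi, NULL, 0, 1000, NULL);
  }
  size_t n = r->len < want ? r->len : want;
  memcpy(dst, r->buf, n);
  memmove(r->buf, r->buf + n, r->len - n);
  r->len -= n;
  return n;
}

Buffering, pausing and range handling would of course need more care in real code, but nothing in it requires changes inside libcurl.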

> Subsidiary question: I have noted that changing the socket to "blocking" (with fcntl) seems to work perfectly, at least for http/1.1 GET. I don't really need it (quite the opposite), but I was just testing performance: it does not seem to help either! Would such a change be a problem in other uses of curl_easy_recv()?

It sounds like something that would break existing behavior, wouldn't it?
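
For anyone following along, this is roughly the setup being described: CONNECT_ONLY plus curl_easy_send()/curl_easy_recv(), with the socket flipped to blocking via fcntl() (placeholder host, error checks trimmed). libcurl sets that socket up non-blocking, so treat this as an experiment rather than a recommendation:

#include <curl/curl.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
  curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L);
  curl_easy_perform(curl);           /* connects, but transfers nothing */

  curl_socket_t sock;
  curl_easy_getinfo(curl, CURLINFO_ACTIVESOCKET, &sock);

  /* flip the socket to blocking - this is the questionable part */
  int flags = fcntl(sock, F_GETFL, 0);
  fcntl(sock, F_SETFL, flags & ~O_NONBLOCK);

  const char *req = "GET / HTTP/1.1\r\nHost: example.com\r\n"
                    "Connection: close\r\n\r\n";
  size_t sent;
  curl_easy_send(curl, req, strlen(req), &sent);

  char buf[16384];
  size_t got;
  CURLcode rc;
  do {
    rc = curl_easy_recv(curl, buf, sizeof(buf), &got);
    if(rc == CURLE_OK)
      fwrite(buf, 1, got, stdout);
  } while(rc == CURLE_OK && got);

  curl_easy_cleanup(curl);
  return 0;
}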

By all means, write code that you want for your use case that makes you happy and that makes your code run the way you want. I just get the sense that you're not viewing the big picture here the same way I do.

--

 / daniel.haxx.se
 | Commercial curl support up to 24x7 is available!
 | Private help, bug fixes, support, ports, new features
 | https://www.wolfssl.com/contact/
