"Loop inversion"

I have fixed fcurl for the read scenario (made a PR).

I made a basic fcurl_read/write_to_stdout program with fcurl.

(extract)

    while(!fcurl_eof(fcurl)) {
        sz = fcurl_read(buf, 1, BUFSIZE, fcurl);
        if (sz > 0) {
            wsz = write(STDOUT_FILENO, buf, sz);
            if (wsz != sz) {
                fprintf(stderr, "write() %ld bytes failed\n", (long)sz);
                return 1;
            }
        }
    }

With HTTP(S)/1.1 the peak memory allocated by the internal transfer loop
(the transfer/callback machinery in fcurl) looks very reasonable in all
the tests I have done (a few kilobytes), so this is close enough to loop
inversion, even if libcurl still technically owns the transfer loop.


What I can't explain is why, with HTTP/2, it becomes very unreasonable!

    ./fcurl_transfer https://www.youtube.com/s/player/5dd3f3b2/player_ias.vflset/en_US/base.js >/dev/null
    + content-length: 1556389
    + maxalloc=1184200


So, basically, out of a 1.5 MB file we end up holding about 1.2 MB
(roughly 76%) in memory!

Running under Valgrind, I even saw 100% of the file held in memory...


Sorry, I could not find bigger files served over HTTP/2; if you have
some, I'll test them.


Any explanation for that?


Cheers

Alain


-------------------------------------------------------------------
Unsubscribe: https://cool.haxx.se/list/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html
