As of Guile 2.0.11, the chunked input port of the HTTP client reads whole chunks at once:
--8<---------------cut here---------------start------------->8---
(define (read-chunk port)
  (let ((size (read-chunk-header port)))
    (read-chunk-body port size)))

(define (read-chunk-body port size)
  (let ((bv (get-bytevector-n port size)))
    (get-u8 port)                                 ; CR
    (get-u8 port)                                 ; LF
    bv))

(define* (make-chunked-input-port port #:key (keep-alive? #f))
  "Returns a new port which translates HTTP chunked transfer encoded
data from PORT into a non-encoded format.  Returns eof when it has
read the final chunk from PORT.  This does not necessarily mean that
there is no more data on PORT.  When the returned port is closed it
will also close PORT, unless the KEEP-ALIVE? is true."
  (define (next-chunk)
    (read-chunk port))
  [...]
  (define (read! bv idx to-read)
    [...]
    (set! buffer (next-chunk))
    [...]
  (make-custom-binary-input-port "chunked input port" read! #f #f close))
--8<---------------cut here---------------end--------------->8---

This is undesirable because:

  1. the HTTP server can produce arbitrarily large chunks, leading to
     large memory use in the client (nginx does indeed produce very
     large chunks in some cases);

  2. it adds an extra level of buffering that the caller of ‘http-get’
     does not control (a read of 1 byte from the HTTP body port leads
     to an actual read of a whole chunk);

  3. it introduces extra copying and allocations.

Ludo’.
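
For concreteness, here is a rough, untested sketch of what a bounded
‘read!’ could look like: it copies at most TO-READ bytes of the current
chunk straight into the caller’s bytevector and keeps a count of the
bytes left in that chunk, so no chunk is ever buffered whole.  It
assumes ‘read-chunk-header’ (the chunk-size parser used above) is in
scope and elides trailer handling; the actual fix may well look
different.

--8<---------------cut here---------------start------------->8---
(use-modules (ice-9 binary-ports))  ; get-u8, get-bytevector-n!, custom ports

(define* (make-chunked-input-port port #:key (keep-alive? #f))
  ;; Hypothetical sketch; assumes READ-CHUNK-HEADER (see above) is in
  ;; scope.  REMAINING is the number of bytes left in the current
  ;; chunk; FINISHED? becomes #t once the zero-sized final chunk has
  ;; been seen (trailer handling elided).
  (define remaining 0)
  (define finished? #f)

  (define (close)
    (unless keep-alive?
      (close-port port)))

  (define (read! bv idx to-read)
    (cond
     (finished? 0)                              ; EOF already reached
     (else
      (when (zero? remaining)                   ; start a new chunk
        (set! remaining (read-chunk-header port)))
      (if (zero? remaining)
          (begin (set! finished? #t) 0)         ; final chunk: EOF
          (let ((read (get-bytevector-n! port bv idx
                                         (min to-read remaining))))
            (if (eof-object? read)
                0                               ; PORT ended prematurely
                (begin
                  (set! remaining (- remaining read))
                  (when (zero? remaining)       ; chunk done: eat CRLF
                    (get-u8 port)               ; CR
                    (get-u8 port))              ; LF
                  read)))))))

  (make-custom-binary-input-port "chunked input port" read! #f #f close))
--8<---------------cut here---------------end--------------->8---

With something along these lines, a read of 1 byte from the HTTP body
port translates into a read of at most 1 byte from PORT, and memory use
no longer depends on the chunk sizes the server chooses.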