On Tue, May 10, 2011 at 4:46 AM, Mark Ellzey <mtho...@strcpy.net> wrote:
> On Tue, May 10, 2011 at 09:04:42AM +0200, Roman Puls wrote:
>> whilst this might be nice for flow-blown web services, this does not
>> work for embedded systems that have no or very limited disk storage.
>>
>> Also, this pattern disables effective stream handling, e.g. where
>> you don't want to store and process, but do some operation like
>> decompression and further post-processing in "real-time".
>>
>> Instead, my suggestion is to process chunk-wise, and optionally
>> provide a chunk-handler that streams to a file.
>
> You forgot to quote the bit where I said it was a toggle..
>
> To this point, the act of spooling doesn't have to be a file write, it
> can be any file descriptor, one that you could say, install a handler
> for.

In an embedded environment, even a single chunk can be very large.
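For concreteness, here is roughly what chunk-wise handling already looks
like on the client side with the existing API. This is only a sketch, not
tested code; process_bytes() is a stand-in for whatever the application
does with the data (decompression, a write to its own descriptor, etc.):

#include <stddef.h>

#include <event2/buffer.h>
#include <event2/http.h>

/* Hypothetical application hook: decompress, write to flash, whatever. */
extern void process_bytes(const unsigned char *data, size_t len);

/* Called by libevent each time more of the body arrives; the data sits in
 * the request's input buffer and is drained once this returns. */
static void
chunk_cb(struct evhttp_request *req, void *arg)
{
    struct evbuffer *in = evhttp_request_get_input_buffer(req);
    unsigned char buf[4096];

    (void)arg;
    while (evbuffer_get_length(in) > 0) {
        int n = evbuffer_remove(in, buf, sizeof(buf));
        if (n <= 0)
            break;
        process_bytes(buf, (size_t)n);
    }
}

/* Fires once at the end; every chunk has already gone through
 * process_bytes() by this point. */
static void
done_cb(struct evhttp_request *req, void *arg)
{
    (void)req; (void)arg;
}

/* Installation (client side only, which is exactly the limitation being
 * discussed):
 *
 *     struct evhttp_request *req = evhttp_request_new(done_cb, NULL);
 *     evhttp_request_set_chunked_cb(req, chunk_cb);
 *     evhttp_make_request(conn, req, EVHTTP_REQ_GET, "/stream");
 */

Nothing accumulates in memory or on disk; each chunk is handed off as it
arrives.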
I don't see the point in libevent implementing spooling to a file
(descriptor). If an application wants that, it can implement it on top of
the existing evhttp_request_set_chunked_cb plus the fairly small patches
being proposed (call the chunked cb more often, offer some sort of flow
control, make it possible for the server to set a chunked callback for
POST bodies). It's not possible to do the reverse in a satisfactory way:

* While technically you could open a pipe to the local process and handle
  chunked stuff in that fashion, it seems Rube Goldberg-esque compared to
  direct callbacks, so I imagine this isn't what you have in mind.

* fork()+exec() in the request path is slow (think CGI). Maybe this isn't
  significant for long requests, but if you also get any small requests
  it'd be problematic. Even ignoring performance, it's surprisingly hard
  to fork() correctly in threaded programs.

* Likewise, directly spooling to a file on disk is not satisfying, for
  several reasons:

  ** It doesn't solve the problem of failing on an infinite stream; it
     makes some larger requests possible, but for infinite ones it just
     changes the failure mode from running out of memory to running out
     of disk.

  ** It could as much as double the response time for proxy servers: they
     must receive the entire request and then send the entire request,
     instead of doing both simultaneously. It'd also break the downstream
     client's progress indicator.

  ** It's too slow to be done synchronously in the network thread (a
     single seek is ~10 ms under even ideal conditions, and I was just
     reading about cases at work where it could be more like a second),
     and there's no satisfactory async API for plain files, so it would
     require threading, which complicates things. Currently, I believe,
     libevent doesn't manage any thread pools; the application is
     required to do all that. And libevent can even be compiled without
     threading support, so this spooling would have to be conditional on
     threading.

  ** As previously mentioned, there are embedded cases where it's
     impossible because there is no disk.

I think libevent should provide the minimal API that satisfies all use
cases, and that's chunked callback + flow control.

> If it's a "per-chunk" callback you are asking for - this has been done,
> I wrote the patch months ago. This ability actually already exists
> within the current code. The issue was that the only way you could
> "enable" this feature was from a client -> server connection. But on
> the backend it uses the same function for chunks for both server and
> client..
>
> The solution was to add a secondary callback that gave you access to a
> request post-header-parsing / pre-body-read, at which time you could
> set the chunk callback previously accessible only to clients.

+1. (A sketch of how a server handler could use such a hook follows at
the end of this message.)

--
Scott Lamb <http://www.slamb.org/>
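For illustration, roughly what a server-side handler could do once that
post-header / pre-body hook exists. Only evhttp_request_set_chunked_cb,
evhttp_request_get_input_buffer and the evbuffer calls are existing
libevent API; the registration point is hypothetical, standing in for
whatever name the patch ends up exposing:

#include <event2/buffer.h>
#include <event2/http.h>

/* Per-chunk callback for an incoming POST body: consume each piece as it
 * arrives instead of letting the whole body pile up in memory. */
static void
body_chunk_cb(struct evhttp_request *req, void *arg)
{
    /* Same pattern as the client-side sketch earlier in the thread: pull
     * the bytes out of the input buffer and stream them onward. Draining
     * here is just a placeholder for the real consumer. */
    struct evbuffer *in = evhttp_request_get_input_buffer(req);

    (void)arg;
    evbuffer_drain(in, evbuffer_get_length(in));
}

/* The "post-header-parsing / pre-body-read" hook described above: the
 * headers are known, the body hasn't been read yet, so this is the moment
 * to install the chunk callback that is currently client-only. */
static void
headers_done_cb(struct evhttp_request *req, void *arg)
{
    (void)arg;
    evhttp_request_set_chunked_cb(req, body_chunk_cb);
}

/* How headers_done_cb gets registered depends on the patch -- something
 * along the lines of evhttp_set_header_done_cb(http, headers_done_cb,
 * NULL), but that name/signature is hypothetical, not current API. */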