I added tests, handled the infinite-receive case as well as the infinite-send case, and slightly renamed a few things.
https://github.com/clifffrey/Libevent/tree/http-transfer-throttling

I believe the changes there are enough to prevent out-of-memory conditions when sending or receiving huge HTTP requests. Part of me wonders whether this is the wrong way to go about the fix, and whether it would be better and more natural to make the HTTP layer act more like a filter, exposing a struct bufferevent interface to the incoming/outgoing data stream. However, that seems like it would turn into a much bigger rewrite, and this is a concrete fix for a specific problem, so the change still feels worthwhile to me.

>> Also, a completely different bug: if you want to support potentially
>> infinite POST streams from clients (imagine you wanted to implement
>> word-count as an HTTP server, where clients POST a document and you
>> return the word count), then clients can run your server out of
>> memory by sending one very large chunk. I think that the
>> evhttp_request_set_chunked_cb callback should be called on every
>> read, not just when a complete HTTP chunk has been read. I have made
>> a patch that does this, but I worry that some user out there may
>> depend on the only-read-complete-http-chunks behavior.
>
> Perhaps a flag of some kind could be set so that anybody depending on
> the old behavior can get it?

I did not add this flag because it feels like a strange flag to have, and I cannot imagine any code actually depending on the old behavior. If someone has a real-world example that this change would break, I will add the flag.

Let me know if any more test cases are necessary, or if you want me to make any other changes.

Cliff
***********************************************************************
To unsubscribe, send an e-mail to majord...@freehaven.net with
unsubscribe libevent-users    in the body.