Hi everyone

I'm trying to keep a hostile client from causing a libevent server to 
consume too much memory in buffered data.

Using bufferevents, I let libevent keep the input buffer within limits via 
its watermark mechanism, and I added simple logic in the server to stop 
draining the input buffer whenever the output buffer grows past a certain 
size (sketched below).
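
In outline, the pattern looks something like this (a minimal sketch, not the 
exact pastebin code; the names and limits here are mine):

    #include <event2/event.h>
    #include <event2/buffer.h>
    #include <event2/bufferevent.h>

    #define INPUT_HIGH_WM (16 * 1024)  /* libevent stops reading past this */
    #define OUTPUT_LIMIT  (64 * 1024)  /* our own cap on buffered output */

    static void read_cb(struct bufferevent *bev, void *ctx)
    {
        struct evbuffer *in  = bufferevent_get_input(bev);
        struct evbuffer *out = bufferevent_get_output(bev);

        /* Backpressure: if the output buffer is already large, stop
         * draining input; the read watermark keeps input bounded. */
        if (evbuffer_get_length(out) >= OUTPUT_LIMIT)
            return;

        /* Otherwise move input to output (echo-style server). */
        evbuffer_add_buffer(out, in);
    }

    static void write_cb(struct bufferevent *bev, void *ctx)
    {
        /* Output buffer drained to 0; pick up any input left behind. */
        read_cb(bev, ctx);
    }

    static void event_cb(struct bufferevent *bev, short what, void *ctx)
    {
        if (what & (BEV_EVENT_EOF | BEV_EVENT_ERROR))
            bufferevent_free(bev);
    }

    /* For each accepted connection's bufferevent: */
    static void setup_conn(struct bufferevent *bev)
    {
        bufferevent_setcb(bev, read_cb, write_cb, event_cb, NULL);
        bufferevent_setwatermark(bev, EV_READ, 0, INPUT_HIGH_WM);
        bufferevent_enable(bev, EV_READ | EV_WRITE);
    }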

The problem I'm observing with this pattern (and the several variations of it 
I've tried) is that at some point libevent stops calling the write callback 
set on the bufferevent.

I've simplified the server code to the bare essentials needed to demonstrate 
the problem; the source is posted here:
http://pastebin.com/CzpVRRAy

When I run the server and hammer it with a client from a second terminal, I see 
this:
$ ./server 
Server listening on port 6565
Client connected
[Fri Jun 24 20:08:26 2011] BEV IN CALLED.  BEV LENGTHS: IN [4096] OUT [0]
[Fri Jun 24 20:08:26 2011] BEV IN CALLED.  BEV LENGTHS: IN [4096] OUT [34816]
[Fri Jun 24 20:08:26 2011] BEV IN CALLED.  BEV LENGTHS: IN [8192] OUT [18432]
[Fri Jun 24 20:08:26 2011] BEV OUT CALLED.  BEV LENGTHS: IN [8192] OUT [0]
[Fri Jun 24 20:08:26 2011] BEV IN CALLED.  BEV LENGTHS: IN [8192] OUT [0]
[Fri Jun 24 20:08:26 2011] BEV IN CALLED.  BEV LENGTHS: IN [4096] OUT [69632]
[Fri Jun 24 20:08:26 2011] BEV IN CALLED.  BEV LENGTHS: IN [8192] OUT [53248]
[Fri Jun 24 20:08:26 2011] BEV OUT CALLED.  BEV LENGTHS: IN [8192] OUT [0]
[Fri Jun 24 20:08:26 2011] BEV IN CALLED.  BEV LENGTHS: IN [8192] OUT [0]
[Fri Jun 24 20:08:26 2011] BEV IN CALLED.  BEV LENGTHS: IN [4096] OUT [69632]
[Fri Jun 24 20:08:26 2011] BEV IN CALLED.  BEV LENGTHS: IN [8192] OUT [53248]
[Fri Jun 24 20:08:26 2011] BEV OUT CALLED.  BEV LENGTHS: IN [8192] OUT [0]
[Fri Jun 24 20:08:26 2011] BEV IN CALLED.  BEV LENGTHS: IN [8192] OUT [0]
[Fri Jun 24 20:08:26 2011] BEV IN CALLED.  BEV LENGTHS: IN [4096] OUT [69632]
[Fri Jun 24 20:08:26 2011] BEV IN CALLED.  BEV LENGTHS: IN [8192] OUT [53248]
[Fri Jun 24 20:08:26 2011] BEV OUT CALLED.  BEV LENGTHS: IN [8192] OUT [0]
[Fri Jun 24 20:08:26 2011] BEV IN CALLED.  BEV LENGTHS: IN [8192] OUT [0]
[Fri Jun 24 20:08:26 2011] BEV IN CALLED.  BEV LENGTHS: IN [4096] OUT [69632]
[Fri Jun 24 20:08:26 2011] BEV IN CALLED.  BEV LENGTHS: IN [8192] OUT [53248]

The server is apparently stuck at this point, and the expected bufferevent 
write callback (indicating the 53248 bytes of output have been drained to 0) 
never fires.
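
(For reference, each log line above is printed by a small helper along these 
lines; this is my paraphrase, not the literal pastebin code:)

    #include <stdio.h>
    #include <time.h>

    /* Called as log_lengths("IN", bev) / log_lengths("OUT", bev)
     * from the read and write callbacks. */
    static void log_lengths(const char *which, struct bufferevent *bev)
    {
        time_t now = time(NULL);
        char ts[26];

        ctime_r(&now, ts);
        ts[24] = '\0';  /* drop ctime's trailing newline */
        printf("[%s] BEV %s CALLED.  BEV LENGTHS: IN [%zu] OUT [%zu]\n",
               ts, which,
               evbuffer_get_length(bufferevent_get_input(bev)),
               evbuffer_get_length(bufferevent_get_output(bev)));
    }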

Under Mac OS X, I can see it's blocked in kqueue(); under Linux 2.6, in 
epoll_wait().

If you edit server.c and comment out the return; that implements the 
backpressure check, the buggy behavior goes away, but then the output buffer 
is unconstrained in size: the server logs its steadily increasing size, and 
the process consumes more and more resident memory.

I'd love any pointers on whether this is the proper way to keep a lid on the 
output buffer size, or whether there's a better way to do it. I'd also 
welcome thoughts on whether the behavior above is somehow expected, or 
indicates a bug I should file.
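
One alternative I've considered (but not tested much) is to disable reading 
outright instead of returning early, and re-enable it from the write callback 
once the output has drained; roughly, reusing OUTPUT_LIMIT from the sketch 
above:

    static void read_cb_alt(struct bufferevent *bev, void *ctx)
    {
        struct evbuffer *in  = bufferevent_get_input(bev);
        struct evbuffer *out = bufferevent_get_output(bev);

        evbuffer_add_buffer(out, in);

        /* Stop reading from the socket while output is too big. */
        if (evbuffer_get_length(out) >= OUTPUT_LIMIT)
            bufferevent_disable(bev, EV_READ);
    }

    static void write_cb_alt(struct bufferevent *bev, void *ctx)
    {
        /* Output drained; safe to accept input again. */
        bufferevent_enable(bev, EV_READ);
    }

I don't know whether that's any more correct, though.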

Thank you.