* Peter Wemm <[EMAIL PROTECTED]> [010211 05:30] wrote:
> 
> For what it's worth, we found (at Yahoo) that excessively large listen
> queues tend to cause more problems than they solve.  The circumstances are
> probably different, but we found that on one particular application, a
> queue of 10 was better than the queue of 1024 that they had been using.
> This particular application is probably quite different from yours, but we
> found that it was generally bad to accept more than about a second or two's
> worth of connections.  I.e., this particular group of systems was processing
> 7-8 connections per second, so a queue depth of 1024 was about 140 seconds.
> Most of them would time out when they waited that long (30 or 60 second
> protocol timeout) so when the machine was overloaded and backing up, it was
> being made worse by accepting all these connections, doing processing to get
> them in the listen queue, then timing out.  What we ended up with was a LOT
> of races where sockets would get to the head of the queue right as the
> remote was in the process of initiating a timeout, so we got large numbers
> of 'connection reset by peer' type problems being reported by accept and
> getsockname()/getpeername() etc.  It was also bad because the userland app
> then wasted time processing a dying connection, thus contributing further
> to the overload.
> 
> Anyway, just be careful, ok?  Larger listen queues are not a magic solution
> for all problems.  At 100 connections per second, the current limit is about
> 327 seconds' worth of delay.  At 500 per second, it is 65 seconds of delay.

I'm a bit past 500 connections per second. :)

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-net" in the body of the message
