On Wed, 4 Jan 2012, Chuck Swiger wrote:

On Jan 4, 2012, at 12:22 PM, Dan The Man wrote:
Trying to stress test a framework here that tosses 100k connections into a
listen queue before doing anything. I realize I'll have to use multiple local
IPs to get around port limitations, but why does this backlog have a limit?

Even a backlog of 1000 is large compared to the default listen queue size of
around 50 or 128.  And if you can drain 1000 connections per second, a 65K 
backlog is big enough that plenty of clients (I'm thinking web-browsers here in 
particular) will have given up and maybe retried rather than waiting for 60+ 
seconds just to exchange data.


For web browsers that makes sense, but if you're coding your own server application it's only a matter of increasing the read and write timeouts to fill the queue that high and still process them. Of course you wouldn't need anything that high normally, but for benchmarking (see how much you can toss into that listen queue, then write something to each socket after the connection is established, and see how fast the application can finish them all) I think it's relevant.
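
A rough sketch of the client side of that kind of test, in case it helps; the
address, port and connection count are just placeholders (nothing from this
thread), and a real run would also spread source IPs as mentioned above:

/* Open a pile of TCP connections and just hold them, so they sit in the
 * server's accept queue.  NCONN, 127.0.0.1 and port 8080 are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define NCONN 100000    /* placeholder: how many connections to pile up */

int main(void)
{
    struct sockaddr_in sin;
    int i, n = 0;

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(8080);                 /* placeholder port */
    inet_pton(AF_INET, "127.0.0.1", &sin.sin_addr);

    for (i = 0; i < NCONN; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            break;              /* most likely out of descriptors */
        if (connect(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
            close(fd);
            break;              /* queue full, refused, or out of ports */
        }
        n++;                    /* keep the fd open and do nothing with it */
    }
    printf("established %d connections\n", n);
    pause();                    /* park here so the connections stay queued */
    return 0;
}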

On this Linux box I have no issues:
cappy:~# /sbin/sysctl -w net.core.somaxconn=200000
net.core.somaxconn = 200000
cappy:~# sysctl -w net.ipv4.tcp_max_syn_backlog=200000
net.ipv4.tcp_max_syn_backlog = 200000
cappy:~#
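
Worth noting that the backlog an application passes to listen(2) is silently
clamped to that somaxconn cap (net.core.somaxconn on Linux, kern.ipc.somaxconn
on FreeBSD), so raising the sysctl is what actually lets a big backlog take
effect. Rough server-side sketch, port number again just a placeholder:

/* Ask for a huge backlog; without the sysctl bumped, the kernel quietly
 * clamps it to somaxconn (~128 by default), which is the limit being
 * discussed above.  Port 8080 is a placeholder. */
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in sin;
    int one = 1;
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    sin.sin_port = htons(8080);

    if (bind(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
        perror("bind");
        return 1;
    }
    if (listen(fd, 200000) < 0) {   /* kernel uses min(200000, somaxconn) */
        perror("listen");
        return 1;
    }
    printf("listening and deliberately not accepting\n");
    pause();            /* never accept(), so clients stack up in the queue */
    return 0;
}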



Dan.

--
Dan The Man
CTO/ Senior System Administrator
Websites, Domains and Everything else
http://www.SunSaturn.com
Email: d...@sunsaturn.com

_______________________________________________
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"
