On Wed, May 09, 2007 at 06:41:27PM -0400, Daniel Ouellet wrote:
> Hi,
> 
> I am passing my findings around for the configuration of sysctl.conf to 
> remove a bottleneck I found in httpd, as I couldn't get more than 300 httpd 
> processes without crapping out badly; above that, the server simply got 
> out of whack.
> 

<SNIP>

> ===========================
> sysctl.conf changes.
> 
> kern.seminfo.semmni=1024
> kern.seminfo.semmns=4096
> kern.shminfo.shmall=16384
> kern.maxclusters=12000

What does netstat -m tell you about the peak usage of clusters? Is it
really that high?

> kern.maxproc=2048               # Increase for the process limits.
> kern.maxfiles=5000
> kern.shminfo.shmmax=67108864

> kern.somaxconn=2048

Is httpd really so slow in accepting sockets that you had to increase this
by a factor of 16? Is httpd actually doing a listen with such a large
backlog?

> net.bpf.bufsize=524288

As tedu@ pointed out, this has nothing to do with your setup.

> net.inet.ip.maxqueue=1278

Are you sure you need to tune the IP fragment queue? You are using TCP,
which does PMTU discovery and sets the DF flag by default, so no IP
fragments should be seen at all unless you borked something else.

> net.inet.ip.portfirst=32768
> net.inet.ip.redirect=0

This has no effect unless you enable forwarding.

> net.inet.tcp.keepinittime=10
> net.inet.tcp.keepidle=30
> net.inet.tcp.keepintvl=30

These values are super aggressive; the keepidle and keepintvl values in
particular are doubtful for your test. Is your benchmark using
SO_KEEPALIVE? I doubt it, so these two values have no effect and are
actually counterproductive (you are sending more packets for idle
sessions).

> net.inet.tcp.mssdflt=1452

This is another knob that should not be changed unless you really know
what you are doing. The mss calculation uses this value as a safe default
that is always accepted. Pushing it up to this value may have unpleasant
side effects for people behind IPsec tunnels. The mss used is the maximum
of mssdflt and the MTU of the route to the host minus the IP and TCP
headers.

> net.inet.tcp.recvspace=65535
> net.inet.tcp.sendspace=65535
> net.inet.tcp.rstppslimit=400

> net.inet.tcp.synbucketlimit=420
> net.inet.tcp.syncachelimit=20510

If you need to tune the syncache in such extreme ways you should consider
adjusting TCP_SYN_HASH_SIZE and leaving synbucketlimit as is. The
synbucketlimit is there to limit attacks on the hash list by overloading
the bucket list. On your system it may be necessary to traverse 420 nodes
on a lookup. Honestly, the syncachelimit and synbucketlimit knobs are
totally useless. If anything we should allow resizing the hash and
calculate both limits from there.

-- 
:wq Claudio
