On 2010-06-14, Pete Vickers <p...@systemnet.no> wrote:
> I expect I'll get shot down for this, but this is what I run (after a good
> deal of trial & error) on my squid boxes ( ~700 users). YMMV.

these are fairly sensible settings, and not just a blind copy-and-paste of
my least favourite page on calomel.org, so I won't shoot this down (:

> net.inet.ip.ifq.maxlen=512

if you see an increase in net.inet.ip.ifq.drops you probably want to
increase this value; otherwise it's ok to leave it alone.
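
a rough way to keep an eye on it, for example (the output here is made
up, not from a real box):

    # input queue drops; a number that keeps growing under load
    # suggests the queue is too short
    $ sysctl net.inet.ip.ifq.drops
    net.inet.ip.ifq.drops=0

    # if it does keep growing, bump the queue length and recheck, e.g.
    $ sysctl net.inet.ip.ifq.maxlen=512
    # (put the final value in /etc/sysctl.conf so it survives a reboot)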

> net.inet.tcp.recvspace=262144
> net.inet.tcp.sendspace=262144
> net.inet.udp.recvspace=262144
> net.inet.udp.sendspace=262144

udp probably isn't necessary for this; tcp recvspace probably is (though
if you're busy, maybe this needs less than the full 256KB).

the ideal setting for tcp sendspace depends on how distant the proxy's
clients are (if it's all LAN clients this probably won't need much of an
increase from the defaults, but with more distant clients, increasing it
according to the expected bandwidth*delay product can be helpful).

likewise for recvspace and the servers you're fetching from (though
this is usually less predictable than clients).
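
as a rough illustration of the bandwidth*delay product (the numbers are
only an example, not a recommendation): for a 100 Mbit/s path with a
50 ms round-trip time,

    (100,000,000 bits/s / 8) * 0.050 s = 625,000 bytes  (~610 KB)

so a buffer much smaller than that can't keep such a path full, and a
buffer much larger than it is mostly wasted memory.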

> kern.maxfiles=8192
>         :openfiles=5000:\

maybe tweak according to what's actually used (see kern.nfiles / fstat).
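
a quick way to see how close you actually get to the limits (the _squid
user here is an assumption; substitute whatever user your squid runs as):

    # system-wide open files vs. the configured ceiling
    $ sysctl kern.nfiles kern.maxfiles

    # open descriptors held by the proxy itself
    $ fstat -u _squid | wc -l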

> kern.maxclusters=8192

compare max and peak in netstat -m with the default value.
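
you're looking for a line something like this (exact output format
varies by release, and these numbers are made up):

    $ netstat -m
    ...
    1306/2504/8192 mbuf 2048 byte clusters in use (current/peak/max)

if peak sits close to max under normal load, raising kern.maxclusters is
worth considering; if it never gets anywhere near the default, there's
little point.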

tcp.sendspace and especially tcp.recvspace likely have the biggest
effect out of these settings, but I think they're also probably the most
dangerous on a very busy system, because they're scaled by the number
of connections.
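
to put a purely illustrative number on that: if 2,000 connections all
fill a 256KB receive buffer at once, that's

    256 KB * 2,000 = 512,000 KB  (~500 MB)

of kernel memory in socket buffers alone, on top of everything else the
kernel needs.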

the other settings may help some, depending on the system.

increasing any of these will increase kernel memory use; use too much
and you crash, so it makes sense to only increase where needed.
