I like to run stupid benchmarks (http_load) and ran into the same problem
lots of other people have complained about on the lists - "no buffer space
available" and similar errors on the machine generating the requests.

I wanted to let http_load run for a long time so that the number of
Apache processes stabilizes under high load.

But no matter what I set for kern.maxusers,
net.inet.tcp.{send,recv}space, kern.ipc.somaxconn, kern.ipc.maxsockets,
kern.ipc.nmbclusters and kern.maxfiles, the client always starts to fail
once net.inet.tcp.pcbcount reaches roughly 4000 (close to the number of
sockets seen with 'netstat -anf inet'). The output of 'netstat -m' shows
nothing anywhere near mbuf exhaustion.
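
Something like the following quick C sketch (untested, just illustrative;
the OID names are the ones mentioned above and may not all exist on every
branch) can be used to watch the counters with sysctlbyname(3) while
http_load runs:

/* Poll the sysctls discussed above while the benchmark runs. */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

static void
show(const char *oid)
{
	int val;
	size_t len = sizeof(val);

	/* Read one integer-valued OID; report if it is missing. */
	if (sysctlbyname(oid, &val, &len, NULL, 0) == -1)
		printf("%-26s <error: %s>\n", oid, strerror(errno));
	else
		printf("%-26s %d\n", oid, val);
}

int
main(void)
{
	/* OID names taken from the discussion above. */
	show("net.inet.tcp.pcbcount");
	show("kern.ipc.maxsockets");
	show("kern.ipc.somaxconn");
	show("kern.ipc.nmbclusters");
	show("kern.maxfiles");
	return (0);
}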

My main observation:
From the names of the OIDs I thought the upper limit on net.inet.tcp.pcbcount
could somehow be controlled by kern.ipc.maxsockets. That seems to be true
on CURRENT but not on STABLE.
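
A rough way to check that relationship (again only a sketch, assuming both
OIDs exist on the branch in question and are plain integers) is to compare
the two values directly and see whether the client fails while there is
still plenty of headroom below kern.ipc.maxsockets:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

static int
get(const char *oid)
{
	int v = -1;
	size_t len = sizeof(v);

	/* Errors left as -1; good enough for a quick comparison. */
	(void)sysctlbyname(oid, &v, &len, NULL, 0);
	return (v);
}

int
main(void)
{
	int pcbs = get("net.inet.tcp.pcbcount");
	int maxsock = get("kern.ipc.maxsockets");

	printf("tcp pcbs in use: %d, kern.ipc.maxsockets: %d\n", pcbs, maxsock);
	if (maxsock > 0 && pcbs >= 0 && pcbs < maxsock / 2)
		printf("failing well below maxsockets - some other limit applies\n");
	return (0);
}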

I understand that it's not very common to have more than ~4000 sockets,
but I think it should be possible. It may well be the cause of other
people's failures too, I guess.


-- 
Michal Mertl
[EMAIL PROTECTED]