On January 4, 2005 11:18 am, Sean Chittenden wrote:
> >   PID USERNAME PRI NICE   SIZE    RES STATE  C   TIME   WCPU    CPU COMMAND
> >   474 squid     96    0 68276K 62480K select 0  53:38 16.80% 16.80% squid
> >   311 bind      20    0 10628K  6016K kserel 0  12:28  0.00%  0.00% named
> >
> > It's actually so good that one machine can now handle all traffic
> > (around 180 Mb/s) at < 50% cpu utilization.  Seems like something in
> > the network stack is responsible for the high %system cpu util...
>
> OH!!!!  Wow, I should've noticed that earlier.  Your hint about someone
> on Linux having the same problem tipped me off to look at the process
> state again.  Anyway, you nearly answered your own question, save you
> probably aren't familiar with select(2)'s lack of scalability.  Read
> these:
>
>     http://www.kegel.com/c10k.html
>
> Specifically the four methods mentioned here:
>
>     http://www.kegel.com/c10k.html#nb.select
>
> Then look at the benchmarks done using libevent(3):
>
>     http://www.monkey.org/~provos/libevent/
>
> Dime to dollar you're spending all of your time copying file descriptor
> arrays in and out of the kernel because squid uses select(2) instead of
> kqueue(2).  Might be an interesting project for someone to take up to
> convert that to kqueue(2).  Until then, any local TCP load balancer
> that uses kqueue(2) would also solve your problem (I'm not aware of any
> off the top of my head... pound(8) does, but it is only used for HTTP
> and is not a reverse proxy) and would likely prevent you from having
> your problems.  -sc
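For readers unfamiliar with the difference Sean is describing, here is a
minimal sketch of a kqueue(2)-based accept/read loop.  It is not squid's
code; the names (event_loop, listen_fd) are illustrative only, and
listen_fd is assumed to be an already-bound, listening TCP socket.  The
point is that the interest set lives in the kernel and each descriptor is
registered once, whereas select(2) copies the full fd_set across the
user/kernel boundary on every call.

    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>
    #include <sys/socket.h>
    #include <err.h>
    #include <unistd.h>

    void
    event_loop(int listen_fd)
    {
            struct kevent change, events[64];
            int kq;

            if ((kq = kqueue()) == -1)
                    err(1, "kqueue");

            /* Register the listening socket once.  With select(2) the
             * whole fd_set has to be rebuilt and copied into the kernel
             * on every call; with kqueue(2) the kernel keeps the list. */
            EV_SET(&change, listen_fd, EVFILT_READ, EV_ADD, 0, 0, NULL);
            if (kevent(kq, &change, 1, NULL, 0, NULL) == -1)
                    err(1, "kevent: register listen_fd");

            for (;;) {
                    /* Only descriptors that are actually ready return. */
                    int i, n = kevent(kq, NULL, 0, events, 64, NULL);

                    if (n == -1)
                            err(1, "kevent: wait");

                    for (i = 0; i < n; i++) {
                            int fd = (int)events[i].ident;

                            if (fd == listen_fd) {
                                    int client = accept(listen_fd, NULL, NULL);

                                    if (client == -1)
                                            continue;
                                    /* New connection: register it once. */
                                    EV_SET(&change, client, EVFILT_READ,
                                        EV_ADD, 0, 0, NULL);
                                    (void)kevent(kq, &change, 1, NULL, 0, NULL);
                            } else {
                                    char buf[4096];
                                    ssize_t len = read(fd, buf, sizeof(buf));

                                    if (len <= 0)
                                            close(fd); /* drops it from kq */
                                    /* else: hand buf to the proxy logic */
                            }
                    }
            }
    }

A real proxy's event loop also has to track write readiness and timeouts,
but the kernel-side interest list is where the per-call fd-copying cost
that shows up as %system time goes away.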
squid-dev has kqueue support already:

    ./configure --disable-select --disable-poll --enable-kqueue

--
Darcy Buskermolen
Wavefire Technologies Corp.
ph: 250.717.0200
fx: 250.763.1759
http://www.wavefire.com