I was running a stress test on a site that I run called
StupidCensorship.com, which frequently slows to a crawl under high
traffic. Using "ab" to send 1,000 concurrent requests to the site,
I found that the number of running
instances of /usr/sbin/httpd would rise from its initial default number of
22 up to 258, and then hold steady at 258. While the number was between
22 and 258, site performance was still OK, but once it hit 258, response
times got a lot slower. I'm guessing this has something to do with
the fact that while the number is climbing, the machine can just spawn a
new instance of httpd to handle the request, but once it hits the maximum
(due to hardware limits, I guess), new requests just get queued.
Do these symptoms suggest any obvious way to improve performance, besides
getting more RAM? (And even more RAM would, I assume, only raise the limit
of "httpd" instances that could run, but it would still plateau once it hit
that limit.)
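(For what it's worth, a plateau at 258 is suspiciously close to 256, the classic MaxClients / HARD_SERVER_LIMIT default for the prefork model, so the ceiling may be a config setting rather than a raw hardware limit. If the box has free RAM, raising MaxClients (plus ServerLimit on Apache 2.x) lifts the ceiling; if it is already swapping, lowering it can actually help, since a briefly queued request is served faster than one fighting for swapped-out memory. A sketch of the relevant prefork section, with example values that are assumptions to be tuned against your machine's RAM, not a recommendation:

```apache
# httpd.conf -- prefork tuning sketch; all numbers are illustrative
<IfModule prefork.c>
    StartServers          22
    MinSpareServers       10
    MaxSpareServers       30
    ServerLimit          256    # hard ceiling on children (Apache 2.x)
    MaxClients           256    # simultaneous children; size to fit in RAM
    MaxRequestsPerChild 4000    # recycle children to limit memory creep
</IfModule>
```

A rough sizing rule: MaxClients ~= (RAM you can give Apache) / (resident size of one httpd child, as reported by ps or top).)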
One possibility: I noticed that even after the stress test was over, the
number of running 'httpd' instances would fall very slowly, about one per
second, until it got back down to 22. I thought they were keeping the
connection open, but my httpd.conf has KeepAlive set to Off. If I could
somehow get the httpd instances to exit as soon as they finished serving,
instead of hanging around, would that solve the performance problem without
any negative side effects?
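(On that one-per-second fall-off: as far as I understand prefork, that decay is deliberate, not lingering connections. Once more than MaxSpareServers children sit idle, the parent's once-a-second maintenance pass kills at most one idle child per second, precisely to avoid fork/kill churn. Making children exit the moment they finish would just force a fresh fork on the next request, which is the expensive part the spare pool exists to avoid. Lowering MaxSpareServers shrinks the idle pool left over after a spike, at the cost of more forking under bursty load; an illustrative fragment, with values that are assumptions:

```apache
# Keep a smaller idle pool after traffic spikes subside
<IfModule prefork.c>
    MinSpareServers   5
    MaxSpareServers  10    # parent reaps ~1 idle child per second above this
</IfModule>
```
)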
-Bennett
[EMAIL PROTECTED] http://www.peacefire.org
(425) 497 9002
---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
" from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]