On 11/3/2010 6:32 PM, da...@lang.hm wrote:
> On Wed, 3 Nov 2010, Paul Graydon wrote:
>
>> I'm facing an interesting challenge at the moment.  Our Apache httpd
>> based load balancers are starting to show signs of strain.  Nothing too
>> bad yet, but a good indicator that as the amount of traffic to our sites
>> increases there will come a point when they can't cope.  I've been
>> expecting this, but as a "standalone" sysadmin I've got too much on my
>> plate to get to anything proactive that requires more than a few hours'
>> work... with inevitable consequences, though I'm making favourable
>> progress.  Load is now reaching a stage where it's spawning enough httpd
>> processes to be of some concern, and at a level that seems to be
>> resulting in latency for requests.
>
> a couple quick comments
>
> 1. Nginx is single-threaded, so while it's screaming fast, it won't 
> use more than one core.

Hmm... given we use two IP addresses for what I'm assuming are historical
reasons (one virtually does nothing), I suppose I could do an ugly manual
load balance and run two instances of nginx, one per IP, but that's not so
ideal!
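
A rough sketch of what that ugly version might look like, assuming nginx
sitting in front of the web servers as a reverse proxy (the IPs, paths and
backend addresses below are placeholders, not our real setup):

  # hypothetical /etc/nginx/nginx-a.conf, started with: nginx -c /etc/nginx/nginx-a.conf
  worker_processes  1;
  pid        /var/run/nginx-a.pid;
  error_log  /var/log/nginx/error-a.log;

  events {
      worker_connections  1024;
  }

  http {
      upstream backend {
          server 10.0.0.11:80;   # placeholder backend web servers
          server 10.0.0.12:80;
      }
      server {
          listen 192.0.2.10:80;  # first of the two public IPs (placeholder)
          location / {
              proxy_pass http://backend;
          }
      }
  }

The second instance would be the same file with the other IP and its own
pid/log paths.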

>
> 2. If you are doing a lot of SSL operations, consider adding an SSL
> accelerator card; that can effectively eliminate the overhead of SSL.
>
> 3. how tuned is your apache instance?
>
> I've seen 10x performance improvements by doing things like compiling
> the modules I need in (instead of using .so modules) and not loading
> any modules that I don't need. Combined with newer hardware (two
> sockets will get you 12 cores nowadays), you could easily scale quite a
> bit from your existing capabilities without having to take the risk of
> changing technologies.

At the moment it's using stock CentOS packages.  I was hoping to avoid
compiling from source, but if that's the best bet and will have that kind
of an impact, it'll be worth the trade-off.
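
If I go that route, I'm picturing something along these lines (flags from
memory, so worth checking against ./configure --help for whichever httpd
version I grab; the module list is only an example of "just what the load
balancer needs"):

  ./configure --prefix=/usr/local/httpd \
      --with-mpm=prefork \
      --enable-ssl \
      --enable-proxy --enable-proxy-http --enable-proxy-balancer \
      --enable-rewrite --enable-headers
  make && make install

Anything I really don't want can presumably be dropped with the matching
--disable-<module> options.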

> Apache is pretty inefficient in how it logs; try logging to a ramdisk
> and see if that makes any difference.
Hmm... I'd really rather not run any risk of losing logs, but one of the
logs could probably go that way.
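
Something like this is what I had in mind for the one that could go that
way, completely untested (paths and sizes are just examples):

  mkdir -p /var/log/httpd/ram
  mount -t tmpfs -o size=256m tmpfs /var/log/httpd/ram

  # in httpd.conf, point the busy access log at the tmpfs:
  #   CustomLog /var/log/httpd/ram/access_log combined
  # with a cron job every few minutes copying/rotating it back to persistent disk

mod_log_config's "BufferedLogs On" might be a gentler middle ground, though
as far as I know it's still marked experimental.
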
>
> check what you have set for your SSL session cache; if it's not in
> shared memory, move it there (the overhead of filesystem operations
> for a disk-backed cache, even if you almost always operate out of RAM
> disk buffers, can be noticeable at high traffic levels).
Hmm.. /var/cache/mod_ssl .  That's definitely something that can be 
easily moved.  Thanks.
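
For reference, the shared-memory form would be something along these lines
(size just an example; I need to check whether the stock CentOS ssl.conf is
already using shmcb or a disk-backed dbm cache):

  SSLSessionCache         shmcb:/var/cache/mod_ssl/scache(512000)
  SSLSessionCacheTimeout  300
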
>
> Definitely measure where your latency is happening. It could be that
> Apache is the problem, but it could also be that you are running into
> something else.
>
> how many processes are you seeing that are making you concerned?

I couldn't give you a solid figure, but based on memory usage at the time
compared to now I'd guesstimate 120+, and I swear we're not doing that much
traffic.  I've added that to Zabbix so I'll have a better idea tomorrow.
Even now, during what is a quiet time for us, I'm bouncing between 50 and
80 processes, with the following tuning:

StartServers       8
MinSpareServers    5
MaxSpareServers   20
ServerLimit      256
MaxClients       512
MaxRequestsPerChild  20000
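
(If I remember right, prefork caps MaxClients at ServerLimit anyway, so that
512 is effectively 256; something else to tidy up.)  For the process count
and for seeing where the latency actually lands, I'm thinking of something
along these lines, not in place yet:

  # process count, e.g. as the Zabbix item:
  ps -C httpd --no-headers | wc -l

  # in httpd.conf, %D logs the time taken to serve each request, in microseconds:
  LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
  CustomLog logs/access_log_timed timed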


Paul
