Hey there

It strikes me that your problem is directly related to memory usage. Linux has a nasty quirk here: if your apache children get swapped out, *for any reason at all*, they lose whatever memory they had been sharing with the parent. When they get swapped back in they take up more RAM than before, so the system swaps out another Apache process, and so on, until the machine enters the "downward spiral of death".

You need to memory-limit your apache processes using Apache::SizeLimit to make sure that they NEVER go over a certain size. Then you have to make sure that

MaxClients * Apache::SizeLimit < Available core memory

which will make you miserable because you have MySQL running on the same machine. I would recommend moving MySQL off if you can, which will greatly simplify your memory calculations. If you only have 768MB on the machine, you need to subtract the OS's basic footprint and anything else that's running, subtract a bit more for breathing room, and then divide what's left by the Apache::SizeLimit cap that you have found (through experimentation and observation) to be the stable size of your apache children. That gives you your MaxClients number.
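
To make that concrete, here is roughly what the Apache::SizeLimit setup looks like under mod_perl 1.x. This is just a sketch - the 12000KB cap is a made-up starting point, and you should check the POD of the Apache::SizeLimit version you actually have installed for the exact handler phase it wants (some releases document a cleanup handler rather than a fixup handler):

  # startup.pl -- sizes are in KB
  use Apache::SizeLimit ();
  $Apache::SizeLimit::MAX_PROCESS_SIZE = 12000;   # kill any child that grows past ~12MB

  # httpd.conf -- run the size check on every request
  PerlFixupHandler Apache::SizeLimit

With numbers picked purely for illustration: 640MB usable, minus maybe 150MB for MySQL, the OS and the other daemons, leaves ~490MB; at a 12MB cap per child that works out to a MaxClients of about 40.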

Then you pre-load all of your perl modules and as many of your perl scripts as possible, and you should be fine. I have seen this memory spiral in almost every mod_perl (<1.99, admittedly) environment I have ever worked on. It takes some tweaking, but once you find the magic numbers it won't give you any more problems.
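
In case it's useful, a minimal pre-loading sketch (module names and paths here are just examples - substitute whatever your site actually uses):

  # httpd.conf
  PerlRequire /etc/httpd/conf/startup.pl

  # startup.pl -- load the heavy modules in the parent so the children share those pages
  use DBI ();
  use CGI ();
  CGI->compile(':all');    # CGI.pm compiles its methods lazily unless you force it here

  # pre-compile Apache::Registry scripts as well (URI => filename)
  use Apache::RegistryLoader ();
  my $rl = Apache::RegistryLoader->new;
  $rl->handler("/perl/login.pl", "/home/httpd/perl/login.pl");

  1;   # PerlRequire'd files must return true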

Also, I don't know if this is an option for you, but switching to FreeBSD made our lives much, much easier, almost any way you look at it. Food for thought.

I also highly recommend the idea of having two separate servers: a tiny, lightweight apache that handles static content and proxies dynamic requests to a heavier mod_perl apache. You can save huge amounts of memory that way, and it tends to increase performance significantly. It's also incredibly easy to set up.
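
If you want to try it, the front/back split is basically just mod_proxy on the thin server; the port and paths below are arbitrary examples:

  # frontend httpd.conf (small Apache, no mod_perl): serves static files itself
  # and proxies anything under /perl/ to the mod_perl backend
  ProxyPass        /perl/ http://127.0.0.1:8080/perl/
  ProxyPassReverse /perl/ http://127.0.0.1:8080/perl/

  # backend (mod_perl) httpd.conf: listen only on localhost, keep MaxClients small
  Listen 127.0.0.1:8080

That way the big mod_perl children aren't tied up serving images and other static files.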

Hope this helps!

Kyle
Central Park Software


We have recently added a site to our server which hosts 30 other sites.
The new site uses mod_perl & MySQL. After adding the new site, the server
we were on (shared server at a national server farm) had major resource
problems (too many connections & load avg up to 30 not infrequently) &
eventually crashed. We moved to a new dedicated server which gave us more
resources, but we're still experiencing intermittent sluggishness - load
avg approaching 10 - and this is before this new site really becomes
active. Maybe 20 logins per day currently - in 10 days we'll have ~ 4000.


I've made changes to Apache config, following suggestions in the mod_perl
performance tuning docs:
MinSpareServers 5
MaxSpareServers 10
MaxClients 75
MaxRequestsPerChild 500


I've made efforts to optimize the database & queries, and have made
changes to the my.cnf file following suggestions in the High Performance
MySQL O'Reilly book & a couple of postings that seemed to have some
similarity to our situation:
set-variable = key_buffer_size=128M
set-variable = table_cache=1024
set-variable = join_buffer=1M
set-variable = sort_buffer=2M
set-variable = record_buffer=1M
set-variable = wait_timeout=20
set-variable = thread_cache=8


But before I go any further with testing & modifying configurations &
perhaps code, I'd like some confidence that the server we're currently
running on should be able to handle this new site & the 30 others - 2
run WebGUI, 12 use MySQL (4-5 of them fairly heavily), and 4 use mod_perl.


The server is a Pentium 3 800 MHz with 768M RAM (of which we can use ~640,
according to the server sysadmin).


It's running Ensim 4.0 on Fedora Core 1, with Embedded Perl v5.8.1 for
Apache/1.3.31 (Unix) (Red-Hat/Linux), mod_perl/1.29, PHP/4.3.8,
FrontPage/5.0.2.2635, mod_ssl/2.8.18, OpenSSL/0.9.7a, and
MySQL 3.23.58.

Any constructive suggestions are very welcome.

Thanks.

Sys



