On 06.03.2013 22:30, Alan McKinnon wrote:
> On 06/03/2013 23:22, Michael Mol wrote:
>> On 03/06/2013 04:07 PM, Alan McKinnon wrote:
>>> On 06/03/2013 22:59, Michael Mol wrote:
>>>> On 03/06/2013 03:54 PM, Grant wrote:
>>>>> I lowered my MaxClients setting in apache a long time ago after
>>>>> running out of memory a couple times.  I recently optimized my
>>>>> website's code and sped the site way up, and now I find myself
>>>>> periodically up against MaxClients.  Is a RAM upgrade the only
>>>>> practical way to solve this sort of problem?
>>>>
>>>> Use a reverse proxy in caching mode.
>>>>
>>>> A request served up by the proxy server is a request not served up by
>>>> Apache.
>>>>
>>>> Squid, nginx and varnish are all decent for the purpose, though squid
>>>> and nginx are probably more polished than varnish.
>>>>
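To make the reverse-proxy suggestion concrete, a minimal nginx caching
proxy in front of Apache might look roughly like this (a sketch only:
the port, cache path, zone name and server name are all assumptions,
not anything from this thread):

```nginx
# Assumed layout: Apache moved to 127.0.0.1:8080, nginx listening on :80.
http {
    # Cache store: 10 MB of keys, up to 512 MB of cached bodies.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=site:10m
                     max_size=512m inactive=60m;

    server {
        listen 80;
        server_name example.com;   # placeholder name

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_cache site;
            proxy_cache_valid 200 301 10m;   # cache successful responses
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
```

Every request nginx answers from /var/cache/nginx never reaches an
Apache child, which is exactly what keeps you under MaxClients.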
>>>
>>> Grant,
>>>
>>> If you optimized the site well, I would imagine your RAM needs per page
>>> request would go down and you could possibly increase MaxClients again.
>>> Have you given it a try since the optimization? Increase it slowly, bit
>>> by bit, comparing the current performance with what it used to be, and
>>> make your judgement call.
>>>
>>> Is there some reason why you can't just add more memory to the server?
>>> It's a fast and very cheap and very effective performance booster with
>>> very little downtime. But if your slots are full and you need new
>>> hardware, that's a different story.
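A common rule of thumb for sizing MaxClients against available RAM
(the figures below are hypothetical placeholders, not Grant's actual
numbers) is:

```python
# MaxClients rule of thumb (all numbers are made-up examples):
# MaxClients ~= (total RAM - RAM reserved for the OS and other daemons)
#               / average size of one Apache child
total_ram_mb = 2048       # server RAM
reserved_mb = 512         # OS, MySQL, cron jobs, etc.
apache_process_mb = 30    # average RSS of one mod_php child (check with ps/top)

max_clients = (total_ram_mb - reserved_mb) // apache_process_mb
print(max_clients)  # -> 51
```

Measure the real per-child RSS under load before trusting the result;
mod_php children vary a lot depending on what the scripts do.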
>>>
>>> Michael's proxy suggestion is excellent too - I use nginx for this a
>>> lot. It's amazingly easy to set up, a complete breath of fresh air after
>>> the gigantic do-all beast that is apache. Performance depends a lot on
>>> what your sites actually do: if every page is dynamic with changing
>>> content then a reverse proxy doesn't help much. Only you know what your
>>> page content is like.
>>
>> The thing to remember is that clients request a *lot* of static content,
>> too. CSS styles, small images, large images...these cache very well, and
>> (IME) represent the bulk of the request numbers.
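For that static content, long client-side cache lifetimes can be set in
Apache itself with mod_expires, so browsers stop re-requesting it at
all (a sketch, assuming the module is enabled; the types and lifetimes
are illustrative):

```apache
# Requires mod_expires to be loaded; adjust lifetimes to taste.
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType text/css               "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
    ExpiresByType image/png              "access plus 1 month"
    ExpiresByType image/jpeg             "access plus 1 month"
</IfModule>
```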
> 
> <bang head>
> Yes, of course. You are perfectly correct, I forgot all about that
> "invisible" stuff in the background
> </bang head>
> 
> 
> 
>>
>> Unfortunately, with the way mod_php and friends work with Apache,
>> resources consumed by static file requests aren't trivial once you
>> realize that the big problem is with the number of concurrent
>> requests...so it's best if those can be snapped up by something else, first.
>>
>> I've been running squid in front of my server for a few years. I've been
>> eyeing CloudFlare, though; they're a CDN that behaves like a reverse
>> proxy. You point their system at your server, your DNS at their system,
>> and they'll do the heavy lifting for you. (And far better than having
>> your own singular caching server would. I've worked at a CDN, and what
>> they accomplish is pretty slick.)
>>
>>
>>
> 
> 

To optimize the caching potential, there are a few tricks. There's an
older tech talk about that from a Yahoo guy [1]. Google's advice is
also worth reading [2], and for a quick and dirty solution, look at [3].

[1] https://www.youtube.com/watch?v=BTHvs3V8DBA
[2] https://developers.google.com/speed/docs/best-practices/caching?hl=de
[3] http://fennb.com/microcaching-speed-your-app-up-250x-with-no-n
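The microcaching trick in [3] boils down to caching even dynamic
responses for about a second; in nginx terms that is roughly (a sketch,
with the zone name, backend address and timings all assumed):

```nginx
# Cache dynamic pages for just 1 second: under heavy load most requests
# hit the cache, yet content is never more than ~1 s stale.
proxy_cache_path /var/cache/nginx/micro keys_zone=micro:10m;

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;   # assumed Apache backend
        proxy_cache micro;
        proxy_cache_valid 200 1s;           # the "micro" part
        proxy_cache_use_stale updating;     # serve stale while refreshing
    }
}
```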

BTW: What's the current status of MPM Worker or Event and PHP? Does it
work? Does it help?

Hope this helps,
Florian Philipp

