Tim Starling wrote:
> Rasmus Lerdorf wrote:
>> For me, working in super high-load environments, this was never an issue
>> because memory was always way more plentiful than cpu.  You can only
>> slice a cpu in so many slices.  Even if you could run 1024 concurrent
>> Apache/PHP processes, you wouldn't want to unless you could somehow
>> shove 64 cpus into your machine.  For high-performance high-load
>> environments you want to get each request serviced as fast as possible
>> and attempting to handle too many concurrent requests works against you
>> here.
> 
> Maybe the tasks you do are usually with small data sets.

Well, I was referring to Yahoo-sized stuff.  So no, the datasets are
rather huge, but on a per-request basis you want to architect things so
that each request loads only what it actually needs.

If you really do need to play around with hundreds of thousands of
records of anything in memory on a single request, then you should
definitely be looking at writing an extension and doing that in a custom
data type streamlined for that particular type of data.

Keeping your Apache2 processes around 40M or below, even for
less-than-efficient code, was never much of a problem, and that means you
can fit about 50 processes in 2G of memory.  You probably don't want to go
much beyond 50 concurrent requests on a single quad-core CPU since there
just won't be enough juice for each one to finish in a timely manner.
With dual quad-cores you can probably go to about 100, but those machines
also tend to have more RAM.  You can of course crank up the concurrency if
you are willing to take the latency hit.
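The capacity math above can be sketched in a few lines.  The 2048 MB and
40 MB figures are just the round numbers from this mail, not measurements
from any particular box:

```python
# Back-of-envelope Apache/PHP capacity planning using the figures above.
ram_mb = 2048          # memory budgeted for Apache children, in MB (assumed)
per_process_mb = 40    # resident size of one Apache/PHP process, in MB (assumed)

# How many processes fit in that memory budget.
max_processes = ram_mb // per_process_mb
print(max_processes)   # 51 -- i.e. "about 50 processes in 2G"
```

In practice you would clamp this by CPU as well (the ~50-per-quad-core
rule of thumb above), taking the smaller of the memory-derived and
CPU-derived limits.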

For my own stuff that doesn't use any heavy framework code, I easily keep
my per-Apache incremental memory usage under 10M.

-Rasmus

-- 
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php