The problem with running a static number of processes is figuring out
the right number. It is also not elastic in any fashion. Shared hosts
would love the elasticity, as it would allow sites to flex up and down
as needed without giving each individual user more processes than they
really need.
A global cap on process count or on total memory consumption might be
useful to throttle the entire daemon from spawning new children.
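A minimal sketch of what such a global throttle could look like in the master process; the limit names and values here are purely hypothetical, not anything PHP implements:

```python
# Hypothetical global throttle: before forking another worker, the
# master checks both a hard cap on child count and a total-memory
# budget across all workers. Both limits are illustrative values.
MAX_CHILDREN = 128            # hypothetical global process cap
MEMORY_BUDGET = 512 * 2**20   # hypothetical 512 MiB budget for all workers

def may_spawn(child_count, total_worker_rss):
    """Return True only if one more worker stays within both limits."""
    return child_count < MAX_CHILDREN and total_worker_rss < MEMORY_BUDGET
```

The point is that the check is global to the daemon, not per-pool, so one busy site cannot push the whole host over its memory budget.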
On Dec 7, 2009, at 10:26 AM, "Reinis Rozitis" <r...@roze.lv> wrote:
Correct. Biggest lacking feature.
While this may be a bit off topic, just out of curiosity: why do you
think adaptive spawning (trying to run more processes in a given time
period than were started) is any good? It is a step back to how Apache
operates with its prefork mechanism. IIRC there are even people in the
php-dev community who suggest running Apache with start/max servers set
identical, so there is always a constant number of children (for PHP
processing) and no unwanted/unexpected resource consumption.
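For reference, a sketch of the fixed-pool setup being described, in Apache's prefork MPM configuration; the value 64 is illustrative:

```apache
# Hypothetical prefork tuning for a constant child pool: start as many
# children as the hard limit allows and never reap the spares, so the
# process count stays fixed regardless of load.
<IfModule mpm_prefork_module>
    StartServers         64
    MinSpareServers      64
    MaxSpareServers      64
    MaxClients           64
    MaxRequestsPerChild   0
</IfModule>
```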
There should be reasons why it was also dropped from the other
external process manager, lighty's spawn-fcgi, and was never planned
for webservers like nginx.
I would rather like to see the php master process one day return
FCGI_OVERLOADED, leaving the webserver/application to decide what to
do next.
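For context, FCGI_OVERLOADED comes from the FastCGI spec: it is protocolStatus 2 in an FCGI_END_REQUEST record (type 3), meaning "reject: out of resources". A minimal sketch of building such a record (the function name is my own, not from any library):

```python
import struct

# Constants from the FastCGI specification: record type 3 is
# FCGI_END_REQUEST, and protocolStatus 2 is FCGI_OVERLOADED.
FCGI_VERSION_1 = 1
FCGI_END_REQUEST = 3
FCGI_OVERLOADED = 2

def overloaded_record(request_id):
    """Build the 16-byte record an overloaded FastCGI app sends to refuse a request."""
    # Body: appStatus (4 bytes) = 0, protocolStatus (1 byte), 3 reserved bytes.
    body = struct.pack(">IB3x", 0, FCGI_OVERLOADED)
    # Header: version, type, requestId, contentLength, paddingLength, reserved.
    header = struct.pack(">BBHHBB", FCGI_VERSION_1, FCGI_END_REQUEST,
                         request_id, len(body), 0, 0)
    return header + body
```

On receiving this, the webserver knows the application itself (not the transport) refused the request and can queue, retry on another backend, or return 503.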
rr
--
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php