On Dec 19, 2003, at 11:33 AM, Dan Anderson wrote: [..]
[..]I'll do something dumb, like fork in a loop while $number_forks < $fork_this_many.
You might want to have a WAY different strategy!
Remember that both sides of the fork() get a copy of the code space - and as such the first forked child can think that it is the parent, and it too will run the same loop.... at which point each of them is trying to spawn off MORE children, and no one remembers to exit...
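A minimal sketch of the safe shape of that loop (the variable names here are illustrative, not from your code): the child side of fork() must exit() before it can fall back into the loop, otherwise you get exactly the fork-bomb described above.

```perl
use strict;
use warnings;

# illustrative names, not from the original post
my $fork_this_many = 3;
my @kids;

for my $n (1 .. $fork_this_many) {
    defined(my $pid = fork()) or die "fork failed: $!";
    if ($pid == 0) {
        # child side: do the work, then exit().  Without this exit
        # the child falls back into the loop and starts forking
        # children of its own.
        exit(0);
    }
    push @kids, $pid;   # parent side: remember the child's pid
}

waitpid($_, 0) for @kids;   # parent reaps every child it forked
```

The parent is the only process that ever reaches the top of the loop again, so the count of children stays bounded.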
Go back and peek at: <http://www.wetware.com/drieux/pbl/Sys/gen_sym_big_dog.txt> The child processes are going to be executing a specific body of Code and not 'all' of the code.
If you peek at the runner() method, I have made it a bit clearer that any of the dispatched code is supposed to have its own 'exit()' call - and that on the child side of the call, IF that fails, the next piece of code that would be called would be
exit(-1);
so that we have at least some guarantee that the children will not go running around forking like mad.
So while it IS true that we have the dispatcher
$process{$cmd}($cmd,$args);
inside the infinite loop "while(1) BLOCK", one of the processes is a 'quit', and there are some that will fork children. But the code that a child can execute is limited.
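The dispatch-plus-backstop idea can be sketched like this (the %process table and sub names here are illustrative, not the ones in gen_sym_big_dog.txt): each dispatched sub is expected to exit() on its own; if it ever returns, the child falls through to the exit(-1) backstop and can never wander back into the loop.

```perl
use strict;
use warnings;

# illustrative dispatch table; each handler must exit() itself
my %process = (
    work => sub { my ($cmd, $args) = @_; print "did $cmd\n"; exit(0) },
);

sub runner {
    my ($cmd, $args) = @_;
    defined(my $pid = fork()) or die "fork failed: $!";
    return $pid if $pid;   # parent side: hand back the child's pid

    # child side: run only the one dispatched body of code
    $process{$cmd}->($cmd, $args) if exists $process{$cmd};

    # backstop: a child must never return from its handler alive
    exit(-1);
}
```

So even if a handler forgets its exit(), or the command is unknown, the child dies with a non-zero status rather than re-entering the parent's loop.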
The other idea you might really want to look at is what is known as a 'co-operative multi-tasking group' - so that rather than forking N processes on demand, you have an init-script-style solution that spawns a process as the process leader, which in turn spawns N processes and delegates incoming new requests to them. One variant of this strategy is how most web servers work: the httpd spawns off N processes and passes each connection off to one of them. This saves on fork time as well as providing more reasonable control over SDOS ( self denial of servicing ).
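A stripped-down sketch of that pre-forking shape (not a real server - the pool size and the "work" are stand-ins): the leader forks the whole pool up front, so no fork happens per request, and the pool size is a hard cap on how many workers can ever exist.

```perl
use strict;
use warnings;

my $pool_size = 4;   # illustrative; this caps the worker count
my @pool;

for my $id (1 .. $pool_size) {
    defined(my $pid = fork()) or die "fork failed: $!";
    if ($pid == 0) {
        # worker: a real pre-forking server would loop on accept()
        # against a shared listening socket here; we just simulate
        # one unit of work and exit
        exit(0);
    }
    push @pool, $pid;   # leader keeps track of its workers
}

# leader reaps the whole pool and counts clean exits
my $clean = 0;
for my $pid (@pool) {
    waitpid($pid, 0);
    $clean++ if $? == 0;
}
```

Because the leader holds the list of worker pids, it can also restart a worker that dies - which is the control over self-denial-of-service that forking fresh on every request doesn't give you.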
ciao drieux
---