On 6/15/06, David Schell <[EMAIL PROTECTED]> wrote:

> I have a server that receives data from a client through an internet
> socket.  When the connection is established, the server forks a
> process to deal with the connection.  The child then reads data from
> the connection, does some stuff, responds to the client, and waits
> for more data.  The child dies when the connection is closed by the
> client.  It works great on most systems, but I have one where the
> connections never seem to close and the children never die, yet the
> client keeps opening new connections.
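For reference, the accept-and-fork pattern described above looks roughly like this (the port number and the one-line-per-request protocol are assumptions for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

# Port 7777 is a placeholder; use whatever your server listens on.
my $server = IO::Socket::INET->new(
    LocalPort => 7777,
    Listen    => 5,
    ReuseAddr => 1,
) or die "Can't listen: $@";

while (my $client = $server->accept) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: serve this one connection, then exit.
        while (my $line = <$client>) {   # <$client> returns undef at EOF
            # ... do some stuff with $line ...
            print $client "ok\n";        # respond to the client
        }
        exit;    # reached when the client closes the connection
    }
    close $client;   # parent: the child owns this socket; keep listening
}
```

The child's read loop ends only when `<$client>` returns undef, which is exactly the step that seems to hang on the problem system.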

So you're saying that the child somehow doesn't notice that the
connection is closed, and waits forever for more data? That sounds
like a bug in Perl, or at some lower level. (If I had to guess, I'd
check how Perl is configured on that system. But it could be
anything.)

> 1) limit the number of children, killing the oldest once that limit
> is reached.  It would be simple enough to keep an array of PIDs for
> the children using push/shift and then kill to get rid of the oldest
> child.  Killing processes seems messy though.
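The push/shift idea you describe might be sketched like this (the limit of 20 is an assumption; `register_child` and `reap_children` are hypothetical names):

```perl
use strict;
use warnings;
use POSIX ':sys_wait_h';

my $MAX_CHILDREN = 20;   # assumed limit
my @pids;                # oldest child is first in the array

# Call in the parent after each successful fork.
sub register_child {
    my ($pid) = @_;
    push @pids, $pid;
    if (@pids > $MAX_CHILDREN) {
        my $oldest = shift @pids;
        kill 'TERM', $oldest;    # ask the oldest child to exit
        waitpid $oldest, 0;      # reap it so it doesn't become a zombie
    }
}

# Call periodically to drop children that exited on their own.
sub reap_children {
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        @pids = grep { $_ != $pid } @pids;
    }
}
```

Note that without `reap_children`, children that exit normally would linger in @pids and an innocent, still-active child could end up at the front of the queue.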

It's perfectly legitimate. But don't kill anything that might still be
active just because it's the oldest. One simple strategy might be for
each active child process to periodically touch a file. The parent
then knows a child is hung if that file's mtime is more than N seconds
old. It's probably sufficient for the parent to check for hung
children only once every five or ten connection attempts.
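A minimal sketch of that heartbeat idea, using one file per child PID (the directory, the 60-second timeout, and the sub names are all assumptions):

```perl
use strict;
use warnings;
use File::Path 'make_path';

my $HB_DIR  = '/tmp/myserver-heartbeats';  # hypothetical directory
my $TIMEOUT = 60;                          # the "N seconds" above

make_path($HB_DIR) unless -d $HB_DIR;

# Child side: touch a per-PID file after each piece of work.
sub touch_heartbeat {
    my ($pid) = @_;
    my $file = "$HB_DIR/$pid";
    open my $fh, '>', $file or die "Can't write $file: $!";
    close $fh;
    utime undef, undef, $file;   # set mtime to now
}

# Parent side: run once every five or ten accepted connections.
sub kill_hung_children {
    for my $file (glob "$HB_DIR/*") {
        my ($pid) = $file =~ m{/(\d+)\z} or next;
        if (time() - (stat $file)[9] > $TIMEOUT) {
            kill 'TERM', $pid;   # child has been silent too long
            unlink $file;
        }
    }
}
```

The child would call `touch_heartbeat($$)` inside its read loop, and should also unlink its file on normal exit so the parent never signals a recycled PID.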

Hope this helps!

--Tom Phoenix
Stonehenge Perl Training

--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
<http://learn.perl.org/> <http://learn.perl.org/first-response>

