In article <[EMAIL PROTECTED]>, John Nagle <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote:
> > I'm using the Python processing module. I've just run into a problem
> > though. Actually, it's a more general problem that isn't specific to
> > this module, but to the handling of Unix (Linux) processes in general.
> > Suppose for instance that for some reason or another, after forking
> > several child processes, the main process terminates or gets killed
> > (or segfaults or whatever) and the child processes are orphaned. Is
> > there any way to automatically arrange things so that they auto-
> > terminate or, in other words, is there a way to make the child
> > processes terminate when the parent terminates?
> >
> > Thank you.
>
> Put a thread in the child which reads stdin, and make stdin
> connect to a pipe from the parent. When the parent terminates,
> the child will get a SIGPIPE error and raise an exception.
>
> John Nagle

That could work, but not precisely in that manner. You get SIGPIPE
when you write to a closed pipe. When you read from one, you get end
of file, i.e., a normal return with 0 bytes.

When you test it, make sure to try a configuration with more than one
child process. Since the parent holds the write end of the pipe,
subsequently forked child processes can easily inherit it, and they'll
hold it open and spoil the effect.

   Donn Cave, [EMAIL PROTECTED]
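Here is a minimal sketch of that arrangement with the caveat applied.
The names, the three-child setup, and the sleeps are illustrative only,
not taken from either post: the parent creates a pipe before forking,
each child closes its copy of the write end and starts a thread that
blocks in os.read(); once the parent exits and the last open write
descriptor disappears, the read returns 0 bytes and the child shuts
itself down.

    import os
    import threading
    import time

    def watch_parent(read_fd):
        # Blocks until the pipe reaches end of file, which happens only
        # when every copy of the write end has been closed, i.e. when the
        # parent (and any sibling that kept the descriptor open) is gone.
        while os.read(read_fd, 1):
            pass
        os._exit(1)        # parent is gone; take this child down with it

    def child_main(read_fd, write_fd):
        # The caveat above: each child must close its inherited copy of
        # the write end, or it will keep the pipe alive for its siblings.
        os.close(write_fd)
        watcher = threading.Thread(target=watch_parent, args=(read_fd,))
        watcher.daemon = True
        watcher.start()
        while True:        # stand-in for the child's real work
            time.sleep(1)

    def main():
        read_fd, write_fd = os.pipe()   # parent keeps write_fd for its lifetime
        for _ in range(3):              # more than one child, per the caveat
            if os.fork() == 0:
                child_main(read_fd, write_fd)
                os._exit(0)
        time.sleep(30)                  # stand-in for the parent's real work

    if __name__ == '__main__':
        main()

Run it and kill the parent (even with kill -9); each child's blocked
read returns 0 bytes and the children exit on their own. Leave out the
os.close(write_fd) in the children and they all linger after the parent
dies, because the inherited write ends keep the pipe from ever reaching
end of file, which is exactly the effect described above.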