Cecil Westerhof <ce...@decebal.nl>:

> At the moment I have the following code:
>
>     os.chdir(directory)
>     for document in documents:
>         subprocess.Popen(['evince', document])
>
> With this I can open several documents at once. But there is no way to
> know when those documents are going to be closed. This could/will lead
> to zombie processes. (I run it on Linux.) What is the best solution to
> circumvent this?
>
> I was thinking about putting all Popen instances in a list, and then
> every five minutes walking through the list and checking with poll()
> whether each process has terminated. If it has, it can be removed from
> the list. Of course I need to synchronise those events. Is that a good
> way to do it?
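[For reference, the polling scheme described above could be sketched roughly like this; the function names are illustrative, not from the original post:]

```python
import subprocess
import time

def open_documents(documents):
    """Open each document in its own evince process."""
    return [subprocess.Popen(['evince', doc]) for doc in documents]

def reap_finished(procs):
    """Return only the still-running processes, reaping the rest."""
    still_running = []
    for proc in procs:
        if proc.poll() is None:        # None means the child is still alive
            still_running.append(proc)
        # a non-None poll() result means the child was reaped; no zombie remains
    return still_running

# procs = open_documents(['a.pdf', 'b.pdf'])
# while procs:
#     time.sleep(300)                  # check every five minutes
#     procs = reap_finished(procs)
```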
If you don't care to know when child processes exit, you can simply
ignore the SIGCHLD signal:

    import signal
    signal.signal(signal.SIGCHLD, signal.SIG_IGN)

That will prevent zombies from appearing.

On the other hand, if you want to know when a process exits, you have
several options:

 * What you propose would work, but it is anything but elegant: ideally
   you want to react as soon as a child process dies, not minutes later.

 * You could trap the SIGCHLD signal by setting a signal handler. You
   should not do any actual processing in the signal handler itself but
   rather convert the signal into a file descriptor event (with a pipe;
   Python doesn't seem to support signal file descriptors or event file
   descriptors).

 * Instead of using a signal handler, you could capture the output of
   the child process. At its simplest, you capture the standard output,
   but you could also open an extra pipe for the purpose. You keep
   reading the standard output, and as soon as you read an EOF, you wait
   the process out.

Marko
--
https://mail.python.org/mailman/listinfo/python-list
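[Editorial note: the second option above — converting SIGCHLD into a file-descriptor event — can be sketched with a self-pipe. While Python has no direct signalfd wrapper, the standard library's signal.set_wakeup_fd serves the same purpose; this is one possible arrangement on a POSIX system, not code from the original thread:]

```python
import os
import select
import signal
import subprocess

r, w = os.pipe()
os.set_blocking(w, False)           # the wakeup fd must be non-blocking
signal.set_wakeup_fd(w)             # a byte is written here on each caught signal
signal.signal(signal.SIGCHLD, lambda signum, frame: None)  # handler does nothing

proc = subprocess.Popen(['sleep', '1'])   # stand-in for a real child process
while proc.poll() is None:          # poll() also reaps the child once it exits
    # Block on the read end of the pipe until SIGCHLD wakes us up
    ready, _, _ = select.select([r], [], [], 60)
    if ready:
        os.read(r, 512)             # drain the wakeup bytes
print('child exited with', proc.returncode)
```

Because the signal handler only writes a byte into the pipe, the actual reaping happens in the normal flow of the program, inside poll().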
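[Editorial note: the third option — treating EOF on the child's standard output as the exit notification — can be sketched as follows. Since evince is a GUI program that prints little, `cat` stands in as an illustrative child here:]

```python
import subprocess

proc = subprocess.Popen(['cat'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)
proc.stdin.write(b'hello\n')
proc.stdin.close()                  # cat sees EOF on stdin and exits

# read() returns everything up to EOF; EOF on stdout means the child has
# closed its end, which for a normal exit means the process is done.
output = proc.stdout.read()
proc.wait()                         # reap the child so no zombie is left
```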