STINNER Victor added the comment:

> One potential problem is how to provide for people who really want to let the 
> child continue to run in the background or as a daemon without waiting for 
> it, even if the parent exits. Perhaps a special method proc.detach() or 
> whatever?

Maybe my heuristic to decide whether a ResourceWarning must be emitted is wrong.

If stdout and/or stderr is redirected to a pipe and the process is still alive 
when the destructor is called, it looks more like a bug, because it's better to 
explicitly close these pipes.
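Here is a minimal sketch of the pattern the heuristic targets, and of the 
explicit handling that avoids it (illustrative only, not the proposed patch):

    import subprocess

    # Pattern that would trigger the warning: stdout is redirected to a
    # pipe, but the pipe is never closed and the child is never waited on,
    # so the destructor may run while the child is still alive.
    proc = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE)
    del proc

    # Explicit handling: the context manager closes the pipes and calls
    # wait() to read the child exit status.
    with subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE) as proc:
        output = proc.stdout.read()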

If no stream is redirected, yeah, it's ok to pass the pid to a different 
function which will handle the child process. The risk here is never calling 
waitpid() to read the child exit status, and so creating zombie processes.
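For example (an illustrative sketch; reap() is a hypothetical helper, not part 
of the patch):

    import os
    import subprocess

    # No stream is redirected: only the pid is handed over to another
    # function.
    proc = subprocess.Popen(["sleep", "1"])

    def reap(pid):
        # Whoever receives the pid must call waitpid(); otherwise the
        # child stays a zombie after it exits.
        _, status = os.waitpid(pid, 0)
        return status

    reap(proc.pid)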

For daemons, I disagree: the daemon must use a double fork, so the parent will 
quickly see its direct child process exit. Ignoring the exit status of the 
first child is a bug (we must call waitpid()).
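A minimal sketch of the classic double fork, just to show why the parent still 
has to reap its direct child (not code from the patch):

    import os

    def daemonize():
        pid = os.fork()
        if pid > 0:
            # Parent: reap the first child, which exits right after the
            # second fork; skipping this waitpid() leaves a zombie.
            os.waitpid(pid, 0)
            return

        # First child: start a new session and fork again, then exit so
        # the daemon (the grandchild) is re-parented to init.
        os.setsid()
        if os.fork() > 0:
            os._exit(0)

        # Grandchild: this is the daemon process.
        ...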

I have to think about the detach() idea and check whether it would be used by 
some applications, or even by some parts of the stdlib.

Note: the ResourceWarning idea comes from the asyncio subprocess transport, 
which also emits a ResourceWarning. I also got the idea when reading issue 
#25942 and the old issue #12494.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue26741>
_______________________________________