Charles-François Natali <[email protected]> added the comment:
There's indeed a leak in your code, because you don't close the stdout and
stderr file descriptors.
Try with:

    subp.terminate()
    subp.stdout.close()
    subp.stderr.close()
    return True

And you'll be just fine.
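Put together as a self-contained loop body (a sketch of what your leaktest() presumably looks like; the command and the wait() call are mine, not from your report):

```python
import subprocess
import sys

def leaktest():
    # Spawn a child with pipes for stdout/stderr, then kill it.
    subp = subprocess.Popen(
        [sys.executable, "-c", "import time; time.sleep(60)"],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    subp.terminate()
    # Without these two close() calls, every iteration leaks two file
    # descriptors until the process hits RLIMIT_NOFILE.
    subp.stdout.close()
    subp.stderr.close()
    # Reaping the child here (not strictly part of the fix above) also
    # keeps it from lingering as a zombie in this sketch.
    subp.wait()
    return True
```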
The reason why this works with a short delay is subtle however, and there's
actually a bug:
- when a subprocess.Popen is created, its stdout and stderr are wrapped with
os.fdopen()
- file objects returned by fdopen() are automatically closed when the object is
deallocated (i.e., in CPython, as soon as the file object goes out of scope)
- so in theory, when your subprocess.Popen goes out of scope, the corresponding
FDs should be closed, and you shouldn't have a leak (note that this assumes
CPython's reference counting, and may not work with other implementations - you
shouldn't rely on it)
- the problem is that when subprocess.Popen's __del__ method is called before
the process has exited (i.e. if you return from leaktest() before the process
has exited), the Popen object is added to a list (subprocess._active), so that
it can be waited on later (to avoid zombies)
- Popen objects from this list are collected (i.e. the process is waited for,
and if terminated it's removed from the list) synchronously whenever a new
Popen object is created (the _cleanup() function)
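You can observe the bookkeeping described above directly (CPython on POSIX only; this pokes at the private _active list, which may change between versions, and the command string is mine):

```python
import os
import signal
import subprocess
import sys

before = len(subprocess._active)

p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
pid = p.pid
# Drop our only reference while the child is still running: __del__ runs
# immediately (refcounting) and, since the child has not exited, parks
# the instance in subprocess._active for later collection.
del p

print(len(subprocess._active) == before + 1)  # True on CPython/POSIX

# Reap the child ourselves so this sketch does not leave a zombie.
os.kill(pid, signal.SIGKILL)
os.waitpid(pid, 0)
```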
The problem is that there's a bug in the collection code:
    def _cleanup():
        for inst in _active[:]:
            res = inst._internal_poll(_deadstate=sys.maxint)
            if res is not None and res >= 0:
                try:
                    _active.remove(inst)
                except ValueError:
                    # This can happen if two threads create a new Popen instance.
                    # It's harmless that it was already removed, so ignore.
                    pass
Note the check:

    res = inst._internal_poll(_deadstate=sys.maxint)
    if res is not None and res >= 0:

If the process exit code is negative (in your case, -SIGTERM, because
terminate() killed it), the Popen object is never removed from the list.
Two consequences:
- the object is never deallocated, and neither are the corresponding FDs, so
you eventually hit RLIMIT_NOFILE
- even if stdout and stderr are closed manually, the object itself is never
deallocated, so you have an ever-growing _active list, i.e. a memory leak (and
walking the list takes longer and longer)
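To see why the `res >= 0` test fails here: on POSIX, a child killed by a signal gets a negative returncode, -N meaning "terminated by signal N" (the command string below is mine):

```python
import signal
import subprocess
import sys

# terminate() sends SIGTERM, so the exit status that _cleanup() later
# sees is -SIGTERM (-15 on Linux) - which the buggy `res >= 0` check
# never accepts, leaving the instance in _active forever.
p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
p.terminate()
p.wait()
print(p.returncode)  # -15 on Linux (-signal.SIGTERM)
```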
Changing

    if res is not None and res >= 0:

to

    if res is not None:

fixes this.
I'll look at this in more detail and provide a patch.
----------
nosy: +neologix
_______________________________________
Python tracker <[email protected]>
<http://bugs.python.org/issue12650>
_______________________________________