Richard Oudkerk added the comment:

I don't think this is a bug -- processes started with fork() should nearly 
always exit via os._exit().  In any case, using sys.exit() does *not* 
guarantee that all deallocators will be called.  To be sure of cleanup at exit 
you could use the (undocumented) multiprocessing.util.Finalize().
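For what it's worth, a minimal sketch of registering cleanup that way (the Resource class and names are purely illustrative; Finalize is undocumented, so its behaviour could change):

```python
import multiprocessing.util as util

results = []

class Resource:
    def __init__(self, name):
        self.name = name
        # Register cleanup to run at process exit; exitpriority must be
        # non-None for the finalizer to fire automatically at exit.
        # Calling the finalizer object explicitly runs it early (once).
        self._finalizer = util.Finalize(
            self, results.append, args=(name + " cleaned",),
            exitpriority=10)

r = Resource("shm0")
r._finalizer()          # invoke cleanup explicitly
print(results)
```

Note the callback (results.append) holds no reference back to the object, so the Finalize's weakref can still let it be collected.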

Note that Python 3.4 on Unix will probably offer the choice of using 
os.fork()/os._exit() or _posixsubprocess.fork_exec()/sys.exit() for 
starting/exiting processes on Unix.

Sturla's scheme for doing reference counting of shared memory is also flawed 
because reference counts can fall to zero while a shared memory object is in a 
pipe/queue, causing the memory to be prematurely deallocated.

I think a more reliable scheme would be to use fds created using shm_open(), 
immediately unlinking the name with shm_unlink().  Then one could use the 
existing infrastructure for fd passing and let the operating system handle the 
reference counting.  This would prevent leaked shared memory (unless the 
process is killed in between shm_open() and shm_unlink()).  I would like to add 
something like this to multiprocessing.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue6653>
_______________________________________