Charles-François Natali <neolo...@free.fr> added the comment:

    # A lock taken from the current thread should stay taken in the
    # child process.

Note that I'm not sure how to implement this.
After a fork, even releasing the lock can be unsafe: it has to be
re-initialized instead. See the following comment in glibc's malloc
implementation:
/* In NPTL, unlocking a mutex in the child process after a
   fork() is currently unsafe, whereas re-initializing it is safe and
   does not leak resources.  Therefore, a special atfork handler is
   installed for the child. */
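
For illustration, a minimal sketch of what such a child handler could
look like (hypothetical names, not the actual glibc code; it relies on
the NPTL behaviour described in the comment above):

    #include <pthread.h>

    static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Child handler: do NOT pthread_mutex_unlock() here; instead,
       overwrite the (possibly locked) mutex with a fresh one.  Per
       the glibc comment above, this is safe under NPTL and does not
       leak resources. */
    static void
    malloc_atfork_child(void)
    {
        pthread_mutex_init(&list_lock, NULL);
    }

    /* Registered once at startup:
       pthread_atfork(NULL, NULL, malloc_atfork_child); */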

Note that this means that even the current code that allocates new locks 
after fork (in Lib/threading.py, _after_fork and _reset_internal_locks) is 
unsafe: the old locks are deallocated, and the lock deallocation tries to 
acquire and release the lock before destroying it (in issue #11148 the OP 
got a segfault on OS X when locking a mutex, but I'm not sure of the exact 
context).
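
To make the failure mode concrete, here is a rough sketch of that kind
of deallocation path (modeled on the description above, not the actual
CPython code):

    #include <pthread.h>
    #include <stdlib.h>

    /* Sketch: the destructor "drains" the lock before destroying it,
       since destroying a locked mutex is undefined. */
    static void
    free_lock(pthread_mutex_t *mut)
    {
        /* In a child forked while some parent thread held *mut, the
           owner no longer exists: this acquire can block forever, or
           crash, depending on the implementation. */
        pthread_mutex_lock(mut);
        pthread_mutex_unlock(mut);
        pthread_mutex_destroy(mut);
        free(mut);
    }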

Also, this would imply keeping track of the thread currently owning the 
lock, and it doesn't match the typical pthread_atfork idiom (acquire the 
locks just before fork, release them just after in parent and child, or 
just reinit them in the child process).
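
For reference, the idiom looks roughly like this (a sketch with a
single module-level lock; real code has to take every lock, in a fixed
order):

    #include <pthread.h>

    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

    static void prepare(void) { pthread_mutex_lock(&big_lock); }
    static void parent(void)  { pthread_mutex_unlock(&big_lock); }
    static void child(void)
    {
        /* Unlocking here is the textbook idiom, but per the glibc
           comment above, re-initializing is the safe variant under
           NPTL. */
        pthread_mutex_init(&big_lock, NULL);
    }

    /* Called once before the first fork():
       pthread_atfork(prepare, parent, child); */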

Finally, IMHO, forking while holding a lock and expecting it to be usable 
after fork doesn't make much sense, since a lock is acquired by a thread, 
and that thread doesn't exist in the child process. It's explicitly 
described as "undefined" by POSIX, see 
http://pubs.opengroup.org/onlinepubs/007908799/xsh/sem_init.html :
"""
The use of the semaphore by threads other than those created in the same 
process is undefined.
"""

So I'm not sure whether it's feasible/wise to provide such a guarantee.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue6721>
_______________________________________