lesha <pybug.20.le...@xoxy.net> added the comment:

Re "threading locks cannot be used to protect things outside of a single 
process":

The Python standard library already violates this: the "logging" module uses 
just such a lock to protect the file, socket, or other stream it is writing to.
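For the record, here is roughly what that looks like (a sketch against CPython's logging module; `h.lock` is the per-handler threading lock created by `Handler.createLock()`, and `Handler.handle()` wraps `emit()` in the same acquire/release pair shown here):

```python
# Sketch: each logging.Handler guards its emit() with a per-handler
# threading lock created by Handler.createLock().  Handler.handle()
# does essentially this acquire/try/finally/release dance around emit().
import logging

h = logging.StreamHandler()
record = logging.LogRecord("demo", logging.INFO, __file__, 1,
                           "written under h.lock", None, None)
h.acquire()          # takes h.lock, exactly as handle() does
try:
    h.emit(record)   # the stream write the lock is protecting
finally:
    h.release()
print(h.lock is not None)  # the lock exists whether or not you ever log
```

That lock is a plain threading primitive, so it is subject to exactly the fork hazard under discussion.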

If I had a magic wand that could fix all the places in the world where people 
do this, I'd accept your argument.

In practice, threading locks are abused in this way all the time.

Most people don't even think about the interaction of fork and threads until 
they hit a bug of this nature.
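The bug in question is easy to reproduce. A POSIX-only sketch (all names here are mine, purely illustrative): a lock held by another thread at fork() time stays locked forever in the child, because the thread that owns it was not copied by fork().

```python
# POSIX-only sketch: a threading.Lock held by another thread at fork() time
# can never be acquired in the child, because the owning thread does not
# exist there.  A child that blocks on it without a timeout deadlocks.
import os
import threading
import time

lock = threading.Lock()
started = threading.Event()

def hold_lock():
    with lock:
        started.set()
        time.sleep(2)  # keep the lock held while the main thread forks

worker = threading.Thread(target=hold_lock)
worker.start()
started.wait()  # make sure the worker really owns the lock before forking

pid = os.fork()
if pid == 0:
    # Child: only this thread exists here, yet the lock is still "held".
    stuck = not lock.acquire(timeout=0.5)
    os._exit(42 if stuck else 0)  # 42 signals "lock can never be acquired"
else:
    _, status = os.waitpid(pid, 0)
    worker.join()
    print(os.waitstatus_to_exitcode(status))  # 42: the child's acquire timed out
```

With a plain blocking `acquire()` instead of the timeout, the child hangs forever: that is the deadlock this thread is about.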


Right now, we are discussing a patch that will take broken code, and instead of 
having it deadlock, make it actually destroy data. 

I think this is a bad idea. That is all I am arguing.

I am glad that my processes deadlocked instead of corrupting files. A deadlock 
is easier to diagnose.


You are right: subprocess does do a hard exit when an exception escapes 
between fork and exec. However, the preexec_fn and raw os.fork() cases 
definitely happen in the wild. I've done both myself.
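To make the preexec_fn case concrete, a sketch (function name is mine): the callable runs in the forked child before exec(), so it must be fork-safe; if it touched a lock some other thread held at fork time, the child would hang before it ever exec()s. Here it only calls os.setsid(), which is a raw syscall and involves no Python-level locks.

```python
# Sketch: preexec_fn runs in the forked child *before* exec(), in exactly
# the post-fork environment where stuck locks bite.  This one is fork-safe.
import os
import subprocess
import sys

def in_child():
    os.setsid()  # fork-safe: a raw syscall, no Python-level locks

out = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.getpid() == os.getsid(0))"],
    preexec_fn=in_child,
)
print(out.decode().strip())  # the exec'd child is its own session leader
```

The subprocess documentation itself warns that preexec_fn may not be safe in the presence of threads, which is precisely the point being argued here.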


I'm arguing for a simple thing: let's not increase the price of error. A 
deadlock sucks, but corrupted data sucks much worse -- it's harder both to 
debug and to fix.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue6721>
_______________________________________