David Naylor <[EMAIL PROTECTED]> added the comment:

I'm currently developing a script that makes extensive use of threads and Popen, with threads being created dynamically and each thread creating a large number of Popen processes.
If I limit the thread count to 2 (main + worker) the problem appears to disappear (or is at least intermittent); however, if I run with more than 2 threads, or from within winpdb, the deadlock occurs rather consistently (and in the case of winpdb, always).

According to winpdb the script hangs on line 1086 of subprocess.py (from 2.5.2); strangely, all remaining worker threads hang at this point:

        # Wait for exec to fail or succeed; possibly raising exception
==>     data = os.read(errpipe_read, 1048576) # Exceptions limited to 1 MB
        os.close(errpipe_read)
        if data != "":
            os.waitpid(self.pid, 0)
            child_exception = pickle.loads(data)
            raise child_exception

I tried the suggestion of adding close_fds=True, and also of using a global lock, but on their own the script still hangs under winpdb. A solution that did appear to work was using both a global lock and close_fds=True on every call (roughly sketched at the end of this message). Running the script under pdb or cProfile appears to fix the problem as well...

NOTE: winpdb appears to bring out the worst-case scenario and reliably reproduces the problem.

This is running on FreeBSD 8-CURRENT amd64 (from early August) with 2 cores.

----------
nosy: +DragonSA

_______________________________________
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue2320>
_______________________________________
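For reference, here is a minimal sketch of the workaround that worked for me: a single global lock serialising every Popen() call, combined with close_fds=True. The command, thread count and helper names are simplified placeholders, not the real script:

    import subprocess
    import threading

    # Global lock so only one thread is inside Popen() (i.e. fork/exec) at a time.
    popen_lock = threading.Lock()

    def run_command(args):
        popen_lock.acquire()
        try:
            # close_fds=True stops the child inheriting pipe fds opened by
            # other threads' Popen calls.
            proc = subprocess.Popen(args,
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE,
                                    close_fds=True)
        finally:
            popen_lock.release()
        # Reading the output does not need to hold the lock.
        out, err = proc.communicate()
        return proc.returncode, out, err

    def worker(commands):
        for cmd in commands:
            run_command(cmd)

    # A handful of worker threads, each spawning many short-lived processes.
    threads = [threading.Thread(target=worker,
                                args=([["/bin/echo", "hello"]] * 50,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()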