New submission from Arun Babu Neelicattu: The task/worker handler threads used by the multiprocessing.pool.Pool class are (in accordance with POSIX fork semantics, which replicate only the calling thread) not copied over when the process containing the pool is forked.
This leads to a situation where the Pool keeps accepting tasks in the forked process, but those tasks are never handled. It can also lead to a deadlock if AsyncResult.wait() is called, since the result never arrives. I am not sure whether this should be considered a bug or an invalid use case; however, it becomes a problem when an imported module creates a pool and the main code uses multiprocessing as well.

Workaround: reassigning Pool._task_handler to a new instance of threading.Thread after the fork seems to work in the case highlighted in the example.

Environment:
Fedora 18
Linux 3.7.8-202.fc18.x86_64 #1 SMP Fri Feb 15 17:33:07 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
python3-3.3.0-1.fc18.x86_64

An example of this issue is shown below:

from multiprocessing import Pool, Process

def t2():
    # We expect the pool to handle this
    print('t2: Hello!')

pool = Pool()

def t1():
    # We assign a task to the pool
    pool.apply_async(t2)
    print('t1: Hello!')

if __name__ == '__main__':
    # Process() forks the main process containing the pool
    Process(target=t1).start()

----------
components: Library (Lib)
files: pool_forking.py
messages: 182647
nosy: abn
priority: normal
severity: normal
status: open
title: multiprocessing.pool.Pool task/worker handlers are not fork safe
type: behavior
versions: Python 3.3
Added file: http://bugs.python.org/file29157/pool_forking.py

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue17273>
_______________________________________
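
For comparison, a minimal sketch (a variation of the attached example, not the attached pool_forking.py itself) that avoids the problem by creating the Pool only after the fork, inside the child process, so its task/worker handler threads and worker processes all exist in that process:

from multiprocessing import Pool, Process

def t2():
    print('t2: Hello!')

def t1():
    # The Pool is created here, after the fork, so its handler threads
    # are running in this process and the submitted task is handled.
    pool = Pool()
    result = pool.apply_async(t2)
    result.get()          # returns once t2 has run in a worker
    pool.close()
    pool.join()
    print('t1: Hello!')

if __name__ == '__main__':
    Process(target=t1).start()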