Hello, I'm trying to use multiprocessing to parallelize some code. There are a number of tasks (usually 12) that can be run independently. Each task produces a numpy array, and at the end those arrays must be combined. I implemented this with two multiprocessing.Queue objects: one for input and one for output. But the code blocks, and it seems to be related to the size of the items I put on the output queue: if I put a small array, the code works well; if the array is realistically large (in my case it can vary from 160 kB to 1 MB), the code apparently blocks forever. I have tried the approach described here: http://www.bryceboe.com/2011/01/28/the-python-multiprocessing-queue-and-large-objects/ but it didn't work (specifically, I put a None sentinel on the input queue for each worker).
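In case it helps, here is a minimal sketch of the pattern I'm using (the names, worker count, and array contents below are made up for illustration; the real tasks do an actual computation):

    import multiprocessing as mp
    import numpy as np

    def worker(in_queue, out_queue):
        # Pull task ids until the None sentinel arrives.
        for task in iter(in_queue.get, None):
            # Stand-in for the real computation; the real arrays
            # are anywhere from 160 kB to 1 MB.
            result = np.random.random(100000)   # ~800 kB of float64
            out_queue.put((task, result))

    if __name__ == '__main__':
        in_queue = mp.Queue()
        out_queue = mp.Queue()

        procs = [mp.Process(target=worker, args=(in_queue, out_queue))
                 for _ in range(4)]
        for p in procs:
            p.start()

        for task in range(12):          # usually 12 independent tasks
            in_queue.put(task)
        for _ in range(4):              # one None sentinel per worker
            in_queue.put(None)

        for p in procs:
            p.join()                    # <-- blocks here with large arrays

        # Combine the 12 partial results.
        results = [out_queue.get() for _ in range(12)]

With small arrays this runs to completion; with realistic sizes the join never returns.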
Before I change the implementation, is there a way to get around this problem while keeping multiprocessing.Queue? Should I post the full code?

TIA,
David