On 2016-05-24 23:17, Noah wrote:

Hi,

I am using this example:
http://spartanideas.msu.edu/2014/06/20/an-introduction-to-parallel-programming-using-pythons-multiprocessing-module/

I am sending arguments to and receiving output from the worker processes.

Two issues. First, the join loop blocks: main() reaches the first
p.join() and waits forever. Second, when I comment out the .join()
loop, the output written for each process includes the previous
processes' output as well, so the returned output keeps getting longer
and longer after each process returns.

hostnames is a list of hostnames.

Here is my code from main():

     # Define an output queue
     output = mp.Queue()

     # Set up a list of processes that we want to run
     processes = [mp.Process(target=worker, args=(hostnames[x], output))
                  for x in range(len(hostnames))]

     # Run processes
     for p in processes:
         print "start: {}".format(p)
         p.start()

     time.sleep(6)
     print "processes: {}".format(processes)

     # Exit the completed processes
     '''for p in processes:
         print "join: {}".format(p)
         p.join()'''

     print "got here"
     # Get process results from the output queue
     # results = [output.get() for p in processes]

     io = StringIO()
     count = 0
     for p in processes:
         found_output = output.get()
         print "returned {}".format(p)
         io.write(found_output)
         zipArchive.writestr(hostnames[count] + "." +
                             content['sitename'] + '.config.txt',
                             io.getvalue())
         count = count + 1
     io.close()


def worker(hostname, output):
    ...
    output.put(template_output)


The docs say (Python 2.7, 16.6.2.2. Pipes and Queues):

"""Warning
As mentioned above, if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread), then that process will not terminate until all buffered items have been flushed to the pipe.

This means that *if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed*. Similarly, if the child process is non-daemonic then the parent process may hang on exit when it tries to join all its non-daemonic children.

Note that a queue created using a manager does not have this issue. See Programming guidelines.
"""

(Asterisks added to highlight.)

You should try to consume the output from a process before trying to join it (or, at least, join it with a timeout).
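
For example, a minimal self-contained sketch of that ordering (the
worker body and hostnames below are stand-ins, not your real code):

    import multiprocessing as mp

    def worker(hostname, output):
        # Stand-in for the real work; your worker builds
        # template_output and puts it on the queue.
        output.put("config for %s\n" % hostname)

    if __name__ == '__main__':
        hostnames = ['host-a', 'host-b', 'host-c']  # hypothetical
        output = mp.Queue()
        processes = [mp.Process(target=worker, args=(h, output))
                     for h in hostnames]
        for p in processes:
            p.start()

        # Consume first: one get() per process guarantees every
        # child has flushed its queued item to the pipe ...
        results = [output.get() for p in processes]

        # ... so these joins can no longer deadlock.  (A defensive
        # alternative is p.join(some_timeout).)
        for p in processes:
            p.join()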

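Separately, the output that "keeps getting longer" without the joins
looks like it comes from your shared StringIO rather than from the
queue: io.getvalue() returns everything written since the buffer was
created, so each zipArchive.writestr() call stores the accumulated text
of all earlier hosts as well. Note too that output.get() returns items
in completion order, not start order, so hostnames[count] may not
correspond to found_output. A sketch of the consuming loop that avoids
both, assuming the worker is changed to tag its result with
output.put((hostname, template_output)):

    # Worker side (assumed change):
    #     output.put((hostname, template_output))

    for p in processes:
        hostname, found_output = output.get()
        # One archive member per worker, written from that worker's
        # output alone, so entries no longer accumulate.
        zipArchive.writestr(hostname + "." + content['sitename'] +
                            '.config.txt', found_output)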