Thanks for the responses. I will create another thread to supply a more realistic example.
On Tue, Sep 29, 2015 at 10:12 AM, Oscar Benjamin
<oscar.j.benja...@gmail.com> wrote:

> On Tue, 29 Sep 2015 at 02:22 Rita <rmorgan...@gmail.com> wrote:
>
>> I am using multiprocessing with apply_async to do some work. Each
>> task takes a few seconds, but I have several thousand tasks. I was
>> wondering if there is a more efficient method, especially since I
>> plan to operate on large in-memory (numpy) arrays.
>>
>> Here is what I have now:
>>
>> import multiprocessing as mp
>> import random
>>
>> def f(x):
>>     count = 0
>>     for i in range(x):
>>         x = random.random()
>>         y = random.random()
>>         if x*x + y*y <= 1:
>>             count += 1
>>     return count
>
> I assume you're using the code shown as a toy example of playing with
> the multiprocessing module? If not, then the function f can be made
> much more efficient.
>
> The problem is that while it's good that you have distilled your
> problem into a simple program for testing, it's not really possible
> to find a more efficient way without finding the bottleneck, which
> means looking at the full problem.
>
> --
> Oscar

--
Get your facts first, then you can distort them as you please.
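For the toy example quoted above, two standard changes usually help:
vectorising the inner loop with numpy, so each task counts its points in
compiled code rather than a Python loop, and handing the pool its work in
batches via pool.map with a chunksize rather than one apply_async call
per task. Below is a minimal sketch; the name f_vectorized and the task
sizes are illustrative, not from the original post.

import multiprocessing as mp

import numpy as np

def f_vectorized(n):
    # Draw all n (x, y) points at once and count how many land inside
    # the unit quarter circle; this replaces the per-point Python loop.
    rng = np.random.default_rng()
    x = rng.random(n)
    y = rng.random(n)
    return int(np.count_nonzero(x * x + y * y <= 1.0))

if __name__ == "__main__":
    tasks = [10_000] * 5_000  # stand-in for "several thousand tasks"

    with mp.Pool() as pool:
        # chunksize batches many small tasks into one message per
        # worker round trip, instead of one submission per task.
        counts = pool.map(f_vectorized, tasks, chunksize=100)

    total = sum(tasks)
    print("pi is approximately", 4 * sum(counts) / total)

As Oscar says, whether either change matters depends on where the real
program spends its time; with large numpy arrays, the cost of pickling
data between processes is often the actual bottleneck.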