The first thing generally tried when you want to measure how long certain parts take is to record your own time snapshots in the code: take the current time before an operation, take it after, subtract, and report the difference.
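For instance, a minimal sketch of that applied to the pattern in your
pseudo code below (the class and method names are borrowed from it; the
array size, the index used as "cond" and the number of tasks are made up
for illustration, and it assumes Python 3, where a bound method can be
passed to apply_async without a pickling workaround):

import time
from multiprocessing import Pool

class holds_big_array(object):
    def __init__(self, n=1000000):
        self.big_array = list(range(n))   # a big array (size made up)

    def get_some_element(self, cond):
        # 'cond' is just an index here; your real condition will differ
        return self.big_array[cond]

def callback_f(result):
    pass   # do something with result

if __name__ == '__main__':
    holder = holds_big_array()
    tasks = 50

    # serial version: snapshot the clock before and after, subtract
    t0 = time.time()
    for i in range(tasks):
        callback_f(holder.get_some_element(i))
    print('serial:   %.3f seconds' % (time.time() - t0))

    # parallel version, timed the same way.  Note that every
    # apply_async() call has to pickle the bound method, and with it
    # the whole big_array, to ship the task to a worker, so that cost
    # is part of what gets measured here.
    t0 = time.time()
    pool = Pool(processes=2)
    for i in range(tasks):
        pool.apply_async(holder.get_some_element, (i,), callback=callback_f)
    pool.close()
    pool.join()
    print('parallel: %.3f seconds' % (time.time() - t0))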

Also, try working with an array that is actually big, so that you can see meaningful differences between approaches.
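As for the "something" you mention below to get the instance method
pickled: on Python 2 the usual recipe (which may or may not be what you
used) is to register a reducer for bound methods with copy_reg, along
these lines; if your workaround is much heavier than that, it could
itself be eating time:

import copy_reg
import types

def _pickle_method(method):
    # pickle a bound method as (function name, instance, class)
    return _unpickle_method, (method.im_func.__name__,
                              method.im_self, method.im_class)

def _unpickle_method(func_name, obj, cls):
    # re-bind the method by name on the receiving side
    return getattr(obj or cls, func_name)

copy_reg.pickle(types.MethodType, _pickle_method, _unpickle_method)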

Shailendra wrote:
Hi All,
I have a following situation.
==================PSEUDO CODE START==================
class holds_big_array:
    big_array  # holds a big array

    def get_some_element(self, cond):  # return some data from the big array
==================PSEUDO CODE END====================
I wanted to use the multiprocessing module to parallelise calling
"get_some_element". I used the following kind of code:

==================PSEUDO CODE START==================
pool = Pool(processes=2)
holder = holds_big_array()  # class instantiation
def callback_f(result):
    do something with result
loop many times:
    pool.apply_async(holder.get_some_element, args, callback=callback_f)
pool.close()
pool.join()
==================PSEUDO CODE END====================
Note: I had to do something to enable the instance method to be pickled...

I tested this with a less than realistic size of big_array. My parallel
version runs much slower than the normal serial version (10-20 sec vs
7-8 min). I was wondering what the possible reason could be. Is it
something to do with it being an instance method, and some locking
making the other processes wait for the locks? Any idea how to trace
where the program is spending time?

Let me know if the information given is inadequate.

Thanks in advance.
Shailendra Vikas
