Hi Jean,

Below is the code where I am creating multiple processes:

import time
from multiprocessing import Process, Queue

if __name__ == '__main__':
    # List all files in the games directory
    files = list_sgf_files()

    # Read board configurations
    (intermediateBoards, finalizedBoards) = read_boards(files)

    # Initialize parameters
    param = Param()

    # Run maxItr iterations of gradient descent
    for itr in range(maxItr):
        # Each process analyzes one single data point
        # They dump their gradient calculations in queue q
        # Queue in Python is process safe
        start_time = time.time()
        q = Queue()
        jobs = []
        # Create a process for each game board
        for i in range(len(files)):
            p = Process(target=TrainGoCRFIsingGibbs,
                        args=(intermediateBoards[i], finalizedBoards[i], param, q))
            p.start()
            jobs.append(p)
        # Blocking wait for each process to finish
        for p in jobs:
            p.join()
        elapsed_time = time.time() - start_time
        print 'Iteration: ', itr, '\tElapsed time: ', elapsed_time
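
One thing I suspect is the cost of creating a fresh set of processes for every data point on every iteration. If that turns out to be the bottleneck, I may try a worker pool so the processes are created only once and reused across iterations. Here is a rough sketch of what I have in mind (it assumes TrainGoCRFIsingGibbs can be changed to return its gradient instead of putting it on a queue; compute_gradient is just a hypothetical wrapper for that):

import time
from multiprocessing import Pool

def compute_gradient(args):
    # Hypothetical wrapper: unpack one data point and return its gradient
    intermediate, finalized, param = args
    return TrainGoCRFIsingGibbs(intermediate, finalized, param)

if __name__ == '__main__':
    files = list_sgf_files()
    (intermediateBoards, finalizedBoards) = read_boards(files)
    param = Param()

    # Create the worker processes once, before the gradient descent loop
    pool = Pool()
    for itr in range(maxItr):
        start_time = time.time()
        work = [(intermediateBoards[i], finalizedBoards[i], param)
                for i in range(len(files))]
        # map() hands the data points to the already-running workers
        gradients = pool.map(compute_gradient, work)
        # ... update param from the collected gradients here ...
        print 'Iteration: ', itr, '\tElapsed time: ', time.time() - start_time
    pool.close()
    pool.join()

This way the workers are started once and reused for all maxItr iterations, so the per-iteration cost should mostly be pickling the arguments and results rather than process startup.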

As you recommended, I'll use the profiler to see which part of the code is slow.
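Roughly, I was planning something along these lines: profile a single data point in the main process (no multiprocessing), with the boards and param loaded as in the script above, just to see where the time goes inside the Gibbs sampling. The function still expects a queue argument, so I pass it a throwaway one:

import cProfile
from multiprocessing import Queue

q = Queue()
# Profile one data point in the main process, sorted by cumulative time
cProfile.run('TrainGoCRFIsingGibbs(intermediateBoards[0], finalizedBoards[0], param, q)',
             sort='cumulative')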

Thanks,
Abhinav

On 03/11/2013 04:14 AM, Jean-Michel Pichavant wrote:
----- Original Message -----

Dear all,
I need some advice regarding use of the multiprocessing module.
Following is the scenario:
* I am running gradient descent to estimate the parameters of a pairwise
grid CRF (a grid-based graphical model). There are 106 data
points. Each data point can be analyzed in parallel.
* To calculate gradient for each data point, I need to perform
approximate inference since this is a loopy model. I am using Gibbs
sampling.
* My grid is 9x9 so there are 81 variables that I am sampling in one
sweep of Gibbs sampling. I perform 1000 iterations of Gibbs
sampling.
* My laptop has a quad-core Intel i5 processor, so I thought that with
the multiprocessing module I could parallelize my code (basically
calculate the gradients in parallel on multiple cores simultaneously).
* I did not use the threading module because of the GIL: the GIL does
not allow multiple threads to execute Python bytecode at the same time.
* As a result I end up creating a process for each data point (instead
of a thread, which I would ideally prefer so as to avoid the process
creation overhead).
* I am using basic NumPy array functionalities.
Previously I was running this code in MATLAB, where it runs much
faster: one iteration of gradient descent takes around 14 sec using a
parfor loop (a parallel loop in which the data points are analyzed).
However, the same program takes almost 215 sec in Python. I am quite
surprised at how slow the multiprocessing module is. Is this because
of the process creation overhead for each data point?
Please keep my email in the replies as I am not a member of this
mailing list.
Thanks,
Abhinav
Hi,

Can you post some code, especially the part where you're creating/running the
processes? If it's not too big, the process function as well.

Either multiprocessing is slow as you stated, or you did something wrong.

Alternatively, if posting code is an issue, you can profile your Python code;
it's very easy and effective at finding which part of the code is slowing everything down.
http://docs.python.org/2/library/profile.html

Cheers,

JM



