Hi,

Thanks for your response.

I checked out multiprocessing.Value; however, from what I can make out, it only
works with objects of a few very limited types. Is there a way to do this for
more complex objects? (In reality, my object is a large multi-dimensional numpy
array.)
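
From some searching, it sounds like multiprocessing.Array plus numpy.frombuffer
might let the workers share one block of memory, but I'm not sure I have it
right. Here is a rough, untested sketch of what I mean (the shape, dtype, and
update rule are just placeholders, and I believe it relies on fork, so Unix
only):

from multiprocessing import Pool, Array
import numpy as np

# 'd' = C double; Array() is a synchronized wrapper around shared memory
shared = Array('d', 100 * 100)

def as_numpy(shared_arr):
    # view the shared buffer as a 100x100 numpy array (no copying)
    return np.frombuffer(shared_arr.get_obj()).reshape(100, 100)

def update(value):
    arr = as_numpy(shared)
    with shared.get_lock():      # take turns 'nicely', no collisions
        arr[value % 100, 0] += value

if __name__ == '__main__':
    p = Pool(4)                  # forked workers inherit `shared`
    p.map(update, range(1000))
    print as_numpy(shared).sum()

Is that roughly the right idea, or is there a better way?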

Thanks,

Elsa.

Date: Wed, 6 Apr 2011 22:20:06 -0700
Subject: Re: multiprocessing
From: drsali...@gmail.com
To: kerensael...@hotmail.com
CC: python-list@python.org


On Wed, Apr 6, 2011 at 9:06 PM, elsa <kerensael...@hotmail.com> wrote:

Hi guys,

I want to try out some pooling of processors, but I'm not sure if it is
possible to do what I want to do. Basically, I want to have a global object
that is updated during the execution of a function, and I want to be able to
run this function several times on parallel processors. The order in which the
function runs doesn't matter, and the value of the object doesn't matter to
the function, but I do want the processors to take turns 'nicely' when
updating the object, so there are no collisions. Here is an extremely
simplified and trivial example of what I have in mind:

from multiprocessing import Pool
import random

myDict={}

def update(value):
    global myDict
    index=random.random()
    # .get() gives new keys a starting value of 0 instead of raising KeyError
    myDict[index]=myDict.get(index,0)+value

total=1000

p=Pool(4)   # create the pool after update() is defined so the workers can find it
p.map(update,range(total))

After, I would also like to be able to use several processors to access the
global object (but not modify it). Again, order doesn't matter:

def getValues(index):
    global myDict
    print myDict[index]

p1=Pool(4)
p1.map(getValues,myDict.keys())

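I also noticed multiprocessing.Manager in the docs -- maybe a managed dict plus
a lock is closer to what I want? Here is another rough, untested sketch (the
locking details are just my guess):

from multiprocessing import Pool, Manager
import random

def update(args):
    myDict, lock = args              # proxy objects, picklable across processes
    index = random.random()
    with lock:                       # take turns 'nicely'
        myDict[index] = myDict.get(index, 0) + 1

if __name__ == '__main__':
    manager = Manager()
    myDict = manager.dict()          # shared dict proxy
    lock = manager.Lock()
    p = Pool(4)
    p.map(update, [(myDict, lock)] * 1000)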


Is there a way to do this?

This should give you a synchronized wrapper around an object in shared memory:

http://docs.python.org/library/multiprocessing.html#multiprocessing.Value
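
Untested, but roughly the idea, using a plain shared counter and Value's
built-in lock (the numbers are arbitrary):

from multiprocessing import Process, Value

def work(counter, n):
    for _ in range(n):
        with counter.get_lock():     # processes take turns updating
            counter.value += 1

if __name__ == '__main__':
    counter = Value('i', 0)          # 'i' = C int, starting at 0
    procs = [Process(target=work, args=(counter, 250)) for _ in range(4)]
    for proc in procs:
        proc.start()
    for proc in procs:
        proc.join()
    print counter.value              # 1000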

