There is a trick that I use when data transfer is the performance killer: save
your big array first (for instance in an .hdf5 file) and send the workers the
indices needed to retrieve the portion of the array you are interested in,
instead of the actual subarray.
Anyway there are cases where m
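A minimal sketch of the save-to-disk trick described above, assuming h5py is
installed; the file name, dataset name, and the per-chunk reduction are all
hypothetical stand-ins:

import numpy as np
import h5py
from multiprocessing import Pool

def worker(bounds):
    # Each worker opens the file itself and reads just its slice, so the
    # parent never has to pickle the big array into the IPC pipe.
    lo, hi = bounds
    with h5py.File("big.h5", "r") as fh:
        chunk = fh["data"][lo:hi]
    return chunk.sum()  # stand-in for the real per-chunk work

if __name__ == "__main__":
    big = np.random.rand(1_000_000, 3)
    with h5py.File("big.h5", "w") as fh:
        fh.create_dataset("data", data=big)
    bounds = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    with Pool() as pool:
        print(pool.map(worker, bounds))  # only small index tuples are sent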
Yes, it is a simplification, and I am using numpy at the lower layers. You
correctly observe that it's a simple operation, but it's not a shift; it's
actually multidimensional vector algebra in numpy. So the `-` is more
conceptual and takes the place of hundreds of subtractions. But the example
does dem
Correct me if I'm wrong, but at a high level you appear to basically
just have a mapping of strings to values, and you are then shifting all
of those values by a fixed constant (in this case, `z = 5`). Why are you
using a dict at all? It would be better to use something like a numpy
array or a series
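A sketch of what that suggestion might look like (the toy dict is a
hypothetical stand-in): stack the values into one ndarray so the shift becomes
a single vectorized operation instead of a Python-level loop over keys.

import numpy as np

d = {"a": np.arange(3), "b": np.arange(3, 6)}  # toy stand-in for the real dict
keys = list(d)
values = np.stack([d[k] for k in keys])        # shape (n_keys, 3)

z = 5
shifted = values - z                           # one broadcasted subtraction

result = dict(zip(keys, shifted))              # rebuild the mapping if needed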
I refactored the map call to break dict_keys into cpu_count() chunks (so each
f() call gets to run continuously over n/cpu_count() items), with virtually
the same results. Pool.map is much slower (4x) than regular map, and I don't
know why.
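For reference, a sketch of that chunked refactor (dict_words and the constant
are hypothetical stand-ins): each task carries a whole chunk of keys, so the
per-item pickling and IPC overhead is amortized, though for work this cheap
the serial map can still win.

import numpy as np
from multiprocessing import Pool, cpu_count

dict_words = {str(i): np.arange(3) + i for i in range(16_000)}  # toy data

def f_chunk(keys):
    # One task per chunk: the loop runs inside the worker, not in the parent.
    return [dict_words[k] - 5 for k in keys]

if __name__ == "__main__":
    keys = list(dict_words)
    n = cpu_count()
    chunks = [keys[i::n] for i in range(n)]  # n roughly equal chunks
    with Pool(n) as pool:
        parts = pool.map(f_chunk, chunks)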
On Wed, Oct 18, 2017 at 10:21 AM, Jason wrote:
> On Wednesday, October 18, 2017 at 12:14:30 PM UTC-4, Ian wrote:
>> On Wed, Oct 18, 2017 at 9:46 AM, Jason wrote:
>> > #When I change line19 to True to use the multiprocessing stuff it all
>> > slows down.
>> >
>> > from multiprocessing import Process, Manager, Pool, cpu_count
On Wed, Oct 18, 2017 at 10:13 AM, Ian Kelly wrote:
> On Wed, Oct 18, 2017 at 9:46 AM, Jason wrote:
>> #When I change line19 to True to use the multiprocessing stuff it all slows
>> down.
>>
>> from multiprocessing import Process, Manager, Pool, cpu_count
>> from timeit import default_timer as timer
On Wednesday, October 18, 2017 at 12:14:30 PM UTC-4, Ian wrote:
> On Wed, Oct 18, 2017 at 9:46 AM, Jason wrote:
> > #When I change line19 to True to use the multiprocessing stuff it all slows
> > down.
> >
> > from multiprocessing import Process, Manager, Pool, cpu_count
> > from timeit import default_timer as timer
On Wed, Oct 18, 2017 at 9:46 AM, Jason wrote:
> #When I change line19 to True to use the multiprocessing stuff it all slows
> down.
>
> from multiprocessing import Process, Manager, Pool, cpu_count
> from timeit import default_timer as timer
>
> def f(a,b):
>     return dict_words[a]-b
# When I change line19 to True to use the multiprocessing stuff it all
# slows down.
from multiprocessing import Process, Manager, Pool, cpu_count
from timeit import default_timer as timer

def f(a, b):
    return dict_words[a] - b

def f_unpack(args):
    return f(*args)

def init():
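Incidentally, on Python 3 the f_unpack shim isn't needed: Pool.starmap unpacks
each argument tuple itself. A minimal illustration:

from multiprocessing import Pool

def f(a, b):
    return a - b

if __name__ == "__main__":
    with Pool() as pool:
        # starmap calls f(10, 1), f(20, 2), ... without a wrapper function
        print(pool.starmap(f, [(10, 1), (20, 2)]))  # [9, 18]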
On 10/18/2017 05:10 PM, Jason wrote:
> I've read the docs several times, but I still have questions.
> I've even used multiprocessing before, but not map() from it.
>
> I am not sure if map() will let me use a common object (via a manager) and if
> so, how to set that up.
>
As I said earlier, y
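One common pattern for this (a sketch, not necessarily what was being
suggested here): if the shared dict is read-only, hand it to each worker once
through the Pool initializer instead of a Manager proxy, since a Manager
round-trips to a separate process on every lookup.

import numpy as np
from multiprocessing import Pool

_d = None  # per-worker global, filled in once by the initializer

def _init(shared):
    global _d
    _d = shared

def f(key):
    return _d[key] - 5

if __name__ == "__main__":
    d = {str(i): np.arange(3) + i for i in range(1000)}
    with Pool(initializer=_init, initargs=(d,)) as pool:
        out = pool.map(f, list(d))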
I've read the docs several times, but I still have questions.
I've even used multiprocessing before, but not map() from it.
I am not sure if map() will let me use a common object (via a manager) and if
so, how to set that up.
On 10/17/2017 10:52 AM, Jason wrote:
> I've got a problem that I thought would scale well across cores.

What OS?

> def f(t):
>     return t[0] - d[t[1]]
>
> d = {k: np.array(k) for k in entries_16k}
> e = np.array()
> pool.map(f, [(e, k) for k in d])

*Every* multiprocessing example in the doc intentiona
Could you post a full code snippet? If the lists of 16k numpy arrays are
fixed (say you read them from a file), you could just generate random
values that could be fed into the code as your list would.
It's hard to say how things could be sped up without a bit more specificity.
Cheers,
Thomas
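Along the lines Thomas suggests, a self-contained harness with random stand-in
data (shapes guessed from the thread: ~16k arrays of 32 3D points) would let
anyone reproduce the timing comparison:

import numpy as np
from timeit import default_timer as timer
from multiprocessing import Pool

d = {i: np.random.rand(32, 3) for i in range(16_000)}  # random stand-ins
e = np.random.rand(32, 3)

def f(k):
    return e - d[k]

if __name__ == "__main__":
    t0 = timer()
    serial = list(map(f, d))
    t1 = timer()
    with Pool() as pool:
        parallel = pool.map(f, list(d))
    t2 = timer()
    print("map: %.3fs  pool.map: %.3fs" % (t1 - t0, t2 - t1))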
I've got a problem that I thought would scale well across cores.

def f(t):
    return t[0] - d[t[1]]

d = {k: np.array(k) for k in entries_16k}
e = np.array()
pool.map(f, [(e, k) for k in d])

At the heart of it is a list of ~16k numpy arrays (32 3D points) which are
stored in a single dict. Using