On 10/17/2017 10:52 AM, Jason wrote:
> I've got a problem that I thought would scale well across cores.
What OS?
> def f(t):
>     return t[0] - d[t[1]]
>
> d = {k: np.array(k) for k in entries_16k}
> e = np.array()
> pool.map(f, [(e, k) for k in d])
*Every* multiprocessing example in the doc intentionally protects
multiprocessing calls with
if __name__ == '__main__':
"Safe importing of main module
Make sure that the main module can be safely imported by a new
Python interpreter without causing unintended side effects (such as
starting a new process)."
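
For reference, the guarded pattern (paraphrasing the first Pool example
in the multiprocessing docs) looks like this; the function and inputs
are just illustrative:

from multiprocessing import Pool

def f(x):
    return x * x

if __name__ == '__main__':
    # The pool is created, used, and closed only in the main process;
    # workers can re-import this module without starting new processes.
    with Pool(5) as p:
        print(p.map(f, [1, 2, 3]))
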
> At the heart of it is a list of ~16k numpy arrays (32 3D points each),
> stored in a single dict. Using pool.map() I pass the single item of 32
> 3D points to be evaluated against the 16k entries. In theory this would
> give a speedup proportional to the number of physical cores, but I see
> all 4 cores maxed out while the run takes about the same time as a
> regular map.
>
> How can I use pool.map better?
Try following the doc and see what happens.
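
Concretely, the posted snippet with the guard added might look like the
sketch below; entries_16k and e are stand-ins here, since their real
values weren't shown:

import numpy as np
from multiprocessing import Pool

# Stand-in for the ~16k entries from the original post.
entries_16k = [tuple(float(i + j) for j in range(3)) for i in range(16000)]

# Built at module level, so each spawned worker rebuilds it on import.
d = {k: np.array(k) for k in entries_16k}

def f(t):
    return t[0] - d[t[1]]

if __name__ == '__main__':
    e = np.zeros(3)  # stand-in for the query array of points
    with Pool() as pool:
        results = pool.map(f, [(e, k) for k in d])

Without the guard, the spawn start method (the default on Windows)
re-imports the main module in each worker and tries to start new pools
recursively, which is one reason the OS matters here.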
--
Terry Jan Reedy