John wrote:
> I want to do something like this:
>
>     for i = 1 in range(0,N):
>         for j = 1 in range(0,N):
>             D[i][j] = calculate(i,j)
>
> I would like to now do this using a fixed number of threads, say 10
> threads.
Why do you want to run this in 10 threads? Do you have 10 CPUs? If you are concerned about CPU time, you should not be using threads (regardless of language), as they are often implemented with the assumption that they stay idle most of the time (e.g. win32 threads and pthreads). In addition, CPython has a global interpreter lock (GIL) that prevents the interpreter from running on several processors in parallel. That means Python threads are a tool for things like non-blocking I/O and keeping a GUI responsive. Since that is what threads are designed for anyway, the GIL matters less than it might seem. IronPython and Jython do not have a GIL.

To speed up computation you should instead run multiple processes and do some sort of IPC. Take a look at MPI (e.g. mpi4py.scipy.org) or 'parallel python'. MPI is the de facto industry standard for CPU-bound problems on systems with multiple processors; whether the memory is shared or distributed does not matter. Contrary to common belief, this approach is more efficient than running multiple threads that share memory and synchronize with mutexes and event objects - even on a system unimpeded by a GIL.

The number of parallel tasks should be equal to the number of available CPU units, not more, because you will get excessive context switches if the number of busy threads or processes exceeds the number of computational units. If you only have two logical CPUs (e.g. one dual-core processor), you should only run two parallel tasks - not ten. If you try to parallelize with additional tasks (e.g. 8 more), you will just waste time on more context switches, more cache misses, etc. But if you are a lucky bastard with access to a 10-way server, sure, run 10 tasks in parallel.
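If you go the MPI route, here is a rough sketch of how the double loop could be split across processes with mpi4py. N and calculate() are stand-ins for whatever you actually have, and the row-cyclic split plus the final gather on rank 0 is just one way to arrange it, not the only one:

    from mpi4py import MPI

    N = 100  # stand-in for the real problem size

    def calculate(i, j):
        # placeholder for the real computation
        return i * j

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # id of this process
    size = comm.Get_size()   # number of processes started by mpiexec

    # each process computes the rows i for which i % size == rank
    local = {}
    for i in range(rank, N, size):
        local[i] = [calculate(i, j) for j in range(N)]

    # collect every process's partial rows on rank 0 and rebuild D there
    parts = comm.gather(local, root=0)
    if rank == 0:
        D = [None] * N
        for part in parts:
            for i, row in part.items():
                D[i] = row

Start it with mpiexec, keeping the process count equal to the number of CPUs you actually have, e.g.

    mpiexec -n 2 python compute.py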