robert wrote:
> I'd like to use multiple CPU cores for selected time-consuming Python 
> computations (incl. numpy/scipy) in a frictionless manner.
>
> Interprocess communication is tedious and out of the question, so I thought 
> about simply using more Python interpreter instances (Py_NewInterpreter), 
> each with its own GIL, in the same process.
> I expect to be able to directly push Python object trees between the 2 
> (or more) interpreters by doing some careful locking.

I don't want to discourage you, but what about reference counting and
memory management for shared objects? That doesn't seem like fun to me.
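For reference, here is a minimal, untested sketch of what the
Py_NewInterpreter route looks like at the C-API level, assuming stock
CPython embedded in a plain C program. As far as I know, all
sub-interpreters in one process still share the single GIL, and objects
are not meant to be passed between interpreters directly, so this alone
does not give true parallelism:

  /* sketch: create, use and tear down a sub-interpreter (CPython C API) */
  #include <Python.h>

  int main(void)
  {
      PyThreadState *main_state, *sub_state;

      Py_Initialize();                  /* main interpreter; GIL is held */
      main_state = PyThreadState_Get();

      sub_state = Py_NewInterpreter();  /* new interpreter becomes current */
      if (sub_state == NULL)
          return 1;

      /* runs in the sub-interpreter's own __main__ module */
      PyRun_SimpleString("print('hello from a sub-interpreter')");

      Py_EndInterpreter(sub_state);     /* must be current when destroyed */
      PyThreadState_Swap(main_state);   /* switch back to the main interpreter */

      Py_Finalize();
      return 0;
  }

That still leaves the problem of moving data between the interpreters,
which is exactly where the reference counting question above bites.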


Take a look at IPython1 and its parallel computing capabilities [1, 2].
It is designed to run on multiple systems or on a single system with
multiple CPUs/cores. Its worker interpreters (engines) are loosely
coupled and can use several MPI modules, so there is no low-level
messing with the GIL. Although it is a work in progress, it already
looks quite awesome.

[1] http://ipython.scipy.org/moin/Parallel_Computing
[2] http://ipython.scipy.org/moin/Parallel_Computing/Tutorial

fw

