Filip Wasilewski wrote:
> robert wrote:
>> I'd like to use multiple CPU cores for selected time consuming Python 
>> computations (incl. numpy/scipy) in a frictionless manner.
>>
>> Interprocess communication is tedious and out of question, so I thought 
>> about simply using more Python interpreter instances (Py_NewInterpreter) 
>> with an extra GIL in the same process.
>> I expect to be able to directly push around Python Object-Trees between the 
>> 2 (or more) interpreters by doing some careful locking.
> 
> I don't want to discourage you but what about reference counting/memory
> management for shared objects? Doesn't seem fun for me.

In combination with some simple locking (which is necessary anyway) I don't 
see a problem with ref-counting.

As long as at least one interpreter branch holds a pointer to the (root) 
object in question, its ref-count stays > 0.
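For illustration, the effect can be observed inside a single CPython 
interpreter with sys.getrefcount (a minimal sketch; the two-interpreter case 
is assumed to behave analogously, with each interpreter playing the role of 
one "branch"):

```python
import sys

class Obj:
    pass

obj = Obj()              # one reference held by this "interpreter branch"
alias = obj              # a second branch also holds a pointer

# sys.getrefcount reports one extra reference for its own argument,
# so an object reachable through two names shows at least 3 here
two_holders = sys.getrefcount(obj)

del alias                # one branch drops its pointer
one_holder = sys.getrefcount(obj)

print(two_holders, one_holder)  # the count drops but stays > 0, obj stays alive
```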


---- 
Side question: do concurrent INC/DEC machine instructions execute atomically 
on multi-cores, as they do with threads on a single core?

Example:

obj=Obj()

In a read-only phase (e.g. after the computations), without locking, two 
interpreters would for example both access obj (changing the refcount, but no 
data).
The CPU would then execute two [INC/DEC @refcount] instructions on different 
cores concurrently. Is it guaranteed that the counts sum up correctly?
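The same concern is visible at the Python level (a sketch only -- as far as I 
know, CPython's actual Py_INCREF is a plain C increment protected by the GIL, 
not a lock-prefixed instruction): `counter += 1` is a read-modify-write, so 
without a lock concurrent increments can be lost, while a lock guarantees the 
sum comes out right:

```python
import threading

N_THREADS = 4
N_ITERS = 50_000

counter = 0
lock = threading.Lock()

def unsafe_inc():
    # read-modify-write with no synchronization: between the load and the
    # store another thread may slip in, so increments can be lost
    global counter
    for _ in range(N_ITERS):
        counter += 1

def safe_inc():
    global counter
    for _ in range(N_ITERS):
        with lock:           # the lock makes the whole increment atomic
            counter += 1

def run(worker):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(N_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

unsafe_total = run(unsafe_inc)   # may fall short of 200000 -- updates lost
safe_total = run(safe_inc)       # exactly 200000
print(unsafe_total, safe_total)
```

On the machine level the analogous fix is a hardware-atomic increment (e.g. a 
LOCK-prefixed instruction on x86); a plain INC on a shared address is not 
guaranteed to be atomic across cores.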


> Take a look at IPython1 and its parallel computing capabilities [1,
> 2]. It is designed to run on multiple systems or on a single system with
> multiple CPUs/multi-core. Its worker interpreters (engines) are loosely
> coupled and can utilize several MPI modules, so there is no low-level
> messing with the GIL. Although it is a work in progress, it already looks
> quite awesome.
> 
> [1] http://ipython.scipy.org/moin/Parallel_Computing
> [2] http://ipython.scipy.org/moin/Parallel_Computing/Tutorial
 

There are several MPI methods around. (This IPython approach seems to operate 
only at the level of the interactive terminal connections.)

Yet with its burden of expensive data synchronization, that is far away from 
my requirements. That's for massively parallel computing in scientific areas.

I already do selected things with interprocess shared-memory techniques. 
That's moderately efficient.
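For reference, the stdlib gained direct support for this in Python 3.8 with 
multiprocessing.shared_memory; a minimal single-process sketch (in real use 
the second handle would be opened by another process, attaching via the 
segment's name):

```python
from multiprocessing import shared_memory

# "producer" side: create a named shared block and write into it
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# "consumer" side: attach to the same block by name -- normally done in
# another process; both views alias the same memory, nothing is copied
other = shared_memory.SharedMemory(name=shm.name)
received = bytes(other.buf[:5])
print(received)  # b'hello'

other.close()
shm.close()
shm.unlink()  # release the segment once everyone is done
```

Note that only raw bytes are shared this way; full Python objects still have 
to be serialized into and out of the buffer, which is exactly the overhead 
being discussed.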

Multiple interpreters inside one process seem most promising for seamless 
multi-core programming, as all Python objects would share the same malloc 
space - that's the key requirement for getting the main effect.
As soon as we have object pickling in between, we are well away from this very 
discussion.
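The distinction is easy to demonstrate: pickling always produces a serialized 
copy, so the receiver ends up with a brand-new object tree rather than a 
pointer into the same malloc space (a minimal sketch):

```python
import pickle

tree = {"data": list(range(1000)), "meta": {"name": "result"}}

payload = pickle.dumps(tree)     # serialize: every node gets copied
clone = pickle.loads(payload)    # deserialize: an entirely new object tree

print(clone == tree)   # True  -- equal contents
print(clone is tree)   # False -- but distinct objects in memory
print(len(payload))    # serialization cost grows with the tree
```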


robert
-- 
http://mail.python.org/mailman/listinfo/python-list
