[EMAIL PROTECTED] wrote:
> That's a pity, since when we have to run in parallel, a single
> processor is really not efficient. Using more computers is, I think,
> cheaper than buying a supercomputer in a developing country.

Although CPython has a GIL that prevents multiple Python threads *in the same Python process* from running *inside the Python interpreter* at the same time (I/O is not affected, for example), you can work around this by using multiple processes, each bound to a different CPU, and some form of IPC (Pyro, CORBA, something bespoke, etc.) to communicate between those processes.
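Just to make the shape of that concrete, here is a rough sketch, invented for this post (the ports, file names and crunch() function aren't from any real project), using XML-RPC from the standard library as the IPC layer; Pyro, CORBA or a bespoke protocol would slot in the same way.

    # worker.py -- run one copy per CPU, each listening on its own port.
    from SimpleXMLRPCServer import SimpleXMLRPCServer

    def crunch(n):
        # CPU-bound work: runs under this process's own interpreter,
        # unaffected by the GIL of any other worker process.
        total = 0
        for i in xrange(n):
            total += i * i
        # Keep the result inside XML-RPC's 32-bit integer range.
        return total % 1000003

    server = SimpleXMLRPCServer(("localhost", 9001))  # change port per copy
    server.register_function(crunch)
    server.serve_forever()

    # master.py -- farm the work out to the workers over XML-RPC.
    import xmlrpclib
    workers = [xmlrpclib.ServerProxy("http://localhost:%d" % port)
               for port in (9001, 9002)]
    # As written these calls run one after the other; issue them from
    # master-side threads if you want them to overlap (the network I/O
    # releases the master's GIL).
    results = [w.crunch(10000000) for w in workers]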


This solution is not ideal, because it will probably involve restructuring your app. Also, all of the serialization and deserialization involved in the IPC will slow things down, unless you're using POSH, a shared-memory system that requires System V IPC.

http://poshmodule.sf.net

Alternatively, you could simply use either Jython or IronPython, neither of which has a central interpreter lock (they rely on the JVM/CLR garbage collectors instead), and so both support transparent migration of threads to multiple processors on a multi-CPU system, if the underlying VM supports that.

http://www.jython.org
http://www.ironpython.com

And you shouldn't have to restructure your code, assuming that it is already thread-safe: plain threaded code like the sketch below should run as-is.
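For example, a CPU-bound script using only the standard threading module, made up here for illustration, runs unchanged on CPython, Jython and IronPython, but only the latter two can actually spread the two threads across two processors:

    import threading

    def cpu_bound(n, results, slot):
        # Pure computation: exactly the kind of work the CPython GIL
        # serializes, but which a free-threaded VM need not.
        total = 0
        for i in xrange(n):
            total += i * i
        results[slot] = total

    results = [None, None]
    threads = [threading.Thread(target=cpu_bound, args=(5000000, results, i))
               for i in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print results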

For interest, I thought I'd mention PyLinda, a distributed object system that takes a completely different, higher-level approach to distribution: it creates a "tuple space" in which objects live. The objects can be located and sent messages, but (Py)Linda hides most of the gory details of how objects actually get distributed, and the mechanics of actually connecting to those remote objects.

http://www-users.cs.york.ac.uk/~aw/pylinda/
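To give a flavour of the model, here is a toy, single-process tuple space knocked together for this post; it is emphatically not PyLinda's API, which does this kind of thing across a network:

    import threading

    class TupleSpace:
        # Minimal Linda-style store: out() puts tuples in, in() blocks
        # until a tuple matching a template appears, removes and returns it.
        def __init__(self):
            self._tuples = []
            self._cond = threading.Condition()

        def out(self, tup):
            with self._cond:
                self._tuples.append(tup)
                self._cond.notify_all()

        def _matches(self, tup, template):
            # None in the template acts as a wildcard for that position.
            return len(tup) == len(template) and all(
                t is None or t == v for t, v in zip(template, tup))

        def in_(self, template):
            with self._cond:
                while True:
                    for tup in self._tuples:
                        if self._matches(tup, template):
                            self._tuples.remove(tup)
                            return tup
                    self._cond.wait()

    space = TupleSpace()
    space.out(("job", 42))
    print space.in_(("job", None))   # -> ('job', 42)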

HTH,

--
alan kennedy
------------------------------------------------------
email alan:              http://xhaus.com/contact/alan
