Hi,
That is definitely possible!
To achieve the best performance, split your calculation either into 128
equal parts or into more than 128 parts of any size (then load balancing
will spread the workload evenly). Let us know the results; if you need any
help with parallelization, feel free to request it here:
http://
On Jan 12, 11:52 am, Neal Becker <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote:
> > Has anybody tried to run parallel python applications?
> > It appears that if your application is computation-bound, using the 'thread'
> > or 'threading' modules will not get you any speedup. That is because
> >pyth
John wrote:
> Thanks. Does it matter if I call shell commands (os.system, etc.) in
> calculate?
>
> Thanks,
> --j
The os.system command neglects important changes in the environment
(redirected streams) and would not work with the current version of ppsmp.
However, there is a very simple workaround:
pri
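The original reply is cut off here, so the exact workaround is unknown. A minimal sketch of one plausible approach, capturing a shell command's output explicitly with the standard-library subprocess module instead of letting os.system write to possibly-redirected streams (the command shown is illustrative, not part of ppsmp's API):

```python
import subprocess

def run_shell(cmd):
    # Capture stdout explicitly rather than relying on the inherited
    # (and possibly redirected) standard streams that os.system uses.
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout

print(run_shell("echo hello"))
```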
John wrote:
> I want to do something like this:
>
> for i in range(0, N):
>     for j in range(0, N):
>         D[i][j] = calculate(i, j)
>
> I would like to now do this using a fixed number of threads, say 10
> threads.
> What is the easiest way to do the "parfor" in python?
>
> Thanks in advance for
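One way to sketch the "parfor" above with a fixed pool of 10 workers is the standard-library multiprocessing module (processes rather than threads, which also sidesteps the GIL for CPU-bound work). The `calculate` body here is a stand-in assumption for the real computation:

```python
from multiprocessing import Pool

def calculate(i, j):
    # Stand-in for the real (expensive) computation.
    return i * j

def _cell(args):
    # Pool.map takes a single argument, so unpack the (i, j) pair.
    return calculate(*args)

def parallel_fill(N, workers=10):
    """Fill an N x N matrix D[i][j] = calculate(i, j) with a fixed pool."""
    indices = [(i, j) for i in range(N) for j in range(N)]
    with Pool(processes=workers) as pool:
        flat = pool.map(_cell, indices)
    # Reshape the flat result list back into rows.
    return [flat[i * N:(i + 1) * N] for i in range(N)]

if __name__ == "__main__":
    D = parallel_fill(4)
    print(D)
```

The pool size is decoupled from N, so 10 workers can service any number of (i, j) tasks.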
> Looks interesting, but is there any way to use this for a cluster of
> machines over a network (not smp)?
Networking capabilities will be included in the next release of
Parallel Python software (http://www.parallelpython.com), which is
coming soon.
> Couldn't you just provide similar convenie
>
> Thus there are different levels of parallelization:
>
> 1 file/database based; multiple batch jobs
> 2 Message Passing, IPC, RPC, ...
> 3 Object Sharing
> 4 Sharing of global data space (Threads)
> 5 Local parallelism / Vector computing, MMX, 3DNow,...
>
> There are good reasons for all of thes
sturlamolden wrote:
> [EMAIL PROTECTED] wrote:
>
> > That's right. ppsmp starts multiple interpreters in separate
> > processes and organizes communication between them through IPC.
>
> Thus you are basically reinventing MPI.
>
> http://mpi4py.scipy.org/
> http://en.wikipedia.org/wiki/Message_Pas
> I always thought that if you use multiple processes (e.g. os.fork) then
> Python can take advantage of multiple processors. I think the GIL locks
> one processor only. The problem is that one interpreter can run on
> one processor only. Am I not right? Does your ppsmp module run the same
> interp
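The premise in the question is right: each forked process gets its own interpreter and its own GIL, so CPU-bound children can run on separate processors. A minimal Unix-only sketch of this os.fork pattern, with results sent back to the parent over pipes (the helper names are illustrative):

```python
import os
import pickle

def cpu_bound(n):
    # CPU-bound work: each forked child runs in its own interpreter
    # process, so one process's GIL does not block the others.
    return sum(i * i for i in range(n))

def fork_map(func, args_list):
    """Run func over args_list, one forked child per argument (Unix only)."""
    children = []
    for arg in args_list:
        r, w = os.pipe()
        pid = os.fork()
        if pid == 0:                          # child process
            os.close(r)
            os.write(w, pickle.dumps(func(arg)))
            os.close(w)
            os._exit(0)
        os.close(w)                           # parent keeps the read end
        children.append((pid, r))
    results = []
    for pid, r in children:
        chunks = []
        while True:
            chunk = os.read(r, 65536)
            if not chunk:                     # EOF: child closed its end
                break
            chunks.append(chunk)
        os.close(r)
        os.waitpid(pid, 0)
        results.append(pickle.loads(b"".join(chunks)))
    return results
```

This is essentially the machinery that ppsmp (and later MPI-style tools) wrap up with load balancing and a friendlier API.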
Has anybody tried to run parallel python applications?
It appears that if your application is computation-bound, using the 'thread'
or 'threading' modules will not get you any speedup. That is because the
python interpreter uses the GIL (Global Interpreter Lock) for internal
bookkeeping. The latter allows only one thread to execute Python bytecode
at a time, even on an SMP computer.