In article <[EMAIL PROTECTED]>,
robert <[EMAIL PROTECTED]> writes:
|> 
|> Thus there are different levels of parallelization:
|> 
|> 1 file/database based; multiple batch jobs
|> 2 Message Passing, IPC, RPC, ...
|> 3 Object Sharing 
|> 4 Sharing of global data space (Threads)
|> 5 Local parallelism / Vector computing, MMX, 3DNow,...
|> 
|> There are good reasons for all of these levels.

Well, yes, but to call them "levels" is misleading: they are closer to
alternative communication methods at a comparable level of abstraction.
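To make the distinction concrete, here is a minimal sketch (mine, using
only the Python standard library; the names and toy workload are invented
for illustration) of what level 4 (threads sharing the global data space)
and level 2 (message passing between processes) look like side by side:

    import threading
    import multiprocessing

    def level4_shared(counter, lock, n):
        # Level 4: threads share the global data space; correctness
        # depends on explicit synchronisation (the lock).
        for _ in range(n):
            with lock:
                counter[0] += 1

    def level2_worker(conn, n):
        # Level 2: the worker owns its data and hands back a result
        # by message passing; there is no shared state to protect.
        total = sum(i for i in range(n))
        conn.send(total)
        conn.close()

    if __name__ == "__main__":
        # Threads with shared data.
        counter, lock = [0], threading.Lock()
        t = threading.Thread(target=level4_shared,
                             args=(counter, lock, 100000))
        t.start()
        t.join()
        print("shared counter:", counter[0])

        # Processes with message passing.
        parent, child = multiprocessing.Pipe()
        p = multiprocessing.Process(target=level2_worker,
                                    args=(child, 100000))
        p.start()
        print("message-passed result:", parent.recv())
        p.join()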

|> > This does not mean that MPI is inherently slower than threads,
|> > however, as there is overhead associated with thread synchronization
|> > as well.
|> 
|> Level 2 communication is slower; it is just that, for selected
|> applications, it won't matter much.

That is false.  It used to be true, but that was a long time ago.  The
reasons why what seems to be a more heavyweight mechanism (message
passing) can be faster than an apparently lightweight one (data sharing)
are both subtle and complicated: among them are cache-coherence traffic,
false sharing and lock contention, all of which message passing largely
avoids by keeping each process's working set local.
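For the curious, here is a rough way to see one such effect from Python
itself (an illustrative sketch, not a serious benchmark; the workload and
sizes are arbitrary, and on CPython the GIL is the dominant cost for
CPU-bound threads, though contention and cache-coherence effects produce
the same pattern in other environments):

    import time
    import threading
    import multiprocessing

    def burn(n):
        # CPU-bound toy workload.
        s = 0
        for i in range(n):
            s += i * i
        return s

    def with_threads(n, workers):
        # Shared address space; on CPython these threads serialise
        # on the GIL for CPU-bound work.
        ts = [threading.Thread(target=burn, args=(n,))
              for _ in range(workers)]
        for t in ts:
            t.start()
        for t in ts:
            t.join()

    def with_processes(n, workers):
        # Separate processes; results come back by message passing.
        pool = multiprocessing.Pool(workers)
        pool.map(burn, [n] * workers)
        pool.close()
        pool.join()

    def timed(label, fn):
        t0 = time.time()
        fn()
        print("%-22s %.2fs" % (label, time.time() - t0))

    if __name__ == "__main__":
        n, workers = 2000000, 4
        timed("threads (shared data)", lambda: with_threads(n, workers))
        timed("processes (messages)", lambda: with_processes(n, workers))

On a typical multi-core machine running CPython, the "heavyweight"
message-passing version finishes first, despite the apparent overhead of
moving data between processes.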


Regards,
Nick Maclaren.
