On Dec 12, 2011, at 8:42 AM, amjad ali wrote:

> Thanking you all very much for the reply.
>  
> I would request to have some reference about what Tim Prince & Andreas have 
> said.
>  
> Tim said that OpenMPI has had effective shared memory message passing. Is 
> that anything to do with the --enable-MPI-threads switch while installing OpenMPI?
>  
> regards,
> AA 
> 

Hi Amjad

I think this is just the 'sm' [shared memory] 'btl' [byte transfer layer] of 
OpenMPI, which uses shared memory inside a node to pass messages [unless you 
turn it off].
If I remember right, the OpenMPI sm btl is built by default on an SMP computer 
[like yours], and used by default whenever two processes live on the same 
computer/node.
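
If it helps, here is a minimal sketch [my own toy example, not anything from 
your code] of two ranks exchanging one message; when both ranks land on the 
same node, Open MPI will normally carry it over the sm btl. The mpirun lines 
in the comment show how to select or exclude that btl explicitly through the 
'btl' MCA parameter [component names can vary between Open MPI versions].

/* Toy example: one message from rank 0 to rank 1.
 * On a single node this normally goes through the sm BTL.
 * Run-time control via MCA parameters, e.g.:
 *   mpirun -np 2 --mca btl self,sm ./a.out   (allow only self + shared memory)
 *   mpirun -np 2 --mca btl ^sm ./a.out       (exclude the shared-memory BTL)
 * 'ompi_info | grep btl' lists the BTL components your build provides.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, msg;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        msg = 42;
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", msg);
    }

    MPI_Finalize();
    return 0;
}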

As a practical matter, if you plan to run your program on larger problems, 
say ones that do not fit in the memory of a single node, it is wise to use 
MPI to begin with, because your programming effort is preserved: 
you can pretty much use, for the large problem on multiple nodes, 
the very same code that you developed for the small problem on a single 
computer.
You cannot do this with OpenMP, which requires shared memory to start with.

Given the many answers from the OpenMPI pros so far, it is clear that you 
provoked an interesting discussion!

I wonder if it is fair at all to make comparisons between MPI and OpenMP.
They are quite different programming models, and assume different hardware 
and memory layouts.
The techniques used to design algorithms in each case are quite different as 
well.
Both have pros and cons, but I can hardly imagine a fair comparison between 
them in real-world problems.
For instance, take a PDE to solve, say the wave equation, in 1D, 2D, or 3D.
The typical approach in OpenMP is to parallelize the inner loop[s].
The typical approach in MPI is to use domain decomposition.
The typical approach in hybrid mode [MPI + OpenMP] is to do both.
Could somebody tell me how these things can be fairly compared to each other, 
if at all?
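
Just to make the contrast concrete, below is a rough hybrid sketch for the 1D 
wave equation [my own simplified example, with made-up array names and 
coefficients, not production code]: MPI handles the domain decomposition and 
the ghost-cell exchange between neighboring ranks, and OpenMP parallelizes the 
inner loop over each rank's local points. Drop the pragma and you have the 
plain MPI approach; run a single rank and keep the pragma and you have the 
plain OpenMP approach.

/* Hybrid MPI+OpenMP sketch: leapfrog update of the 1D wave equation.
 * Each rank owns NLOC interior points plus two ghost cells.
 * Compile with something like: mpicc -fopenmp wave1d.c -o wave1d
 */
#include <mpi.h>
#include <stdlib.h>

#define NLOC 1000                 /* interior points per rank (made up)     */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int left  = (rank == 0)          ? MPI_PROC_NULL : rank - 1;
    int right = (rank == nprocs - 1) ? MPI_PROC_NULL : rank + 1;

    /* index 0 and NLOC+1 are ghost cells filled from the neighbors */
    double *uold = calloc(NLOC + 2, sizeof(double));
    double *u    = calloc(NLOC + 2, sizeof(double));
    double *unew = calloc(NLOC + 2, sizeof(double));
    double c2 = 0.25;             /* (c*dt/dx)^2, an arbitrary stable value */

    for (int step = 0; step < 1000; step++) {

        /* MPI level: domain decomposition -> exchange ghost cells */
        MPI_Sendrecv(&u[NLOC],   1, MPI_DOUBLE, right, 0,
                     &u[0],      1, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[1],      1, MPI_DOUBLE, left,  1,
                     &u[NLOC+1], 1, MPI_DOUBLE, right, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* OpenMP level: parallelize the inner loop over local points */
        #pragma omp parallel for
        for (int i = 1; i <= NLOC; i++)
            unew[i] = 2.0*u[i] - uold[i]
                      + c2 * (u[i+1] - 2.0*u[i] + u[i-1]);

        double *tmp = uold; uold = u; u = unew; unew = tmp;
    }

    free(uold); free(u); free(unew);
    MPI_Finalize();
    return 0;
}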

Thank you,
Gus Correa



