Two comments interwoven below...

Cristobal Navarro wrote:

I was not aware that Open MPI internally uses shared memory when two
processes reside on the same node,
which is perfect.

The way OMP uses shared memory and the way Open MPI (or most any other MPI implementation) uses shared memory are very different. With OMP, each "phase" of computation can use a different rule for which thread computes which grid values, and any thread can read any data values it needs. In contrast, with MPI, a process must hold locally all the data it needs to compute. So you don't want to change "ownership" of data very often, and shared values must be communicated explicitly. The fact that this data sharing is implemented with shared memory is almost irrelevant; that's a detail mostly hidden from the application programmer.

Again: how OMP uses "shared memory" and how MPI uses it differ radically. From an application programmer's point of view, there really is no similarity.

On Thu, Jul 22, 2010 at 7:11 PM, Gus Correa <g...@ldeo.columbia.edu> wrote:
Cristobal Navarro wrote:
I've always wondered if it's a
good idea to model a solution in the following way, using both OpenMP
and Open MPI.
2) Most modern MPI implementations (and Open MPI is an example) use shared-memory
mechanisms to communicate between processes that reside
on a single physical node/computer;

By contrast, MPI requires more effort to program, but it takes advantage
of shared memory and networked environments
(and perhaps extended grids too).

Because MPI forces a strict decomposition of the data, it can be more difficult from an application programmer's point of view. It can also force more data copying, even if that copying is made relatively fast by using "shared memory." On the other hand, an MPI program can, for the same reason, have better data locality and be more efficient; that kind of program reorganization can also benefit OMP codes.

Mostly, "it depends" on your particular application.
