On Sun, Mar 13, 2016 at 2:02 PM, Matthew Larkin <lar...@yahoo.com> wrote:

> Hello,
>
> My understanding is that Open MPI can utilize shared and/or distributed
> memory architectures (parallel programming models). OpenMP is solely for
> shared memory systems.
>
>
The MPI-3 standard provides both explicit shared-memory and distributed-memory
semantics.  See MPI_Win_allocate_shared for the shared-memory feature.
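
For example, here is a rough, untested sketch of the MPI-3 shared-memory
feature: ranks on the same node allocate one window with
MPI_Win_allocate_shared and then read each other's segments directly with
loads and stores.  Error checking is omitted and the cyclic neighbor read is
just for illustration.

/* Rough sketch (untested): each rank on a node contributes one int to a
 * shared window, then reads its neighbor's value through a plain load. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Group the ranks that share a node into one communicator. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank, node_size;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);

    /* Allocate one int per rank in a window shared by the whole node. */
    int *my_ptr;
    MPI_Win win;
    MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                            node_comm, &my_ptr, &win);
    *my_ptr = node_rank;

    /* Synchronize; in the (common) unified memory model this also makes
     * the store above visible to the other ranks. */
    MPI_Win_fence(0, win);

    /* Get a direct pointer to the next rank's segment and read it. */
    int next = (node_rank + 1) % node_size;
    MPI_Aint seg_size;
    int disp_unit;
    int *next_ptr;
    MPI_Win_shared_query(win, next, &seg_size, &disp_unit, &next_ptr);
    printf("node rank %d sees %d from node rank %d\n",
           node_rank, *next_ptr, next);

    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}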

In addition to these explicit semantics, all reasonable MPI implementations
exploit shared memory internally, which is why Send-Recv usually achieves
higher bandwidth within a node than between nodes.
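
If you want to see this yourself, a crude ping-pong between ranks 0 and 1,
run once with both ranks on the same node and once with them on different
nodes, will show the gap.  This is an untested sketch, not a careful
benchmark; the message size and iteration count are arbitrary.

/* Rough sketch (untested): ping-pong bandwidth between ranks 0 and 1.
 * Run with at least 2 ranks; extra ranks just idle. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NBYTES (1<<22)   /* 4 MiB message */
#define ITERS  100

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf = malloc(NBYTES);
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("bandwidth ~ %.1f MB/s\n",
               2.0 * ITERS * NBYTES / (t1 - t0) / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}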


> I believe Open MPI incorporates OpenMP from some of the other archives I
> glanced at.
>
>
Some implementations use OS threads (e.g., POSIX threads) internally, but not
for the type of concurrency that OpenMP provides.

OpenMP is usually a bad choice inside an MPI library because it generally does
not compose well with other threading models.


> Is this a true statement? If so, is there any need to create a hybrid
> model that uses both OpenMP and Open MPI?
>
>
Various people, including me, have argued that MPI+OpenMP hybrid programming
is unnecessary and can even be harmful:
http://www.orau.gov/hpcor2015/whitepapers/Exascale_Computing_without_Threads-Barry_Smith.pdf
http://www.cs.utexas.edu/users/flame/BLISRetreat2014/slides/hammond-blis-2014.pdf
http://scisoftdays.org/pdf/2016_slides/hammond.pdf

However, this does not mean that flat MPI is close to optimal.  The claim here
is only that MPI+MPI (MPI across nodes plus MPI-3 shared memory within a node)
is more effective than MPI+OpenMP when the programmer devotes equivalent
effort to both (and handles SIMD via some mechanism).
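
For concreteness, the hybrid model being asked about typically looks something
like the untested sketch below: MPI_Init_thread requests MPI_THREAD_FUNNELED
so that only the master thread calls MPI, OpenMP threads share memory within
each rank, and MPI handles communication between ranks.  The partial-sum loop
is just a placeholder workload.

/* Rough sketch (untested) of MPI+OpenMP hybrid style: compile with an
 * MPI compiler wrapper and OpenMP enabled (e.g. mpicc -fopenmp). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "MPI_THREAD_FUNNELED not supported\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Threads share the rank's memory via OpenMP... */
    double local_sum = 0.0;
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < 1000000; i++)
        local_sum += 1.0 / (i + 1.0);

    /* ...while MPI communicates between ranks, funneled through the
     * master thread outside the parallel region. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", global_sum);

    MPI_Finalize();
    return 0;
}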

Best,

Jeff


> Thanks!
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/03/28696.php
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
