For two-sided communication, Open MPI uses CMA, XPMEM, or KNEM for single-copy shared-memory transfers when available. Otherwise it makes two copies (into and out of a shared-memory buffer).
-Nathan

On Mon, Apr 11, 2016 at 09:02:38AM -0700, Jeff Hammond wrote:
> MPI-3 shared memory gives you direct access, meaning potentially zero
> copies if you, e.g., just read shared state. Optimizing intranode MPI
> communication only reduces copies. Since MPI communication semantics
> require one copy, you can't do better in RMA. In Send-Recv, I guess you
> can do only one copy with a CMA implementation, else probably two copies
> (to and from shared memory). So there is definitely an advantage to MPI
> shared memory.
>
> Jeff
>
> On Monday, April 11, 2016, Tom Rosmond <rosm...@reachone.com> wrote:
> > Hello,
> >
> > I have been looking into the MPI-3 extensions that added ways to do
> > direct memory copying on multi-core 'nodes' that share memory.
> > Architectures constructed from these nodes are universal now, so
> > improved ways to exploit them are certainly needed. However, it is my
> > understanding that Open MPI and other widely used MPI implementations,
> > e.g. Intel MPI and MPICH, use hardware locality to identify shared
> > memory regions and do direct memory copies between processes in these
> > cases, rather than network communication. If this is the case, are
> > there any additional advantages from explicit programming of memory
> > copying using MPI-3 shared memory features?
> >
> > T. Rosmond
> >
> > _______________________________________________
> > users mailing list
> > us...@open-mpi.org
> > Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> > Link to this post:
> > http://www.open-mpi.org/community/lists/users/2016/04/28915.php
>
> --
> Jeff Hammond
> jeff.scie...@gmail.com
> http://jeffhammond.github.io/
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/04/28916.php
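To make the zero-copy point concrete, here is a minimal sketch of the MPI-3 shared-memory feature under discussion: each rank on a node contributes to a window created with MPI_Win_allocate_shared, and a neighbor's data is then read with a plain load through the pointer returned by MPI_Win_shared_query, with no send/receive and no copy by the MPI library. This is an illustrative example, not code from the thread; it must be compiled with an MPI wrapper (e.g. mpicc) and launched with mpirun.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Split off a communicator containing only ranks that can share memory. */
    MPI_Comm node;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node);

    int rank, size;
    MPI_Comm_rank(node, &rank);
    MPI_Comm_size(node, &size);

    /* Each rank contributes one double to a contiguous shared window. */
    double *mine;
    MPI_Win win;
    MPI_Win_allocate_shared(sizeof(double), sizeof(double),
                            MPI_INFO_NULL, node, &mine, &win);
    *mine = (double)rank;

    /* Synchronize so every rank's store is visible before anyone reads. */
    MPI_Win_fence(0, win);

    /* Query the base address of the next rank's segment and read it
     * directly -- an ordinary load, zero MPI-mediated copies. */
    MPI_Aint seg_size;
    int disp_unit;
    double *theirs;
    MPI_Win_shared_query(win, (rank + 1) % size, &seg_size, &disp_unit,
                         &theirs);
    printf("rank %d sees neighbor value %g\n", rank, *theirs);

    MPI_Win_free(&win);
    MPI_Comm_free(&node);
    MPI_Finalize();
    return 0;
}
```

This is exactly the contrast Jeff draws: an intranode MPI_Send/MPI_Recv of the same double must move the data at least once (CMA) or twice (through a shared bounce buffer), whereas the load above moves nothing.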