Without going into too much detail: collective communications can be implemented as a collection of point-to-point messages. Open MPI uses point-to-point messages for collective communications within node boundaries, so if your intra-node BTL is vader, you will benefit from it not only for point-to-point communications but also during collectives.
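To make that concrete, here is a minimal sketch of the idea (mine, for illustration only; it is not how Open MPI's coll components are actually written, which use tuned algorithms such as binomial trees and pipelines). The point is that every MPI_Send/MPI_Recv issued underneath a collective travels over whatever BTL was selected for that pair of processes, so intra-node pairs go through vader:

#include <mpi.h>
#include <stdio.h>

/* Illustrative helper, NOT Open MPI's implementation: a naive linear
 * broadcast decomposed into point-to-point messages. */
static void linear_bcast(void *buf, int count, MPI_Datatype dtype,
                         int root, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    if (rank == root) {
        /* The root sends the buffer to every other rank in turn; each
         * send goes over the BTL chosen for that pair (e.g. vader). */
        for (int peer = 0; peer < size; peer++) {
            if (peer != root)
                MPI_Send(buf, count, dtype, peer, 0, comm);
        }
    } else {
        /* Every non-root rank receives exactly one message from the root. */
        MPI_Recv(buf, count, dtype, root, 0, comm, MPI_STATUS_IGNORE);
    }
}

int main(int argc, char **argv)
{
    int value = 0, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) value = 42;

    linear_bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d has value %d\n", rank, value);

    MPI_Finalize();
    return 0;
}

The real algorithms replace the linear loop with tree- or pipeline-shaped exchanges, but the transport underneath each message is the same.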
George.

On Tue, Sep 1, 2015 at 12:57 AM, Saliya Ekanayake <esal...@gmail.com> wrote:

> One more question. I found this blog post from Jeff [1] on vader, and I
> got the impression that it is used only for point-to-point communications
> and not for collectives. Is this true, or did I misunderstand?
>
> [1] http://blogs.cisco.com/performance/the-vader-shared-memory-transport-in-open-mpi-now-featuring-3-flavors-of-zero-copy
>
> On Tue, Sep 1, 2015 at 12:40 AM, Gilles Gouaillardet <gil...@rist.or.jp> wrote:
>
>> you can try
>> mpirun --mca btl_base_verbose 100 ...
>>
>> or you can simply blacklist the btl you do *not* want to use, for example
>> mpirun --mca btl ^sm
>> if you want to use vader
>>
>> you can run
>> ompi_info --all | grep vader
>> to check the btl parameters
>>
>> of course, reading the source code is the best way to understand what
>> the vader btl can do and how
>>
>> Cheers,
>>
>> Gilles
>>
>> On 9/1/2015 1:28 PM, Saliya Ekanayake wrote:
>>
>> Thank you, Gilles. Is there some documentation on the vader btl, and how
>> can I check which one (sm or vader) is being used?
>>
>> On Tue, Sep 1, 2015 at 12:18 AM, Gilles Gouaillardet <gil...@rist.or.jp> wrote:
>>
>>> Saliya,
>>>
>>> Open MPI uses BTLs for point-to-point communication, and automatically
>>> selects the best one per pair. Typically, the openib or tcp btl is used
>>> for inter-node communication, and the sm or vader btl for intra-node.
>>> Note that the vader btl uses the knem kernel module, when available,
>>> for even more optimized configurations.
>>>
>>> Cheers,
>>>
>>> Gilles
>>>
>>> On 9/1/2015 5:59 AM, Saliya Ekanayake wrote:
>>>
>>> Hi,
>>>
>>> Just trying to see if there are any optimizations (or options) in
>>> Open MPI to improve communication between intra-node processes. For
>>> example, do they use something like shared memory?
>>>
>>> Thank you,
>>> Saliya
>>>
>>> --
>>> Saliya Ekanayake
>>> Ph.D. Candidate | Research Assistant
>>> School of Informatics and Computing | Digital Science Center
>>> Indiana University, Bloomington
>>> Cell 812-391-4914
>>> http://saliya.org
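Following up on Gilles' suggestions quoted above, here is a minimal sketch that isolates the intra-node ranks with MPI_Comm_split_type (an MPI-3 feature, so it assumes a reasonably recent Open MPI) and runs a collective over them; every message underneath that collective stays on the intra-node BTL:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int world_rank, node_rank, node_size, sum = 0;
    MPI_Comm node_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Group together every rank that can share memory with this one,
     * i.e. one communicator per node (MPI-3). */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);

    /* This collective never crosses a node boundary, so all of its
     * underlying point-to-point traffic uses the intra-node BTL. */
    MPI_Allreduce(&node_rank, &sum, 1, MPI_INT, MPI_SUM, node_comm);

    printf("world rank %d: %d ranks on this node, local-rank sum %d\n",
           world_rank, node_size, sum);

    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}

Compile with mpicc and launch it with the flags Gilles mentioned, e.g.

mpirun --mca btl ^sm --mca btl_base_verbose 100 -n 4 ./intra_node

(intra_node is just a placeholder for whatever you name the binary); the verbose output then shows which BTL was selected for each pair of processes.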