Thank you George. This is what I was trying to find out after your reply
yesterday.
On Tue, Sep 1, 2015 at 1:32 AM, George Bosilca wrote:
The sm collective module has a priority of 0, which guarantees that it
never gets called. If you want to give it a try you should
set coll_sm_priority to any value over 30.
George.
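George's advice translates directly to the command line. A minimal sketch, assuming a placeholder program `./my_app` and an arbitrary priority of 90 (any value over 30 should do, per his note):

```shell
# Raise the sm coll module's priority above 30 so it can actually be
# selected; 90 is an arbitrary illustrative value, ./my_app a placeholder
mpirun --mca coll_sm_priority 90 -np 4 ./my_app
```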
On Tue, Sep 1, 2015 at 1:06 AM, Gilles Gouaillardet wrote:
Saliya,
btl is a point to point thing only.
collectives are implemented by the coll mca
the sm coll mca is optimized for shared memory, but supports intra node
communicators only.
the ml and hierarch coll have some optimizations for intra node
communications.
as far as i know, none of these a
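To check which coll components (sm, ml, hierarch, ...) a given build actually ships, the `ompi_info` pattern Gilles uses later in the thread for the vader btl works for collectives too; a sketch of the same idea:

```shell
# List the collective components compiled into this Open MPI build,
# then dump the sm coll module's parameters (its priority among them)
ompi_info | grep "MCA coll"
ompi_info --all | grep coll_sm
```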
Without going into too much detail, collective communications can be
implemented as a collection of point-to-point messages. Open MPI uses
point-to-point messages for collective communications inside the node
boundaries, so if your intra-node BTL is vader you will benefit from it not
only for point-to-point communications, but also for collectives.
One more question. I found this blog from Jeff [1] on vader and I got the
impression that it's used only for peer-to-peer communications and not for
collectives. Is this true or did I misunderstand?
[1]
http://blogs.cisco.com/performance/the-vader-shared-memory-transport-in-open-mpi-now-featuring
you can try
mpirun --mca btl_base_verbose 100 ...
or you can simply blacklist the btl you do *not* want to use, for example
mpirun --mca btl ^sm
if you want to use vader
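Combining the two suggestions above into one invocation (the process count and program name are placeholders):

```shell
# Exclude the sm btl so vader is left to handle intra-node traffic, and
# turn on verbose btl output to confirm which component was selected
mpirun --mca btl ^sm --mca btl_base_verbose 100 -np 4 ./my_app
```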
you can run
ompi_info --all | grep vader
to check the btl parameters,
of course, reading the source code is the best way to understand.
Thank you Gilles. Is there some documentation on vader btl and how I can
check which (sm or vader) is being used?
On Tue, Sep 1, 2015 at 12:18 AM, Gilles Gouaillardet wrote:
Saliya,
Open MPI uses btl for point to point communication, and automatically
selects the best one per pair.
Typically, the openib or tcp btl is used for inter node communication,
and the sm or vader btl for
intra node.
note the vader btl uses the knem kernel module when available for even
better performance.
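The automatic per-pair selection Gilles describes can also be pinned down by hand. A sketch, with the caveat that the `self` btl (needed for a process to send to itself) is an assumption not mentioned in the thread:

```shell
# Restrict Open MPI to an explicit btl list: tcp between nodes,
# vader within a node, and self (assumed; handles loopback sends)
mpirun --mca btl tcp,vader,self -np 4 ./my_app
```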
Hi,
Just trying to see if there are any optimizations (or options) in OpenMPI
to improve communication between intra node processes. For example do they
use something like shared memory?
Thank you,
Saliya
--
Saliya Ekanayake
Ph.D. Candidate | Research Assistant
School of Informatics and Computing