Ludovic,

in order to figure out which interconnect is used, you can run

mpirun --mca pml_base_verbose 10 --mca mtl_base_verbose 10 --mca
btl_base_verbose 10 ...

the output can be quite verbose, so here are a few tips on how to
narrow it down step by step

first, run mpirun --mca pml_base_verbose 10 ...
in order to figure out which pml component is used. there are
typically 3 components (in decreasing order of default priority)
- pml/ucx: UCX will be used for all point-to-point communications
- pml/cm: one mtl component will be used for all point-to-point communications
- pml/ob1: one or more btl components will be used for point-to-point communications
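to see which of these components were actually built into your Open MPI
installation, ompi_info can list them (a quick sketch; the grep pattern
assumes ompi_info's usual parsable output format, where component lines
start with "mca:<framework>:<component>:"):

```shell
# List the pml, mtl and btl components available in this Open MPI build.
# Parsable output has lines of the form "mca:pml:ob1:version:..." etc.
ompi_info --parsable | grep -E '^mca:(pml|mtl|btl):' | cut -d: -f1-3 | sort -u
```

a component that does not show up here (e.g. mca:mtl:psm2) cannot be
selected at run time, no matter which mpirun options you pass.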

PSM and PSM2 are generally faster with mtl/psm and mtl/psm2, and those
mtl components are only used by pml/cm.
if pml/ucx is selected, you can either blacklist it or give it a lower
priority so pml/cm is picked
mpirun --mca pml ^ucx ...
mpirun --mca pml_ucx_priority 1 ...
or force pml/cm
mpirun --mca pml cm ...
note there might be an option to tell UCX it should not try to use
PSM/PSM2, and in that case, pml/ucx would not be
selected on an InfiniPath/Omni-Path network.
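on that note: UCX picks its transports via the UCX_TLS environment
variable, so one possibility (a sketch only; the exact transport names
depend on your UCX build, check ucx_info -d) is to restrict UCX to the
transports you want, exported to all ranks through mpirun's -x flag:

```shell
# Sketch: restrict UCX to TCP plus shared memory transports.
# -x exports the variable to every rank; "./a.out" is a placeholder
# for your MPI application.
mpirun -x UCX_TLS=tcp,self,sm --mca pml ucx -n 2 ./a.out
```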

if pml/ob1 is used, a btl component is selected on a per-pair basis.
For example, btl/vader will be used for intra-node communications (as
long as you do not use MPI_Comm_spawn()), and btl/tcp will be used for
inter-node communications over TCP/IP.
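if you want to make that selection explicit, pml/ob1 and its btl list
can be forced on the command line (a sketch; "./a.out" is a
placeholder, and note that btl/self must always be included in an
explicit btl list so a rank can send to itself):

```shell
# Force pml/ob1 with vader (shared memory) for intra-node traffic,
# tcp for inter-node traffic, and self for a rank talking to itself.
mpirun --mca pml ob1 --mca btl self,vader,tcp -n 4 ./a.out
```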

Cheers,

Gilles

On Fri, Dec 20, 2019 at 7:22 PM Ludovic Courtès via users
<users@lists.open-mpi.org> wrote:
>
> Hello,
>
> I’ve written about our experience packaging Open MPI for GNU Guix in a
> way that works out-of-the-box on different high-speed interconnects:
>
>   
> https://hpc.guix.info/blog/2019/12/optimized-and-portable-open-mpi-packaging/
>
> I’m new to the list and in fact quite new to Open MPI, so I’d love to
> read the feedback and suggestions you may have!
>
> Thanks,
> Ludo’.
