In addition to what Gilles said, I usually advise users in ambiguous situations 
to explicitly choose the transport.  For example, you might want to explicitly 
select the UCX PML:

mpirun --mca pml ucx ...

This way, you are 100% sure that Open MPI chose the UCX PML (if it can't choose 
the UCX PML for some reason, it will fail/abort -- it won't fall back to some 
other transport).
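
If you want to both force UCX and confirm the selection at run time, you can 
combine this with the verbose flag Gilles mentioned below.  A rough sketch 
(./my_app is just a placeholder for your application):

mpirun --mca pml ucx --mca pml_base_verbose 10 ./my_app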

As a sysadmin, you can also set this in the site-wide openmpi-mca-params.conf 
file, if you wish.  Most users won't notice or care, but if someone does want to 
override that value, the command line takes precedence over the 
openmpi-mca-params.conf file.
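
For example, a site-wide default could look like this (assuming a typical 
install where the file lives under $prefix/etc/openmpi-mca-params.conf -- adjust 
the path for your installation):

# Force the UCX PML by default for all users on this cluster
pml = ucx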



> On Oct 20, 2019, at 8:26 PM, Gilles Gouaillardet via users 
> <users@lists.open-mpi.org> wrote:
> 
> Raymond,
> 
> 
> In the case of UCX, you can
> 
> mpirun --mca pml_base_verbose 10 ...
> 
> If the pml/ucx component is used, then your app will run over UCX.
> 
> If the pml/ob1 component is used, then you can
> 
> mpirun --mca btl_base_verbose 10 ...
> 
> btl/self should be used for a process's communications with itself.
> 
> If btl/uct is used for inter-node communications, it means your job is 
> running over UCX.
> 
> FWIW, it seems the default is to use both btl/vader and btl/uct for intra-node 
> communications.
> 
> 
> Cheers,
> 
> 
> Gilles
> 
> 
> 
> On 10/20/2019 6:28 AM, Raymond Muno via users wrote:
>> Is there a way to determine, at run time, what choices Open MPI made in 
>> terms of the transports being utilized?  We want to verify we are 
>> running UCX over InfiniBand.
>> 
>> I have two users, executing the identical code, with the same mpirun 
>> options, getting vastly different execution times on the same cluster.
>> 


-- 
Jeff Squyres
jsquy...@cisco.com
