Dear Gilles,

Despite my last answer (see below), I am noticing that
some tests with a coarray Fortran code on a laptop show a
performance drop on the order of 20% using the 4.1.1 version
(with --mca pml ucx disabled), versus the 4.1.0 one
(with --mca pml ucx enabled).
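
(As a side note, I assume that the pml actually selected in each run
can be double-checked by raising the pml framework verbosity, e.g.
something like:

  mpirun --mca pml_base_verbose 10 --np 2 ./hello_usempi_f08.exe

using the test program from the messages below; please correct me if
there is a better way to check this.)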

I would like to experiment with the pml/ucx framework using the 4.1.1
version on that laptop. So, please, how do I manually re-enable
those providers? (e.g. is this done at the configure/build stage?)
Or where can I find out how to do it? Thanks in advance.
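
For instance, would something along these lines be the intended way on
a machine without InfiniBand? This is only a guess based on the
pml_ucx_tls and pml_ucx_devices parameters reported by ompi_info below,
and it assumes that those parameters accept the special value "any":

  mpirun --mca pml ucx \
         --mca pml_ucx_tls any --mca pml_ucx_devices any \
         --np 2 ./hello_usempi_f08.exe

Or does re-enabling them instead require rebuilding UCX and/or Open MPI
with different configure options?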

Regards.
Jorge.

----- Original Message -----
> From: "Open MPI Users" <users@lists.open-mpi.org>
> To: "Open MPI Users" <users@lists.open-mpi.org>
> CC: "Jorge D'Elia" 
> Sent: Saturday, May 29, 2021 7:18:23
> Subject: Re: [OMPI users] (Fedora 34, x86_64-pc-linux-gnu, 
> openmpi-4.1.1.tar.gz): PML ucx cannot be selected
>
> Dear Gilles,
> 
> Ahhh ... now I understand the new behavior better.
> The intention of using pml/ucx was simply for preliminary
> testing, and it does not merit re-enabling these providers on
> this notebook.
> 
> Thank you very much for the clarification.
> 
> Regards,
> Jorge.
> 
> ----- Original Message -----
>> From: "Gilles Gouaillardet"
>> To: "Jorge D'Elia" , "Open MPI Users" <users@lists.open-mpi.org>
>> Sent: Friday, May 28, 2021 23:35:37
>> Subject: Re: [OMPI users] (Fedora 34, x86_64-pc-linux-gnu, 
>> openmpi-4.1.1.tar.gz): PML ucx cannot be selected
>>
>> Jorge,
>> 
>> pml/ucx used to be selected when no fast interconnect was detected
>> (since ucx provides drivers for both TCP and shared memory).
>> These providers are now disabled by default, so unless your machine
>> has a supported fast interconnect (such as InfiniBand),
>> pml/ucx cannot be used out of the box anymore.
>> 
>> If you really want to use pml/ucx on your notebook, you need to
>> manually re-enable these providers.
>> 
>> That being said, your best choice here is really not to force any pml,
>> and to let Open MPI use pml/ob1
>> (which has support for both TCP and shared memory).
>> 
>> Cheers,
>> 
>> Gilles
>> 
>> On Sat, May 29, 2021 at 11:19 AM Jorge D'Elia via users
>> <users@lists.open-mpi.org> wrote:
>>>
>>> Hi,
>>>
>>> We routinely build Open MPI on x86_64-pc-linux-gnu machines from
>>> source using gcc, and usually everything works fine.
>>>
>>> In one case we recently installed Fedora 34 from scratch on an
>>> ASUS G53SX notebook (Intel Core i7-2630QM CPU 2.00GHz ×4 cores,
>>> without any IB device). Next we built Open MPI from the file
>>> openmpi-4.1.1.tar.gz using the GCC 12.0.0 20210524 (experimental)
>>> compiler.
>>>
>>> However, when trying to experiment with Open MPI using UCX
>>> in a simple test, we get the runtime errors:
>>>
>>>   No components were able to be opened in the btl framework.
>>>   PML ucx cannot be selected
>>>
>>> while the test worked fine up to Fedora 33 on the same
>>> machine using the same Open MPI configuration.
>>>
>>> We attach below some info about a simple test run.
>>>
>>> Please, any clues about where to check, or is something perhaps missing?
>>> Thanks in advance.
>>>
>>> Regards
>>> Jorge.
>>>
>>> --
>>> $ cat /proc/version
>>> Linux version 5.12.7-300.fc34.x86_64 (mockbu...@bkernel01.iad2.fedoraproject.org) (gcc (GCC) 11.1.1 20210428 (Red Hat 11.1.1-1), GNU ld version 2.35.1-41.fc34) #1 SMP Wed May 26 12:58:58 UTC 2021
>>>
>>> $ mpifort --version
>>> GNU Fortran (GCC) 12.0.0 20210524 (experimental)
>>> Copyright (C) 2021 Free Software Foundation, Inc.
>>>
>>> $ which mpifort
>>> /usr/beta/openmpi/bin/mpifort
>>>
>>> $ mpifort -o hello_usempi_f08.exe hello_usempi_f08.f90
>>>
>>> $ mpirun --mca orte_base_help_aggregate 0 --mca btl self,vader,tcp --map-by node --report-bindings --machinefile ~/machi-openmpi.dat --np 2 hello_usempi_f08.exe
>>> [verne:200650] MCW rank 0 bound to socket 0[core 0[hwt 0]]: [B/././.]
>>> [verne:200650] MCW rank 1 bound to socket 0[core 1[hwt 0]]: [./B/./.]
>>> Hello, world, I am  0 of  2: Open MPI v4.1.1, package: Open MPI bigpack@verne Distribution, ident: 4.1.1, repo rev: v4.1.1, Apr 24, 2021
>>> Hello, world, I am  1 of  2: Open MPI v4.1.1, package: Open MPI bigpack@verne Distribution, ident: 4.1.1, repo rev: v4.1.1, Apr 24, 2021
>>>
>>> $ mpirun --mca orte_base_help_aggregate 0 --mca pml ucx --mca btl ^self,vader,tcp --map-by node --report-bindings --machinefile ~/machi-openmpi.dat --np 2 hello_usempi_f08.exe
>>> [verne:200772] MCW rank 0 bound to socket 0[core 0[hwt 0]]: [B/././.]
>>> [verne:200772] MCW rank 1 bound to socket 0[core 1[hwt 0]]: [./B/./.]
>>> --------------------------------------------------------------------------
>>> No components were able to be opened in the btl framework.
>>>
>>> This typically means that either no components of this type were
>>> installed, or none of the installed components can be loaded.
>>> Sometimes this means that shared libraries required by these
>>> components are unable to be found/loaded.
>>>
>>>   Host:      verne
>>>   Framework: btl
>>> --------------------------------------------------------------------------
>>> --------------------------------------------------------------------------
>>> No components were able to be opened in the btl framework.
>>>
>>> This typically means that either no components of this type were
>>> installed, or none of the installed components can be loaded.
>>> Sometimes this means that shared libraries required by these
>>> components are unable to be found/loaded.
>>>
>>>   Host:      verne
>>>   Framework: btl
>>> --------------------------------------------------------------------------
>>> --------------------------------------------------------------------------
>>> No components were able to be opened in the pml framework.
>>>
>>> This typically means that either no components of this type were
>>> installed, or none of the installed components can be loaded.
>>> Sometimes this means that shared libraries required by these
>>> components are unable to be found/loaded.
>>>
>>>   Host:      verne
>>>   Framework: pml
>>> --------------------------------------------------------------------------
>>> [verne:200777] PML ucx cannot be selected
>>> --------------------------------------------------------------------------
>>> No components were able to be opened in the pml framework.
>>>
>>> This typically means that either no components of this type were
>>> installed, or none of the installed components can be loaded.
>>> Sometimes this means that shared libraries required by these
>>> components are unable to be found/loaded.
>>>
>>>   Host:      verne
>>>   Framework: pml
>>> --------------------------------------------------------------------------
>>> [verne:200772] PMIX ERROR: UNREACHABLE in file
>>> ../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at line 2198
>>>
>>>
>>> $ ompi_info | grep ucx
>>>   Configure command line: '--enable-ipv6' '--enable-sparse-groups'
>>>   '--enable-mpi-ext' '--enable-mpi-cxx' '--enable-oshmem'
>>>   '--with-libevent=internal' '--with-ucx' '--with-pmix=internal'
>>>   '--without-libfabric' '--prefix=/usr/beta/openmpi'
>>>                  MCA osc: ucx (MCA v2.1.0, API v3.0.0, Component v4.1.1)
>>>                  MCA pml: ucx (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>
>>> $ ompi_info --param all all --level 9 | grep ucx
>>>                  MCA osc: ucx (MCA v2.1.0, API v3.0.0, Component v4.1.1)
>>>                  MCA pml: ucx (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>              MCA osc ucx: ---------------------------------------------------
>>>              MCA osc ucx: parameter "osc_ucx_priority" (current value: "60", data source: default, level: 3 user/all, type: unsigned_int)
>>>                           Priority of the osc/ucx component (default: 60)
>>>              MCA osc ucx: parameter "osc_ucx_verbose" (current value: "0", data source: default, level: 3 user/all, type: int, synonym of: opal_common_ucx_verbose)
>>>              MCA osc ucx: parameter "osc_ucx_progress_iterations" (current value: "100", data source: default, level: 3 user/all, type: int, synonym of: opal_common_ucx_progress_iterations)
>>>              MCA osc ucx: parameter "osc_ucx_opal_mem_hooks" (current value: "false", data source: default, level: 3 user/all, type: bool, synonym of: opal_common_ucx_opal_mem_hooks)
>>>              MCA osc ucx: parameter "osc_ucx_tls" (current value: "rc_verbs,ud_verbs,rc_mlx5,dc_mlx5,cuda_ipc,rocm_ipc", data source: default, level: 3 user/all, type: string, synonym of: opal_common_ucx_tls)
>>>              MCA osc ucx: parameter "osc_ucx_devices" (current value: "mlx*", data source: default, level: 3 user/all, type: string, synonym of: opal_common_ucx_devices)
>>>              MCA pml ucx: ---------------------------------------------------
>>>              MCA pml ucx: parameter "pml_ucx_priority" (current value: "51", data source: default, level: 3 user/all, type: int)
>>>              MCA pml ucx: parameter "pml_ucx_num_disconnect" (current value: "1", data source: default, level: 3 user/all, type: int)
>>>              MCA pml ucx: parameter "pml_ucx_verbose" (current value: "0", data source: default, level: 3 user/all, type: int, synonym of: opal_common_ucx_verbose)
>>>              MCA pml ucx: parameter "pml_ucx_progress_iterations" (current value: "100", data source: default, level: 3 user/all, type: int, synonym of: opal_common_ucx_progress_iterations)
>>>              MCA pml ucx: parameter "pml_ucx_opal_mem_hooks" (current value: "false", data source: default, level: 3 user/all, type: bool, synonym of: opal_common_ucx_opal_mem_hooks)
>>>              MCA pml ucx: parameter "pml_ucx_tls" (current value: "rc_verbs,ud_verbs,rc_mlx5,dc_mlx5,cuda_ipc,rocm_ipc", data source: default, level: 3 user/all, type: string, synonym of: opal_common_ucx_tls)
>>>              MCA pml ucx: parameter "pml_ucx_devices" (current value: "mlx*", data source: default, level: 3 user/all, type: string, synonym of: opal_common_ucx_devices)
>>>
>>> $ ompi_info
>>>                  Package: Open MPI bigpack@verne Distribution
>>>                 Open MPI: 4.1.1
>>>   Open MPI repo revision: v4.1.1
>>>    Open MPI release date: Apr 24, 2021
>>>                 Open RTE: 4.1.1
>>>   Open RTE repo revision: v4.1.1
>>>    Open RTE release date: Apr 24, 2021
>>>                     OPAL: 4.1.1
>>>       OPAL repo revision: v4.1.1
>>>        OPAL release date: Apr 24, 2021
>>>                  MPI API: 3.1.0
>>>             Ident string: 4.1.1
>>>                   Prefix: /usr/beta/openmpi
>>>  Configured architecture: x86_64-pc-linux-gnu
>>>           Configure host: verne
>>>            Configured by: bigpack
>>>            Configured on: Tue May 25 17:16:38 UTC 2021
>>>           Configure host: verne
>>>   Configure command line: '--enable-ipv6' '--enable-sparse-groups'
>>>                           '--enable-mpi-ext' '--enable-mpi-cxx'
>>>                           '--enable-oshmem' '--with-libevent=internal'
>>>                           '--with-ucx' '--with-pmix=internal'
>>>                           '--without-libfabric' '--prefix=/usr/beta/openmpi'
>>>                 Built by: bigpack
>>>                 Built on: Tue 25 May 17:57:46 UTC 2021
>>>               Built host: verne
>>>               C bindings: yes
>>>             C++ bindings: yes
>>>              Fort mpif.h: yes (all)
>>>             Fort use mpi: yes (full: ignore TKR)
>>>        Fort use mpi size: deprecated-ompi-info-value
>>>         Fort use mpi_f08: yes
>>>  Fort mpi_f08 compliance: The mpi_f08 module is available, but due to
>>>                           limitations in the gfortran compiler and/or Open
>>>                           MPI, does not support the following: array
>>>                           subsections, direct passthru (where possible) to
>>>                           underlying Open MPI's C functionality
>>>   Fort mpi_f08 subarrays: no
>>>            Java bindings: no
>>>   Wrapper compiler rpath: runpath
>>>               C compiler: gcc
>>>      C compiler absolute: /usr/beta/gcc-trunk/bin/gcc
>>>   C compiler family name: GNU
>>>       C compiler version: 12.0.0
>>>             C++ compiler: g++
>>>    C++ compiler absolute: /usr/beta/gcc-trunk/bin/g++
>>>            Fort compiler: gfortran
>>>        Fort compiler abs: /usr/beta/gcc-trunk/bin/gfortran
>>>          Fort ignore TKR: yes (!GCC$ ATTRIBUTES NO_ARG_CHECK ::)
>>>    Fort 08 assumed shape: yes
>>>       Fort optional args: yes
>>>           Fort INTERFACE: yes
>>>     Fort ISO_FORTRAN_ENV: yes
>>>        Fort STORAGE_SIZE: yes
>>>       Fort BIND(C) (all): yes
>>>       Fort ISO_C_BINDING: yes
>>>  Fort SUBROUTINE BIND(C): yes
>>>        Fort TYPE,BIND(C): yes
>>>  Fort T,BIND(C,name="a"): yes
>>>             Fort PRIVATE: yes
>>>           Fort PROTECTED: yes
>>>            Fort ABSTRACT: yes
>>>        Fort ASYNCHRONOUS: yes
>>>           Fort PROCEDURE: yes
>>>          Fort USE...ONLY: yes
>>>            Fort C_FUNLOC: yes
>>>  Fort f08 using wrappers: yes
>>>          Fort MPI_SIZEOF: yes
>>>              C profiling: yes
>>>            C++ profiling: yes
>>>    Fort mpif.h profiling: yes
>>>   Fort use mpi profiling: yes
>>>    Fort use mpi_f08 prof: yes
>>>           C++ exceptions: no
>>>           Thread support: posix (MPI_THREAD_MULTIPLE: yes, OPAL support: yes,
>>>                           OMPI progress: no, ORTE progress: yes, Event lib:
>>>                           yes)
>>>            Sparse Groups: yes
>>>   Internal debug support: no
>>>   MPI interface warnings: yes
>>>      MPI parameter check: runtime
>>> Memory profiling support: no
>>> Memory debugging support: no
>>>               dl support: yes
>>>    Heterogeneous support: no
>>>  mpirun default --prefix: no
>>>        MPI_WTIME support: native
>>>      Symbol vis. support: yes
>>>    Host topology support: yes
>>>             IPv6 support: yes
>>>       MPI1 compatibility: no
>>>           MPI extensions: affinity, cuda, pcollreq
>>>    FT Checkpoint support: no (checkpoint thread: no)
>>>    C/R Enabled Debugging: no
>>>   MPI_MAX_PROCESSOR_NAME: 256
>>>     MPI_MAX_ERROR_STRING: 256
>>>      MPI_MAX_OBJECT_NAME: 64
>>>         MPI_MAX_INFO_KEY: 36
>>>         MPI_MAX_INFO_VAL: 256
>>>        MPI_MAX_PORT_NAME: 1024
>>>   MPI_MAX_DATAREP_STRING: 128
>>>            MCA allocator: basic (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>            MCA allocator: bucket (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>            MCA backtrace: execinfo (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                  MCA btl: self (MCA v2.1.0, API v3.1.0, Component v4.1.1)
>>>                  MCA btl: openib (MCA v2.1.0, API v3.1.0, Component v4.1.1)
>>>                  MCA btl: tcp (MCA v2.1.0, API v3.1.0, Component v4.1.1)
>>>                  MCA btl: vader (MCA v2.1.0, API v3.1.0, Component v4.1.1)
>>>             MCA compress: bzip (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>             MCA compress: gzip (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                  MCA crs: none (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                   MCA dl: dlopen (MCA v2.1.0, API v1.0.0, Component v4.1.1)
>>>                MCA event: libevent2022 (MCA v2.1.0, API v2.0.0, Component
>>>                           v4.1.1)
>>>                MCA hwloc: external (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                   MCA if: linux_ipv6 (MCA v2.1.0, API v2.0.0, Component
>>>                           v4.1.1)
>>>                   MCA if: posix_ipv4 (MCA v2.1.0, API v2.0.0, Component
>>>                           v4.1.1)
>>>          MCA installdirs: env (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>          MCA installdirs: config (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>               MCA memory: patcher (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                MCA mpool: hugepage (MCA v2.1.0, API v3.0.0, Component v4.1.1)
>>>              MCA patcher: overwrite (MCA v2.1.0, API v1.0.0, Component
>>>                           v4.1.1)
>>>                 MCA pmix: isolated (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                 MCA pmix: flux (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                 MCA pmix: pmix3x (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                MCA pstat: linux (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>               MCA rcache: grdma (MCA v2.1.0, API v3.3.0, Component v4.1.1)
>>>            MCA reachable: weighted (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                MCA shmem: mmap (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                MCA shmem: posix (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                MCA shmem: sysv (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                MCA timer: linux (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>               MCA errmgr: default_app (MCA v2.1.0, API v3.0.0, Component
>>>                           v4.1.1)
>>>               MCA errmgr: default_hnp (MCA v2.1.0, API v3.0.0, Component
>>>                           v4.1.1)
>>>               MCA errmgr: default_orted (MCA v2.1.0, API v3.0.0, Component
>>>                           v4.1.1)
>>>               MCA errmgr: default_tool (MCA v2.1.0, API v3.0.0, Component
>>>                           v4.1.1)
>>>                  MCA ess: env (MCA v2.1.0, API v3.0.0, Component v4.1.1)
>>>                  MCA ess: hnp (MCA v2.1.0, API v3.0.0, Component v4.1.1)
>>>                  MCA ess: pmi (MCA v2.1.0, API v3.0.0, Component v4.1.1)
>>>                  MCA ess: singleton (MCA v2.1.0, API v3.0.0, Component
>>>                           v4.1.1)
>>>                  MCA ess: tool (MCA v2.1.0, API v3.0.0, Component v4.1.1)
>>>                  MCA ess: slurm (MCA v2.1.0, API v3.0.0, Component v4.1.1)
>>>                MCA filem: raw (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>              MCA grpcomm: direct (MCA v2.1.0, API v3.0.0, Component v4.1.1)
>>>                  MCA iof: hnp (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                  MCA iof: orted (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                  MCA iof: tool (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                 MCA odls: default (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                 MCA odls: pspawn (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                  MCA oob: tcp (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                  MCA plm: isolated (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                  MCA plm: rsh (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                  MCA plm: slurm (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                  MCA ras: simulator (MCA v2.1.0, API v2.0.0, Component
>>>                           v4.1.1)
>>>                  MCA ras: slurm (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                 MCA regx: fwd (MCA v2.1.0, API v1.0.0, Component v4.1.1)
>>>                 MCA regx: naive (MCA v2.1.0, API v1.0.0, Component v4.1.1)
>>>                 MCA regx: reverse (MCA v2.1.0, API v1.0.0, Component v4.1.1)
>>>                MCA rmaps: mindist (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                MCA rmaps: ppr (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                MCA rmaps: rank_file (MCA v2.1.0, API v2.0.0, Component
>>>                           v4.1.1)
>>>                MCA rmaps: resilient (MCA v2.1.0, API v2.0.0, Component
>>>                           v4.1.1)
>>>                MCA rmaps: round_robin (MCA v2.1.0, API v2.0.0, Component
>>>                           v4.1.1)
>>>                MCA rmaps: seq (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                  MCA rml: oob (MCA v2.1.0, API v3.0.0, Component v4.1.1)
>>>               MCA routed: binomial (MCA v2.1.0, API v3.0.0, Component v4.1.1)
>>>               MCA routed: direct (MCA v2.1.0, API v3.0.0, Component v4.1.1)
>>>               MCA routed: radix (MCA v2.1.0, API v3.0.0, Component v4.1.1)
>>>                  MCA rtc: hwloc (MCA v2.1.0, API v1.0.0, Component v4.1.1)
>>>               MCA schizo: flux (MCA v2.1.0, API v1.0.0, Component v4.1.1)
>>>               MCA schizo: ompi (MCA v2.1.0, API v1.0.0, Component v4.1.1)
>>>               MCA schizo: orte (MCA v2.1.0, API v1.0.0, Component v4.1.1)
>>>               MCA schizo: jsm (MCA v2.1.0, API v1.0.0, Component v4.1.1)
>>>               MCA schizo: slurm (MCA v2.1.0, API v1.0.0, Component v4.1.1)
>>>                MCA state: app (MCA v2.1.0, API v1.0.0, Component v4.1.1)
>>>                MCA state: hnp (MCA v2.1.0, API v1.0.0, Component v4.1.1)
>>>                MCA state: novm (MCA v2.1.0, API v1.0.0, Component v4.1.1)
>>>                MCA state: orted (MCA v2.1.0, API v1.0.0, Component v4.1.1)
>>>                MCA state: tool (MCA v2.1.0, API v1.0.0, Component v4.1.1)
>>>                  MCA bml: r2 (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                 MCA coll: adapt (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                 MCA coll: basic (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                 MCA coll: han (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                 MCA coll: inter (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                 MCA coll: libnbc (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                 MCA coll: self (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                 MCA coll: sm (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                 MCA coll: sync (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                 MCA coll: tuned (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                 MCA coll: monitoring (MCA v2.1.0, API v2.0.0, Component
>>>                           v4.1.1)
>>>                 MCA fbtl: posix (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                MCA fcoll: dynamic (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                MCA fcoll: dynamic_gen2 (MCA v2.1.0, API v2.0.0, Component
>>>                           v4.1.1)
>>>                MCA fcoll: individual (MCA v2.1.0, API v2.0.0, Component
>>>                           v4.1.1)
>>>                MCA fcoll: two_phase (MCA v2.1.0, API v2.0.0, Component
>>>                           v4.1.1)
>>>                MCA fcoll: vulcan (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                   MCA fs: ufs (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                   MCA io: ompio (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                   MCA io: romio321 (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                   MCA op: avx (MCA v2.1.0, API v1.0.0, Component v4.1.1)
>>>                  MCA osc: sm (MCA v2.1.0, API v3.0.0, Component v4.1.1)
>>>                  MCA osc: monitoring (MCA v2.1.0, API v3.0.0, Component
>>>                           v4.1.1)
>>>                  MCA osc: pt2pt (MCA v2.1.0, API v3.0.0, Component v4.1.1)
>>>                  MCA osc: rdma (MCA v2.1.0, API v3.0.0, Component v4.1.1)
>>>                  MCA osc: ucx (MCA v2.1.0, API v3.0.0, Component v4.1.1)
>>>                  MCA pml: v (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                  MCA pml: cm (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                  MCA pml: monitoring (MCA v2.1.0, API v2.0.0, Component
>>>                           v4.1.1)
>>>                  MCA pml: ob1 (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                  MCA pml: ucx (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                  MCA rte: orte (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>             MCA sharedfp: individual (MCA v2.1.0, API v2.0.0, Component
>>>                           v4.1.1)
>>>             MCA sharedfp: lockedfile (MCA v2.1.0, API v2.0.0, Component
>>>                           v4.1.1)
>>>             MCA sharedfp: sm (MCA v2.1.0, API v2.0.0, Component v4.1.1)
>>>                 MCA topo: basic (MCA v2.1.0, API v2.2.0, Component v4.1.1)
>>>                 MCA topo: treematch (MCA v2.1.0, API v2.2.0, Component
>>>                           v4.1.1)
>>>            MCA vprotocol: pessimist (MCA v2.1.0, API v2.0.0, Component
>>>                           v4.1.1)
>>>
>>> #end
