Hi Gilles,
Thanks for the suggestion.
If I exclude sm on the command line in the following way, it seems to
work:


[user@chimera examples]$ mpirun -np 1 --mca btl ^sm hello_c
libibverbs: Warning: no userspace device-specific driver found for
/sys/class/infiniband_verbs/uverbs0
libibverbs: Warning: no userspace device-specific driver found for
/sys/class/infiniband_verbs/uverbs0
--------------------------------------------------------------------------
[[20190,1],0]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: chimera

Another transport will be used instead, although this may result in
lower performance.

NOTE: You can disable this warning by setting the MCA parameter
btl_base_warn_component_unused to 0.
--------------------------------------------------------------------------
Hello, world, I am 0 of 1, (Open MPI v3.1.3, package: Open MPI
u...@chimera.ncl.res.in Distribution, ident: 3.1.3, repo rev: v3.1.3, Oct
29, 2018, 119)

Could you suggest a way to disable it permanently? I know you suggested
checking the openmpi-mca-params.conf file, but I am unable to locate it.
If you could list the steps, it would be a huge help.
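
From the FAQ, my best guess is that the system-wide file lives in the etc/
directory under the Open MPI installation prefix, and that a per-user file
at $HOME/.openmpi/mca-params.conf takes precedence over it. Assuming that
is right, this is roughly what I was planning to try (the paths are only a
guess for my install):

    which mpirun        # locate the install prefix (the bin/ directory)
    # the system-wide file would then be <prefix>/etc/openmpi-mca-params.conf
    mkdir -p $HOME/.openmpi
    echo "btl = ^sm" >> $HOME/.openmpi/mca-params.conf
    echo "btl_base_warn_component_unused = 0" >> $HOME/.openmpi/mca-params.conf

Does that look like the right approach?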
Thanks
Srijan


On Mon, 13 Jul 2020 at 06:35, Gilles Gouaillardet via users <
users@lists.open-mpi.org> wrote:

> Srijan,
>
> The logs suggest you are explicitly requesting the btl/sm component. This
> typically happens via an openmpi-mca-params.conf file (containing a line
> such as btl = sm,openib,self) or the OMPI_MCA_btl environment variable.
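>
> For example, to check both possibilities (treat the prefix path below as
> a placeholder; it depends on where your Open MPI is installed):
>
>     env | grep OMPI_MCA_btl                          # environment variable?
>     grep btl <prefix>/etc/openmpi-mca-params.conf    # system-wide file?
>     grep btl $HOME/.openmpi/mca-params.conf          # per-user file?
>
> Removing "sm" from (or deleting) the btl line in whichever of these sets
> it should make the error go away without any extra mpirun options.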
>
> Cheers,
>
> Gilles
>
> On Mon, Jul 13, 2020 at 1:50 AM Srijan Chatterjee via users <
> users@lists.open-mpi.org> wrote:
>
>> Dear Open MPI users,
>> I am using the following system
>> CentOS release 6.6
>> Rocks 6.2
>> I have been trying to install openmpi-3.1.3.
>> After installing it, I tried a test run in the examples folder and got the
>> following error.
>>
>> I ran the command mpirun -np 1 hello_c:
>>
>> [user@chimera examples]$ mpirun -np 1 hello_c
>> libibverbs: Warning: no userspace device-specific driver found for
>> /sys/class/infiniband_verbs/uverbs0
>> libibverbs: Warning: no userspace device-specific driver found for
>> /sys/class/infiniband_verbs/uverbs0
>> --------------------------------------------------------------------------
>> As of version 3.0.0, the "sm" BTL is no longer available in Open MPI.
>>
>> Efficient, high-speed same-node shared memory communication support in
>> Open MPI is available in the "vader" BTL.  To use the vader BTL, you
>> can re-run your job with:
>>
>>     mpirun --mca btl vader,self,... your_mpi_application
>> --------------------------------------------------------------------------
>> --------------------------------------------------------------------------
>> A requested component was not found, or was unable to be opened.  This
>> means that this component is either not installed or is unable to be
>> used on your system (e.g., sometimes this means that shared libraries
>> that the component requires are unable to be found/loaded).  Note that
>> Open MPI stopped checking at the first component that it did not find.
>>
>> Host:      chimera.pnc.res.in
>> Framework: btl
>> Component: sm
>> --------------------------------------------------------------------------
>> --------------------------------------------------------------------------
>> It looks like MPI_INIT failed for some reason; your parallel process is
>> likely to abort.  There are many reasons that a parallel process can
>> fail during MPI_INIT; some of which are due to configuration or
>> environment
>> problems.  This failure appears to be an internal failure; here's some
>> additional information (which may only be relevant to an Open MPI
>> developer):
>>
>>   mca_bml_base_open() failed
>>   --> Returned "Not found" (-13) instead of "Success" (0)
>> --------------------------------------------------------------------------
>> [chimera:04271] *** An error occurred in MPI_Init
>> [chimera:04271] *** reported by process [1310326785,0]
>> [chimera:04271] *** on a NULL communicator
>> [chimera:04271] *** Unknown error
>> [chimera:04271] *** MPI_ERRORS_ARE_FATAL (processes in this communicator
>> will now abort,
>> [chimera:04271] ***    and potentially your MPI job)
>>
>>
>>
>> I am quite new to this, so please let me know if you need additional
>> information about the system.
>> Any advice is welcome.
>>
>> Srijan
>>
>>
>>
>>
>>
>>
>> --
>> SRIJAN CHATTERJEE
>>
>

-- 
SRIJAN CHATTERJEE
