If you don't have control over which MPI implementations and versions are installed, you can probably still verify that your environment consistently points to the same MPI implementation and version.

It is not uncommon to have more than one implementation and version
installed on a computer or a cluster, or worse, different implementations and versions on different cluster nodes.
Mixed-up environment variables can produce very confusing results.

Commands such as:

which mpiexec
which mpicc
which mpif90

and also

mpiexec --version
mpicc --showme

and the like may help diagnose that.
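
For instance, a quick consistency check could look like this
(just a sketch; the expected prefix /opt/openmpi is a placeholder
for wherever your site's OMPI installation actually lives):

#!/bin/sh
# Check that all the MPI wrapper commands resolve to the same
# installation prefix.
EXPECTED=/opt/openmpi    # placeholder: adjust to your site
for cmd in mpiexec mpicc mpif90; do
    path=`which $cmd 2>/dev/null`
    if [ -z "$path" ]; then
        echo "$cmd: not found in PATH"
    elif [ "${path#$EXPECTED}" = "$path" ]; then
        echo "$cmd: $path (NOT under $EXPECTED)"
    else
        echo "$cmd: $path (consistent)"
    fi
done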

Likewise,

env | grep PATH

and

env | grep LD_LIBRARY_PATH

may hint at whether you have a mixed environment, with more than one
MPI implementation and version in your search paths.
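
On a cluster it is also worth checking each node, not just the head
node. A sketch, assuming passwordless ssh and a hypothetical file
"hosts" with one node name per line:

#!/bin/sh
# Compare what each node's non-interactive shell resolves, which is
# roughly the environment a remote MPI launch will see.
for node in `cat hosts`; do
    echo "=== $node ==="
    ssh $node 'which mpiexec; mpiexec --version 2>&1 | head -1'
done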

I hope this helps,
Gus Correa

PS - BTW, unless your company's policies forbid it,
you can install OpenMPI in a user directory, say, under your /home directory. This works if that directory is shared across the cluster (e.g. via NFS), and as long as you set your PATH and LD_LIBRARY_PATH to point to its bin and lib subdirectories.

https://www.open-mpi.org/faq/?category=running#adding-ompi-to-path
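
A minimal build-and-install sketch (the version number and the prefix
below are placeholders; substitute whatever tarball you download):

# Build Open MPI from a source tarball into a user-owned prefix:
tar xzf openmpi-1.8.1.tar.gz
cd openmpi-1.8.1
./configure --prefix=$HOME/openmpi
make all
make install

# Then add the new prefix to your shell startup file (e.g. ~/.bashrc):
export PATH=$HOME/openmpi/bin:$PATH
export LD_LIBRARY_PATH=$HOME/openmpi/lib:$LD_LIBRARY_PATH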

On 06/27/2014 01:56 PM, Jeffrey A Cummings wrote:
I appreciate your response and I understand the logic behind your
suggestion, but you and the other regular expert contributors to this
list are frequently working under a misapprehension.  Many of your
openMPI users don't have any control over what version of openMPI is
available on their system.  I'm stuck with whatever version my IT people
choose to bless, which in general is the (possibly old and/or moldy)
version that is bundled with some larger package (e.g., Rocks, Linux).
The fact that I'm only now seeing this 1.4 to 1.6 problem illustrates
the situation I'm in. I really need someone to dig into their memory
archives to see if they can come up with a clue for me.

Jeffrey A. Cummings
Engineering Specialist
Performance Modeling and Analysis Department
Systems Analysis and Simulation Subdivision
Systems Engineering Division
Engineering and Technology Group
The Aerospace Corporation
571-307-4220
jeffrey.a.cummi...@aero.org



From: Gus Correa <g...@ldeo.columbia.edu>
To: Open MPI Users <us...@open-mpi.org>
Date: 06/27/2014 01:45 PM
Subject: Re: [OMPI users] Problem moving from 1.4 to 1.6
Sent by: "users" <users-boun...@open-mpi.org>
------------------------------------------------------------------------



It may be easier to install the latest OMPI from the tarball,
rather than trying to sort out the error.

http://www.open-mpi.org/software/ompi/v1.8/

The packaged build of the (somewhat old) OMPI 1.6.2 that came with
your Linux distribution may not have been built against the same IB
libraries, hardware, and configuration that you have.
[The error message's reference to udapl is ominous.]

 > The mpirun command line contains the argument '--mca btl ^openib', which
 > I thought told mpi to not look for the ib interface.

As you said, the mca parameter above tells OMPI not to use openib,
although that may not be the only cause of the problem.
If you do want to use openib, switch to
--mca btl openib,sm,self
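
Note also that "^openib" excludes only the openib BTL; in the 1.6
series the udapl BTL may still try to open the IB hardware, which
would match the librdmacm/uDAPL messages quoted below. For example
(the executable name and process count are placeholders):

# What you have now: exclude only the openib BTL.
mpirun --mca btl ^openib -np 4 ./my_app

# Exclude both openib and udapl; the caret applies to the whole list.
mpirun --mca btl ^openib,udapl -np 4 ./my_app

# Or select the transports explicitly: openib plus shared memory and self.
mpirun --mca btl openib,sm,self -np 4 ./my_app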

Another thing to check is whether there is a mixup of environment
variables, with PATH and LD_LIBRARY_PATH perhaps pointing to the old
OMPI version that you may still have installed.
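
A quick way to see which installation actually wins is ompi_info,
which ships with Open MPI:

# Print the version and install prefix of the Open MPI build that
# the current environment resolves to:
ompi_info | grep -E 'Open MPI:|Prefix:'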

My two cents,
Gus Correa

On 06/27/2014 12:53 PM, Jeffrey A Cummings wrote:
 > We have recently upgraded our cluster to a version of Linux which comes
 > with openMPI version 1.6.2.
 >
 > An application which ran previously (using some version of 1.4) now
 > errors out with the following messages:
 >
 >          librdmacm: Fatal: no RDMA devices found
 >          librdmacm: Fatal: no RDMA devices found
 >          librdmacm: Fatal: no RDMA devices found
 >
 >
 > --------------------------------------------------------------------------
 >          WARNING: Failed to open "OpenIB-cma" [DAT_INTERNAL_ERROR:].
 >          This may be a real error or it may be an invalid entry in the
 > uDAPL
 >          Registry which is contained in the dat.conf file. Contact your
 > local
 >          System Administrator to confirm the availability of the
 > interfaces in
 >          the dat.conf file.
 >
 >
 > --------------------------------------------------------------------------
 >          [tupile:25363] 2 more processes have sent help message
 > help-mpi-btl-udapl.txt / dat_ia_open fail
 >          [tupile:25363] Set MCA parameter "orte_base_help_aggregate" to
 > 0 to see all help / error messages
 >
 > The mpirun command line contains the argument '--mca btl ^openib', which
 > I thought told mpi to not look for the ib interface.
 >
 > Can anyone suggest what the problem might be?  Did the relevant syntax
 > change between versions 1.4 and 1.6?
 >
 >
 > Jeffrey A. Cummings
 > Engineering Specialist
 > Performance Modeling and Analysis Department
 > Systems Analysis and Simulation Subdivision
 > Systems Engineering Division
 > Engineering and Technology Group
 > The Aerospace Corporation
 > 571-307-4220
 > jeffrey.a.cummi...@aero.org
 >
 >
