Are you sure your InfiniBand network is up and running? What kind of
output do you get if you run the command 'ibv_devinfo'?
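For reference, a healthy `ibv_devinfo` report shows the port in `PORT_ACTIVE` state. Below is a hedged sketch with a hypothetical sample of such output (the `mlx4_0` device name and firmware version are assumptions, not taken from the original poster's system) and a check one might script against it:

```shell
# Hypothetical sample of healthy ibv_devinfo output; the field that
# matters for this problem is "state: PORT_ACTIVE" under the port.
sample='hca_id: mlx4_0
        transport:              InfiniBand (0)
        fw_ver:                 2.42.5000
                port:   1
                        state:          PORT_ACTIVE (4)
                        phys_state:     LinkUp (5)'

# On a live node one would run:  ibv_devinfo | grep -w state
# Here we extract the port state from the embedded sample instead.
state=$(printf '%s\n' "$sample" | awk '/^[[:space:]]*state:/ {print $2}')
echo "port state: $state"
```

If the state is `PORT_DOWN` or `PORT_INIT`, the problem is below Open MPI (cabling, subnet manager, or drivers), not the build.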

Sincerely,
Rusty Dekema

On Wed, Jul 26, 2017 at 2:40 PM, Sajesh Singh <ssi...@amnh.org> wrote:
> OS: Centos 7
>
> Infiniband Packages from OS repos
>
> Mellanox HCA
>
> Compiled openmpi 1.10.7 on centos7 with the following config
>
> ./configure --prefix=/usr/local/software/OpenMPI/openmpi-1.10.7
> --with-tm=/opt/pbs --with-verbs
>
> Snippet from config.log seems to indicate that the infiniband header files
> were located
>
> btl_openib_CPPFLAGS=' -I/usr/include/infiniband'
>
> common_verbs_CPPFLAGS=' -I/usr/include/infiniband'
>
> oshmem_verbs_CPPFLAGS=' -I/usr/include/infiniband'
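Finding the headers at configure time does not guarantee the component was built. A quick way to confirm the openib BTL made it into the install is to query `ompi_info` from the configure prefix quoted above; a minimal sketch, with a hypothetical sample of the line a successful build would print:

```shell
# On the build host one would run:
#   /usr/local/software/OpenMPI/openmpi-1.10.7/bin/ompi_info | grep openib
# Hypothetical sample of the component line a successful build lists:
sample='                 MCA btl: openib (MCA v2.0.0, API v2.0.0, Component v1.10.7)'

# Check the sample for the component; on a real node, pipe ompi_info
# into this same grep instead of the sample.
if printf '%s\n' "$sample" | grep -q 'MCA btl: openib'; then
    echo "openib BTL present"
else
    echo "openib BTL missing"
fi
```

If the component is missing from real `ompi_info` output, the build silently skipped verbs support despite finding the headers.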
>
> Everything seems to have compiled correctly, but when I try to run any
> program using mpirun I receive the following error:
>
> mpirun -np 8 ./a.out
>
> --------------------------------------------------------------------------
>
> [[18431,1],2]: A high-performance Open MPI point-to-point messaging module
>
> was unable to find any relevant network interfaces:
>
> Module: OpenFabrics (openib)
>
>   Host: host-name
>
> Another transport will be used instead, although this may result in
> lower performance.
>
> --------------------------------------------------------------------------
>
> [:13959] 7 more processes have sent help message
> help-mpi-btl-base.txt / btl:no-nics
>
> [:13959] Set MCA parameter "orte_base_help_aggregate" to 0 to see all
> help / error messages
>
> I am unsure where to go from here. Any help troubleshooting this issue
> would be appreciated.
>
> Thank you,
>
> Sajesh
>
> _______________________________________________
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users
