Hello,

I'm trying to get Open MPI (1.6.5) running over InfiniBand.
My system is CentOS 6.3. I have installed the Mellanox OFED driver
(2.0) and everything seems to be working. ibhosts shows all hosts and the switch.
A "hca_self_test.ofed" shows:

---- Performing Adapter Device Self Test ----
Number of CAs Detected ................. 1
PCI Device Check ....................... PASS
Kernel Arch ............................ x86_64
Host Driver Version .................... MLNX_OFED_LINUX-2.0-2.0.5
(OFED-2.0-2.0.5): 2.6.32-279.el6.x86_64
Host Driver RPM Check .................. PASS
Firmware on CA #0 VPI .................. v2.11.500
Firmware Check on CA #0 (VPI) .......... PASS
Host Driver Initialization ............. PASS
Number of CA Ports Active .............. 1
Port State of Port #1 on CA #0 (VPI)..... UP 4X QDR (InfiniBand)
Error Counter Check on CA #0 (VPI)...... PASS
Kernel Syslog Check .................... PASS
Node GUID on CA #0 (VPI) ............... 00:02:c9:03:00:1f:a4:e0


A "ompi_info | grep openib" shows:
                 MCA btl: openib (MCA v2.0, API v2.0, Component v1.6.5)
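
In case it helps, the openib BTL parameters can also be listed with
something like the following (I assume the 1.6.x syntax here):

    ompi_info --param btl openib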

So I compiled Open MPI with the option "--with-openib" and tried to
run the Intel MPI benchmark. But it still uses the Ethernet interface to
communicate. Only when I configure IPoIB (ib0) and start my job with
"--mca btl ^openib --mca btl_tcp_if_include ib0" does it run over
InfiniBand. But if I'm right, it should work without the ib0 interface.
I'm quite new to InfiniBand, so maybe I forgot something.
I'm grateful for any information that helps me solve this problem.
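
For completeness, this is roughly the command I would have expected to
work, forcing the openib BTL and turning on some debug output (the
benchmark binary and hostfile names are just placeholders):

    mpirun -np 2 --hostfile hosts \
           --mca btl openib,sm,self \
           --mca btl_base_verbose 30 \
           ./IMB-MPI1 PingPong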

Thank you,

Christian
