On Feb 2, 2007, at 11:22 AM, Alex Tumanov wrote:

> That really did fix it, George:

> # mpirun --prefix $MPIHOME -hostfile ~/testdir/hosts --mca btl tcp,self --mca btl_tcp_if_exclude ib0,ib1 ~/testdir/hello
> Hello from Alex' MPI test program
> Process 0 on dr11.lsf.platform.com out of 2
> Hello from Alex' MPI test program
> Process 1 on compute-0-0.local out of 2

> It never occurred to me that the headnode would try to communicate
> with the slave using infiniband interfaces... Orthogonally, what are

The problem here is that since your IB IP addresses are "public" (meaning that they're not in the IETF-defined ranges for private IP addresses), Open MPI assumes that they can be used to communicate with your back-end nodes on the IPoIB network. See this FAQ entry for details:

http://www.open-mpi.org/faq/?category=tcp#tcp-routability

If you update your IP addresses to be in a private range, Open MPI should do the right routability computations and you shouldn't need to exclude anything.
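For example, something along these lines should work (just a sketch -- the 192.168.10.x addresses and the eth0 interface name are placeholders, so substitute whatever fits your network):

  # Option 1: renumber the IPoIB interfaces into an RFC 1918 private range
  ifconfig ib0 192.168.10.11 netmask 255.255.255.0

  # Option 2: instead of excluding ib0/ib1, tell the TCP BTL which
  # interface it is allowed to use for MPI traffic
  mpirun --prefix $MPIHOME -hostfile ~/testdir/hosts --mca btl tcp,self \
         --mca btl_tcp_if_include eth0 ~/testdir/hello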

> the industry standard OpenMPI benchmark tests I could run to perform a
> real test?

Just about anything will work -- NetPIPE, the Intel MPI Benchmarks (IMB), etc.
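For instance, assuming NetPIPE was built against Open MPI (its MPI driver is usually named NPmpi) and the Intel MPI Benchmarks are installed on both nodes, a quick two-process latency/bandwidth check would look roughly like:

  # NetPIPE point-to-point test between the head node and one compute node
  mpirun --prefix $MPIHOME -hostfile ~/testdir/hosts -np 2 --mca btl tcp,self NPmpi

  # Intel MPI Benchmarks: just the PingPong test from the MPI-1 suite
  mpirun --prefix $MPIHOME -hostfile ~/testdir/hosts -np 2 --mca btl tcp,self IMB-MPI1 PingPong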

--
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems
