On Feb 6, 2007, at 12:38 PM, Alex Tumanov wrote:
http://www.open-mpi.org/faq/?category=tcp#tcp-routability
The pointer was rather informative. We do have to use non-standard
ranges for IB interfaces, because we're performing automatic IP over
IB configuration based on the eth0 IP and netmask.
Thanks for your reply, Jeff.
> It never occurred to me that the headnode would try to communicate
> with the slave using infiniband interfaces... Orthogonally, what are
The problem here is that since your IB IP addresses are
"public" (meaning that they're not in the IETF-defined ranges for
private addresses), Open MPI assumes that they are routable from your
other nodes and will try to use them.
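For what it's worth, the same effect can be reached from the other direction: instead of excluding the IPoIB interfaces (as done further down in this thread), the TCP BTL can be told exactly which interface to use via btl_tcp_if_include. A minimal sketch, assuming eth0 is the interface carrying the 10.1.1.x network on every node (it is on the head node, per the ifconfig output below):

# mpirun --prefix $MPIHOME -hostfile ~/testdir/hosts --mca btl tcp,self \
    --mca btl_tcp_if_include eth0 ~/testdir/hello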
On Feb 2, 2007, at 11:22 AM, Alex Tumanov wrote:
That really did fix it, George:
# mpirun --prefix $MPIHOME -hostfile ~/testdir/hosts --mca btl
tcp,self --mca btl_tcp_if_exclude ib0,ib1 ~/testdir/hello
Hello from Alex' MPI test program
Process 0 on dr11.lsf.platform.com out of 2
Hello from Alex' MPI test program
Process 1 on compute-0-0.local out of 2
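For completeness, the ~/testdir/hosts file referenced above is just an Open MPI hostfile listing the two machines. A minimal sketch, using the hostnames from the output above; the slot counts are an assumption based on the CPU counts mentioned at the bottom of the thread:

dr11.lsf.platform.com slots=2     # head node, dual CPU (assumed slot count)
compute-0-0.local slots=4         # slave node, quad CPU (assumed slot count)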
Alex,
You should try to limit the Ethernet devices used by Open MPI during
the run. Please add "--mca btl_tcp_if_exclude eth1,ib0,ib1" to
your mpirun command line and give it a try.
Thanks,
george.
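Spelled out against the command line Alex was already using, the suggestion amounts to the following (only the exclude list changes; eth1 is dropped along with the two IPoIB interfaces):

# mpirun --prefix $MPIHOME -hostfile ~/testdir/hosts --mca btl tcp,self \
    --mca btl_tcp_if_exclude eth1,ib0,ib1 ~/testdir/hello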
On Feb 1, 2007, at 10:29 PM, Alex Tumanov wrote:
On 2/1/07, Galen Shipman wrote:
What does ifconfig report on both nodes?
Hi Galen,
On headnode:
# ifconfig
eth0 Link encap:Ethernet HWaddr 00:11:43:EF:5D:6C
inet addr:10.1.1.11 Bcast:10.1.1.255 Mask:255.255.255.0
inet6 addr: fe80::211:43ff:feef:5d6c/64 Scope:Link
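The full ifconfig listings get long; a quick way to pull out just the IPv4 address and netmask of every interface on both nodes (the compute-node hostname is taken from the mpirun output above) is something like:

# /sbin/ifconfig | grep 'inet addr'
# ssh compute-0-0.local /sbin/ifconfig | grep 'inet addr'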
What does ifconfig report on both nodes?
- Galen
On Feb 1, 2007, at 2:50 PM, Alex Tumanov wrote:
Hi,
I have continued my own investigation and recompiled Open MPI to have
only bare-bones functionality, with no support for any interconnects
other than Ethernet:
# rpmbuild --rebuild --define="configure_options
--prefix=/opt/openmpi/1.1.4" --define="install_in_opt 1"
--define="mflags all" ope
Hello,
I have tried a very basic test on a 2-node "cluster" consisting of two
Dell boxes. One of them is a dual-CPU Intel(R) Xeon(TM) 2.80GHz box with
1GB of RAM, and the slave node is a quad-CPU Intel(R) Xeon(TM) 3.40GHz
box with 2GB of RAM. Both have InfiniBand cards and Gig-E. The
slave node is conne