On Nov 15, 2005, at 4:10 AM, Allan Menezes wrote:
Here are last night's results of the following command on my 15-node
cluster. One of the 16 nodes is down.
mpirun --mca pml teg --mca btl_tcp_if_include eth1,eth0 --hostfile aa
-np 15 ./xhpl
TEG does not use the BTLs; that's why you got no errors.
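To actually exercise the TCP BTL so that btl_tcp_if_include takes effect, the run needs a PML that drives the BTLs. A minimal sketch, assuming the ob1 PML and the same hostfile aa and executable used above:

mpirun --mca pml ob1 --mca btl tcp,self --mca btl_tcp_if_include eth1 --hostfile aa -np 15 ./xhpl

With teg, the btl_* parameters are simply ignored, so any interface errors (or their absence) are only meaningful under a BTL-based PML such as ob1.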
On Nov 14, 2005, at 8:21 PM, Allan Menezes wrote:
I think the confusion was my fault, because --mca pml teg did not
produce errors and gave almost the same performance as MPICH2 v1.0.2p1.
The reason why I cannot do what you suggest below is that the
.openmpi/mca-params.conf file, if I am ...
Hi Jeff,
Here are last night's results of the following command on my 15-node
cluster. One of the 16 nodes is down.
mpirun --mca pml teg --mca btl_tcp_if_include eth1,eth0 --hostfile aa
-np 15 ./xhpl
No errors were spewed out to stdout, as per my previous post, when
using btl tcp and btl_tcp_if_include.
From: George Bosilca
Subject: Re: [O-MPI users] HPL and TCP
To: Open MPI Users
Allan,
If there are 2 Ethernet cards, it's better to point to the one you
want to use. For that you can modify the .openmpi/mca-params.conf file ...
Dear Jeff,
I reorganized my cluster and ran the following test with 15 nodes:
[allan@a1 bench]$ mpirun -mca btl tcp --mca btl_tcp_if_include eth1 --prefix /home/allan/openmpi -hostfile aa -np 15 ./xhpl
[0,1,11][btl_tcp_component.c:342:mca_btl_tcp_component_create_instances] invalid interface "eth1"
Allan,
If there are 2 Ethernet cards, it's better to point to the one you
want to use. For that you can modify the .openmpi/mca-params.conf file in
your home directory. All of the options can go in this file, so you will
not have to specify them on the mpirun command line every time.
I give ...
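A minimal sketch of what such a ~/.openmpi/mca-params.conf could contain, assuming the tcp BTL and the eth1 interface discussed in this thread (one "name = value" pair per line; lines starting with # are comments):

# ~/.openmpi/mca-params.conf
btl = tcp,self
btl_tcp_if_include = eth1

With these in place, a plain mpirun -hostfile aa -np 15 ./xhpl picks up the same settings without any --mca flags on the command line.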
Dear Jeff,
Sorry I could not test the cluster earlier, but I am having problems
with one compute node (I will have to replace it!), so I will have to
repeat this test with 15 nodes. Yes, I had 4 NIC cards on the head node,
and it was only eth3 that was the gigabit NIC communicating to the other nodes ...