Sorry we couldn't figure it out -- let us know if you resume your
Open MPI testing.
On Apr 19, 2007, at 6:24 PM, stephen mulcahy wrote:
Hi,
I only have access to this test system for another 24 hours or so, so I'm
not sure it's worth any more of your efforts. Coupled with the fact that
I don't have root on the system in question, it could be more work to
figure out what's going on than it's worth.
Thanks for your help so far,
Yes, this is sounding more mysterious. Please send the output listed
here:
http://www.open-mpi.org/community/help/
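(In case it helps, the core of what that page asks for can be gathered
like this -- a sketch, with the output filename just a placeholder:

  ~/openmpi-1.2/bin/ompi_info --all > ompi-info.txt

plus the config.log from the Open MPI build tree, if it's still around.)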
On Apr 19, 2007, at 8:15 AM, stephen mulcahy wrote:
Jeff Squyres wrote:
That's truly odd -- I can't imagine why you wouldn't get the TCP
transport with the above command line. But the latencies, as you
mentioned, are far too low for TCP.
To be absolutely certain that you're not getting the IB transport, go
to the $prefix/lib/openmpi directory and move the openib (and mvapi,
if present) components out of the way.
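Concretely, that would look something like this (a sketch, assuming the
install prefix used elsewhere in this thread; component plugins follow
the mca_btl_<name>.* naming convention):

  cd ~/openmpi-1.2/lib/openmpi
  mkdir -p disabled
  mv mca_btl_openib.* mca_btl_mvapi.* disabled/   # park the IB plugins
  # re-run the benchmark -- Open MPI can't load what isn't there

(mv will complain if a component was never built; that's harmless here.)
Moving the files back restores the original behavior.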
On Apr 18, 2007, at 8:44 AM, stephen mulcahy wrote:
~/openmpi-1.2/bin/mpirun --mca btl_tcp_if_include eth0 --mca btl
tcp,self --bynode -np 2 --hostfile ~/openmpi.hosts.80
~/IMB/IMB-MPI1-openmpi -npmin 2 pingpong
Neither one resulted in a significantly different benchmark.
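One more way to see what is actually being used: turn up the BTL
verbosity and watch which components get opened (a sketch; 30 is just a
guess at a level chatty enough to show component selection):

  ~/openmpi-1.2/bin/mpirun --mca btl tcp,self --mca btl_base_verbose 30 \
    --bynode -np 2 --hostfile ~/openmpi.hosts.80 \
    ~/IMB/IMB-MPI1-openmpi -npmin 2 pingpong

If the exclusion is working, the component open/select messages should
mention tcp and self only.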
Hi,
Thanks. I'd actually come across that and tried it also ... but just to
be sure, here's what I just tried:
[smulcahy@foo ~]$ ~/openmpi-1.2/bin/mpirun -v --display-map --mca btl
^openib,mvapi --bynode -np 2 --hostfile ~/openmpi.hosts.2only
~/IMB/IMB-MPI1-openmpi -npmin 2 pingpong
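As a sanity check that the exclusion has something to exclude, listing
the installed BTL components should work along these lines:

  ~/openmpi-1.2/bin/ompi_info | grep btl

Each built BTL shows up as an "MCA btl:" line; if openib and mvapi never
appear there, IB was never a candidate for this install anyway.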
Look here:
http://www.open-mpi.org/faq/?category=tuning#selecting-components
General idea:
mpirun -np 2 --mca btl ^tcp  (to exclude Ethernet); replace ^tcp with
^openib (or ^mvapi) to exclude InfiniBand.
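For instance, to rule out both IB transports in one go (a sketch reusing
the hostfile and binary names from earlier in this thread):

  mpirun -np 2 --mca btl ^openib,mvapi --hostfile ~/openmpi.hosts.2only \
    ~/IMB/IMB-MPI1-openmpi -npmin 2 pingpong

The ^ applies to the whole comma-separated list, and inclusive (tcp,self)
and exclusive (^openib) forms can't be mixed in a single value.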
Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985