On Aug 23, 2007, at 11:27 PM, Jeff Pummill wrote:
Nope, no error messages during the execution. Plus, there were no errors
when I built Open MPI, so I guess I am good.
FWIW, you should also get *much* higher latencies and *much* lower
bandwidths if you're not using the IB network. Running ...
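(A minimal sketch, not from the original thread, of what such a check could look like: a tiny MPI ping-pong between two ranks on different nodes. The iteration count, the message size taken from the command line, and the host names in the run line below are placeholder choices; very roughly, small messages over IB should come back in a few microseconds, while a TCP fallback tends to be an order of magnitude slower.)

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Minimal ping-pong between ranks 0 and 1.  Reports the average
 * round-trip time and an effective bandwidth for the chosen size. */
int main(int argc, char **argv)
{
    int rank, size, i;
    int iters = 1000;                              /* arbitrary iteration count */
    int bytes = (argc > 1) ? atoi(argv[1]) : 1;    /* message size in bytes */
    char *buf;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "need at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    buf = malloc(bytes);
    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("%d bytes: %.2f us round trip, %.2f MB/s\n",
               bytes, (t1 - t0) / iters * 1e6,
               2.0 * bytes * iters / (t1 - t0) / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}

Compile with mpicc and run across two nodes, e.g. mpirun -np 2 --host node1,node2 ./pingpong 1048576 (node1/node2 and the binary name are placeholders).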
Hey Brock,
Nope, no error messages during the execution. Plus, there were no errors
when I built Open MPI, so I guess I am good.
Thanks for the info. I appreciate it.
Jeff F. Pummill
University of Arkansas
Fayetteville, Arkansas 72701
(479) 575 - 4590
http://hpc.uark.edu
Brock Palen wrote:
You will know if it doesn't: you will get a bunch of messages about
not finding an IB card and Open MPI falling back to another
transport.
Do all your nodes have InfiniBand?
Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985
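(Not from Brock's message, but one way to take the guessing out of this: restrict the BTL list on the mpirun command line so the job errors out instead of quietly falling back to TCP. For a --with-mvapi build the relevant component should be the mvapi BTL, something like:

mpirun --mca btl mvapi,self -np 4 ./your_app

Here ./your_app is just a placeholder, and "self" is needed so a process can send to itself. If mvapi is unusable on any node, the run should fail with an error rather than silently using another transport.)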
On Aug 23, 2007, at 9:27 PM, Jeff Pummill wrote:
I have successfully compiled Open MPI 1.2.3 against the Intel 8.1 compiler
suite and an old (3 years) mvapi stack using the following configure line:
configure --prefix=/nfsutil/openmpi-1.2.3
--with-mvapi=/usr/local/topspin/ CC=icc CXX=icpc F77=ifort FC=ifort
Do I need to assign any particular flags to t...
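(Not part of the original question or its answers, but a quick sanity check under these assumptions: ompi_info from the installed prefix lists the components that were actually built, so something along the lines of

/nfsutil/openmpi-1.2.3/bin/ompi_info | grep mvapi

should report an mvapi btl component if the --with-mvapi part of the configure actually took effect; if nothing shows up, the IB support never made it into the build.)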