Hey Brock,

Nope, no error messages during execution, and there were no errors when I built Open MPI either, so I think I'm good.
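
Just as an extra sanity check (a sketch; I'm assuming the ompi_info that belongs to the 1.2.3 install is the one on my path), the build can also be checked for the mvapi BTL component directly:

/nfsutil/openmpi-1.2.3/bin/ompi_info | grep btl

If mvapi shows up among the MCA btl components listed, the support made it into the build.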

Thanks for the info. I appreciate it.



Jeff F. Pummill
University of Arkansas
Fayetteville, Arkansas 72701
(479) 575 - 4590
http://hpc.uark.edu




Brock Palen wrote:
You will know if it doesn't: you will get a bunch of messages about not finding an IB card and about Open MPI falling back to another transport.

Do all of your nodes have InfiniBand?
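
If you want it to fail loudly instead of silently falling back, one option (just a sketch of the usual MCA btl selection syntax; on a 1.2.x build with mvapi support the component name should be mvapi, and ./your_app is a placeholder) is to restrict the BTL list at run time:

mpirun --mca btl mvapi,sm,self -np 4 ./your_app

With only mvapi, sm, and self allowed, a job that can't bring up the IB transport should abort instead of quietly running over TCP.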

Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985


On Aug 23, 2007, at 9:27 PM, Jeff Pummill wrote:

I have successfully compiled Open MPI 1.2.3 against the Intel 8.1 compiler
suite and an old (three-year-old) mvapi stack using the following configure line:

configure --prefix=/nfsutil/openmpi-1.2.3
--with-mvapi=/usr/local/topspin/ CC=icc CXX=icpc F77=ifort FC=ifort

Do I need to pass any particular flags on the command-line submission
to ensure that it is using the IB network instead of TCP? Or
possibly disable the Gig-E with ^tcp to see if it still runs successfully?

I just want to be sure that Open MPI is actually USING the IB network
and mvapi.
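
To make that concrete, what I had in mind was something like this (a sketch; ./my_mpi_app stands in for the real binary):

mpirun --mca btl ^tcp -np 16 ./my_mpi_app

i.e. exclude the tcp BTL entirely so the job either goes over IB or fails outright.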

Thanks!

Jeff Pummill
