It *should* work. We stopped developing for the Cisco (mVAPI) stack a while ago, but as far as we know, it still works fine. See:

    http://www.open-mpi.org/faq/?category=openfabrics#vapi-support

That being said, your approach of "it ain't broke, don't fix it" is certainly quite reasonable.
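If you want a quick sanity check once it is built, you can force the mvapi BTL at run time so that the job fails outright instead of silently falling back to TCP. A rough sketch (the hostfile and executable names below are just placeholders):

    # Ask for the mvapi BTL explicitly (plus "self" for loopback);
    # mpirun will abort if the mvapi BTL cannot be brought up.
    mpirun --mca btl mvapi,self -np 4 --hostfile myhosts ./my_mpi_app

    # List the MCA parameters that the mvapi BTL understands.
    ompi_info --param btl mvapi

If that run completes over mvapi, you should be in good shape.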


On Jul 23, 2007, at 4:51 PM, Jeff Pummill wrote:

Hmmm... compilation SEEMED to go OK with the following configure line...

    ./configure --prefix=/nfsutil/openmpi-1.2.3 --with-mvapi=/usr/local/topspin/ CC=icc CXX=icpc F77=ifort FC=ifort CFLAGS=-m64 CXXFLAGS=-m64 FFLAGS=-m64 FCFLAGS=-m64

And the following looks promising...

    ./ompi_info | grep mvapi
    MCA btl: mvapi (MCA v1.0, API v1.0.1, Component v1.2.3)

I have a post-doc who will test some application code in the next day or so. Maybe the old stuff will work just fine!
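If it helps, the smoke test I have in mind is just one of the examples shipped in the source tarball, roughly along these lines (paths from memory, so adjust as needed):

    # Build the hello-world example from the source tree with the
    # installed wrapper compiler, then run it over the mvapi BTL.
    /nfsutil/openmpi-1.2.3/bin/mpicc examples/hello_c.c -o hello_c
    /nfsutil/openmpi-1.2.3/bin/mpirun --mca btl mvapi,self -np 4 ./hello_c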


Jeff F. Pummill
Senior Linux Cluster Administrator
University of Arkansas
Fayetteville, Arkansas 72701



Jeff Pummill wrote:
Good morning all,

I have been very impressed so far with Open MPI on one of our smaller clusters running GNU compilers and Gig-E interconnects, so I am considering a build on our large cluster. The potential problem is that the compilers are Intel 8.1 versions and the InfiniBand is supported by three-year-old Topspin (now Cisco) drivers and libraries.

Basically, this is a cluster that runs a very heavy workload using MVAPICH, so we have adopted the "if it ain't broke, don't fix it" methodology; as a result, all of the drivers, libraries, and compilers are approximately three years old.

Would it be reasonable to expect Open MPI 1.2.3 to build and run in such an environment?

Thanks!

Jeff Pummill
University of Arkansas


--
Jeff Squyres
Cisco Systems
