On Aug 23, 2005, at 6:48 AM, Peter Kjellström wrote:

> First I'd like to say that I'm really happy and excited that public access
> to svn is now open :-)

Thanks! We're glad it's finally open too -- FWIW, at least a good portion of the loooong delays in opening up the code were because we were making sure we had all the licensing issues worked out properly (open source does not automatically mean easy licensing!). Open MPI is using the BSD license -- see http://www.open-mpi.org/community/license.php -- we just had to get all the paperwork done properly before we could open the code base to the world.

> Here is what went fine: check-out, autogen, configure, make, ompi_info, and
> a simple MPI app (both build and run!!!)

Excellent!
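
For anyone trying this at home, a trivial test app along these lines is enough to smoke-test a build (just a sketch of a typical "hello world", not Peter's actual code):

        #include <stdio.h>
        #include <mpi.h>

        /* Minimal MPI smoke test: each process prints its rank. */
        int main(int argc, char *argv[])
        {
            int rank, size;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);
            printf("Hello from rank %d of %d\n", rank, size);
            MPI_Finalize();
            return 0;
        }

Build it with the mpicc wrapper compiler and launch it with mpirun, e.g.:

        mpicc hello.c -o hello
        mpirun -np 4 hello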

> Now I'd like control over which channels/transports/networks the data
> flows... I configured and built ompi against mvapi (mellanox ibgd-1.8.0)
> and as far as I can tell it went well. Judging by the behaviour of the
> tests I have done, it defaults to tcp (over ethernet in my case). How do I
> select mvapi?

The Open MPI guys working on IB (from Los Alamos) are at the IB workshop this week, and their responses may be a bit slow. They're the ones who can give the definitive answers, but I'll take a shot...

I'm a little surprised that tcp was used -- OMPI should "prefer" the low-latency interconnects (such as mvapi) to tcp and automatically use them.

I see from your ompi_info output that the 2 mvapi components were built and installed properly, so that's good.
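
(If you want to double-check which BTLs are available and what parameters they take, something like:

        ompi_info --param btl all

should list them all, along with their MCA parameters -- the exact option syntax may vary a bit between versions.)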

A little background:

As you know, Open MPI is built upon a component architecture (think plug-ins). See the FAQ for a little more info on this. The lowest-layer component dealing with the different interconnects is the Byte Transfer Layer (BTL). You can tell Open MPI which BTLs it should use at run time via Modular Component Architecture (MCA) parameters. For example:

        mpirun --mca btl_base_include self,mvapi -np 4 a.out

This will tell OMPI that you want to use the "self" (i.e., loopback) and "mvapi" BTLs, and no others.

Try this and see if you get better results.
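
As an aside, any MCA parameter can also be set in the environment by prefixing its name with OMPI_MCA_, which is handy if you don't want to pass --mca on every command line. For example (assuming a Bourne-style shell):

        export OMPI_MCA_btl_base_include=self,mvapi
        mpirun -np 4 a.out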

--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
