On Tuesday 23 August 2005 14.52, Jeff Squyres wrote:
> ...
> > Now I'd like to control over which channels/transports/networks the
> > data
> > flows... I configured and built ompi against mvapi (mellanox
> > ibgd-1.8.0) and
> > as far as I can tell it went well. Judging by the behaviour of the
> > tests I
> > have done it defaults to tcp (over ethernet in my case). How do I
> > select
> > mvapi?
>
> The Open MPI guys working on IB (from Los Alamos) are at the IB
> workshop this week, and their responses may be a bit slow.  They're the
> ones who can give the definitive answers, but I'll take a shot...

That's ok; I was interested both in the general way to handle transport 
selection and specifically in how to fix this one.

>
> I'm a little surprised that tcp was used -- OMPI should "prefer" the
> low latency interconnects (such as mvapi) to tcp and automatically use
> them.

One main thing I'd like is a "working" --verbose (adding it to mpirun doesn't 
give me a single extra line of output) that would tell me something like:
..looking for transports, found: self, mvapi, tcp.
..testing transports: self ok, mvapi fail, tcp ok.
..assembling final list of transports: self, tcp.
...
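
[Editor's note: something close to the trace sketched above can usually be 
coaxed out of Open MPI's BTL framework via its verbosity MCA parameter. This 
is a hedged sketch; the exact verbosity level, output format, and whether 
the 1.0-era mvapi builds honour it vary between releases.]

```shell
# Restrict transport selection to loopback + mvapi and raise the BTL
# framework's debug verbosity (level 30 is an illustrative value; higher
# numbers generally mean more output).
mpirun --mca btl self,mvapi --mca btl_base_verbose 30 -np 4 ./a.out
```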

>
> I see from your ompi_info output that the 2 mvapi components were built
> and installed properly, so that's good.
>
> ...
>       mpirun --mca btl_base_include self,mvapi -np 4 a.out
>
> This will tell OMPI that you want to use the "self" (i.e., loopback)
> and "mvapi" BTLs, and no others.
>
> Try this and see if you get better results.

Nope: no errors and no extra output, but the same ethernet/tcp-like 
performance (32 us latency, 116 MiB/s bandwidth).
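
[Editor's note: a quick sanity check at this point, sketched under the 
assumption that the mvapi component really did install where the runtime 
looks for it, is to ask ompi_info about that specific component.]

```shell
# Show the MCA parameters registered by the mvapi BTL component; if this
# prints nothing, the component is not being found/loaded at runtime.
ompi_info --param btl mvapi
```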

/Peter

-- 
------------------------------------------------------------
  Peter Kjellström               |
  National Supercomputer Centre  |
  Sweden                         | http://www.nsc.liu.se
