On 7/14/06 10:40 AM, "Michael Kluskens" <mklusk...@ieee.org> wrote:

> I've looked through the documentation but I haven't found the
> discussion about what each BTL device is, for example, I have:
> 
> MCA btl: self (MCA v1.0, API v1.0, Component v1.2)

This is the "loopback" Open MPI device.  It is used exclusively for sending
and receiving from one process to the same process.  I.e., message passing
is effected by memcpy's in the same process -- no network is involved (not
even shared memory, because it's within a single process).

We do this not for optimization, but rather for software engineering reasons
-- by having a "self" BTL, all the other BTLs can assume that they never
have to handle the special case of "sending/receiving to self".
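
Here's a minimal sketch (not from the Open MPI docs; the file name and
values are made up) showing that a rank can legally send to itself -- in
Open MPI, this is exactly the traffic the "self" BTL carries:

    /* self_send.c: a rank sends a message to itself; in Open MPI this
       goes through the "self" BTL (effectively a memcpy). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value = 42, result = -1;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Nonblocking send to our own rank, then the matching blocking
           receive.  (A blocking MPI_Send to self may deadlock for large
           messages, hence MPI_Isend.) */
        MPI_Isend(&value, 1, MPI_INT, rank, 0, MPI_COMM_WORLD, &req);
        MPI_Recv(&result, 1, MPI_INT, rank, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        printf("rank %d received %d from itself\n", rank, result);
        MPI_Finalize();
        return 0;
    }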

> MCA btl: sm (MCA v1.0, API v1.0, Component v1.2)

This is shared memory.  It is used to communicate between processes on the
same node.
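
If you want to control which BTLs are used (e.g., to verify that shared
memory is being exercised), you can restrict the set at run time with the
"btl" MCA parameter -- for example (assuming your executable is ./a.out):

    mpirun --mca btl self,sm -np 2 ./a.out

Note that "self" should essentially always be in the list, per the above.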

> MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)

I think this one is pretty obvious.  ;-)

> I found a PDF presentation that describes a few:
> 
> • tcp - TCP/IP
> • openib - InfiniBand OpenIB stack
> • gm/mx - Myrinet GM/MX
> • mvapi - InfiniBand Mellanox verbs
> • sm - Shared Memory
> 
> Are there any others I may see when interacting with other people's
> computers?

These are the main ones for now.  There may be more in the future.
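
You can always check what a given installation supports by running
ompi_info and looking for the "MCA btl:" lines, e.g.:

    ompi_info | grep "MCA btl"

One line will be printed for each BTL component that was built.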

> I assume that if a machine has Myrinet and I don't see MCA btl: gm or
> MCA btl: mx then I have to explain the problem to the sysadmins.

Correct.

> The second question is should I see both gm & mx, or only one or the
> other.

Probably just one or the other; I *believe* that you cannot have both GM
and MX active on the same node.  That being said, you can have the *support
libraries* for both installed on the same node, so Open MPI can build
support for both and show both BTLs in the output of ompi_info.  But only
one will *run* at a time.
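
(Same mechanism as above: if both show up in ompi_info and you want to be
explicit about which one is used, you can name it directly, e.g.
"mpirun --mca btl self,sm,gm ..." or "--mca btl self,sm,mx ...".)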

Sorry for the delay on the answer -- hope this helps!

-- 
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems
