On Tue, Aug 25, 2009 at 09:44:29PM +0530, Jayanta Roy wrote:
> 
>    Hi,
>    I am using Open MPI (version 1.2.2) for data transfer with
>    non-blocking MPI calls like MPI_Isend, MPI_Irecv, etc. I am using
>    "--mca btl_tcp_if_include eth0,eth1" to use both eth links for
>    data transfer across 48 nodes.  I have now added eth2 and eth3
>    links on 32 of the compute nodes. My aim is to move the high-speed
>    data among those 32 compute nodes through eth2 and eth3, but I
>    cannot list these interfaces in the "mca" option because the
>    remaining 16 nodes do not have them. In MPI/Open MPI, can one
>    specify an explicit routing table within a set of nodes? Then I
>    could edit /etc/hosts to add hostnames for the new interfaces and
>    put those hosts in the MPI hostfile.
>    Regards,
>    Jayanta

Since you are using the btl_tcp component, you need to look at TCP/IP
routing at the system level to accomplish this.  With multiple links
you have to be aware of the routing tables and of how host names are
resolved.  If you work from the idea that it is the interfaces that
have names (not the host), a number of things get simpler.
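
One Open MPI-side aside, hedged since I have not tried it on 1.2.2:
MCA parameters can also be set in a per-node file that each process
reads on the node where it runs, so nodes with different interface
sets can each list only their own.  A sketch, using the interface
names from your mail:

    # <prefix>/etc/openmpi-mca-params.conf on the 32 fat nodes
    btl_tcp_if_include = eth0,eth1,eth2,eth3

    # and in the same file on the remaining 16 nodes
    btl_tcp_if_include = eth0,eth1

(~/.openmpi/mca-params.conf works too, but not if home directories
are shared across nodes.)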

For a modest number of hosts, host-routing tricks might work, but you
must build the routes by hand.
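
A sketch with iproute2, run on host5 and borrowing the addresses from
the example further down (all of them assumptions):

    # reach host6's fast interfaces over the matching links
    ip route add 192.168.2.6/32 dev eth2
    ip route add 192.168.3.6/32 dev eth3
    # or steer traffic for host6's primary address onto a fast link
    ip route add 192.168.0.6/32 via 192.168.2.6 dev eth2

If you steer a primary address across a different link, watch
rp_filter on the receiving side; it may drop the asymmetric traffic.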

Do pay attention to subnets.  Subnet routing can simplify things:
give each link its own subnet and the kernel's connected routes send
each subnet out the right interface with nothing to build by hand.
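
Concretely (again a sketch with assumed addresses):

    ip addr add 192.168.2.5/24 dev eth2
    ip addr add 192.168.3.5/24 dev eth3

    # the kernel installs the connected routes by itself:
    #   192.168.2.0/24 dev eth2  proto kernel  scope link  src 192.168.2.5
    #   192.168.3.0/24 dev eth3  proto kernel  scope link  src 192.168.3.5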

Do organize host names to identify the associated interface.
Aliases (DNS CNAMEs, or extra names in /etc/hosts) let you control
which interface's address a given name resolves to, and so which
interface traffic arrives on.  Something like this...

    192.168.0.5 host5-eth0   host5      
    192.168.1.5 host5-eth1
    192.168.2.5 host5-eth2
    192.168.3.5 host5-eth3
    192.168.0.6 host6-eth0
    192.168.1.6 host6-eth1   host6
    192.168.2.6 host6-eth2
    192.168.3.6 host6-eth3
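
With names like these in place, a run confined to the 32 fat nodes
can name the fast interfaces explicitly (a sketch; the hostfile name
and slot counts are made up):

    # hostfile "fat32", listing only the 32 compute nodes
    host5-eth2 slots=8
    host6-eth2 slots=8
    # ...and so on for the rest of the 32

    mpirun --hostfile fat32 --mca btl_tcp_if_include eth2,eth3 ./a.out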

Smart routing daemons can help as much as they confuse, and site or
campus routers that you do not control might also matter.

-- 
        T o m  M i t c h e l l 
        Found me a new hat, now what?
