If you don't use btl_tcp_if_include, Open MPI should use all available Ethernet devices, and *should* (although I haven't tested this recently) only use devices that are routable to each specific peer. Specifically, if you're on a node with eth0-eth3, it should use all of them to connect to another peer that also has eth0-eth3, but only use eth0 and eth1 to connect to a peer that has just those 2 devices. (All of the above assumes that all your eth0's are on one subnet, all your eth1's are on another subnet, etc.)
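As a rough sketch of the difference on the command line (the hostfile and application names below are just placeholders):

    # Default: let Open MPI pick whichever Ethernet devices are routable to each peer
    mpirun -np 48 --hostfile myhosts ./my_app

    # Explicit: force the same fixed set of interfaces on every node
    mpirun -np 48 --hostfile myhosts --mca btl_tcp_if_include eth0,eth1 ./my_app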

Does that work for you?


On Aug 25, 2009, at 7:14 PM, Jayanta Roy wrote:

Hi,

I am using Open MPI (version 1.2.2) for MPI data transfer using non-blocking MPI calls like MPI_Isend, MPI_Irecv, etc. I am using "--mca btl_tcp_if_include eth0,eth1" to use both Ethernet links for data transfer among 48 nodes. Now I have added eth2 and eth3 links on the 32 compute nodes. My aim is to share the high-speed data within the 32 compute nodes through eth2 and eth3. But I can't include these as part of "--mca" because the remaining 16 nodes do not have these additional interfaces. In MPI/Open MPI, can one specify an explicit routing table within a set of nodes? Then I could edit /etc/hosts to add hostnames for these new interfaces and add those hosts to the MPI hostfile.

Regards,
Jayanta


--
Jeff Squyres
jsquy...@cisco.com
