Hi Jeff,
Given that each of my 16 nodes has eth0, eth1, and eth2 interfaces, I tried
running data transfers among them using mpirun, but without specifying
"btl_tcp_if_include". I got only a 15% increase in uni-directional data
transfer rate when using 3 links. But if I run two such processes
If you don't use btl_tcp_if_include, Open MPI should use all available
Ethernet devices, and *should* (although I haven't tested this
recently) use only devices that are routable to specific peers.
Specifically, if you're on a node with eth0-3, it should use all of
them to connect to anoth
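To make the two setups being compared concrete, here is a sketch of the
relevant mpirun invocations. The hostfile name and benchmark binary
(transfer_bench) are hypothetical placeholders; only the --mca
btl_tcp_if_include parameter itself comes from the discussion above:

```shell
# Default behavior: the TCP BTL considers all routable Ethernet
# interfaces on each node (no btl_tcp_if_include needed).
mpirun -np 16 --hostfile hosts ./transfer_bench

# Explicitly restrict the TCP BTL to a specific set of interfaces:
mpirun -np 16 --hostfile hosts \
    --mca btl_tcp_if_include eth0,eth1,eth2 ./transfer_bench
```

Note that btl_tcp_if_include and its counterpart btl_tcp_if_exclude are
mutually exclusive; specify at most one of them.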
On Tue, Aug 25, 2009 at 09:44:29PM +0530, Jayanta Roy wrote:
>
>Hi,
>I am using Open MPI (version 1.2.2) for MPI data transfer using
>non-blocking MPI calls like MPI_Isend, MPI_Irecv, etc. I am using "--mca
>btl_tcp_if_include eth0,eth1" to use both eth links for data
>transfe