Nope, just one Ethernet interface:

$ ifconfig
eth0      Link encap:Ethernet  HWaddr 0E:47:0E:0B:59:27
          inet addr:xxx.xxx.xxx.xxx  Bcast:xxx.xxx.xxx.xxx  Mask:255.255.252.0
          inet6 addr: fe80::c47:eff:fe0b:5927/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:16962 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11564 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:28613867 (27.2 MiB)  TX bytes:1092650 (1.0 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:68 errors:0 dropped:0 overruns:0 frame:0
          TX packets:68 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:6647 (6.4 KiB)  TX bytes:6647 (6.4 KiB)


-- 
Gary Jackson




From:  users <users-boun...@open-mpi.org> on behalf of Gilles Gouaillardet
<gilles.gouaillar...@gmail.com>
Reply-To:  Open MPI Users <us...@open-mpi.org>
Date:  Tuesday, March 8, 2016 at 9:39 AM
To:  Open MPI Users <us...@open-mpi.org>
Subject:  Re: [OMPI users] Poor performance on Amazon EC2 with TCP


Gary,

How many Ethernet interfaces are there?
If there are several, can you try again with only one:
mpirun --mca btl_tcp_if_include eth0 ...
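
(For reference, a minimal sketch of a full invocation; the host names,
process count, and NPmpi path below are placeholders, not taken from this
thread:)

mpirun -np 2 -host ip-10-0-0-1,ip-10-0-0-2 \
       --mca btl tcp,self \
       --mca btl_tcp_if_include eth0 \
       ./NPmpi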

Cheers,

Gilles

On Tuesday, March 8, 2016, Jackson, Gary L. <gary.jack...@jhuapl.edu>
wrote:


I've built Open MPI 1.10.1 on Amazon EC2. Using NetPIPE, I'm seeing about
half the performance with MPI over TCP compared to raw TCP. Before I start
digging into this more deeply, does anyone know what might cause that?
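
(As a point of reference, a sketch of how such a comparison is typically run
with NetPIPE; the host names below are placeholders:)

# raw TCP: start the receiver on one node, then the sender on the other
./NPtcp                      # on ip-10-0-0-2
./NPtcp -h ip-10-0-0-2       # on ip-10-0-0-1

# MPI over TCP: the same test through Open MPI's TCP BTL
mpirun -np 2 -host ip-10-0-0-1,ip-10-0-0-2 ./NPmpi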

For what it's worth, I see the same issue with MPICH, but I do not see it
with Intel MPI.

-- 
Gary Jackson
