Thanks Jeff,
It worked. Now latency and bandwidth benchmarks are performing as
expected for both Ethernet and InfiniBand.
--Ansar
On Wed, Sep 10, 2014 at 3:34 PM, Jeff Squyres (jsquyres) wrote:
Are you inadvertently using the MXM MTL? That's an alternate Mellanox
transport that may activate itself, even if you've disabled the openib BTL.
Try this:
mpirun --mca pml ob1 --mca btl ^openib ...
This forces the use of the ob1 PML (which forces the use of the BTLs, not the
MTLs), and therefore keeps the MXM MTL from being selected.
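If you want to confirm what is actually selected at run time, one option (just a
sketch; the process count and benchmark binary here are placeholders) is to raise
the PML/BTL verbosity and read the selection output:

  mpirun --mca pml ob1 --mca btl ^openib \
         --mca pml_base_verbose 10 --mca btl_base_verbose 10 \
         -np 2 ./your_benchmark

You can also run "ompi_info | grep mxm" to see whether the MXM MTL is even built
into your installation.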
> RX packets:66889 errors:0 dropped:0 overruns:0 frame:0
> TX packets:66889 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:0
> RX bytes:19005445 (18.1 MiB) TX bytes:19005445 (18.1 MiB)
>
> Date: Wed, 10 Sep 2014 00:06:51 +0900
> From: George Bosilca
> To: Open MPI Users
> Subject: Re: [OMPI users] Forcing OpenMPI to use Ethernet interconnect
> instead of InfiniBand
>
Look at your ifconfig output and select the Ethernet device (instead of the
IPoIB one). Traditionally the name lacks any fanciness, with most distributions
using eth0 as the default.
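For example (just a sketch; substitute your actual device name and benchmark
binary), the TCP BTL can be pinned to that device with the btl_tcp_if_include
parameter:

  mpirun --mca btl tcp,sm,self --mca btl_tcp_if_include eth0 -np 2 ./your_benchmark

The parameter accepts interface names or CIDR subnets, so MPI traffic will stay
off the IPoIB interface (usually ib0).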
George.
On Tue, Sep 9, 2014 at 11:24 PM, Muhammad Ansar Javed <
muhammad.an...@seecs.edu.pk> wrote:
Hi,
I am currently conducting some testing on a system with Gigabit Ethernet and
InfiniBand interconnects. Both latency and bandwidth benchmarks are performing
as expected on the InfiniBand interconnect, but the Ethernet interconnect is
achieving much higher performance than expected. Ethernet and InfiniBand
both