Hi,
I am trying to benchmark Open MPI performance on a 10G Ethernet network
between two hosts. The benchmark numbers are lower than expected: the
maximum bandwidth achieved by OMPI-C is 5678 Mbps, while I was expecting
around 9000+ Mbps. Latency is also considerably higher than expected.
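For reference, a 10G Ethernet link with a standard 1500-byte MTU tops out at
roughly 9.4 Gbps of TCP payload once Ethernet, IP, and TCP framing overhead is
subtracted, so an expectation of 9000+ Mbps is realistic and 5678 Mbps points
to a software or configuration limit rather than the link itself.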
> to be sure it does so - it's
> supposed to do that by default, but might as well be sure. You can check by
> adding --report-bindings to the cmd line.
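As an illustration (the host names and benchmark binary below are placeholders,
not taken from this thread), the binding report can be requested with:

  mpirun --report-bindings -np 2 -host node1,node2 ./osu_bw

Each rank should then print its core/socket binding to stderr at launch.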
>
>
> On Apr 14, 2014, at 11:10 PM, Muhammad Ansar Javed <
> muhammad.an...@seecs.edu.pk> wrote:
>
> Hi,
>
> standard benchmarks, and so we should
> first ensure we aren't chasing a ghost.
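A quick way to rule out the network itself, assuming iperf is installed on
both hosts (the host name below is a placeholder):

  iperf -s                # on the receiving host
  iperf -c node1 -t 30    # on the sending host

Then compare the raw TCP throughput iperf reports with what the MPI benchmark
achieves.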
>
> On Wed, Apr 16, 2014 at 1:41 AM, Muhammad Ansar Javed <
> muhammad.an...@seecs.edu.pk> wrote:
>
>> Yes, I have tried NetPipe-Java and iperf for bandwidth and configuration
> the following (but here I’m more skeptical). Try
> pushing the value of btl_tcp_endpoint_cache up. This parameter is not to be
> used eagerly in real applications with a complete communication pattern,
> but for a benchmark it might be a good fit.
>
> George.
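A sketch of that suggestion (the cache size shown is only an illustration, not
a recommended value, and the hosts and binary are placeholders):

  mpirun --mca btl_tcp_endpoint_cache 65536 -np 2 -host node1,node2 ./osu_bw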
>
> On Apr 16
No, I have not tried multi-link.
On Mon, Apr 21, 2014 at 11:50 PM, George Bosilca wrote:
> Have you tried the multi-link? Did it help?
>
> George.
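If "multi-link" here means opening several TCP connections per peer, the TCP
BTL exposes a btl_tcp_links parameter for that; a hedged sketch (the value,
hosts, and binary are placeholders) would be:

  mpirun --mca btl_tcp_links 2 -np 2 -host node1,node2 ./osu_bw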
>
>
> On Apr 21, 2014, at 10:34 , Muhammad Ansar Javed <
> muhammad.an...@seecs.edu.pk> wrote:
>
> I am abl
Hi,
I am currently conducting some tests on a system with Gigabit Ethernet and
InfiniBand interconnects. Both latency and bandwidth benchmarks perform as
expected on the InfiniBand interconnect, but the Ethernet interconnect is
achieving much higher performance than expected. Ethernet and InfiniBand
both
> at the Ethernet device (instead of the
> IPoIB one). Traditionally the name lacks any fanciness, with most distributions
> using eth0 as a default.
>
> George.
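Assuming the Ethernet device really is eth0 (check with ip addr or ifconfig),
the TCP BTL can be pinned to it like this; the hosts and benchmark binary are
placeholders:

  mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0 -np 2 -host node1,node2 ./osu_latency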
>
>
> On Tue, Sep 9, 2014 at 11:24 PM, Muhammad Ansar Javed <
> muhammad.an...@seecs.edu.pk> wrote:
>
>>> Date: Wed, 10 Sep 2014 00:06:51 +0900
>>> From: George Bosilca
>>> To: Open MPI Users
>>> Subject: Re: [OMPI users] Forcing OpenMPI to use Ethernet interconnect
>>> instead of InfiniBand
>>>
> support that may activate itself, even if you've disabled the openib
> BTL. Try this:
>
> mpirun --mca pml ob1 --mca btl ^openib ...
>
> This forces the use of the ob1 PML (which forces the use of the BTLs, not
> the MTLs), and then disables the openib BTL.
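Putting this together with the interface selection suggested earlier, a full
launch line might look like the following (the interface name, hosts, and
benchmark binary are placeholders):

  mpirun --mca pml ob1 --mca btl ^openib --mca btl_tcp_if_include eth0 -np 2 -host node1,node2 ./osu_latency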
>
>
> On Sep