No, I have not tried multi-link.
On Mon, Apr 21, 2014 at 11:50 PM, George Bosilca wrote:
> Have you tried the multi-link? Did it help?
>
> George.
>
>
> On Apr 21, 2014, at 10:34, Muhammad Ansar Javed <
> muhammad.an...@seecs.edu.pk> wrote:
>
> I am able to achieve around 90% (maximum 9390 Mbps) bandwidth on 10GE.
I am able to achieve around 90% (maximum 9390 Mbps) bandwidth on 10GE.
There were configuration issues; disabling Intel SpeedStep and interrupt
coalescing helped in achieving the expected network bandwidth. Varying send
and recv buffer sizes from 128 KB to 1 MB added just 50 Mbps to the maximum
bandwidth.
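For reference, the coalescing and buffer changes are the standard
ethtool/sysctl ones, roughly as follows (interface name and values are
illustrative, not the exact ones used here; SpeedStep itself is a
BIOS/cpufreq setting):

    # disable adaptive interrupt coalescing on the 10GE NIC
    ethtool -C eth0 adaptive-rx off adaptive-tx off rx-usecs 0
    # raise the kernel caps on socket buffer sizes to 1 MB
    sysctl -w net.core.rmem_max=1048576
    sysctl -w net.core.wmem_max=1048576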
Muhammad,
Our configuration of TCP is tailored for 1 Gbps networks, so its performance
on 10G might be sub-optimal. That being said, the remainder of this email is
speculation, as I do not have access to a 10G system to test on.
There are two things that I would test to see if I can improve performance.
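The multi-link support and the TCP BTL buffer sizes are the obvious knobs;
the corresponding MCA parameters look something like this (the values are
illustrative, not tuned recommendations):

    # stripe each peer connection over multiple TCP links
    mpirun --mca btl tcp,self --mca btl_tcp_links 2 ...
    # enlarge the TCP BTL send/receive socket buffers (bytes)
    mpirun --mca btl_tcp_sndbuf 1048576 --mca btl_tcp_rcvbuf 1048576 ...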
Hi Ralph,
Yes, you are right. I should have tested the NetPipe-MPI version earlier as
well. I ran the NetPipe-MPI version on 10G Ethernet and the maximum bandwidth
achieved is 5872 Mbps. Moreover, the maximum bandwidth achieved by the osu_bw
test is 6080 Mbps. I used OSU Micro-Benchmarks version 4.3.
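Both were plain two-process runs, roughly of this form (host names are
illustrative):

    # NetPipe-MPI (NPmpi is the binary the NetPIPE build produces)
    mpirun -np 2 -host host1,host2 NPmpi
    # osu_bw from OSU Micro-Benchmarks 4.3
    mpirun -np 2 -host host1,host2 osu_bw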
I apologize, but I am now confused. Let me see if I can translate:
* you ran the non-MPI version of the NetPipe benchmark and got 9.5 Gbps on a
10 Gbps network
* you ran iperf and got 9.61 Gbps - however, this has nothing to do with MPI;
it just tests your TCP stack
* you tested your bandwidth program on a 1 Gbps network and got 901 Mbps
Yes, I have tried NetPipe-Java and iperf for bandwidth and configuration
tests. NetPipe-Java achieves a maximum of 9.40 Gbps, while iperf achieves a
maximum of 9.61 Gbps. I have also tested my bandwidth program on a 1 Gbps
Ethernet connection, where it achieves 901 Mbps. I am using the same
program in both cases.
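The iperf check is the usual client/server pair (host name is illustrative):

    iperf -s          # on the receiving host
    iperf -c host1    # on the sending host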
Have you tried a typical benchmark (e.g., NetPipe or OMB) to ensure the
problem isn't in your program? Outside of that, you might want to explicitly
tell it to --bind-to core just to be sure it does so - it's supposed to do
that by default, but might as well be sure. You can check by adding
--report-bindings to the command line.
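For example, something along these lines (host names and program name are
illustrative):

    mpirun -np 2 -host host1,host2 --bind-to core --report-bindings ./your_bw_test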
Hi,
I am trying to benchmark Open MPI performance on a 10G Ethernet network
between two hosts. The performance numbers of the benchmarks are lower than
expected. The maximum bandwidth achieved by OMPI-C is 5678 Mbps, and I was
expecting around 9000+ Mbps. Moreover, latency is also quite a bit higher
than expected.
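A typical two-host TCP run, pinned to the 10GE interface, would look roughly
like this (interface, host, and program names are illustrative):

    mpirun -np 2 -host host1,host2 --mca btl tcp,self \
           --mca btl_tcp_if_include eth0 ./bandwidth_test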