Re: [OMPI users] Problem in Open MPI (v1.8) Performance on 10G Ethernet

2014-04-21 Thread Muhammad Ansar Javed
No, I have not tried multi-link.

On Mon, Apr 21, 2014 at 11:50 PM, George Bosilca wrote:
> Have you tried the multi-link? Did it help?
>
> George.
>
> On Apr 21, 2014, at 10:34, Muhammad Ansar Javed <muhammad.an...@seecs.edu.pk> wrote:
> > I am able to achieve around 90% (maximum 9390 Mbps) bandwidth on 10GE. [...]

Re: [OMPI users] Problem in Open MPI (v1.8) Performance on 10G Ethernet

2014-04-21 Thread George Bosilca
Have you tried the multi-link? Did it help?

George.

On Apr 21, 2014, at 10:34, Muhammad Ansar Javed wrote:
> I am able to achieve around 90% (maximum 9390 Mbps) bandwidth on 10GE. There were configuration issues; disabling Intel SpeedStep and interrupt coalescing helped in achieving the expected network bandwidth. [...]

Re: [OMPI users] Problem in Open MPI (v1.8) Performance on 10G Ethernet

2014-04-21 Thread Muhammad Ansar Javed
I am able to achieve around 90% (maximum 9390 Mbps) bandwidth on 10GE. There were configuration issues; disabling Intel SpeedStep and interrupt coalescing helped in achieving the expected network bandwidth. Varying the send and recv buffer sizes from 128 KB to 1 MB added just 50 Mbps, with maximum bandwidth [...]
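A bandwidth figure like the one quoted above is typically measured with a ping-pong benchmark. The following is a minimal sketch (not the poster's actual benchmark; message size and iteration count are illustrative choices): rank 0 sends a buffer to rank 1 and waits for the echo, timing many round trips to estimate point-to-point bandwidth.

```c
/* Minimal MPI ping-pong bandwidth sketch. Run with: mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 100;
    const int size  = 1 << 20;            /* 1 MB per message */
    char *buf = malloc(size);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        /* two transfers per iteration (send + echo), converted to Gbit/s */
        double gbits = (2.0 * iters * (double)size * 8.0) / (t1 - t0) / 1e9;
        printf("Estimated bandwidth: %.2f Gbit/s\n", gbits);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}
```

Note that results for such a benchmark are sensitive to the NIC settings discussed above (interrupt coalescing, CPU frequency scaling) as well as to the message size.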

[OMPI users] MPI one-sided communication questions

2014-04-21 Thread Tobias Burnus
Dear all,

I would like to do one-sided communication as the implementation of a Fortran coarray library. "MPI provides three synchronization mechanisms:

"1. The MPI_WIN_FENCE collective synchronization call supports a simple synchronization pattern that is often used in parallel computations: namely [...]