On 11 Oct 2007, at 07:16, Neeraj Chourasia wrote:
Dear All,
Could anyone tell me the important tuning parameters for Open MPI
with an IB interconnect? I tried setting the eager_rdma, min_rdma_size,
and mpi_leave_pinned parameters from the mpirun command line on a
38-node cluster (38*2 processors), but in vain: a plain mpirun with no
MCA parameters performed better. The test was a point-to-point
send/receive with a data size of 8 MB.
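For reference, the kind of command line I mean is sketched below; the
exact MCA parameter spellings (btl_openib_use_eager_rdma,
btl_openib_min_rdma_size) vary between Open MPI versions, and the
hostfile and the ./p2p_bench binary are only placeholders:

  mpirun -np 76 --hostfile hosts \
      --mca btl openib,self \
      --mca btl_openib_use_eager_rdma 1 \
      --mca btl_openib_min_rdma_size 65536 \
      --mca mpi_leave_pinned 1 \
      ./p2p_bench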
Similarly, I patched the HPL Linpack code with libNBC (non-blocking
collectives) and found no performance benefit. I went through the
patch and it appears not to be overlapping computation with
communication.
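To make clear what I mean by overlap, below is a minimal two-rank
sketch of the pattern using plain non-blocking point-to-point MPI;
libNBC is supposed to provide the same pattern for collectives (its
NBC_* calls are not shown here, and the buffer size and work loop are
arbitrary):

  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Run with exactly 2 ranks: rank 0 sends 8 MB to rank 1, both ranks
     do independent computation while the transfer is (ideally)
     progressed in the background, then they wait for completion. */
  int main(int argc, char **argv)
  {
      const int n = 1048576;                  /* 8 MB of doubles */
      double *buf, local = 0.0;
      int rank, i;
      MPI_Request req;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      buf = malloc(n * sizeof(double));
      for (i = 0; i < n; i++)
          buf[i] = (double)i;

      if (rank == 0)
          MPI_Isend(buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
      else
          MPI_Irecv(buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);

      /* independent work that could overlap with the transfer */
      for (i = 0; i < 10000000; i++)
          local += 1.0 / (i + 1);

      MPI_Wait(&req, MPI_STATUS_IGNORE);

      if (rank == 0)
          printf("done, local = %f\n", local);
      free(buf);
      MPI_Finalize();
      return 0;
  }

Whether the transfer actually makes progress during the compute loop
depends on the MPI library and the interconnect, and that is exactly
the overlap that does not seem to happen with the libNBC-patched HPL.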
Any help in this direction would be appreciated.
-Neeraj
Hi!
I'm Matteo, and I work for a company in Italy that produces HPC
systems. I'm new at the company and I'm looking for some help, and
this thread seems like a good place to ask :)
Over the last few days we have been benchmarking a system, and I'm
interested in some performance figures for the InfiniBand interconnect.
The nodes are dual dual-core Opteron machines and we use Mellanox
Cougar Cub PCI-X IB interfaces. The machines have the 8111 system
controller and the 8131 PCI-X bridge.
We reach a rate of about 600 MB/s in the point-to-point tests. This
rate is reported, more or less, both by the ib_*_bw benchmarks and by
the IMB-MPI (Sendrecv) benchmarks, version 3.
The MPI implementation is, of course, Open MPI.
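In case it helps, invocations along these lines are what I mean by
those benchmarks (the hostnames and hostfile are placeholders, and
option spellings may differ slightly between perftest/IMB versions):

  # verbs-level bandwidth: server on one node, client on the other
  node01$ ib_write_bw
  node02$ ib_write_bw node01

  # MPI-level bandwidth with the Intel MPI Benchmarks
  mpirun -np 2 --hostfile hosts ./IMB-MPI1 Sendrecv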
I've read in a few places that a similar setup can reach about
800 MB/s on machines like those described above. Can someone confirm
this? Does anyone have similar hardware and measure a bandwidth better
than 600 MB/s?
Hints? Comments?
Thank you in advance,
Best regards,
---
Cicuttin Matteo
http://www.matteocicuttin.it
Black holes are where god divided by zero