With OpenMPI 1.3 / iWARP you should get around 8us latency using MPI ping-pong tests.
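For reference, here is a minimal sketch of such a ping-pong latency test
(the file name pingpong.c and the host names below are placeholders).
Rank 0 sends a 1-byte message to rank 1, which echoes it back; half the
average round-trip time approximates the one-way latency:

    /* pingpong.c - minimal MPI ping-pong latency sketch; run with exactly 2 ranks */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const int iters = 10000;  /* repetitions to average out timer noise */
        char byte = 0;            /* 1-byte payload, as in typical latency tests */
        int rank, i;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (i = 0; i < iters; i++) {
            if (rank == 0) {
                /* rank 0: send, then wait for the echo */
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                /* rank 1: receive, then echo back */
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("one-way latency: %.2f us\n",
                   (t1 - t0) / (2.0 * iters) * 1e6);

        MPI_Finalize();
        return 0;
    }

Built and launched across two nodes with something like:

    mpicc pingpong.c -o pingpong
    mpirun -np 2 --host node1,node2 ./pingpong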

Andy Georgi wrote:
Thanks again for all the answers. It seems that there was a bug in the driver in combination with SUSE Linux Enterprise Server 10; it was fixed with version 1.0.146. Now we get 12us with NPtcp and 22us with NPmpi. This is still not fast enough, but acceptable for the time being. I will check the alternatives as soon as possible and look forward to OpenMPI 1.3. Then we will see what iWARP brings ;-).
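For context, the NPtcp and NPmpi figures above come from the NetPIPE
benchmark. Assuming a standard NetPIPE build (node1 below is a
placeholder host name), a typical run looks like:

    NPtcp                 # receiver, started first on node1
    NPtcp -h node1        # transmitter, run on the second node
    mpirun -np 2 NPmpi    # the MPI flavour, launched like any MPI program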

Best regards,

Andy

Kozin, I (Igor) wrote:
Thanks for the fast answer. So is this latency normal for TCP
communication over MPI? Could RDMA perhaps reduce the latency? It
should work with these cards, but there are still problems with OFED.
iWARP is also one of the features they offer, but only if it works...

Hi Andy,
Yes, ~40us TCP latency is normal (it can be worse, too).
If you need lower MPI latency, you need to look elsewhere (and it is
not going to be TCP). Check SCore, Open-MX and GAMMA. SCore is the most
mature of the three, but Open-MX looks promising too. We get less than
15 us using SCore MPI and Intel NICs (IMB PingPong). Of course,
commercial MPI libraries offer low latency too, e.g. Scali MPI.
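For reference, the PingPong figure above comes from the Intel MPI
Benchmarks (IMB); assuming the IMB-MPI1 binary is built, the test is
typically run as:

    mpirun -np 2 IMB-MPI1 PingPong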

Best,
Igor

--

Dresden University of Technology
Center for Information Services
and High Performance Computing (ZIH)
D-01062 Dresden
Germany

e-mail: andy.geo...@zih.tu-dresden.de
WWW:    http://www.tu-dresden.de/zih

_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

I. Kozin  (i.kozin at dl.ac.uk)
Computational Science and Engineering Dept.
STFC Daresbury Laboratory Daresbury Science and Innovation Centre
Daresbury, Warrington, WA4 4AD
skype: in_kozin
tel: +44 (0) 1925 603308
fax: +44 (0) 1925 603634
http://www.cse.clrc.ac.uk/disco

