Leave pinned will not help in this context. It only helps for devices
capable of real RDMA operations that require pinned memory, which is
unfortunately not the case for TCP. What is really strange about your
results is that you get four times better bandwidth over TCP than over
shared memory. Over TCP there are two extra memory copies (compared
with sm) plus a bunch of syscalls, so there is absolutely no reason for
it to be faster.
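As a quick sanity check it may also be worth seeing which BTL
components your installation was built with; a rough sketch, assuming
ompi_info from the same installation is in your PATH:

  ompi_info | grep "MCA btl"

If an RDMA-capable component such as openib shows up there, leave
pinned becomes relevant; for self, sm and tcp it should have no effect.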
Is the Open MPI version something you compiled yourself, or did it come
installed with the OS? If you compiled it, can you please send us the
configure line?
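(If you don't have it handy, ompi_info from that installation should
also report it, something along the lines of:

  ompi_info | grep -i "configure"

which should show the "Configure command line" entry.)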
Thanks,
george.
On Jul 29, 2009, at 13:55, Dorian Krause wrote:
Hi,
--mca mpi_leave_pinned 1
might help. Take a look at the FAQ for various tuning parameters.
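For example (the benchmark binary path is only a placeholder):

  mpirun -np 2 --mca mpi_leave_pinned 1 --mca btl self,sm ./IMB-MPI1 PingPong

You can also list the shared-memory BTL parameters and their current
values with something like "ompi_info --param btl sm".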
Michael Di Domenico wrote:
I'm not sure I understand what's actually happening here. I'm running
IMB on an HP Superdome, just comparing the PingPong benchmark:
HP-MPI v2.3:
  Max ~ 700-800 MB/sec
Open MPI v1.3:
  -mca btl self,sm  - Max ~ 125-150 MB/sec
  -mca btl self,tcp - Max ~ 500-550 MB/sec
Is this behavior expected? Are there any tunables to get Open MPI's
socket performance up near HP-MPI?
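(For reference, the Open MPI runs were roughly along these lines, two
ranks on a single node; exact IMB paths omitted:

  mpirun -np 2 -mca btl self,sm ./IMB-MPI1 PingPong
  mpirun -np 2 -mca btl self,tcp ./IMB-MPI1 PingPong
)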
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users