Well, if there is no reuse of the application buffers, then the two approaches will give the same results. Because of our pipelined protocol we might even reach better performance for large messages. If there is buffer reuse, the mpich-gm approach will lead to better performance, but also to more pinned memory. In fact, the main problem is not the memory itself, but the memory hooks (like the ones in libc) that the MPI library has to take care of in order to notice when one of the already registered memory regions is released (freed) by the user.
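As an illustration only (this code is not from the thread), the buffer-reuse case looks like the sketch below: the same buffer is sent over and over, which is exactly where caching the registration (leaving the memory pinned) pays off. If the buffer were malloc'ed and freed inside the loop instead, the library would have to re-pin it every iteration and would need the malloc/free hooks mentioned above to notice the free.

  /* Illustrative sketch: repeated sends from the SAME buffer.
   * With registration caching (pinned memory left registered) the
   * buffer is pinned once; without it, each send re-registers it. */
  #include <mpi.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      const int N = 1 << 20;            /* ~1 MB message */
      char *buf = malloc(N);            /* reused every iteration */

      for (int i = 0; i < 100; i++) {
          if (rank == 0)
              MPI_Send(buf, N, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
          else if (rank == 1)
              MPI_Recv(buf, N, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
      }

      free(buf);
      MPI_Finalize();
      return 0;
  }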

At least with our approach the user has the choice. By default we turn it off, but it is really easy for any user to turn on the pinned-memory registration if he/she thinks the MPI application requires it. There is a paper to be published this year at Euro PVM/MPI which shows that for some *real* applications (like the NAS benchmarks) there is no real difference. But it definitely makes a difference for the ping-pong benchmark ...

  Thanks,
    george.

On Jun 13, 2006, at 10:38 AM, Brock Palen wrote:

I'll provide new numbers soon with "--mca mpi_leave_pinned 1".
I'm curious how this affects real application performance. This
is of course a synthetic test using NetPIPE. For regular apps that
move decent amounts of data but care more about low latency,
will they be affected?

Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985


On Jun 13, 2006, at 10:26 AM, George Bosilca wrote:

Unlike mpich-gm, Open MPI does not keep the memory pinned by default.
You can force this by adding "--mca mpi_leave_pinned 1" to your
mpirun command or by adding it to the Open MPI configuration file
as described in the FAQ (performance section). I think that is
the main reason you are seeing such a degradation in performance.
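For example (the command-line flag is the one above; the parameter-file path is the usual per-user default and may differ on your installation, and NPmpi stands in for the NetPIPE binary used in the test):

  mpirun --mca mpi_leave_pinned 1 -np 2 NPmpi

or, to make it the default for all runs, put the line

  mpi_leave_pinned = 1

into $HOME/.openmpi/mca-params.conf.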

If this does not solve your problem, can you please provide the new
performance numbers as well as the output of the command "ompi_info
--param all all"?

   Thanks,
     george.

On Jun 13, 2006, at 10:01 AM, Brock Palen wrote:

I ran a test using openmpi-1.0.2 on OSX vs mpich-1.2.6 from
Myricom, and I get lacking results from OMPI: at one point there is
a small drop in bandwidth for both MPI libs, but Open MPI does not
recover like MPICH does, and further on you see a decrease in
bandwidth for OMPI on gm.

I have attached a PNG and the outputs from the test (there are
two for OMPI).
<bwMyrinet.png>
<bwOMPI.o1969>
<bwOMPI.o1979>
<bwMPICH.o1978>

Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985





_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

"Half of what I say is meaningless; but I say it so that the other half may reach you"
                                  Kahlil Gibran

