Hi Roman

Note that in 1.3.0 and 1.3.1 the default setting ("-mca mpi_leave_pinned 1")
had a bug.  In my case it showed up as a memory leak.

See this:

http://www.open-mpi.org/community/lists/users/2009/05/9173.php
http://www.open-mpi.org/community/lists/announce/2009/03/0029.php

One workaround is to revert to
"-mca mpi_leave_pinned 0" (which is what I suggested to you)
when using 1.3.0 or 1.3.1.
The solution advocated by OpenMPI is to upgrade to 1.3.2.
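
For reference, a minimal sketch of how the workaround might be applied,
either on the mpirun command line or through Open MPI's MCA environment
variables (the process count and executable name are just placeholders):

  # Set the MCA parameter directly on the mpirun command line
  mpirun -np 16 --mca mpi_leave_pinned 0 ./my_app

  # Or export it as an environment variable before launching
  export OMPI_MCA_mpi_leave_pinned=0
  mpirun -np 16 ./my_app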

You reported you used "1.3", besides 1.2.6 and 1.2.8.
If this means that you are using 1.3.0 or 1.3.1,
you may want to try the workaround or the upgrade,
regardless of any scaling performance expectations.

Gus Correa
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------




Roman Martonak wrote:
I've been using --mca mpi_paffinity_alone 1 in all simulations.
Concerning "-mca mpi_leave_pinned 1", I tried it with openmpi 1.2.X
versions and it makes no difference.

Best regards

Roman

On Mon, May 18, 2009 at 4:57 PM, Pavel Shamis (Pasha) <pash...@gmail.com> wrote:
> 1) I was told to add "-mca mpi_leave_pinned 0" to avoid problems with
> InfiniBand.  This was with OpenMPI 1.3.1.  Not sure if the problems
> were fixed in 1.3.2, but I am hanging on to that setting just in case.

Actually, for the 1.2.X versions I would recommend enabling leave pinned:
"-mca mpi_leave_pinned 1".

We had a data corruption issue in 1.3.1, but it was resolved in 1.3.2.
In 1.3.2, leave_pinned is enabled by default.

If I remember correctly, MVAPICH enables affinity mode by default, so I
would recommend trying to enable it too:
"--mca mpi_paffinity_alone 1".  For more details please check the FAQ:
http://www.open-mpi.org/faq/?category=tuning#using-paffinity
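
For example, a rough sketch of how both settings could be combined on a
single mpirun line (process count and executable name are placeholders):

  mpirun -np 16 --mca mpi_paffinity_alone 1 --mca mpi_leave_pinned 1 ./my_app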

Thanks,
Pasha.

