Hi Josh,

I am not an expert in this area of the code, but I'll give it a shot. 

(I assume you are on Linux, given your email address.) When the memory
manager is in use (the default on Linux), we wrap malloc/realloc/etc. with
ptmalloc2, the same allocator used in glibc 2.3.x.

What I believe is happening is that ptmalloc2 requests more memory from the
OS than it strictly needs and then releases it back only lazily. Take a look
in the Open MPI source at opal/mca/memory/ptmalloc2/README
(https://svn.open-mpi.org/trac/ompi/browser/tags/v1.2-series/v1.2.3/opal/mca/memory/ptmalloc2/README#L121).

That README mentions some environment variables that can be set to alter
ptmalloc2's behavior, although I have not tried them myself.
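
If you want to experiment from the code side, glibc exposes the same knobs
programmatically via mallopt() and malloc_trim(). Here is a minimal sketch,
assuming the wrapped ptmalloc2 honors the standard glibc interfaces (I have
not verified that it does):

    /* Sketch only: asks the allocator to return freed memory eagerly.
     * Assumes Open MPI's bundled ptmalloc2 honors the glibc knobs. */
    #include <malloc.h>   /* mallopt, malloc_trim */
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Trim threshold 0: give free top-of-heap pages back at once. */
        mallopt(M_TRIM_THRESHOLD, 0);
        /* Lower mmap threshold: large blocks come from mmap() and are
         * unmapped outright when freed. */
        mallopt(M_MMAP_THRESHOLD, 128 * 1024);

        char *p = malloc(50 * 1024 * 1024);
        if (p) memset(p, 1, 50 * 1024 * 1024);  /* touch the pages */
        p = realloc(p, 1 * 1024 * 1024);

        malloc_trim(0);  /* hand remaining free heap pages to the OS */

        free(p);
        return 0;
    }

The same two knobs are also available as the MALLOC_TRIM_THRESHOLD_ and
MALLOC_MMAP_THRESHOLD_ environment variables (note the trailing
underscores). One caveat: the memory manager exists precisely to cache
registered pages for high-performance networks, so forcing the allocator to
release memory aggressively may cost you some IB bandwidth.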

Alternatively, if you are not using a high-performance network, there is
little reason to use the memory manager at all, so you could simply disable it.
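
That is the --without-memory-manager configure flag Josh mentions below.
Rebuilding with it, e.g.

    $ ./configure --without-memory-manager ...
    $ make all install

falls back to the plain glibc allocator; the trade-off is losing the
registered-memory ("leave pinned") caching that the memory manager provides
for InfiniBand.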

Tim

On Thursday 23 August 2007 10:18:45 am Josh Aune wrote:
> I have found that the infiniserv MPI that comes with our IB software
> distribution tracks the same behaviour as gcc (releasing memory on
> realloc).  I have also found that building openmpi with
> --without-memory-manager makes openmpi track the same behaviour as
> glibc.  I'm guessing that there is a bug in the pinned-page caching
> code?
>
> On 8/21/07, Josh Aune <lu...@lnxi.com> wrote:
> > The realloc included with openmpi 1.2.3 is not releasing memory to the
> > OS and is causing apps to go into swap.  Attached is a little test
> > program that shows calls to realloc not releasing the memory when
> > compiled using mpicc, but when compiled directly with gcc (or icc)
> > calling realloc() frees any memory no longer needed.
> >
> > Is this a bug?
> >
> > If not, how can I force openmpi to free the memory that the allocator
> > is sitting on?
> >
> > Thanks,
> > Josh
> >
> > Sample output.  Note the delta between 'total' and 'malloc held' when
> > compiled with mpicc, and how the gcc-compiled version tracks perfectly.
> >
> > $ mpicc -o realloc_test realloc_test.c
> > $ ./realloc_test
> > ...
> > malloc/realloc/free test
> > malloc()    50 MB, total   50 MB, malloc held   50 MB
> > realloc()    1 MB, total    1 MB, malloc held   50 MB
> > malloc()    50 MB, total   51 MB, malloc held  100 MB
> > realloc()    1 MB, total    2 MB, malloc held  100 MB
> > malloc()    50 MB, total   52 MB, malloc held  150 MB
> > realloc()    1 MB, total    3 MB, malloc held  150 MB
> > malloc()    50 MB, total   53 MB, malloc held  200 MB
> > realloc()    1 MB, total    4 MB, malloc held  200 MB
> > malloc()    50 MB, total   54 MB, malloc held  250 MB
> > realloc()    1 MB, total    5 MB, malloc held  250 MB
> > free()       1 MB, total    4 MB, malloc held  200 MB
> > free()       1 MB, total    3 MB, malloc held  150 MB
> > free()       1 MB, total    2 MB, malloc held  100 MB
> > free()       1 MB, total    1 MB, malloc held   50 MB
> > free()       1 MB, total    0 MB, malloc held    0 MB
> > ...
> >
> > $ gcc -o realloc_test realloc_test.c
> > $ ./realloc_test
> > ...
> > malloc/realloc/free test
> > malloc()    50 MB, total   50 MB, malloc held   50 MB
> > realloc()    1 MB, total    1 MB, malloc held    1 MB
> > malloc()    50 MB, total   51 MB, malloc held   51 MB
> > realloc()    1 MB, total    2 MB, malloc held    2 MB
> > malloc()    50 MB, total   52 MB, malloc held   52 MB
> > realloc()    1 MB, total    3 MB, malloc held    3 MB
> > malloc()    50 MB, total   53 MB, malloc held   53 MB
> > realloc()    1 MB, total    4 MB, malloc held    4 MB
> > malloc()    50 MB, total   54 MB, malloc held   54 MB
> > realloc()    1 MB, total    5 MB, malloc held    5 MB
> > free()       1 MB, total    4 MB, malloc held    4 MB
> > free()       1 MB, total    3 MB, malloc held    3 MB
> > free()       1 MB, total    2 MB, malloc held    2 MB
> > free()       1 MB, total    1 MB, malloc held    1 MB
> > free()       1 MB, total    0 MB, malloc held    0 MB
> > ...
>
