On Tue, May 20, 2008 at 12:17:02PM +1000, Terry Frankcombe wrote:
> To tell you all what no one wanted to tell me, yes, it does seem to be
> the memory manager.  Compiling everything with
> --with-memory-manager=none returns the vmem use to the more reasonable
> ~100MB per process (down from >8GB).
> 
> I take it this may affect my peak bandwidth over InfiniBand.  What's the
> general feeling about how bad this is?
You will not be able to use the "-mca mpi_leave_pinned 1" parameter, and your
micro-benchmark performance will be bad. A real application will see the
difference only if it reuses communication buffers frequently.
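
To illustrate what "reuses communication buffers" means, here is a
minimal sketch (not from Terry's application; the buffer size and
iteration count are made up, and it assumes a build without
--with-memory-manager=none). With mpi_leave_pinned the single buffer is
registered (pinned) with the HCA once and that cost is amortized over
all 1000 messages; without the memory manager, Open MPI cannot safely
cache the registration, so each large transfer pays it again:

  #include <mpi.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      const int n = 1 << 20;   /* 1 MiB message; size is illustrative */
      char *buf;
      int rank, i;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      buf = malloc(n);         /* one buffer, reused for every message */

      for (i = 0; i < 1000; i++) {
          if (rank == 0)
              MPI_Send(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
          else if (rank == 1)
              MPI_Recv(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
      }

      free(buf);
      MPI_Finalize();
      return 0;
  }

A micro-benchmark like this is the worst case for a build without the
memory manager; an application that touches each buffer only once would
see little difference either way.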

> 
> 
> On Tue, 2008-05-13 at 13:12 +1000, Terry Frankcombe wrote:
> > Hi folks
> > 
> > I'm trying to run an MPI app on an InfiniBand cluster with Open MPI
> > 1.2.6.
> > 
> > When run on a single node, this app is grabbing large chunks of memory
> > (total per process ~8.5GB, including strace showing a single 4GB grab)
> > but not using it.  The resident memory use is ~40MB per process.  When
> > this app is compiled in serial mode (with conditionals to remove the MPI
> > calls) the memory use is more like what you'd expect, 40MB res and
> > ~100MB vmem.
> > 
> > Now, I didn't write it, so I'm not sure what extra stuff the MPI version
> > does, and we haven't tracked down the large memory grabs.
> > 
> > Could it be that this vmem is being grabbed by the Open MPI memory
> > manager rather than directly by the app?
> > 
> > Ciao
> > Terry
> > 
> > 
> 

--
                        Gleb.
