On Dec 22, 2005, at 9:15 AM, Christophe Peyret wrote:

I have compiled and installed openmpi-1.0.1 on Mac OS X Tiger 10.4.3. It has been configured to work with xlf_r and xlf95_r. I just changed a few lines of xlf.cfg in order to build Open MPI, as mentioned on the mailing list:

OMPI uses the Fortran compiler to link a C program in order to test
the size of LOGICAL, etc. To get it to work with xlf, I had to add
-lSystemStubs to the gcc_libs entry for f77 in /etc/opt/ibmcmp/xlf/8.1/xlf.cfg.
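
For reference, that edit amounts to appending -lSystemStubs to the end of the existing gcc_libs line in the f77 stanza of that file; a minimal sketch, where the placeholder stands for whatever libraries are already listed there:

    gcc_libs  = <existing libraries> -lSystemStubs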

I also compiled my program using mpif90 and it works!
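
For example, a typical wrapper-compiler invocation (the file names are placeholders):

    mpif90 -o my_program my_program.f90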


Now, when I launch my program with Open MPI, the difference between real and virtual memory is about 600 MB, while it is only 50 MB with LAM/MPI 7.1.1.

I am looking for a way to reduce this excessively high virtual memory usage.

Open MPI does (currently) use slightly more memory than LAM/MPI due to some choices we made in trading off memory usage against performance. The bulk of the difference, however, comes from two things: how shared memory communication is implemented and the default component build mode.

LAM/MPI uses System V shared memory for shared memory communication. Mac OS X defaults to allowing only 4MB of SysV shared memory per user at any one time, so LAM/MPI allocates only that small chunk for all of its shared memory communication. Open MPI, on the other hand, uses mmap'ed files for shared memory communication. Since the limits on mmap'ed files are much higher than those on SysV shared memory (basically, the limits of the virtual memory space), we default to using 512MB of space for shared memory communication. This can be tweaked by setting the MCA parameter mpool_sm_size (its argument is a number of bytes).
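
For example, to shrink the shared memory pool to 64MB for a single run (the process count and executable name here are placeholders):

    mpirun --mca mpool_sm_size 67108864 -np 4 ./my_program

The same setting can be made persistent by putting the line "mpool_sm_size = 67108864" in $HOME/.openmpi/mca-params.conf.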

LAM/MPI defaults to building static libraries, with all components linked into them. Since the linker only "brings in" the parts of libmpi.a actually needed by your application, and most people use only a small portion of MPI, a sizable part of libmpi.a and liblam.a may never be linked into your application. Open MPI, on the other hand, defaults to building shared libraries, with all components loaded at runtime. The linker always maps the entirety of a shared library into virtual memory (although not all of it is loaded into physical memory), and each dynamically loaded component carries about 1MB of overhead that is not there when components are linked directly into libmpi.{a,so}. You can enable static libraries for Open MPI (which will cause the build system to link components directly into libmpi.a) with the configure options --enable-static --disable-shared.
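
A typical static build then looks like this (plus whatever other configure options you normally use):

    ./configure --enable-static --disable-shared
    make all install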

Brian

--
  Brian Barrett
  Open MPI developer
  http://www.open-mpi.org/
