Hi,

On 8/25/06, Sven Stork <st...@hlrs.de> wrote:

Hello Miguel,

this is caused by the shared memory mempool. By default this shared memory
mapping has a size of 512 MB. You can use the "mpool_sm_size" parameter to
reduce that size, e.g.

mpirun -mca mpool_sm_size <SIZE> ...
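
As a sketch of a concrete invocation (the size value, process count, and application name below are only placeholders, and the exact units that mpool_sm_size expects should be verified with ompi_info for your installation):

    # list the shared memory mempool parameters and their current defaults
    ompi_info --param mpool sm

    # start the job with a smaller shared memory mapping (value is illustrative)
    mpirun -mca mpool_sm_size 67108864 -np 4 ./my_mpi_app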



Is using
mpirun -mca mpool_sm_size 0
acceptable? What will it fall back to? Sockets? Pipes? TCP? Smoke signals?

Thank you very much for the fast answer.

Thanks,
Sven

On Friday 25 August 2006 15:04, Miguel Figueiredo Mascarenhas Sousa Filipe wrote:
> Hi there,
> I'm using openmpi-1.1 on a linux-amd64 machine and also a linux-32bit x86
> chroot environment on that same machine.
> (distro is gentoo, compilers: gcc-4.1.1 and gcc-3.4.6)
>
> In both cases openmpi-1.1 shows a +/-400MB overhead in virtual memory usage
> (virtual address space usage) for each MPI process.
>
> In my case this is quite troublesome because my application in 32bit mode
> is counting on using the whole 4GB address space for the problem set size
> and associated data.
> This means a reduction in the size of the problems it can solve.
> (my application isn't 64bit safe yet, so I need to run in 32bit mode and
> use the 4GB address space effectively)
>
>
> Is there a way to reduce this overhead, by configuring openmpi to use
> smaller buffers, or anything else?
>
> I do not see this with mpich2.
>
> Best regards,
>
> --
> Miguel Sousa Filipe
>




--
Miguel Sousa Filipe
