Hi folks, we see exceedingly high *virtual* memory consumption by MPI processes if "ulimit -s" (stack size) is set to a higher value in the shell profile configuration.
Furthermore, we believe that every MPI process started wastes about twice the `ulimit -s` value that would be in effect in a fresh console (that is, the value configured in e.g. .zshenv, *not* the value actually set in the console from which mpiexec runs).
Sun MPI 8.2.1, an empty MPI hello-world program; this happens even when running both processes on the same host:

  .zshenv: ulimit -s 10240    --> VmPeak:  180072 kB
  .zshenv: ulimit -s 102400   --> VmPeak:  364392 kB
  .zshenv: ulimit -s 1024000  --> VmPeak: 2207592 kB
  .zshenv: ulimit -s 2024000  --> VmPeak: 4207592 kB
  .zshenv: ulimit -s 20240000 --> VmPeak: 39.7 GB (!)

(See the attached files; the a.out binary is an MPI hello-world program running a never-ending loop.)
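By the way, here is a minimal sketch of how such numbers can be checked from within each process (my own illustration, not one of the attached files; it assumes Linux, where /proc/self/status and /proc/self/limits exist). Started under mpiexec, each rank prints its own VmPeak and the stack size ulimit it really inherited, which is how one can verify that the fresh-shell value from .zshenv applies and not the console value:

  PROGRAM Show_Limits
  IMPLICIT NONE
  CHARACTER(LEN=256) :: line
  INTEGER :: ios
  ! Print the peak virtual memory size of this process.
  OPEN (10, FILE='/proc/self/status', ACTION='READ')
  DO
     READ (10, '(A)', IOSTAT=ios) line
     IF (ios /= 0) EXIT
     IF (line(1:7) == 'VmPeak:') WRITE (*,*) TRIM(line)
  END DO
  CLOSE (10)
  ! Print the stack size ulimit actually in effect for this process.
  OPEN (10, FILE='/proc/self/limits', ACTION='READ')
  DO
     READ (10, '(A)', IOSTAT=ios) line
     IF (ios /= 0) EXIT
     IF (line(1:14) == 'Max stack size') WRITE (*,*) TRIM(line)
  END DO
  CLOSE (10)
  END PROGRAM Show_Limits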
Normally we set the stack size ulimit to some 10 MB, but we see a lot of codes which need *a lot* of stack space, e.g. Fortran codes, OpenMP codes (and especially Fortran OpenMP codes); see the sketch below. Users tend to hard-code the higher stack size ulimit into their profiles.
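To illustrate why (my own sketch, not taken from any user code): an automatic array in Fortran lands on the stack of the calling thread, so even a modest problem size blows a 10 MB stack. Note that OMP_STACKSIZE only controls the stacks of the OpenMP worker threads; the initial thread still lives with ulimit -s.

  SUBROUTINE work_on_stack(n)
  IMPLICIT NONE
  INTEGER, INTENT(IN) :: n
  ! Automatic array: lives on the stack of the calling thread.
  ! n = 2000 already needs about 32 MB of stack.
  DOUBLE PRECISION :: work(n, n)
  work = 1.0d0
  WRITE (*,*) 'sum = ', SUM(work)
  END SUBROUTINE work_on_stack

  PROGRAM stack_hungry
  IMPLICIT NONE
  INTEGER :: i
  !$OMP PARALLEL DO
  DO i = 1, 4
     CALL work_on_stack(2000)
  END DO
  !$OMP END PARALLEL DO
  END PROGRAM stack_hungry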
Normally, using a lot of virtual memory is no problem, because there is a lot of it :-) But if more than one person is allowed to work on a computer, you have to divide the resources in such a way that nobody can crash the box. We do not know how to limit the real RAM used, so we have to divide the RAM by means of a virtual memory ulimit (in our batch system, for example). That is, for us "virtual memory consumption" = "real memory consumption", and real memory is not as cheap as virtual memory. So, why consume *twice* the stack size for each process? And why consume this virtual memory at all? We guess this virtual memory is allocated for the stack (why else would it be related to the stack size ulimit), but is such an allocation really needed? Is there a way to avoid this waste of virtual memory?
Best regards,
Paul Kapinos

--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23, D 52074 Aachen (Germany)
Tel: +49 241/80-24915
! Paul Kapinos 22.09.2009
! RZ RWTH Aachen, www.rz.rwth-aachen.de
!
! MPI-Hello-World
!
PROGRAM PK_MPI_Test
  USE MPI
  IMPLICIT NONE
  INTEGER :: my_MPI_Rank, laenge, ierr
  CHARACTER*(MPI_MAX_PROCESSOR_NAME) my_Host
  !
  !WRITE (*,*) "Now sleeping for 30"
  !CALL Sleep(30)
  CALL MPI_INIT (ierr)
  !
  !WRITE (*,*) "After MPI_INIT"
  !CALL Sleep(30)
  CALL MPI_COMM_RANK( MPI_COMM_WORLD, my_MPI_Rank, ierr )
  !WRITE (*,*) "After MPI_COMM_RANK"
  CALL MPI_GET_PROCESSOR_NAME(my_Host, laenge, ierr)
  WRITE (*,*) "Processor ", my_MPI_Rank, " on host: ", my_Host(1:laenge)
  ! sleeping or spinning - the same behaviour
  !CALL Sleep(3)
  DO WHILE (.TRUE.)
  ENDDO
  !CALL Sleep(3)
  CALL MPI_FINALIZE(ierr)
  !
  WRITE (*,*) "That was it"
  !
END PROGRAM PK_MPI_Test