Thanks for your help.
Attached is the output of ompi_info in the file ompi_info.txt.
-----Original Message-----
From: [email protected] [mailto:[email protected]] On behalf
of Tim Prins
Sent: Thursday, March 1, 2007 05:45
To: Open MPI Users
Subject: Re: [OMPI users] MPI_Comm_Spawn
I have tried to reproduce this but cannot. I have been able to run your test
program for over 100 spawns. So that I can track this down further, please send
the output of ompi_info.
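For reference, the report can be captured with (assuming the Open MPI binaries
are on your PATH):

  ompi_info > ompi_info.txt

Adding the --all flag would include every MCA parameter as well.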
Thanks,
Tim
On Tuesday 27 February 2007 10:15 am, [email protected] wrote:
> Do you know if there is a limit to the number of MPI_Comm_spawn calls we can
> use to launch a program? I want to start and stop a program several times
> (with MPI_Comm_spawn), but every time, after 31 calls to MPI_Comm_spawn, I
> get a "segmentation fault". Could you give me your point of view on how to
> solve this problem?
> Thanks
>
> /* .c file: spawns the executable Exe */
> #include <stdio.h>
> #include <malloc.h>
> #include <unistd.h>
> #include "mpi.h"
> #include <pthread.h>
> #include <signal.h>
> #include <sys/time.h>
> #include <errno.h>
> #define EXE_TEST "/home/workspace/test_spaw1/src/Exe"
>
>
>
> int main( int argc, char **argv ) {
>
> long *lpBufferMpi;
> MPI_Comm lIntercom;
> int lErrcode;
> MPI_Comm lCommunicateur;
> int lRangMain, lRangExe, lMessageEnvoi, lIter, NiveauThreadVoulu,
> NiveauThreadObtenu, lTailleBuffer;
> int *lpMessageEnvoi = &lMessageEnvoi;
> MPI_Status lStatus; /* receive status */
>
> lIter=0;
>
>
> /* MPI environment */
>
> printf("main*******************************\n");
> printf("main : Lancement MPI*\n");
>
> NiveauThreadVoulu = MPI_THREAD_MULTIPLE;
> MPI_Init_thread( &argc, &argv, NiveauThreadVoulu, &NiveauThreadObtenu );
> lpBufferMpi = calloc( 10000, sizeof(long) );
> MPI_Buffer_attach( (void*)lpBufferMpi, 10000 * sizeof(long) );
>
> while (lIter<1000){
> lIter ++;
> lIntercom = (MPI_Comm)-1;
>
> MPI_Comm_spawn( EXE_TEST, NULL, 1, MPI_INFO_NULL,
> 0, MPI_COMM_WORLD, &lIntercom, &lErrcode );
> printf( "%i main***MPI_Comm_spawn return : %d\n",lIter, lErrcode );
>
> if(lIntercom == (MPI_Comm)-1 ){
> printf("%i Intercom null\n",lIter);
> return 0;
> }
> MPI_Intercomm_merge(lIntercom, 0,&lCommunicateur );
> MPI_Comm_rank( lCommunicateur, &lRangMain);
> lRangExe=1-lRangMain;
>
> printf("%i main***Rang main : %i Rang exe : %i
> \n",lIter,(int)lRangMain,(int)lRangExe); sleep(2);
>
> }
>
>
> /* Shut down the MPI environment */
> lTailleBuffer=10000* sizeof(long);
> MPI_Buffer_detach( (void*)lpBufferMpi, &lTailleBuffer );
> MPI_Comm_free( &lCommunicateur );
> MPI_Finalize( );
> free( lpBufferMpi );
>
> printf( "Main = End .\n" );
> return 0;
>
> }
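>
> As an aside, the loop above never releases lIntercom or lCommunicateur, so a
> communicator handle leaks on every spawn; I do not know whether that is
> related to the crash. A minimal sketch of the loop body with per-iteration
> cleanup (an untested variant, not the code that produced the output below)
> would be:
>
> MPI_Comm_spawn( EXE_TEST, NULL, 1, MPI_INFO_NULL,
> 0, MPI_COMM_WORLD, &lIntercom, &lErrcode );
> MPI_Intercomm_merge( lIntercom, 0, &lCommunicateur );
> MPI_Comm_rank( lCommunicateur, &lRangMain );
> /* ... use the merged communicator ... */
> MPI_Comm_free( &lCommunicateur ); /* release the merged intracomm */
> MPI_Comm_disconnect( &lIntercom ); /* complete pending traffic, then free */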
> /************************************************************************/
>
> Exe:
> #include <string.h>
> #include <stdlib.h>
> #include <stdio.h>
> #include <malloc.h>
> #include <unistd.h> /* for sleep() */
> #include <pthread.h>
> #include <semaphore.h>
> #include "mpi.h"
>
> int main( int argc, char **argv ) {
> /* 1) MPI communication */
> MPI_Comm lCommunicateur; /* this process's communicator */
> MPI_Comm CommParent; /* parent communicator to retrieve */
> int lRank; /* rank in the process's communicator */
> int lRangMain; /* rank of the sequencer when launched normally */
> int lTailleCommunicateur; /* communicator size */
> long *lpBufferMpi = NULL; /* message buffer; NULL until allocated so
> the error path below can free it safely */
> int lBufferSize; /* buffer size */
>
> /* 2) threading */
> int NiveauThreadVoulu, NiveauThreadObtenu;
>
>
> lCommunicateur = (MPI_Comm)-1;
> NiveauThreadVoulu = MPI_THREAD_MULTIPLE;
> int erreur = MPI_Init_thread( &argc, &argv, NiveauThreadVoulu,
> &NiveauThreadObtenu );
>
> if (erreur!=0){
> printf("erreur\n");
> free( lpBufferMpi );
> return -1;
> }
>
> /* 2) Attach a buffer for messages */
> lBufferSize=10000 * sizeof(long);
> lpBufferMpi = calloc( 10000, sizeof(long));
> erreur = MPI_Buffer_attach( (void*)lpBufferMpi, lBufferSize );
>
> if (erreur!=0){
> printf("erreur\n");
> free( lpBufferMpi );
> return -1;
> }
>
> printf( "Exe : Lance \n" );
> MPI_Comm_get_parent(&CommParent);
> MPI_Intercomm_merge( CommParent, 1, &lCommunicateur );
> MPI_Comm_rank( lCommunicateur, &lRank );
> MPI_Comm_size( lCommunicateur, &lTailleCommunicateur );
> lRangMain = 1 - lRank;
> printf( "Exe: lRankExe = %d lRankMain = %d\n", lRank, lRangMain );
>
> sleep(1);
> MPI_Buffer_detach( (void*)lpBufferMpi, &lBufferSize );
> MPI_Comm_free( &lCommunicateur );
> MPI_Finalize( );
> free( lpBufferMpi );
> printf( "Exe: Fin.\n\n\n" );
> return 0;
> }
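>
> One detail worth noting: if Exe is ever launched directly rather than via
> MPI_Comm_spawn, MPI_Comm_get_parent returns MPI_COMM_NULL and the merge above
> would be invalid, so a guard along these lines might be needed (untested
> sketch):
>
> MPI_Comm_get_parent( &CommParent );
> if (CommParent == MPI_COMM_NULL) {
> printf( "Exe: no parent process, exiting\n" );
> MPI_Finalize( );
> return 0;
> }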
>
>
> /************************************************************************/
>
> Result:
> main*******************************
> main : Starting MPI
> 1 main***MPI_Comm_spawn return : 0
> Exe: launched
> 1 main***Rank main : 0 Rank exe : 1
> Exe: lRankExe = 1 lRankMain = 0
> Exe: done.
>
>
> 2 main***MPI_Comm_spawn return : 0
> Exe: launched
> 2 main***Rank main : 0 Rank exe : 1
> Exe: lRankExe = 1 lRankMain = 0
> Exe: done.
>
>
> 3 main***MPI_Comm_spawn return : 0
> Exe: launched
> 3 main***Rank main : 0 Rank exe : 1
> Exe: lRankExe = 1 lRankMain = 0
> Exe: done.
>
> ....
>
> 30 main***MPI_Comm_spawn return : 0
> Exe: launched
> 30 main***Rank main : 0 Rank exe : 1
> Exe: lRankExe = 1 lRankMain = 0
> Exe: done.
>
>
> 31 main***MPI_Comm_spawn return : 0
> Exe: launched
> 31 main***Rank main : 0 Rank exe : 1
> Exe: lRankExe = 1 lRankMain = 0
> Segmentation fault
>
>
>
_______________________________________________
users mailing list
[email protected]
http://www.open-mpi.org/mailman/listinfo.cgi/users
ompi_info.txt:

Open MPI: 1.1.1
Open MPI SVN revision: r11473
Open RTE: 1.1.1
Open RTE SVN revision: r11473
OPAL: 1.1.1
OPAL SVN revision: r11473
Prefix: /usr/local/Mpi/openmpi-1.1.1-noBproc
Configured architecture: i686-pc-linux-gnu
Configured by: setics
Configured on: Thu Sep 7 13:20:27 CEST 2006
Configure host: setics14
Built by: setics
Built on: Thu Sep 7 13:29:13 CEST 2006
Built host: setics14
C bindings: yes
C++ bindings: yes
Fortran77 bindings: no
Fortran90 bindings: no
Fortran90 bindings size: na
C compiler: gcc
C compiler absolute: /usr/bin/gcc
C++ compiler: g++
C++ compiler absolute: /usr/bin/g++
Fortran77 compiler: none
Fortran77 compiler abs: none
Fortran90 compiler: none
Fortran90 compiler abs: none
C profiling: yes
C++ profiling: yes
Fortran77 profiling: no
Fortran90 profiling: no
C++ exceptions: no
Thread support: posix (mpi: yes, progress: yes)
Internal debug support: no
MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
libltdl support: yes
MCA memory: ptmalloc2 (MCA v1.0, API v1.0, Component v1.1.1)
MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.1.1)
MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.1.1)
MCA timer: linux (MCA v1.0, API v1.0, Component v1.1.1)
MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
MCA coll: basic (MCA v1.0, API v1.0, Component v1.1.1)
MCA coll: hierarch (MCA v1.0, API v1.0, Component v1.1.1)
MCA coll: self (MCA v1.0, API v1.0, Component v1.1.1)
MCA coll: sm (MCA v1.0, API v1.0, Component v1.1.1)
MCA coll: tuned (MCA v1.0, API v1.0, Component v1.1.1)
MCA io: romio (MCA v1.0, API v1.0, Component v1.1.1)
MCA mpool: sm (MCA v1.0, API v1.0, Component v1.1.1)
MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.1.1)
MCA bml: r2 (MCA v1.0, API v1.0, Component v1.1.1)
MCA rcache: rb (MCA v1.0, API v1.0, Component v1.1.1)
MCA btl: self (MCA v1.0, API v1.0, Component v1.1.1)
MCA btl: sm (MCA v1.0, API v1.0, Component v1.1.1)
MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)
MCA topo: unity (MCA v1.0, API v1.0, Component v1.1.1)
MCA osc: pt2pt (MCA v1.0, API v1.0, Component v1.0)
MCA gpr: null (MCA v1.0, API v1.0, Component v1.1.1)
MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.1.1)
MCA gpr: replica (MCA v1.0, API v1.0, Component v1.1.1)
MCA iof: proxy (MCA v1.0, API v1.0, Component v1.1.1)
MCA iof: svc (MCA v1.0, API v1.0, Component v1.1.1)
MCA ns: proxy (MCA v1.0, API v1.0, Component v1.1.1)
MCA ns: replica (MCA v1.0, API v1.0, Component v1.1.1)
MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
MCA ras: dash_host (MCA v1.0, API v1.0, Component v1.1.1)
MCA ras: hostfile (MCA v1.0, API v1.0, Component v1.1.1)
MCA ras: localhost (MCA v1.0, API v1.0, Component v1.1.1)
MCA ras: slurm (MCA v1.0, API v1.0, Component v1.1.1)
MCA rds: hostfile (MCA v1.0, API v1.0, Component v1.1.1)
MCA rds: resfile (MCA v1.0, API v1.0, Component v1.1.1)
MCA rmaps: round_robin (MCA v1.0, API v1.0, Component v1.1.1)
MCA rmgr: proxy (MCA v1.0, API v1.0, Component v1.1.1)
MCA rmgr: urm (MCA v1.0, API v1.0, Component v1.1.1)
MCA rml: oob (MCA v1.0, API v1.0, Component v1.1.1)
MCA pls: fork (MCA v1.0, API v1.0, Component v1.1.1)
MCA pls: rsh (MCA v1.0, API v1.0, Component v1.1.1)
MCA pls: slurm (MCA v1.0, API v1.0, Component v1.1.1)
MCA sds: env (MCA v1.0, API v1.0, Component v1.1.1)
MCA sds: pipe (MCA v1.0, API v1.0, Component v1.1.1)
MCA sds: seed (MCA v1.0, API v1.0, Component v1.1.1)
MCA sds: singleton (MCA v1.0, API v1.0, Component v1.1.1)
MCA sds: slurm (MCA v1.0, API v1.0, Component v1.1.1)