Alex, just to make sure ... this is the behavior you expected, right?

Cheers,

Gilles
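(A minimal self-contained sketch of the spawn loop discussed in this thread, added for reference; it is not code from the list. It assumes a parent job with at least 4 ranks, e.g. "mpirun -n 4 ./spawn_demo", and a hello_world executable that the runtime can find. MPI_ARGV_NULL and an explicit ierr stand in for the ' ' and status placeholders in Alex's snippet below.)

    program spawn_demo
        use mpi
        implicit none
        integer :: ierr, root, my_intercomm

        call MPI_Init(ierr)
        ! Each pass is collective over MPI_COMM_WORLD: rank 'root'
        ! spawns maxprocs=5 copies of hello_world, so three passes
        ! start 15 children in total, the behavior Alex reports below.
        do root = 1, 3
            call MPI_Comm_spawn('hello_world', MPI_ARGV_NULL, 5, &
                                MPI_INFO_NULL, root, MPI_COMM_WORLD, &
                                my_intercomm, MPI_ERRCODES_IGNORE, ierr)
        end do
        call MPI_Finalize(ierr)
    end program spawn_demo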
On 2014/12/12 13:27, Alex A. Schmidt wrote:
> Gilles,
>
> Ok, very nice!
>
> When I execute
>
>     do rank=1,3
>         call MPI_Comm_spawn('hello_world',' ',5,MPI_INFO_NULL,rank, &
>              MPI_COMM_WORLD,my_intercomm,MPI_ERRCODES_IGNORE,status)
>     enddo
>
> I do get 15 instances of the 'hello_world' app running: 5 for each parent
> rank 1, 2 and 3.
>
> Thanks a lot, Gilles.
>
> Best regards,
>
> Alex
>
> 2014-12-12 1:32 GMT-02:00 Gilles Gouaillardet <gilles.gouaillar...@iferc.org>:
>>
>> Alex,
>>
>> just ask MPI_Comm_spawn to start (up to) 5 tasks via the maxprocs
>> parameter:
>>
>>     int MPI_Comm_spawn(char *command, char *argv[], int maxprocs,
>>                        MPI_Info info, int root, MPI_Comm comm,
>>                        MPI_Comm *intercomm, int array_of_errcodes[])
>>
>>     INPUT PARAMETERS
>>         maxprocs
>>             - maximum number of processes to start (integer,
>>               significant only at root)
>>
>> Cheers,
>>
>> Gilles
>>
>> On 2014/12/12 12:23, Alex A. Schmidt wrote:
>>
>> Hello Gilles,
>>
>> Thanks for your reply. The "env -i PATH=..." stuff seems to work!!!
>>
>>     call system("sh -c 'env -i PATH=/usr/lib64/openmpi/bin:/bin mpirun -n 2 hello_world'")
>>
>> did produce the expected result with a simple Open MPI "hello_world" code
>> I wrote.
>>
>> It might be harder though with the real third party app I have in mind.
>> And I realize getting past a job scheduler with this approach might not
>> work at all...
>>
>> I have looked at the MPI_Comm_spawn call but I failed to understand how
>> it could help here. For instance, can I use it to launch an mpi app with
>> the option "-n 5"?
>>
>> Alex
>>
>> 2014-12-12 0:36 GMT-02:00 Gilles Gouaillardet <gilles.gouaillar...@iferc.org>:
>>
>> Alex,
>>
>> can you try something like
>>
>>     call system("sh -c 'env -i /.../mpirun -np 2 /.../app_name'")
>>
>> -i starts with an empty environment.
>> That being said, you might need to set a few environment variables
>> manually:
>>
>>     env -i PATH=/bin ...
>>
>> That being also said, this "trick" could be just a bad idea:
>> you might be using a scheduler, and if you empty the environment, the
>> scheduler will not be aware of the "inside" run.
>>
>> On top of that, invoking system might fail depending on the interconnect
>> you use.
>>
>> Bottom line, I believe Ralph's reply is still valid, even if five years
>> have passed: changing your workflow, or using MPI_Comm_spawn, is a much
>> better approach.
>>
>> Cheers,
>>
>> Gilles
>>
>> On 2014/12/12 11:22, Alex A. Schmidt wrote:
>>
>> Dear Open MPI users,
>>
>> Regarding this previous post
>> <http://www.open-mpi.org/community/lists/users/2009/06/9560.php>
>> from 2009, I wonder if the reply from Ralph Castain is still valid. My
>> need is similar but much simpler: to make a system call from an Open MPI
>> fortran application to run a third party Open MPI application. I don't
>> need to exchange mpi messages with the application. I just need to read
>> the resulting output file generated by it. I have tried to make the
>> following system call from my fortran Open MPI code:
>>
>>     call system("sh -c 'mpirun -n 2 app_name'")
>>
>> but I get
>>
>>     **********************************************************
>>     Open MPI does not support recursive calls of mpirun
>>     **********************************************************
>>
>> Is there a way to make this work?
>>
>> Best regards,
>>
>> Alex
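(For completeness, the working "env -i" call quoted above as one self-contained sketch; the path /usr/lib64/openmpi/bin and the hello_world name are taken from Alex's message, and the caveats Gilles raises about schedulers and interconnects still apply. Note that system() is a compiler extension, common to gfortran and ifort but not part of the Fortran standard.)

    program system_demo
        use mpi
        implicit none
        integer :: ierr

        call MPI_Init(ierr)
        ! env -i hands the inner mpirun an empty environment, so it
        ! does not see the outer Open MPI job's variables; PATH must
        ! then be rebuilt by hand.
        call system("sh -c 'env -i PATH=/usr/lib64/openmpi/bin:/bin " // &
                    "mpirun -n 2 hello_world'")
        call MPI_Finalize(ierr)
    end program system_demo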
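(Since the original goal is only to read the output file the third-party app writes, a last sketch covering that step. It uses the standard Fortran 2008 execute_command_line instead of the system() extension, which also makes the exit status available; "app_output.dat" is a hypothetical file name, not one from the thread.)

    program run_and_read
        implicit none
        integer :: estat, cstat, u, ios
        character(len=256) :: line

        ! Run the inner job and block until it exits; cmdstat reports
        ! whether the command could be run at all, exitstat its result.
        call execute_command_line( &
            "sh -c 'env -i PATH=/usr/lib64/openmpi/bin:/bin mpirun -n 2 app_name'", &
            exitstat=estat, cmdstat=cstat)
        if (cstat /= 0 .or. estat /= 0) stop 'inner mpirun failed'

        ! 'app_output.dat' is a placeholder for whatever file app_name
        ! actually produces; read it line by line.
        open(newunit=u, file='app_output.dat', status='old', &
             action='read', iostat=ios)
        if (ios /= 0) stop 'output file not found'
        do
            read(u, '(A)', iostat=ios) line
            if (ios /= 0) exit
            ! ... parse 'line' here ...
        end do
        close(u)
    end program run_and_read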