I would like to discover the processes using a ZooKeeper server. The goal is to use MPI as a communication library for applications managed by a resource manager such as Mesos or YARN.
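Roughly, the discovery side I have in mind would be something like the sketch below. This is only an illustration using the ZooKeeper C client; the "/mpi-job" parent path, the znode layout, and the ensemble address are placeholders, and it only covers the rendezvous step (it is not something Open MPI consumes directly):

/* Sketch: each process registers an ephemeral, sequential znode under a
 * job path and then lists its peers.  Assumes the parent node "/mpi-job"
 * already exists. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <zookeeper/zookeeper.h>

static void watcher(zhandle_t *zh, int type, int state,
                    const char *path, void *ctx)
{
    /* No-op watcher; a real implementation would wait here for
     * ZOO_CONNECTED_STATE before creating nodes. */
    (void)zh; (void)type; (void)state; (void)path; (void)ctx;
}

int main(void)
{
    /* Connect to the ZooKeeper ensemble (address is an example). */
    zhandle_t *zh = zookeeper_init("zk-host:2181", watcher, 30000,
                                   NULL, NULL, 0);
    if (!zh) { perror("zookeeper_init"); return 1; }

    /* Register this process: an ephemeral + sequential child node whose
     * data is the hostname (a real version would also publish a port). */
    char host[256] = {0};
    gethostname(host, sizeof(host) - 1);
    char created[256];
    int rc = zoo_create(zh, "/mpi-job/proc-", host, (int)strlen(host),
                        &ZOO_OPEN_ACL_UNSAFE,
                        ZOO_EPHEMERAL | ZOO_SEQUENCE,
                        created, sizeof(created));
    if (rc != ZOK) { fprintf(stderr, "zoo_create failed: %d\n", rc); return 1; }
    printf("registered as %s\n", created);

    /* List the peers that have registered so far. */
    struct String_vector peers;
    rc = zoo_get_children(zh, "/mpi-job", 0, &peers);
    if (rc == ZOK) {
        for (int i = 0; i < peers.count; i++)
            printf("peer: %s\n", peers.data[i]);
        deallocate_String_vector(&peers);
    }

    zookeeper_close(zh);
    return 0;
}

Using ephemeral nodes means a process's entry disappears automatically when its session ends, so stale registrations clean themselves up.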
Thanks,
Supun.

On Fri, Jul 8, 2016 at 12:55 PM, Ralph Castain <r...@open-mpi.org> wrote:

> You’d need to have some rendezvous mechanism. I suppose one option would
> be to launch a set of PMIx servers on the nodes (and ensure they know
> about each other) to support these things, but that’s all mpirun really
> does anyway.
>
> What did you have in mind?
>
> On Jul 8, 2016, at 9:49 AM, Supun Kamburugamuve <skamburugam...@gmail.com>
> wrote:
>
> Thanks for the quick response. Is there a way of extending Open MPI so
> that it can discover the processes by other means?
>
> Supun.
>
> On Fri, Jul 8, 2016 at 12:45 PM, Ralph Castain <r...@open-mpi.org> wrote:
>
>> If not spawned by mpirun, and not spawned by a resource manager, then it
>> won’t work. There is no way for the procs to wire up.
>>
>> On Jul 8, 2016, at 9:42 AM, Supun Kamburugamuve <skamburugam...@gmail.com>
>> wrote:
>>
>> Yes, the processes are not spawned by MPI, and they are not spawned by
>> something like Slurm/PBS.
>>
>> How does MPI get to know which processes are running on which nodes, in
>> a general sense? Do we need to write some plugin so that it can figure
>> out this information? I guess this must be the way it supports Slurm/PBS
>> etc.
>>
>> Thanks,
>> Supun.
>>
>> On Fri, Jul 8, 2016 at 12:06 PM, Ralph Castain <r...@open-mpi.org> wrote:
>>
>>> You mean you didn’t launch those procs via mpirun, yes? If you started
>>> them via some resource manager, then you might just be able to call
>>> MPI_Init and have them wire up.
>>>
>>> > On Jul 8, 2016, at 8:55 AM, Supun Kamburugamuve
>>> > <skamburugam...@gmail.com> wrote:
>>> >
>>> > Hi,
>>> >
>>> > I have a set of processes running, and these are not managed/spawned
>>> > by Open MPI. Is it possible to use Open MPI as a pure communication
>>> > library among these processes?
>>> >
>>> > Thanks,
>>> > Supun.
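(For completeness: in all of the scenarios discussed above, the application side stays an ordinary MPI program; the open question is only whether MPI_Init gets enough wireup information from whatever launched the processes. A minimal program to check that might look like this:

/* Minimal MPI check: if wireup worked, every process reports the same
 * world size covering all of the launched processes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
)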