We should probably create an MPI extension that exposes the internal database so you can just retrieve the info directly; it is already present, and it seems a shame to execute a collective operation just to re-create it. I’ll put it on my “to-do” list.
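In the meantime, here is a minimal sketch of the hostname-based split Adam describes in the quoted thread below. It is not the SCR implementation (which uses a bitonic sort plus scan operations); it simply allgathers each rank's MPI_Get_processor_name string and uses the lowest rank reporting the same name as the color for MPI_Comm_split. Variable names are just illustrative.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* get this rank's node name */
    char name[MPI_MAX_PROCESSOR_NAME];
    int len;
    memset(name, 0, sizeof(name));
    MPI_Get_processor_name(name, &len);

    /* gather every rank's name so each rank sees the full rank-to-node mapping */
    char *all = malloc((size_t)size * MPI_MAX_PROCESSOR_NAME);
    MPI_Allgather(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR,
                  all,  MPI_MAX_PROCESSOR_NAME, MPI_CHAR, MPI_COMM_WORLD);

    /* color = lowest rank with the same name, so ranks on the same
       node get the same color and land in the same sub-communicator */
    int color = 0;
    while (strcmp(all + (size_t)color * MPI_MAX_PROCESSOR_NAME, name) != 0) {
        color++;
    }

    MPI_Comm node_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &node_comm);

    int node_rank, node_size;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);
    printf("world rank %d is local rank %d of %d on %s\n",
           rank, node_rank, node_size, name);

    MPI_Comm_free(&node_comm);
    free(all);
    MPI_Finalize();
    return 0;
}

At large scale you would want the sort/scan or hashing approach Adam mentions rather than allgathering full processor-name buffers on every rank, but the result is the same: a communicator containing exactly the ranks on the calling rank's node.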
> On Nov 25, 2014, at 4:14 PM, Teranishi, Keita <knte...@sandia.gov> wrote:
>
> Adam,
>
> Thanks for the suggestion. I was using my own tokenizer (or those from
> boost) to extract node ID numbers, but your approach is more generic and
> serves my purpose. For the time being I will try to leverage the SCR
> source, and will ask you if I need any further assistance.
>
> Best Regards,
> -----------------------------------------------------------------------------
> Keita Teranishi
> Principal Member of Technical Staff
> Scalable Modeling and Analysis Systems
> Sandia National Laboratories
> Livermore, CA 94551
> +1 (925) 294-3738
>
>
> From: <Moody>, "Adam T." <mood...@llnl.gov>
> Reply-To: Open MPI Users <us...@open-mpi.org>
> Date: Tuesday, November 25, 2014 at 3:09 PM
> To: Open MPI Users <us...@open-mpi.org>
> Subject: [EXTERNAL] Re: [OMPI users] How to find MPI ranks located in remote nodes?
>
> Hi Keita,
> There is no MPI API to do this from within an MPI application. One method I
> have used for this purpose is to create a function that executes an
> MPI_Comm_split operation using a string as the color value. As output, it
> returns a communicator containing all procs that specified the same string as
> the calling proc. To get a comm of all procs on the same node on a Linux
> system, I pass in the value of gethostname().
>
> As an example, see scr_split.h/c from the SCR library at
> https://github.com/hpc/scr. That implementation uses a bitonic sort along
> with scan operations to execute the split. You can also accomplish this with
> hashing. If you need this type of functionality, I have a cleaned-up copy
> that I could send you without all of the SCR-related code.
> -Adam
>
>
> From: users [users-boun...@open-mpi.org] on behalf of Ralph Castain [r...@open-mpi.org]
> Sent: Tuesday, November 25, 2014 2:38 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] How to find MPI ranks located in remote nodes?
>
> Every process has a complete map of where every process in the job is located
> - I am not sure whether there is an MPI API for accessing it, though.
>
>
>> On Nov 25, 2014, at 2:32 PM, Teranishi, Keita <knte...@sandia.gov> wrote:
>>
>> Hi,
>>
>> I am trying to figure out a way for each local MPI rank to identify the
>> ranks located on physically remote nodes (just different nodes) of a cluster
>> or an MPP such as a Cray. I am using MPI_Get_processor_name to get the node ID,
>> but it requires some processing to map an MPI rank to the node ID. Is there
>> any better way of doing this using MPI-2.2 (or earlier) capabilities? It
>> would be great if I could easily get a list of the MPI ranks on the same
>> physical node.
>>
>> Thanks,
>> -----------------------------------------------------------------------------
>> Keita Teranishi
>> Principal Member of Technical Staff
>> Scalable Modeling and Analysis Systems
>> Sandia National Laboratories
>> Livermore, CA 94551
>> +1 (925) 294-3738