Dear Ralph (all ;))

Regarding these posts, and since you added this to your to-do list:

I wanted to do something similar and implemented a "quick fix".
My goal was to create a communicator per node and then create a window to
allocate an array in shared memory; however, I came up short with the
current implementation, because I also needed a "per-socket" split.
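
For reference, the per-node part of that can be done with standard MPI-3
alone; here is a minimal sketch (the array size and variable names are only
illustrative):

/* Minimal sketch: one communicator per node (MPI_COMM_TYPE_SHARED is the
 * only split type standard MPI-3 provides), then a shared-memory window on
 * that communicator.  A "per-socket" split is what the standard lacks. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    /* Allocate 1024 doubles per rank in node-local shared memory. */
    double *base;
    MPI_Win win;
    MPI_Win_allocate_shared(1024 * sizeof(double), sizeof(double),
                            MPI_INFO_NULL, node_comm, &base, &win);

    /* ... use the shared array ... */

    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}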

I implemented a way to create communicators with MPI_Comm_split_type that
relies on the hwloc object types.

Here is my commit message:
"
We can now split communicators based on hwloc's full capabilities, up to BOARD,
i.e.:
HWTHREAD, CORE, L1CACHE, L2CACHE, L3CACHE, SOCKET, NUMA, NODE, BOARD
where NODE is the same as SHARED.
"

Maybe what I did could be useful here?
To achieve the effect discussed in the quoted thread below, one could do:
MPI_Comm_split_type(MPI_COMM_TYPE_NODE, comm)
and then create a new group from all ranks not belonging to this
communicator.  This can be fine-tuned further if one wishes to create a
group of "master" cores on each node.

I have not been able to compile it because autogen.pl gives me some errors;
however, I think it should compile just fine.

Do you think it could be useful?

If you are interested, you can find my single-commit branch at:
https://github.com/zerothi/ompi

Very little has changed.


2014-11-26 1:18 GMT+01:00 Ralph Castain <r...@open-mpi.org>:

> We should probably create an MPI extension that exposes the internal
> database so you can just get the info as it is already present, and it
> seems a shame to execute a collective operation just to re-create it. I’ll
> put it on my “to-do” list.
>
>
> On Nov 25, 2014, at 4:14 PM, Teranishi, Keita <knte...@sandia.gov> wrote:
>
> Adam,
>
> Thanks for the suggestion.  I was using my own tokenizer (or those from
> boost) to extract node ID numbers, but your approach is more generic and
> serves my own purpose.  For the time being I will try to leverage the SCR
> source, and will ask you if I need any further assistance.
>
> Best Regards,
>
> -----------------------------------------------------------------------------
> Keita Teranishi
> Principal Member of Technical Staff
> Scalable Modeling and Analysis Systems
> Sandia National Laboratories
> Livermore, CA 94551
> +1 (925) 294-3738
>
>
> From: <Moody>, "Adam T." <mood...@llnl.gov>
> Reply-To: Open MPI Users <us...@open-mpi.org>
> Date: Tuesday, November 25, 2014 at 3:09 PM
> To: Open MPI Users <us...@open-mpi.org>
> Subject: [EXTERNAL] Re: [OMPI users] How to find MPI ranks located in
> remote nodes?
>
> Hi Keita,
> There is no MPI API to do this from within an MPI application.  One method
> I have used for this purpose is to create a function that executes an
> MPI_Comm_split operation using a string as a color value.  As output, it
> returns a communicator containing all procs that specified the same string
> as the calling proc.  To get a comm of all procs on the same node on a
> Linux system, I pass in the value of gethostname().
>
> As an example, see scr_split.h/c from the SCR library at
> https://github.com/hpc/scr.  That
> implementation uses a bitonic sort along with scan operations to execute
> the split.  You can also accomplish this with hashing.  If you need this
> type of functionality, I have a cleaned-up copy that I could send you
> without all of the SCR related code.
> -Adam
>
>
> ------------------------------
> *From:* users [users-boun...@open-mpi.org] on behalf of Ralph Castain [
> r...@open-mpi.org]
> *Sent:* Tuesday, November 25, 2014 2:38 PM
> *To:* Open MPI Users
> *Subject:* Re: [OMPI users] How to find MPI ranks located in remote nodes?
>
> Every process has a complete map of where every process in the job is
> located - not sure if there is an MPI API for accessing it, though.
>
>
> On Nov 25, 2014, at 2:32 PM, Teranishi, Keita <knte...@sandia.gov> wrote:
>
> Hi,
>
> I am trying to figure out a way for each local MPI rank to identify the
> ranks located on physically remote nodes (i.e., just different nodes) of a
> cluster or an MPP such as a Cray.  I am using MPI_Get_processor_name to get
> the node ID, but it requires some processing to map MPI ranks to node IDs.
> Is there any better way of doing this using MPI-2.2 (or earlier)
> capabilities?  It would be great if I could easily get a list of the MPI
> ranks on the same physical node.
>
> Thanks,
>
> -----------------------------------------------------------------------------
> Keita Teranishi
> Principal Member of Technical Staff
> Scalable Modeling and Analysis Systems
> Sandia National Laboratories
> Livermore, CA 94551
> +1 (925) 294-3738
>
>
>
>
>
>
>
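
A simple (non-scalable) sketch of the string-keyed split described in Adam's
message above, keyed on the gethostname() value; the SCR implementation
replaces the allgather below with a bitonic sort/scan or hashing:

#include <mpi.h>
#include <stdlib.h>
#include <string.h>

int split_by_string(MPI_Comm comm, const char *key, MPI_Comm *newcomm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    /* Fixed-width copy of the key so every rank contributes the same count. */
    char name[MPI_MAX_PROCESSOR_NAME] = {0};
    strncpy(name, key, MPI_MAX_PROCESSOR_NAME - 1);

    char *all = malloc((size_t)size * MPI_MAX_PROCESSOR_NAME);
    MPI_Allgather(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR,
                  all, MPI_MAX_PROCESSOR_NAME, MPI_CHAR, comm);

    /* Color = lowest rank that reported an identical key string. */
    int color = rank;
    for (int i = 0; i < rank; i++) {
        if (strcmp(all + (size_t)i * MPI_MAX_PROCESSOR_NAME, name) == 0) {
            color = i;
            break;
        }
    }
    free(all);

    return MPI_Comm_split(comm, color, rank, newcomm);
}

/* Usage: pass the local hostname as the key to get a per-node communicator:
 *   char host[256]; gethostname(host, sizeof(host));
 *   MPI_Comm node_comm; split_by_string(MPI_COMM_WORLD, host, &node_comm);
 */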



-- 
Kind regards Nick


