Dear George, Dear all,

I have just rewritten the code to make it clearer:

 INTEGER :: colorl,colorglobal
 INTEGER :: LOCAL_COMM,MASTER_COMM
 !++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 !-----------------------------------------------
 ! create WORLD communicator
 !-----------------------------------------------
 CALL MPI_INIT(MPIworld%iErr)
 CALL MPI_COMM_SIZE(MPI_COMM_WORLD, MPIworld%nCPU, MPIworld%iErr)  !get the nCPU
 CALL MPI_COMM_RANK(MPI_COMM_WORLD, MPIworld%rank, MPIworld%iErr)  !get the rank
 !
 colorl = MPIworld%rank/4
 !
 !-----------------------------------------------
 ! create LOCAL communicator
 !-----------------------------------------------
 CALL MPI_COMM_SPLIT(MPI_COMM_WORLD,colorl,MPIworld%rank,LOCAL_COMM,MPIworld%iErr)
 CALL MPI_COMM_RANK(LOCAL_COMM, MPIlocal%rank,MPIlocal%iErr)
 CALL MPI_COMM_SIZE(LOCAL_COMM, MPIlocal%nCPU,MPIlocal%iErr)
 !
 !
 !WRITE(*,'(A15,I3,A15,I3)') 'WORLD RANK  ',MPIworld%rank,'LOCAL RANK  ',MPIlocal%rank
 !
 !-----------------------------------------------
 ! create MASTER communicator
 !-----------------------------------------------
 IF(MOD(MPIworld%rank,4).EQ.0)THEN
    colorglobal = MOD(MPIworld%rank,4)
 ELSE
    colorglobal = MPI_COMM_NULL
 ENDIF
 !
 CALL MPI_COMM_SPLIT(MPI_COMM_WORLD,colorglobal,MPIworld%rank,MASTER_COMM,MPIworld%iErr)
 CALL MPI_COMM_RANK(MASTER_COMM, MPImaster%rank,MPImaster%iErr)
 CALL MPI_COMM_SIZE(MASTER_COMM, MPImaster%nCPU,MPImaster%iErr)
 !
 !
 WRITE(*,'(A15,I3,A15,I3,A15,I3)') 'WORLD RANK  ',MPIworld%rank,'LOCAL RANK  ',MPIlocal%rank,'MASTER  ',MPImaster%nCPU

This is the result:

WORLD RANK    2   LOCAL RANK    2       MASTER   12
WORLD RANK    3   LOCAL RANK    3       MASTER   12
WORLD RANK   10   LOCAL RANK    2       MASTER   12
WORLD RANK   12   LOCAL RANK    0       MASTER    4
WORLD RANK    1   LOCAL RANK    1       MASTER   12
WORLD RANK    5   LOCAL RANK    1       MASTER   12
WORLD RANK    4   LOCAL RANK    0       MASTER    4
WORLD RANK    6   LOCAL RANK    2       MASTER   12
WORLD RANK   13   LOCAL RANK    1       MASTER   12
WORLD RANK   14   LOCAL RANK    2       MASTER   12
WORLD RANK    7   LOCAL RANK    3       MASTER   12
WORLD RANK   11   LOCAL RANK    3       MASTER   12
WORLD RANK    0   LOCAL RANK    0       MASTER    4
WORLD RANK    8   LOCAL RANK    0       MASTER    4
WORLD RANK    9   LOCAL RANK    1       MASTER   12
WORLD RANK   15   LOCAL RANK    3       MASTER   12

I was expecting a new communicator only for the masters, but it seems that
I get two new groups even though I set "colorglobal = MPI_COMM_NULL".
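
For comparison, this is a minimal sketch of the same leader split with
MPI_UNDEFINED instead, as in my earlier code quoted below; if I understand
it correctly, MPI_COMM_SPLIT then returns MPI_COMM_NULL on the excluded
ranks, so the rank/size calls have to be guarded (untested, same variable
names as above):

 IF(MOD(MPIworld%rank,4).EQ.0)THEN
    colorglobal = 0                 ! all masters share one color
 ELSE
    colorglobal = MPI_UNDEFINED     ! excluded ranks get MPI_COMM_NULL back
 ENDIF
 !
 CALL MPI_COMM_SPLIT(MPI_COMM_WORLD,colorglobal,MPIworld%rank,MASTER_COMM,MPIworld%iErr)
 IF(MASTER_COMM.NE.MPI_COMM_NULL)THEN
    CALL MPI_COMM_RANK(MASTER_COMM, MPImaster%rank,MPImaster%iErr)
    CALL MPI_COMM_SIZE(MASTER_COMM, MPImaster%nCPU,MPImaster%iErr)
 ENDIF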

What do you think? Is there something that I haven't understood properly?

Thanks again; I am trying to get a better understanding of MPI_Comm_create_group.
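
In case it is useful for the discussion, this is how I currently understand
the MPI_Comm_create_group approach that George suggested (only a rough,
untested sketch; the names world_group, master_group and master_ranks are
just illustrative):

 INTEGER :: world_group, master_group, master_ranks(4), i
 !
 ! the masters are world ranks 0, 4, 8, 12
 DO i = 1, 4
    master_ranks(i) = (i-1)*4
 ENDDO
 !
 CALL MPI_COMM_GROUP(MPI_COMM_WORLD, world_group, MPIworld%iErr)
 CALL MPI_GROUP_INCL(world_group, 4, master_ranks, master_group, MPIworld%iErr)
 !
 IF(MOD(MPIworld%rank,4).EQ.0)THEN
    ! collective only over the processes contained in master_group
    CALL MPI_COMM_CREATE_GROUP(MPI_COMM_WORLD, master_group, 0, MASTER_COMM, MPIworld%iErr)
 ELSE
    MASTER_COMM = MPI_COMM_NULL
 ENDIF

If I understand George correctly, that is why its cost depends only on the
number of masters and not on the total number of processes.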

Thanks


Diego


On 28 July 2017 at 16:59, Diego Avesani <diego.aves...@gmail.com> wrote:

> Dear George, Dear all,
>
> here is the code:
>
> PROGRAM TEST
> USE MPI
> IMPLICIT NONE
> ! mpif90 -r8 *.f90
> !++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> INTEGER   :: rank
> INTEGER   :: subrank,leader_rank
> INTEGER   :: nCPU
> INTEGER   :: subnCPU
> INTEGER   :: ierror
> INTEGER   :: tag
> INTEGER   :: status(MPI_STATUS_SIZE)
> INTEGER   :: colorloc,colorglobal
> INTEGER   :: NEW_COMM,LEADER_COMM
> !++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  CALL MPI_INIT(ierror)
>  CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nCPU, ierror)
>  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
>  !
>  colorloc = rank/4
>  !
>  !
>  CALL MPI_COMM_SPLIT(MPI_COMM_WORLD,colorloc,rank,NEW_COMM,ierror)
>  CALL MPI_COMM_RANK(NEW_COMM, subrank,ierror);
>  CALL MPI_COMM_SIZE(NEW_COMM, subnCPU,ierror);
>  !
>  IF(MOD(rank,4).EQ.0)THEN
>     colorglobal = MOD(rank,4)
>  ELSE
>     colorglobal = MPI_COMM_NULL
>  ENDIF
>  !
>  CALL MPI_COMM_SPLIT(MPI_COMM_WORLD,colorglobal,rank,LEADER_COMM,ierror)
>  CALL MPI_COMM_RANK(LEADER_COMM, leader_rank,ierror);
>  !
>  CALL MPI_FINALIZE(ierror)
> ENDPROGRAM
>
> Now, it works.
>
> Could you please explain MPI_Comm_create_group to me? I am trying by
> myself, but it seems quite different from MPI_COMM_SPLIT.
>
> Again, really, really thanks
>
> Diego
>
>
> On 28 July 2017 at 16:02, George Bosilca <bosi...@icl.utk.edu> wrote:
>
>> I guess the second comm_rank call is invalid on all non-leader processes,
>> as their LEADER_COMM communicator is MPI_COMM_NULL.
>>
>> george
>>
>> On Fri, Jul 28, 2017 at 05:06 Diego Avesani <diego.aves...@gmail.com>
>> wrote:
>>
>>> Dear George, Dear all,
>>>
>>> thanks, thanks a lot. I will tell you everything.
>>> I will try also to implement your suggestion.
>>>
>>> Unfortunately, the program that I have shown you is not working. I
>>> get the following error:
>>>
>>> [] *** An error occurred in MPI_Comm_rank
>>> [] *** reported by process [643497985,7]
>>> [] *** on communicator MPI_COMM_WORLD
>>> [] *** MPI_ERR_COMM: invalid communicator
>>> [] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now
>>> abort,
>>> [] ***    and potentially your MPI job)
>>> [warn] Epoll ADD(4) on fd 47 failed.  Old events were 0; read change was
>>> 0 (none); write change was 1 (add): Bad file descriptor
>>> [warn] Epoll ADD(4) on fd 65 failed.  Old events were 0; read change was
>>> 0 (none); write change was 1 (add): Bad file descriptor
>>> [] 8 more processes have sent help message help-mpi-errors.txt /
>>> mpi_errors_are_fatal
>>> [] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help /
>>> error messages
>>>
>>> What do you think could be the error?
>>>
>>> Really, Really thanks again
>>>
>>>
>>>
>>>
>>>
>>> Diego
>>>
>>>
>>> On 27 July 2017 at 15:57, George Bosilca <bosi...@icl.utk.edu> wrote:
>>>
>>>> This looks good. If performance is critical you can speed up the entire
>>>> process by using MPI_Comm_create_group instead of the second
>>>> MPI_COMM_SPLIT. The MPI_Comm_create_group is collective only over the
>>>> resulting communicator and not over the source communicator, so its cost
>>>> depends only on the number of groups and not on the total number of
>>>> processes.
>>>>
>>>> You can also try to replace the first MPI_COMM_SPLIT by the same
>>>> approach. I would be curious to see the outcome.
>>>>
>>>>   George.
>>>>
>>>>
>>>> On Thu, Jul 27, 2017 at 9:44 AM, Diego Avesani <diego.aves...@gmail.com
>>>> > wrote:
>>>>
>>>>> Dear George, Dear all,
>>>>>
>>>>> I have tried to create a simple example. In particular, I would like
>>>>> to use 16 CPUs and to create four groups according to rank, and then a
>>>>> communicator between the masters of each group. I have tried to follow
>>>>> the first part of this example
>>>>> <http://mpitutorial.com/tutorials/introduction-to-groups-and-communicators/>.
>>>>> In the last part I have tried to create a communicator between the
>>>>> masters as suggested by George.
>>>>>
>>>>> Here is my example:
>>>>>
>>>>>  CALL MPI_INIT(ierror)
>>>>>  CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nCPU, ierror)
>>>>>  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
>>>>>  !
>>>>>  colorloc = rank/4
>>>>>  !
>>>>>  CALL MPI_COMM_SPLIT(MPI_COMM_WORLD,colorloc,rank,NEW_COMM,ierror)
>>>>>  CALL MPI_COMM_RANK(NEW_COMM, subrank,ierror);
>>>>>  CALL MPI_COMM_SIZE(NEW_COMM, subnCPU,ierror);
>>>>>  !
>>>>>  IF(MOD(rank,4).EQ.0)THEN
>>>>>     ! where I set color for the masters
>>>>>     colorglobal = MOD(rank,4)
>>>>>  ELSE
>>>>>     colorglobal = MPI_UNDEFINED
>>>>>  ENDIF
>>>>>  !
>>>>>  CALL MPI_COMM_SPLIT(MPI_COMM_WORLD,colorglobal,rank,LEADER_COMM,ierror)
>>>>>  CALL MPI_COMM_RANK(LEADER_COMM, leader_rank,ierror);
>>>>>  CALL MPI_FINALIZE(ierror)
>>>>>
>>>>> I would like to know if this could be correct. I mean if I have
>>>>> understood correctly what George told me about the code design. Now, this
>>>>> example does not work, but probably there is some coding error.
>>>>>
>>>>> Really, Really thanks
>>>>> Diego
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Diego
>>>>>
>>>>>
>>>>> On 27 July 2017 at 10:42, Diego Avesani <diego.aves...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Dear George, Dear all,
>>>>>>
>>>>>> A question regarding program design:
>>>>>> The draft that I have sent to you has to be executed many, many times.
>>>>>> Is the splitting procedure efficient enough for that?
>>>>>>
>>>>>> I will try, at least, to create groups and split them. I am a
>>>>>> beginner in the MPI groups environment.
>>>>>> Really, really thanks.
>>>>>>
>>>>>> You are my lifesaver.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Diego
>>>>>>
>>>>>>
>>>>>> On 26 July 2017 at 15:09, George Bosilca <bosi...@icl.utk.edu> wrote:
>>>>>>
>>>>>>> Diego,
>>>>>>>
>>>>>>> As all your processes are started under the umbrella of a single
>>>>>>> mpirun, they have a communicator in common, the MPI_COMM_WORLD.
>>>>>>>
>>>>>>> One possible implementation, using MPI_Comm_split, will be the
>>>>>>> following:
>>>>>>>
>>>>>>> MPI_Comm small_comm, leader_comm;
>>>>>>>
>>>>>>> /* Create small_comm on all processes */
>>>>>>>
>>>>>>> /* Now use MPI_Comm_split on MPI_COMM_WORLD to select the leaders */
>>>>>>> MPI_Comm_split( MPI_COMM_WORLD,
>>>>>>>                 i_am_leader(small_comm) ? 1 : MPI_UNDEFINED,
>>>>>>>                 rank_in_comm_world,
>>>>>>>                 &leader_Comm);
>>>>>>>
>>>>>>> The leader_comm will be a valid communicator on all leader
>>>>>>> processes, and MPI_COMM_NULL on all others.
>>>>>>>
>>>>>>>   George.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Jul 26, 2017 at 4:29 AM, Diego Avesani <
>>>>>>> diego.aves...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Dear George, Dear all,
>>>>>>>>
>>>>>>>> I use "mpirun -np xx ./a.out"
>>>>>>>>
>>>>>>>> I do not know if I have any common ground. I mean, I have to design
>>>>>>>> everything from the beginning. You can find what I would like to do
>>>>>>>> in the attachment. Basically, an MPI cast inside another MPI.
>>>>>>>> Consequently, I am thinking of MPI groups or an MPI virtual topology
>>>>>>>> with a 2D cart, using the columns as "groups" and the first row as
>>>>>>>> the external group to handle the columns.
>>>>>>>>
>>>>>>>> What do you think? What do you suggest?
>>>>>>>> Really Really thanks
>>>>>>>>
>>>>>>>>
>>>>>>>> Diego
>>>>>>>>
>>>>>>>>
>>>>>>>> On 25 July 2017 at 19:26, George Bosilca <bosi...@icl.utk.edu>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Diego,
>>>>>>>>>
>>>>>>>>> Assuming you have some common ground between the 4 initial
>>>>>>>>> groups (otherwise you will have to connect them via
>>>>>>>>> MPI_Comm_connect/MPI_Comm_accept) you can merge the 4 groups
>>>>>>>>> together and then use any MPI mechanism to create a partial group of
>>>>>>>>> leaders (such as MPI_Comm_split).
>>>>>>>>>
>>>>>>>>> If you spawn the groups via MPI_Comm_spawn then the answer is
>>>>>>>>> slightly more complicated, you need to use MPI_Intercomm_create, with 
>>>>>>>>> the
>>>>>>>>> spawner as the bridge between the different communicators (and then
>>>>>>>>> MPI_Intercomm_merge to create your intracomm). You can find a good 
>>>>>>>>> answer
>>>>>>>>> on stackoverflow on this at
>>>>>>>>> https://stackoverflow.com/questions/24806782/mpi-merge-multiple-intercoms-into-a-single-intracomm
>>>>>>>>>
>>>>>>>>> How is your MPI environment started (single mpirun or
>>>>>>>>> mpi_comm_spawn) ?
>>>>>>>>>
>>>>>>>>>   George.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Jul 25, 2017 at 10:44 AM, Diego Avesani <
>>>>>>>>> diego.aves...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Dear All,
>>>>>>>>>>
>>>>>>>>>> I am studying Groups and Communicators, but before going into
>>>>>>>>>> detail, I have a question about groups.
>>>>>>>>>>
>>>>>>>>>> I would like to know if it is possible to create a group of the
>>>>>>>>>> masters of the other groups and then an intra-communication in the
>>>>>>>>>> new group. I have spent some time reading different tutorials and
>>>>>>>>>> presentations, but it is difficult, at least for me, to understand
>>>>>>>>>> whether it is possible to create this sort of MPI cast inside
>>>>>>>>>> another MPI.
>>>>>>>>>>
>>>>>>>>>> In the attachment you can find a picture that summarizes what I
>>>>>>>>>> would like to do.
>>>>>>>>>>
>>>>>>>>>> Another strategy could be to use a virtual topology.
>>>>>>>>>>
>>>>>>>>>> What do you think?
>>>>>>>>>>
>>>>>>>>>> I really, really appreciate any kind of help, suggestions, or
>>>>>>>>>> links where I can study these topics.
>>>>>>>>>>
>>>>>>>>>> Again, thanks
>>>>>>>>>>
>>>>>>>>>> Best Regards,
>>>>>>>>>>
>>>>>>>>>> Diego
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>
>>
>>
>
>
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
