Dear George, Dear all,

Thanks a lot. I will let you know everything, and I will also try to
implement your suggestion.

Unfortunately, the program that I showed you is not working. I get
the following error:

[] *** An error occurred in MPI_Comm_rank
[] *** reported by process [643497985,7]
[] *** on communicator MPI_COMM_WORLD
[] *** MPI_ERR_COMM: invalid communicator
[] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[] ***    and potentially your MPI job)
[warn] Epoll ADD(4) on fd 47 failed.  Old events were 0; read change was 0
(none); write change was 1 (add): Bad file descriptor
[warn] Epoll ADD(4) on fd 65 failed.  Old events were 0; read change was 0
(none); write change was 1 (add): Bad file descriptor
[] 8 more processes have sent help message help-mpi-errors.txt /
mpi_errors_are_fatal
[] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help /
error messages

What do you think could be the error?
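
My guess is that the ranks which are not masters pass MPI_UNDEFINED to the
second MPI_COMM_SPLIT, so on those ranks LEADER_COMM comes back as
MPI_COMM_NULL, and calling MPI_COMM_RANK on MPI_COMM_NULL gives exactly this
MPI_ERR_COMM. Assuming that is the cause, a minimal guard around the last
calls of the example quoted below could look like this (just a sketch; it
reuses the variable names from the example, and the -1 sentinel is only
illustrative):

 CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, colorglobal, rank, LEADER_COMM, ierror)
 !
 IF(LEADER_COMM.NE.MPI_COMM_NULL)THEN
    ! only the masters received a real communicator from the split
    CALL MPI_COMM_RANK(LEADER_COMM, leader_rank, ierror)
 ELSE
    ! the other ranks got MPI_COMM_NULL and must not query it
    leader_rank = -1
 ENDIF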

Thanks again, really.

Diego


On 27 July 2017 at 15:57, George Bosilca <bosi...@icl.utk.edu> wrote:

> This looks good. If performance is critical you can speed up the entire
> process by using MPI_Comm_create_group instead of the second
> MPI_COMM_SPLIT. The MPI_Comm_create_group is collective only over the
> resulting communicator and not over the source communicator, so its cost is
> only dependent on the number of groups and not on the total number of
> processes.
>
> You can also try to replace the first MPI_COMM_SPLIT by the same approach.
> I would be curious to see the outcome.
>
>   George.
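
If I understand the suggestion correctly, the second MPI_COMM_SPLIT would be
replaced by building the group of masters explicitly and calling
MPI_COMM_CREATE_GROUP only on those ranks. A rough sketch of what I have in
mind, assuming (as in my example further below) groups of 4 processes whose
masters are the world ranks 0, 4, 8, ..., and reusing the names rank, nCPU,
LEADER_COMM and ierror:

 INTEGER :: world_group, leader_group, nleaders, i
 INTEGER, ALLOCATABLE :: leader_ranks(:)
 !
 ! list of the master ranks in MPI_COMM_WORLD
 nleaders = nCPU/4
 ALLOCATE(leader_ranks(nleaders))
 DO i = 1, nleaders
    leader_ranks(i) = (i-1)*4
 ENDDO
 !
 CALL MPI_COMM_GROUP(MPI_COMM_WORLD, world_group, ierror)
 CALL MPI_GROUP_INCL(world_group, nleaders, leader_ranks, leader_group, ierror)
 !
 LEADER_COMM = MPI_COMM_NULL
 IF(MOD(rank,4).EQ.0)THEN
    ! collective only over the processes of leader_group, i.e. the masters
    CALL MPI_COMM_CREATE_GROUP(MPI_COMM_WORLD, leader_group, 0, LEADER_COMM, ierror)
 ENDIF
 !
 CALL MPI_GROUP_FREE(leader_group, ierror)
 CALL MPI_GROUP_FREE(world_group, ierror)

This way only the masters take part in the creation of LEADER_COMM, which is
what should make it cheaper than a second split over MPI_COMM_WORLD.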
>
>
> On Thu, Jul 27, 2017 at 9:44 AM, Diego Avesani <diego.aves...@gmail.com>
> wrote:
>
>> Dear George, Dear all,
>>
>> I have tried to create a simple example. In particular, I would like to
>> use 16 CPUs and to create four groups according to their rank, and then a
>> communicator between the masters of each group. I have tried to follow the
>> first part of this example
>> <http://mpitutorial.com/tutorials/introduction-to-groups-and-communicators/>.
>> In the last part I have tried to create a communicator between the masters,
>> as suggested by George.
>>
>> Here is my example:
>>
>>  CALL MPI_INIT(ierror)
>>  CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nCPU, ierror)
>>  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
>>  !
>>  colorloc = rank/4
>>  !
>>  CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, colorloc, rank, NEW_COMM, ierror)
>>  CALL MPI_COMM_RANK(NEW_COMM, subrank, ierror)
>>  CALL MPI_COMM_SIZE(NEW_COMM, subnCPU, ierror)
>>  !
>>  IF(MOD(rank,4).EQ.0)THEN
>>     ! where I set the color for the masters
>>     colorglobal = MOD(rank,4)
>>  ELSE
>>     colorglobal = MPI_UNDEFINED
>>  ENDIF
>>  !
>>  CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, colorglobal, rank, LEADER_COMM, ierror)
>>  CALL MPI_COMM_RANK(LEADER_COMM, leader_rank, ierror)
>>  CALL MPI_FINALIZE(ierror)
>>
>> I would like to know whether this is correct, i.e. whether I have
>> understood correctly what George told me about the code design. At the
>> moment this example does not work, but probably there is some coding error.
>>
>> Thanks a lot, really.
>> Diego
>>
>>
>>
>> On 27 July 2017 at 10:42, Diego Avesani <diego.aves...@gmail.com> wrote:
>>
>>> Dear George, Dear all,
>>>
>>> A question regarding program design:
>>> the procedure in the draft that I sent you has to be repeated many, many
>>> times. Is the splitting procedure efficient enough for that?
>>>
>>> I will try, at least, to create the groups and split them. I am a
>>> beginner in the MPI groups environment.
>>> Really, thanks a lot.
>>>
>>> You are my lifesaver.
>>>
>>>
>>>
>>> Diego
>>>
>>>
>>> On 26 July 2017 at 15:09, George Bosilca <bosi...@icl.utk.edu> wrote:
>>>
>>>> Diego,
>>>>
>>>> As all your processes are started under the umbrella of a single
>>>> mpirun, they have a communicator in common, the MPI_COMM_WORLD.
>>>>
>>>> One possible implementation, using MPI_Comm_split, would be the
>>>> following:
>>>>
>>>> MPI_Comm small_comm, leader_comm;
>>>>
>>>> /* Create small_comm on all processes */
>>>>
>>>> /* Now use MPI_Comm_split on MPI_COMM_WORLD to select the leaders */
>>>> MPI_Comm_split( MPI_COMM_WORLD,
>>>>                 i_am_leader(small_comm) ? 1 : MPI_UNDEFINED,
>>>>                 rank_in_comm_world,
>>>>                 &leader_comm );
>>>>
>>>> The leader_comm will be a valid communicator on all leader processes,
>>>> and MPI_COMM_NULL on all others.
>>>>
>>>>   George.
>>>>
>>>>
>>>>
>>>> On Wed, Jul 26, 2017 at 4:29 AM, Diego Avesani <diego.aves...@gmail.com
>>>> > wrote:
>>>>
>>>>> Dear George, Dear all,
>>>>>
>>>>> I use "mpirun -np xx ./a.out"
>>>>>
>>>>> I do not know if I have some common ground. I mean, I have to
>>>>> design everything from the beginning. You can find what I would like to
>>>>> do in the attachment: basically, an MPI nested inside another MPI.
>>>>> Consequently, I am thinking of MPI groups, or of an MPI virtual topology
>>>>> with a 2D cart, using the columns as "groups" and the first row as the
>>>>> external group that handles the columns.
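
The virtual-topology variant could look roughly like this in Fortran (only a
sketch, assuming the 16 processes are arranged as a 4x4 grid; CART_COMM,
COL_COMM, ROW_COMM and cart_rank are illustrative names):

 INTEGER :: CART_COMM, COL_COMM, ROW_COMM, cart_rank, ierror
 INTEGER :: dims(2), coords(2)
 LOGICAL :: periods(2), remain(2)
 !
 dims    = (/ 4, 4 /)              ! 4 rows x 4 columns = 16 processes
 periods = (/ .FALSE., .FALSE. /)
 CALL MPI_CART_CREATE(MPI_COMM_WORLD, 2, dims, periods, .TRUE., CART_COMM, ierror)
 CALL MPI_COMM_RANK(CART_COMM, cart_rank, ierror)
 CALL MPI_CART_COORDS(CART_COMM, cart_rank, 2, coords, ierror)
 !
 ! one communicator per column: these would play the role of the groups
 remain = (/ .TRUE., .FALSE. /)
 CALL MPI_CART_SUB(CART_COMM, remain, COL_COMM, ierror)
 !
 ! one communicator per row; only the row with coords(1) == 0 would be used
 ! as the communicator of the masters
 remain = (/ .FALSE., .TRUE. /)
 CALL MPI_CART_SUB(CART_COMM, remain, ROW_COMM, ierror)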
>>>>>
>>>>> What do you think? What do you suggest?
>>>>> Really, thanks a lot
>>>>>
>>>>>
>>>>> Diego
>>>>>
>>>>>
>>>>> On 25 July 2017 at 19:26, George Bosilca <bosi...@icl.utk.edu> wrote:
>>>>>
>>>>>> Diego,
>>>>>>
>>>>>> Assuming you have some common ground between the 4 initial groups
>>>>>> (otherwise you will have to connect them via
>>>>>> MPI_Comm_connect/MPI_Comm_accept),
>>>>>> you can merge the 4 groups together and then use any MPI mechanism to
>>>>>> create a partial group of leaders (such as MPI_Comm_split).
>>>>>>
>>>>>> If you spawn the groups via MPI_Comm_spawn, then the answer is
>>>>>> slightly more complicated: you need to use MPI_Intercomm_create, with the
>>>>>> spawner as the bridge between the different communicators (and then
>>>>>> MPI_Intercomm_merge to create your intracomm). You can find a good answer
>>>>>> on stackoverflow on this at
>>>>>> https://stackoverflow.com/questions/24806782/mpi-merge-multiple-intercoms-into-a-single-intracomm
>>>>>>
>>>>>> How is your MPI environment started (single mpirun or MPI_Comm_spawn)?
>>>>>>
>>>>>>   George.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Jul 25, 2017 at 10:44 AM, Diego Avesani <
>>>>>> diego.aves...@gmail.com> wrote:
>>>>>>
>>>>>>> Dear All,
>>>>>>>
>>>>>>> I am studying groups and communicators, but before going into
>>>>>>> detail, I have a question about groups.
>>>>>>>
>>>>>>> I would like to know whether it is possible to create a group of the
>>>>>>> masters of the other groups and then have intra-communication within the
>>>>>>> new group. I have spent some time reading different tutorials and
>>>>>>> presentations, but it is difficult, at least for me, to understand
>>>>>>> whether it is possible to create this sort of MPI nested inside another
>>>>>>> MPI.
>>>>>>>
>>>>>>> In the attachment you can find a picture that summarizes what I
>>>>>>> would like to do.
>>>>>>>
>>>>>>> Another strategy could be to use a virtual topology.
>>>>>>>
>>>>>>> What do you think?
>>>>>>>
>>>>>>> I really, really appreciate any kind of help, suggestions or links
>>>>>>> where I can study these topics.
>>>>>>>
>>>>>>> Again, thanks
>>>>>>>
>>>>>>> Best Regards,
>>>>>>>
>>>>>>> Diego
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
>
>
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
