Diego,

As all your processes are started under the umbrella of a single mpirun,
they have a communicator in common: MPI_COMM_WORLD.

One possible implementation, using MPI_Comm_split, would be the following:

MPI_Comm small_comm, leader_comm;
int world_rank;

/* Create small_comm on all processes */

/* Now use MPI_Comm_split on MPI_COMM_WORLD to select the leaders;
 * i_am_leader() is a placeholder for your own leader test */
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
MPI_Comm_split(MPI_COMM_WORLD,
               i_am_leader(small_comm) ? 1 : MPI_UNDEFINED,
               world_rank,
               &leader_comm);

leader_comm will be a valid communicator on all leader processes, and
MPI_COMM_NULL on all the others.
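
For completeness, here is a minimal self-contained sketch of the whole
pattern. It assumes groups of 4 consecutive world ranks and takes rank 0 of
each small_comm as the leader; both choices are placeholders for whatever
your application actually needs:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm small_comm, leader_comm;
    int world_rank, small_rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Create small_comm: here, groups of 4 consecutive world ranks
     * (assumed group size, adapt to your layout) */
    MPI_Comm_split(MPI_COMM_WORLD, world_rank / 4, world_rank, &small_comm);
    MPI_Comm_rank(small_comm, &small_rank);

    /* Leaders (rank 0 in each small_comm) get color 1; everyone else
     * passes MPI_UNDEFINED and receives MPI_COMM_NULL */
    MPI_Comm_split(MPI_COMM_WORLD,
                   (small_rank == 0) ? 1 : MPI_UNDEFINED,
                   world_rank,
                   &leader_comm);

    if (leader_comm != MPI_COMM_NULL) {
        int nleaders;
        MPI_Comm_size(leader_comm, &nleaders);
        printf("world rank %d is one of %d leaders\n", world_rank, nleaders);
        MPI_Comm_free(&leader_comm);
    }

    MPI_Comm_free(&small_comm);
    MPI_Finalize();
    return 0;
}

With "mpirun -np 8 ./a.out" this creates two groups of 4, and leader_comm
contains world ranks 0 and 4.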

  George.



On Wed, Jul 26, 2017 at 4:29 AM, Diego Avesani <diego.aves...@gmail.com>
wrote:

> Dear George, Dear all,
>
> I use "mpirun -np xx ./a.out"
>
> I do not know if I have any common ground. I mean, I have to design
> everything from the beginning. You can find what I would like to do in the
> attachment: basically, an MPI cast within another MPI. Consequently, I am
> thinking of MPI groups, or an MPI virtual topology with a 2D cart, using
> the columns as "groups" and the first row as the external group that
> handles the columns.
>
> What do you think? What do you suggest?
> Really, really, thanks
>
>
> Diego
>
>
> On 25 July 2017 at 19:26, George Bosilca <bosi...@icl.utk.edu> wrote:
>
>> Diego,
>>
>> Assuming you have some common ground between the 4 initial groups
>> (otherwise you will have to connect them via
>> MPI_Comm_connect/MPI_Comm_accept), you can merge the 4 groups together
>> and then use any MPI mechanism to create a partial group of leaders (such
>> as MPI_Comm_split).
>>
>> If you spawn the groups via MPI_Comm_spawn, then the answer is slightly
>> more complicated: you need to use MPI_Intercomm_create, with the spawner
>> as the bridge between the different communicators (and then
>> MPI_Intercomm_merge to create your intracomm). You can find a good answer
>> on Stack Overflow at
>> https://stackoverflow.com/questions/24806782/mpi-merge-multiple-intercoms-into-a-single-intracomm
>>
>> How is your MPI environment started (a single mpirun or MPI_Comm_spawn)?
>>
>>   George.
>>
>>
>>
>> On Tue, Jul 25, 2017 at 10:44 AM, Diego Avesani <diego.aves...@gmail.com>
>> wrote:
>>
>>> Dear All,
>>>
>>> I am studying Groups and Communicators, but before going into detail, I
>>> have a question about groups.
>>>
>>> I would like to know if it is possible to create a group of masters of
>>> the other groups and then an intra-communication in the new group. I
>>> have spent some time reading different tutorials and presentations, but
>>> it is difficult, at least for me, to understand if it is possible to
>>> create this sort of MPI cast within another MPI.
>>>
>>> In the attachment you can find a picture that summarizes what I would
>>> like to do.
>>>
>>> Another strategy could be to use a virtual topology.
>>>
>>> What do you think?
>>>
>>> I really, really appreciate any kind of help, suggestions, or links
>>> where I can study these topics.
>>>
>>> Again, thanks
>>>
>>> Best Regards,
>>>
>>> Diego
>>>