You will need to create a special variable that holds two entries: one for
the value used in the max operation (with whatever type you need) and an int
for the rank of the process. MPI_MAXLOC is described on the Open MPI man
page [1], and you can find an example of how to use it on the MPI Forum
site [2].
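
Something along these lines should do it. This is only a minimal, untested
sketch: it uses MPI_COMM_WORLD where your code would use MPI_MASTER_COMM, and
a placeholder value for eff.

PROGRAM maxloc_example
  USE mpi
  IMPLICIT NONE
  INTEGER :: rank, ierr
  DOUBLE PRECISION :: eff
  DOUBLE PRECISION :: inval(2), outval(2)   ! (1) = value, (2) = rank as a double

  CALL MPI_INIT(ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  eff = DBLE(MOD(rank, 4))        ! placeholder for the locally computed value

  inval(1) = eff                  ! the value the reduction compares
  inval(2) = DBLE(rank)           ! the rank, carried along as the "location"

  ! MPI_2DOUBLE_PRECISION is the pair datatype MPI_MAXLOC expects here.
  CALL MPI_ALLREDUCE(inval, outval, 1, MPI_2DOUBLE_PRECISION, MPI_MAXLOC, &
                     MPI_COMM_WORLD, ierr)

  ! outval(1) is the global maximum, outval(2) the rank that owns it.
  ! If several ranks hold the same maximum, MPI_MAXLOC returns the smallest
  ! of the tying ranks, so ties are resolved deterministically.
  IF (rank == 0) THEN
    WRITE(*,*) 'max eff = ', outval(1), ' on rank ', INT(outval(2))
  END IF

  CALL MPI_FINALIZE(ierr)
END PROGRAM maxloc_example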

George.


[1] https://www.open-mpi.org/doc/v2.0/man3/MPI_Reduce.3.php
[2] https://www.mpi-forum.org/docs/mpi-1.1/mpi-11-html/node79.html
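
For completeness, here is an equally untested sketch of the alternative Ray
suggests further down in the thread (reduce with MPI_MAX, then let the owning
rank identify itself). Taking the minimum over the ranks that hold the maximum
also answers the tie question: the smallest owning rank wins.

PROGRAM max_then_owner
  USE mpi
  IMPLICIT NONE
  INTEGER :: rank, ierr, candidate, owner
  DOUBLE PRECISION :: eff, effmax

  CALL MPI_INIT(ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  eff = DBLE(MOD(rank, 4))        ! placeholder for the locally computed value

  ! Every rank learns the global maximum.
  CALL MPI_ALLREDUCE(eff, effmax, 1, MPI_DOUBLE_PRECISION, MPI_MAX, &
                     MPI_COMM_WORLD, ierr)

  ! Ranks that own the maximum propose their own rank; the others propose a
  ! value larger than any rank. A MIN reduction then selects a single owner.
  IF (eff == effmax) THEN
    candidate = rank
  ELSE
    candidate = HUGE(candidate)
  END IF
  CALL MPI_ALLREDUCE(candidate, owner, 1, MPI_INTEGER, MPI_MIN, &
                     MPI_COMM_WORLD, ierr)

  IF (rank == 0) WRITE(*,*) 'max eff = ', effmax, ' owned by rank ', owner

  CALL MPI_FINALIZE(ierr)
END PROGRAM max_then_owner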

On Fri, Aug 10, 2018 at 11:25 AM Diego Avesani <diego.aves...@gmail.com>
wrote:

>  Dear all,
> I think I have understood.
> The trick is to use a vector of reals and to store the rank in it as well.
>
> Have I understood correctly?
> thanks
>
> Diego
>
>
> On 10 August 2018 at 17:19, Diego Avesani <diego.aves...@gmail.com> wrote:
>
>> Dear all,
>> I do not understand how MPI_MINLOC works. It seems to locate the maximum in
>> a vector and not the CPU to which the value belongs.
>>
>> @Ray: and what if two have the same value?
>>
>> thanks
>>
>>
>> Diego
>>
>>
>> On 10 August 2018 at 17:03, Ray Sheppard <rshep...@iu.edu> wrote:
>>
>>> As a dumb scientist, I would just bcast the value I get back to the
>>> group and ask whoever owns it to kindly reply back with its rank.
>>>      Ray
>>>
>>>
>>> On 8/10/2018 10:49 AM, Reuti wrote:
>>>
>>>> Hi,
>>>>
>>>> On 10.08.2018 at 16:39, Diego Avesani <diego.aves...@gmail.com> wrote:
>>>>>
>>>>> Dear all,
>>>>>
>>>>> I have a problem:
>>>>> In my parallel program each CPU computes a value, let's say eff.
>>>>>
>>>>> First of all, I would like to know the maximum value. This is quite
>>>>> simple for me; I apply the following:
>>>>>
>>>>> CALL MPI_ALLREDUCE(eff, effmaxWorld, 1, MPI_DOUBLE_PRECISION, MPI_MAX,
>>>>> MPI_MASTER_COMM, MPIworld%iErr)
>>>>>
>>>> Would MPI_MAXLOC be sufficient?
>>>>
>>>> -- Reuti
>>>>
>>>>
>>>>> However, I would also like to know which CPU that value belongs to. Is
>>>>> that possible?
>>>>>
>>>>> I have set up a strange procedure, but it works only when all the CPUs
>>>>> have different values; it fails when two of them have the same eff value.
>>>>>
>>>>> Is there any intrinsic MPI procedure?
>>>>> Alternatively,
>>>>> do you have any ideas?
>>>>>
>>>>> Thank you very, very much.
>>>>> Diego
>>>>>
>>>>>
>>>>> Diego
>>>>>
>>>
>>>
>>
>>
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
