Hi Jeff S.
OK, then I misunderstood Jeff H.
Sorry about that, Jeff H.
Nevertheless, Diego Avesani certainly has a point.
And it is the point of view of a user,
something that hopefully matters.
I'd add to Diego's arguments
that maxloc, minloc, and friends are part of
Fortran, Matlab, etc.
A science/engineering programmer expects them to be available,
not to have to reinvent them from scratch,
both in the baseline language and in MPI.
In addition,
MPI developers cannot expect the typical MPI user to keep track
of what goes on in the MPI Forum.
I certainly have neither the skill nor the time for it.
However, developers can make an effort to listen to the chatter on the
various MPI users' lists before making any decision to strip out
functionality, especially one as basic as minloc and maxloc.
My two cents from a pedestrian MPI user,
who thinks minloc and maxloc are great,
knows nothing about the MPI Forum protocols and activities,
but hopes the Forum pays attention to users' needs.
Gus Correa
PS - Jeff S.: Please bring Diego's request to the Forum! Add my vote
too. :)
On 08/10/2018 02:19 PM, Jeff Squyres (jsquyres) via users wrote:
Jeff H. was referring to Nathan's offhand remark about his desire to kill the
MPI_MINLOC / MPI_MAXLOC operations. I think Jeff H's point is that this is
just Nathan's opinion -- as far as I know, there is no proposal in front of the
MPI Forum to actively deprecate MPI_MINLOC or MPI_MAXLOC. Speaking this
opinion on a public mailing list with no other context created a bit of
confusion.
The Forum is quite transparent in what it does -- e.g., anyone is allowed to
come to its meetings and hear (and participate in!) all the deliberations, etc.
But speaking off-the-cuff about something that *might* happen *someday* and
would have an impact on real users and real codes -- that might have caused a
little needless confusion.
On Aug 10, 2018, at 2:11 PM, Gus Correa <g...@ldeo.columbia.edu> wrote:
Hmmm ... no, no, no!
Why keep it secret!?!?
Diego Avesani's questions and questioning
may have saved us users from having a
useful feature deprecated in the name of code elegance.
Code elegance may be much cherished by developers,
but it is not necessarily helpful to users,
especially if it strips out useful functionality.
My cheap 2 cents from a user.
Gus Correa
On 08/10/2018 01:52 PM, Jeff Hammond wrote:
This thread is a perfect illustration of why MPI Forum participants should not
flippantly discuss feature deprecation in discussions with users. Users who are
not familiar with the MPI Forum process cannot evaluate whether such
proposals are serious or have any hope of succeeding, and therefore may be
unnecessarily worried about their code breaking in the future -- when that
future is 5 to infinity years away.
If someone wants to deprecate MPI_{MIN,MAX}LOC, they should start that
discussion on https://github.com/mpi-forum/mpi-issues/issues or
https://lists.mpi-forum.org/mailman/listinfo/mpiwg-coll.
Jeff
On Fri, Aug 10, 2018 at 10:27 AM, Jeff Squyres (jsquyres) via users <users@lists.open-mpi.org> wrote:
It is unlikely that MPI_MINLOC and MPI_MAXLOC will go away any time
soon.
As far as I know, Nathan hasn't advanced a proposal to kill them in
MPI-4, meaning that they'll likely continue to be in MPI for at
least another 10 years. :-)
(And even if they did get killed in MPI-4, implementations like Open
MPI would likely keep them around for quite a while -- i.e., years.)
> On Aug 10, 2018, at 1:13 PM, Diego Avesani <diego.aves...@gmail.com> wrote:
>
> I agree about the names; they are very similar to MINLOC and
MAXLOC in Fortran 90.
> However, I find it difficult to define an algorithm that does the
same things.
>
>
>
> Diego
>
>
> On 10 August 2018 at 19:03, Nathan Hjelm via users <users@lists.open-mpi.org> wrote:
> They do not fit with the rest of the predefined operations (which
operate on a single basic type), and they can easily be implemented as
user-defined operations with the same performance. Add to that the
fixed number of tuple types, the fact that some of them are
non-contiguous (MPI_SHORT_INT), and the terrible names. If I could
kill them in MPI-4 I would.
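
As a rough illustration, a user-defined replacement for MPI_MAXLOC
might look like the following Fortran sketch (untested; the names
mymaxloc and myop are made up, and pairs are packed as two doubles
so they match MPI_2DOUBLE_PRECISION):

  ! Reduction callback with the signature MPI_OP_CREATE expects:
  ! combine (value, index) pairs, keeping the larger value.
  SUBROUTINE mymaxloc(invec, inoutvec, len, datatype)
    USE mpi
    IMPLICIT NONE
    INTEGER :: len, datatype, i
    DOUBLE PRECISION :: invec(2*len), inoutvec(2*len)
    DO i = 1, 2*len, 2
      IF (invec(i) > inoutvec(i)) THEN
        inoutvec(i)   = invec(i)     ! larger value wins
        inoutvec(i+1) = invec(i+1)   ! carry its index/rank along
      ELSE IF (invec(i) == inoutvec(i)) THEN
        ! on a tie, keep the smaller index, as MPI_MAXLOC does
        inoutvec(i+1) = MIN(invec(i+1), inoutvec(i+1))
      END IF
    END DO
  END SUBROUTINE mymaxloc

  ! Registered once as a commutative op, then used like any other:
  ! CALL MPI_OP_CREATE(mymaxloc, .TRUE., myop, ierr)
  ! CALL MPI_ALLREDUCE(pair_in, pair_out, 1, MPI_2DOUBLE_PRECISION, &
  !                    myop, comm, ierr)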
>
> On Aug 10, 2018, at 9:47 AM, Diego Avesani <diego.aves...@gmail.com> wrote:
>
>> Dear all,
>> I have just implemented MAXLOC; why should they go away?
>> It seems to be working pretty well.
>>
>> thanks
>>
>> Diego
>>
>>
>> On 10 August 2018 at 17:39, Nathan Hjelm via users <users@lists.open-mpi.org> wrote:
>> The problem is minloc and maxloc need to go away. Better to use
a custom op.
>>
>> On Aug 10, 2018, at 9:36 AM, George Bosilca <bosi...@icl.utk.edu> wrote:
>>
>>> You will need to create a special variable that holds two
entries: one for the max operation (with whatever type you need) and
an int for the rank of the process. MPI_MAXLOC is described on the
Open MPI man page [1], and you can find an example of how to use it
in the MPI Forum documentation [2].
>>>
>>> George.
>>>
>>>
>>> [1] https://www.open-mpi.org/doc/v2.0/man3/MPI_Reduce.3.php
>>> [2] https://www.mpi-forum.org/docs/mpi-1.1/mpi-11-html/node79.html
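
In Fortran, George's recipe boils down to something like this
untested sketch (variable names are made up; the rank is stored as a
double because MPI_2DOUBLE_PRECISION is a pair of doubles):

  DOUBLE PRECISION :: pair_in(2), pair_out(2)
  INTEGER :: myrank, ierr
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
  pair_in(1) = eff      ! the value to maximize
  pair_in(2) = myrank   ! the "location", encoded as a double
  CALL MPI_ALLREDUCE(pair_in, pair_out, 1, MPI_2DOUBLE_PRECISION, &
                     MPI_MAXLOC, MPI_COMM_WORLD, ierr)
  ! pair_out(1) is the global max; INT(pair_out(2)) is the owning
  ! rank (on a tie, MPI_MAXLOC keeps the smallest such rank)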
>>>
>>> On Fri, Aug 10, 2018 at 11:25 AM Diego Avesani <diego.aves...@gmail.com> wrote:
>>> Dear all,
>>> I think I have understood.
>>> The trick is to use a real vector and to also store the rank.
>>>
>>> Have I understood correctly?
>>> thanks
>>>
>>> Diego
>>>
>>>
>>> On 10 August 2018 at 17:19, Diego Avesani <diego.aves...@gmail.com> wrote:
>>> Dear all,
>>> I do not understand how MPI_MINLOC works. It seems to locate the
maximum in a vector and not the CPU to which the value belongs.
>>>
>>> @ray: and what if two have the same value?
>>>
>>> thanks
>>>
>>>
>>> Diego
>>>
>>>
>>> On 10 August 2018 at 17:03, Ray Sheppard <rshep...@iu.edu> wrote:
>>> As a dumb scientist, I would just bcast the value I get back to
the group and ask whoever owns it to kindly reply back with its rank.
>>> Ray
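
A variant of Ray's idea that avoids the reply step and also settles
the tie question: after the MAX reduction, every rank that owns the
max proposes itself, and a MIN reduction deterministically picks the
lowest owner. An untested Fortran sketch (names made up):

  DOUBLE PRECISION :: eff, effmax
  INTEGER :: myrank, cand, owner, ierr
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
  CALL MPI_ALLREDUCE(eff, effmax, 1, MPI_DOUBLE_PRECISION, MPI_MAX, &
                     MPI_COMM_WORLD, ierr)
  IF (eff == effmax) THEN
    cand = myrank        ! this rank owns the max (maybe not alone)
  ELSE
    cand = HUGE(cand)    ! non-owners propose an impossibly large rank
  END IF
  CALL MPI_ALLREDUCE(cand, owner, 1, MPI_INTEGER, MPI_MIN, &
                     MPI_COMM_WORLD, ierr)
  ! owner = lowest rank whose eff equals the global maximum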
>>>
>>>
>>> On 8/10/2018 10:49 AM, Reuti wrote:
>>> Hi,
>>>
>>> On 10.08.2018 at 16:39, Diego Avesani <diego.aves...@gmail.com> wrote:
>>>
>>> Dear all,
>>>
>>> I have a problem:
>>> In my parallel program each CPU computes a value, let's say eff.
>>>
>>> First of all, I would like to know the maximum value. This is
quite simple for me; I apply the following:
>>>
>>> CALL MPI_ALLREDUCE(eff, effmaxWorld, 1, MPI_DOUBLE_PRECISION, &
>>>                    MPI_MAX, MPI_MASTER_COMM, MPIworld%iErr)
>>> Would MPI_MAXLOC be sufficient?
>>>
>>> -- Reuti
>>>
>>>
>>> However, I would also like to know to which CPU that value
belongs. Is that possible?
>>>
>>> I have set up a strange procedure, but it works only when all
the CPUs have different values; it fails when two of them have the
same eff value.
>>>
>>> Is there any intrinsic MPI procedure?
>>> Alternatively, do you have any ideas?
>>>
>>> Really, many thanks.
>>> Diego
--
Jeff Squyres
jsquy...@cisco.com
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/