Dear Jeff,
you are right.
The question is:
Is it possible to have a barrier for all CPUs even though they belong to
different groups?
If the answer is yes, I will go into more detail.
Thanks a lot
Diego
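
[Editor's note: the MPI standard answers this question directly: a barrier is scoped to the communicator it is called on, so a barrier on the parent communicator (here MPI_COMM_WORLD) synchronizes every rank no matter which sub-groups it belongs to. Below is a minimal, hedged Fortran sketch; the split criterion MOD(rank,2) and the name local_comm are illustrative, not from Diego's code.]

```
PROGRAM barrier_groups
  USE mpi
  IMPLICIT NONE
  INTEGER :: ierr, rank, local_comm

  CALL MPI_INIT(ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  ! Illustrative split of the world into two groups.
  CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, MOD(rank, 2), rank, local_comm, ierr)

  ! Synchronizes only the ranks inside this sub-communicator.
  CALL MPI_BARRIER(local_comm, ierr)

  ! Synchronizes every rank, regardless of group membership.
  CALL MPI_BARRIER(MPI_COMM_WORLD, ierr)

  CALL MPI_COMM_FREE(local_comm, ierr)
  CALL MPI_FINALIZE(ierr)
END PROGRAM barrier_groups
```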
On 10 August 2018 at 19:49, Jeff Squyres (jsquyres) via users <
users@lists.open-mpi.org> wrote:
>
I think that your reasons are very valid, and probably why the Forum a)
invented MPI_MINLOC/MAXLOC in the first place, and b) why no one has put forth
a proposal to get rid of them.
:-)
> On Aug 10, 2018, at 2:42 PM, Gus Correa wrote:
Hi Jeff S.
OK, then I misunderstood Jeff H.
Sorry about that, Jeff H..
Nevertheless, Diego Avesani certainly has a point.
And it is the point of view of a user,
something that hopefully matters.
I'd add to Diego's arguments
that maxloc, minloc, and friends are part of
Fortran, Matlab, etc.
…
Jeff H. was referring to Nathan's offhand remark about his desire to kill the
MPI_MINLOC / MPI_MAXLOC operations. I think Jeff H's point is that this is
just Nathan's opinion -- as far as I know, there is no proposal in front of the
MPI Forum to actively deprecate MPI_MINLOC or MPI_MAXLOC. …
On 08/10/2018 01:27 PM, Jeff Squyres (jsquyres) via users wrote:
It is unlikely that MPI_MINLOC and MPI_MAXLOC will go away any time soon.
As far as I know, Nathan hasn't advanced a proposal to kill them in MPI-4,
meaning that they'll likely continue to be in MPI for at least another 10
years.
Hmmm ... no, no, no!
Why keep it secret!?
Diego Avesani's questions and questioning
may have saved us users from getting a
useful feature deprecated in the name of code elegance.
Code elegance may be very cherished by developers,
but it is not necessarily helpful to users,
especially if it strips …
This thread is a perfect illustration of why MPI Forum participants should
not flippantly discuss feature deprecation in discussion with users. Users
who are not familiar with the MPI Forum process are not able to evaluate
whether such proposals are serious or have any hope of succeeding, and
there …
I'm not quite clear what the problem is that you're running in to -- you just
said that there is "some problem with MPI_barrier".
What problem, exactly, is happening with your code? Be as precise and specific
as possible.
It's kinda hard to tell what is happening in the code snippet below because …
It is unlikely that MPI_MINLOC and MPI_MAXLOC will go away any time soon.
As far as I know, Nathan hasn't advanced a proposal to kill them in MPI-4,
meaning that they'll likely continue to be in MPI for at least another 10
years. :-)
(And even if they did get killed in MPI-4, implementations …
I agree about the names; it is very similar to MINLOC and MAXLOC in
Fortran 90.
However, I find it difficult to define an algorithm able to do the same
things.
Diego
On 10 August 2018 at 19:03, Nathan Hjelm via users wrote:
> They do not fit with the rest of the predefined operations (which
They do not fit with the rest of the predefined operations (which operate on a
single basic type) and can easily be implemented as user defined operations and
get the same performance. Add to that the fixed number of tuple types and the
fact that some of them are non-contiguous (MPI_SHORT_INT) …
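
[Editor's note: for readers wondering what such a user-defined operation looks like, here is a hedged Fortran sketch. The names pair_max and my_op are illustrative, and the local value SIN(DBLE(rank)) is a stand-in for whatever each rank actually computes. It keeps a contiguous (value, rank) pair of DOUBLE PRECISIONs, sidestepping the non-contiguous tuple types Nathan mentions.]

```
PROGRAM custom_maxloc
  USE mpi
  IMPLICIT NONE
  EXTERNAL :: pair_max
  INTEGER :: ierr, rank, my_op
  DOUBLE PRECISION :: in(2), out(2)

  CALL MPI_INIT(ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  in(1) = SIN(DBLE(rank))   ! the local value (illustrative)
  in(2) = DBLE(rank)        ! the rank, carried along as a DOUBLE

  ! Register the reduction; .TRUE. declares it commutative.
  CALL MPI_OP_CREATE(pair_max, .TRUE., my_op, ierr)

  CALL MPI_ALLREDUCE(in, out, 1, MPI_2DOUBLE_PRECISION, my_op, &
                     MPI_COMM_WORLD, ierr)
  ! out(1) = global maximum, out(2) = rank that owns it

  CALL MPI_OP_FREE(my_op, ierr)
  CALL MPI_FINALIZE(ierr)
END PROGRAM custom_maxloc

! User-defined reduction: keep the pair with the larger first entry.
SUBROUTINE pair_max(invec, inoutvec, len, datatype)
  IMPLICIT NONE
  INTEGER :: len, datatype, i
  DOUBLE PRECISION :: invec(2, len), inoutvec(2, len)
  DO i = 1, len
    IF (invec(1, i) > inoutvec(1, i)) THEN
      inoutvec(1, i) = invec(1, i)
      inoutvec(2, i) = invec(2, i)   ! carry the owning rank along
    END IF
  END DO
END SUBROUTINE pair_max
```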
Hi Diego,
if they are float/reals, the error (overflow) bits will likely make
them unique. If you are looking at integers, I would use isends and
just capture the first one. You could make a little round robin and
poll everyone, saving the ranks that match, but if you are using
hundreds/thousands …
Dear all,
I did it, but I am still worried about Nathan's concern.
What do you think?
thanks again
Diego
On 10 August 2018 at 17:41, Reuti wrote:
>
> > Am 10.08.2018 um 17:24 schrieb Diego Avesani :
> >
> > Dear all,
> > I have probably understood.
> > The trick is to use a real vector and to memorize also the rank.
Dear all,
I have an MPI program with three groups with some CPUs in common.
I have some problem with MPI_barrier.
Let me try to make myself clear. I have three communicators:
INTEGER :: MPI_GROUP_WORLD
INTEGER :: MPI_LOCAL_COMM
INTEGER :: MPI_MASTER_COMM
when I apply:
IF(MPIworld%rank.EQ.0) …
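
[Editor's note: the snippet above is cut off, so this is a sketch of the general rule rather than a diagnosis of Diego's code. A collective such as MPI_BARRIER must be entered by every rank that belongs to the communicator; guarding it with a single-rank test (or calling it on a communicator in which this rank is not a member) hangs or errors out. The safe test is membership, i.e. comparing against MPI_COMM_NULL. The split criterion and the name master_comm below are illustrative.]

```
PROGRAM barrier_membership
  USE mpi
  IMPLICIT NONE
  INTEGER :: ierr, rank, master_comm

  CALL MPI_INIT(ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  ! Illustrative: ranks 0 and 1 form a "master" communicator; the
  ! others pass MPI_UNDEFINED and get back MPI_COMM_NULL.
  IF (rank < 2) THEN
    CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, 0, rank, master_comm, ierr)
  ELSE
    CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, MPI_UNDEFINED, rank, &
                        master_comm, ierr)
  END IF

  ! Test membership, never a specific rank, before a collective:
  IF (master_comm /= MPI_COMM_NULL) THEN
    CALL MPI_BARRIER(master_comm, ierr)
    CALL MPI_COMM_FREE(master_comm, ierr)
  END IF

  CALL MPI_FINALIZE(ierr)
END PROGRAM barrier_membership
```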
Dear all,
I have just implemented MAXLOC, so why should they go away?
It seems to work pretty well.
thanks
Diego
On 10 August 2018 at 17:39, Nathan Hjelm via users wrote:
> The problem is minloc and maxloc need to go away. Better to use a custom
> op.
>
> On Aug 10, 2018, at 9:36 AM, George Bosilca wrote:
> Am 10.08.2018 um 17:24 schrieb Diego Avesani :
>
> Dear all,
> I have probably understood.
> The trick is to use a real vector and to memorize also the rank.
Yes, I thought of this:
https://www.mpi-forum.org/docs/mpi-1.1/mpi-11-html/node79.html
-- Reuti
> Have I understood correctly?
> thanks
The problem is minloc and maxloc need to go away. Better to use a custom op.
> On Aug 10, 2018, at 9:36 AM, George Bosilca wrote:
>
> You will need to create a special variable that holds 2 entries, one for the
> max operation (with whatever type you need) and an int for the rank of the
> process. …
You will need to create a special variable that holds 2 entries, one for
the max operation (with whatever type you need) and an int for the rank of
the process. The MAXLOC is described on the OMPI man page [1] and you can
find an example on how to use it on the MPI Forum [2].
George.
[1] https:/
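
[Editor's note: a minimal, hedged Fortran sketch of the two-entry variable George describes, using the predefined MPI_MAXLOC with the MPI_2DOUBLE_PRECISION pair type. The local value SIN(DBLE(rank)) is illustrative, standing in for Diego's eff.]

```
PROGRAM maxloc_example
  USE mpi
  IMPLICIT NONE
  INTEGER :: ierr, rank
  DOUBLE PRECISION :: in(2), out(2)

  CALL MPI_INIT(ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  in(1) = SIN(DBLE(rank))   ! the local value (e.g. eff)
  in(2) = DBLE(rank)        ! the "location": this process's rank

  ! count = 1 pair; MPI_MAXLOC keeps the pair with the largest in(1),
  ! breaking ties in favour of the lowest rank.
  CALL MPI_ALLREDUCE(in, out, 1, MPI_2DOUBLE_PRECISION, MPI_MAXLOC, &
                     MPI_COMM_WORLD, ierr)
  ! out(1) = global maximum, out(2) = rank that owns it

  CALL MPI_FINALIZE(ierr)
END PROGRAM maxloc_example
```

Note the tie-breaking: when two ranks hold the same value, MPI_MAXLOC returns the smaller of the two ranks, which answers the "what if two have the same value?" question raised later in the thread.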
On Fri, 2018-08-10 at 11:03 -0400, Ray Sheppard wrote:
> As a dumb scientist, I would just bcast the value I get back to the
> group and ask whoever owns it to kindly reply back with its rank.
> Ray
>
Depends how many times one needs to run this--your solution involves
quite a bit of extra …
Dear all,
I have probably understood.
The trick is to use a real vector and to memorize also the rank.
Have I understood correctly?
thanks
Diego
On 10 August 2018 at 17:19, Diego Avesani wrote:
> Dear all,
> I do not understand how MPI_MINLOC works. It seems to locate the maximum in a
> vector and not the CPU to which the value belongs. …
Dear all,
I do not understand how MPI_MINLOC works. It seems to locate the maximum in a
vector and not the CPU to which the value belongs.
@Ray: and what if two have the same value?
thanks
Diego
On 10 August 2018 at 17:03, Ray Sheppard wrote:
> As a dumb scientist, I would just bcast the value I get back to the group …
As a dumb scientist, I would just bcast the value I get back to the
group and ask whoever owns it to kindly reply back with its rank.
Ray
On 8/10/2018 10:49 AM, Reuti wrote:
Hi,
Am 10.08.2018 um 16:39 schrieb Diego Avesani :
Dear all,
I have a problem:
In my parallel program each CPU computes a value, let's say eff. …
Hi,
> Am 10.08.2018 um 16:39 schrieb Diego Avesani :
>
> Dear all,
>
> I have a problem:
> In my parallel program each CPU computes a value, let's say eff.
>
> First of all, I would like to know the maximum value. This for me is quite
> simple,
> I apply the following:
>
> CALL MPI_ALLREDUCE(eff, effmaxWorld, 1, MPI_DOUBLE_PRECISION, MPI_MAX, …
Dear all,
I have a problem:
In my parallel program each CPU computes a value, let's say eff.
First of all, I would like to know the maximum value. This for me is quite
simple,
I apply the following:
CALL MPI_ALLREDUCE(eff, effmaxWorld, 1, MPI_DOUBLE_PRECISION, MPI_MAX,
MPI_MASTER_COMM, MPIworld%i…