MPI_Iallreduce completes, and hence **before** MPI_Wait() completes,
so the behavior of such a program is undefined.
Cheers,
Gilles
On 11/14/2019 9:42 AM, Camille Coti via users wrote:
Dear all,
I have a little piece of code shown below that initializes a
multidimensional Fortran array and performs:
- a non-blocking MPI_Iallreduce immediately followed by an MPI_Wait
- a blocking MPI_Allreduce
After both calls, it displays a few elements of the input and output
buffers.
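For reference, a minimal sketch of the pattern described above, assuming the
mpi_f08 bindings and with made-up variable names (this is not the original
program), could look like:

  program allreduce_check
    use mpi_f08
    implicit none
    integer :: rank
    integer :: a(2,3), r_nb(2,3), r_b(2,3)   ! input buffer and the two result buffers
    type(MPI_Request) :: req

    call MPI_Init()
    call MPI_Comm_rank(MPI_COMM_WORLD, rank)

    a    = rank + 1
    r_nb = 0
    r_b  = 0

    ! Start the non-blocking reduction.  Until the matching MPI_Wait
    ! returns, neither a nor r_nb may be read or modified.
    call MPI_Iallreduce(a, r_nb, size(a), MPI_INTEGER, MPI_SUM, &
                        MPI_COMM_WORLD, req)
    call MPI_Wait(req, MPI_STATUS_IGNORE)

    ! Blocking reduction on the same input, for comparison.
    call MPI_Allreduce(a, r_b, size(a), MPI_INTEGER, MPI_SUM, MPI_COMM_WORLD)

    ! Only now is it legal to look at the buffers.
    if (rank == 0) print *, a(1,1), r_nb(1,1), r_b(1,1)

    call MPI_Finalize()
  end program allreduce_check

The constraint Gilles describes above is the comment between the two calls:
neither buffer may be touched between MPI_Iallreduce and the matching MPI_Wait.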
this is the problem, but it's worth
checking). Grab the latest beta:
http://www.open-mpi.org/software/plpa/v1.2/
It's a very small package and easy to install under your $HOME (or
whatever).
Can you send the output of "plpa-info --topo"?
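Assuming the PLPA tarball builds with the usual configure / make steps (the
prefix and paths below are only placeholders, not taken from this thread),
the whole check would be roughly:

  ./configure --prefix=$HOME/plpa
  make
  make install
  $HOME/plpa/bin/plpa-info --topo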
On Aug 22, 2008, at 7:00 AM,
sed our tests to pass while your system fails. I'm particularly
suspicious of the old kernel you are running and how our revised code
will handle it.
For now, I would suggest you work with revisions lower than r19391 -
could you please confirm that r19390 or earlier works?
Thanks
Ralph
there.
We'll report back later with an estimate of how quickly this can be fixed.
Thanks
Ralph
On Aug 22, 2008, at 7:03 AM, Camille Coti wrote:
Ralph,
I compiled a clean checkout from the trunk (r19392); the problem is
still the same.
Camille
Ralph Castain wrote:
Hi Camille
What OMPI version are you using? The
maffinity framework makes some calls into paffinity that need to
adjust.
So the version number would help a great deal in this case.
Thanks
Ralph
On Aug 22, 2008, at 5:23 AM, Camille Coti wrote:
Hello,
I am trying to run applications on a shared-memory machine. For the
moment I am just trying to run tests on point-to-point communications (a
trivial token ring) and collective operations (from the SkaMPI test
suite).
It runs smoothly if mpi_paffinity_alone is set to 0. For a number