…async progress support but others do not.
-Nathan
On Aug 06, 2018, at 11:29 AM, "Palmer, Bruce J" wrote:
Hi,
Is there anything that can be done to boost asynchronous progress for MPI RMA
operations in OpenMPI 3.1? I'm trying to use the MPI RMA runtime in Global
Arrays and it looks like the performance is pretty bad for some of our tests.
I've seen similar results in other MPI implementations (e.g.
Gilles,
I downloaded and built openmpi-2.0.0rc2 and used that for the test. I get a
crash on more than 1 processor for the lock/unlock protocol with the error
message
[node005:29916] *** An error occurred in MPI_Win_lock
[node005:29916] *** reported by process [3736862721,6]
[node005:29916] ***
I've been trying to recreate the semantics of the Global Array gather and
scatter operations using MPI RMA routines and I've run into some issues with
MPI Datatypes. I've been focusing on building MPI versions of the GA gather and
scatter calls, which I've been trying to implement using MPI data
PI on Cray, InfiniBand, and other systems.
As I think I've said before on some list, one of the best ways to understand
the mapping between ARMCI and MPI RMA is to look at ARMCI-MPI.
Jeff
On Wed, Jan 6, 2016 at 8:51 AM, Palmer, Bruce J wrote:
Hi,
I'm trying to compare the semantics of MPI RMA with those of ARMCI. I've
written a small test program that writes data to a remote processor and then
reads the data back to the original processor. In ARMCI, you should be able to
do this since operations to the same remote processor are completed in order.
I'm trying to get some code working using request-based RMA (MPI_Rput,
MPI_Rget, MPI_Raccumulate). My understanding of the MPI 3.0 standard is that
after calling MPI_Wait on the request handle, the local buffers should be safe
to use. On calling MPI_Win_flush_all on the window used for RMA operations