Bruce:

The ARMCI-MPI (mpi3rma branch) test suite is a good way to determine if an
MPI-3 implementation supports passive target RMA properly. I haven't tested
against OpenMPI recently but will add it to Travis CI before the year is
over.
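
One thing worth checking: as far as I can tell from the standard,
MPI_Win_flush_all is only valid inside a passive-target access epoch, e.g.
one opened with MPI_Win_lock_all, so calling it on a window with no epoch
open is erroneous, and MPI_ERR_RMA_SYNC is a legal way for an implementation
to report that. Below is a minimal sketch of the pattern I would expect to
work; the ring traffic (each rank doing an MPI_Rput to the next rank) is
made up purely to exercise the request-based path:

#include <mpi.h>
#include <string.h>

int main(int argc, char *argv[])
{
  int bytes = 4096;
  char src[4096];
  void *buf;
  MPI_Win win;
  MPI_Request req;
  int rank, nproc;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nproc);

  MPI_Alloc_mem(bytes, MPI_INFO_NULL, &buf);
  MPI_Win_create(buf, bytes, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);

  /* Open a passive-target access epoch on all ranks; flush_all is
     only defined inside such an epoch. */
  MPI_Win_lock_all(0, win);

  /* Request-based put to the next rank in a ring. */
  memset(src, rank, bytes);
  MPI_Rput(src, bytes, MPI_BYTE, (rank + 1) % nproc, 0, bytes, MPI_BYTE,
           win, &req);

  /* Local completion: src may be reused once this returns. */
  MPI_Wait(&req, MPI_STATUS_IGNORE);

  /* Remote completion of everything issued on the window so far. */
  MPI_Win_flush_all(win);

  MPI_Win_unlock_all(win);
  MPI_Win_free(&win);
  MPI_Free_mem(buf);
  MPI_Finalize();
  return 0;
}

If a self-contained program like this still aborts inside MPI_Win_flush_all,
that is worth reporting to the implementation developers.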

Best,

Jeff

On Monday, December 14, 2015, Palmer, Bruce J <bruce.pal...@pnnl.gov> wrote:

> I’m trying to get some code working using request-based RMA (MPI_Rput,
> MPI_Rget, MPI_Raccumulate). My understanding of the MPI 3.0 standard is
> that after calling MPI_Wait on the request handle, the local buffers are
> safe to reuse, and that after calling MPI_Win_flush_all on the window used
> for the RMA operations, all of the operations have completed at the remote
> process. Based on this, I would expect the following program to work:
>
> #include <mpi.h>
>
> int main(int argc, char *argv[])
> {
>   int bytes = 4096;
>   MPI_Win win;
>   void *buf;
>
>   MPI_Init(&argc, &argv);
>
>   MPI_Alloc_mem(bytes, MPI_INFO_NULL, &buf);
>   MPI_Win_create(buf, bytes, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);
>   MPI_Win_flush_all(win);
>
>   MPI_Win_free(&win);
>   MPI_Finalize();
>   return 0;
> }
>
> However, with openmpi-1.8.3 I’m seeing a crash:
>
> [node302:3689] *** An error occurred in MPI_Win_flush_all
> [node302:3689] *** reported by process [2431516673,0]
> [node302:3689] *** on win rdma window 3
> [node302:3689] *** MPI_ERR_RMA_SYNC: error executing rma sync
> [node302:3689] *** MPI_ERRORS_ARE_FATAL (processes in this win will now abort,
> [node302:3689] ***    and potentially your MPI job)
>
> I’m seeing similar failures with mvapich2-2.1 and mpich-3.2. Does anyone
> know if this is supposed to work? I’ve had pretty good luck using the
> original RMA calls (MPI_Put, MPI_Get, and MPI_Accumulate) with
> MPI_Win_lock/MPI_Win_unlock, but the request-based calls are mostly a
> complete failure.
>
> Bruce Palmer
>


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
