Jeff, Nathan,
Thank you for the positive feedback. I took the chance to look at the
current master and tried (based on my humble understanding of the
Open MPI internals) to remove the error check in ompi_osc_pt2pt_flush.
Upon testing with the example code I sent initially, I saw a segfault
that stem
Nathan and I discussed at the MPI Forum last week. I argued that your
usage is not erroneous, although certain pathological cases (likely
concocted) can lead to nasty behavior. He indicated that he would remove
the error check, but it may require further discussion/debate with others.
You can re
Ping :) I would really appreciate any input on my question below. I
crawled through the standard but cannot seem to find the wording that
prohibits thread-concurrent access and synchronization.
Using MPI_Rget works in our case, but MPI_Rput only guarantees local
completion, not remote completion.
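[For readers following the archive: the request-based pattern referred to
above looks roughly like the sketch below. Function names and buffer
parameters are illustrative, not from the original mails. For a get,
local completion of the request implies the data has arrived at the
origin, which is why MPI_Rget + MPI_Wait suffices, whereas for MPI_Rput
the same MPI_Wait only guarantees the origin buffer may be reused.]

```c
#include <mpi.h>

/* Illustrative sketch: request-based get inside a passive-target epoch.
 * Each thread completes its own request with MPI_Wait, so no window-wide
 * synchronization call (MPI_Win_flush) is needed. */
void blocking_get(double *buf, int count,
                  int target_rank, MPI_Aint target_disp, MPI_Win win)
{
    MPI_Request req;
    /* Assumes MPI_Win_lock_all(0, win) was called earlier. */
    MPI_Rget(buf, count, MPI_DOUBLE,
             target_rank, target_disp, count, MPI_DOUBLE, win, &req);
    /* Local completion of a get means the data is available in buf. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}
```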
This is fine if each thread interacts with a different window, no?
Jeff
On Sun, Feb 19, 2017 at 5:32 PM Nathan Hjelm wrote:
> You can not perform synchronization at the same time as communication on
> the same target. This means if one thread is in
> MPI_Put/MPI_Get/MPI_Accumulate (target) you can’t have another thread in
> MPI_Win_flush (target) or MPI_Win_flush_all().
Nathan,
Thanks for your clarification. Just so that I understand where my
misunderstanding of this matter comes from: can you please point me to
the place in the standard that prohibits thread-concurrent window
synchronization using MPI_Win_flush[_all]? I cannot seem to find such a
passage
You can not perform synchronization at the same time as communication on the
same target. This means if one thread is in MPI_Put/MPI_Get/MPI_Accumulate
(target) you can’t have another thread in MPI_Win_flush (target) or
MPI_Win_flush_all(). If your program is doing that, it is not a valid MPI
program.
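[For readers following the archive: under the restriction stated above, a
multi-threaded program would have to serialize communication and flush on
a given target itself. A minimal sketch, assuming a hypothetical
per-window mutex; names and parameters are illustrative:]

```c
#include <mpi.h>
#include <pthread.h>

/* Illustrative sketch: MPI_Put and MPI_Win_flush on the same target must
 * not run concurrently in different threads, so guard the pair with a
 * process-local mutex (one per window would also work). */
static pthread_mutex_t win_mtx = PTHREAD_MUTEX_INITIALIZER;

void put_and_flush_serialized(const int *buf, int count,
                              int target_rank, MPI_Aint target_disp,
                              MPI_Win win)
{
    pthread_mutex_lock(&win_mtx);   /* keep other threads out of put/flush */
    MPI_Put(buf, count, MPI_INT,
            target_rank, target_disp, count, MPI_INT, win);
    MPI_Win_flush(target_rank, win); /* complete at target before unlocking */
    pthread_mutex_unlock(&win_mtx);
}
```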
All,
We are trying to combine MPI_Put and MPI_Win_flush on locked (using
MPI_Win_lock_all) dynamic windows to mimic a blocking put. The
application is (potentially) multi-threaded and we are thus relying on
MPI_THREAD_MULTIPLE support to be available.
When I try to use this combination (MPI_
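[For readers following the archive: the combination described in this
original question looks roughly like the sketch below. The function name
and buffer parameters are placeholders; it assumes the dynamic window was
created with MPI_Win_create_dynamic, the target displacement was obtained
out of band, and MPI_Win_lock_all was called once up front.]

```c
#include <mpi.h>

/* Illustrative sketch of a "blocking put": a passive-target epoch opened
 * once with MPI_Win_lock_all, then per-transfer MPI_Put + MPI_Win_flush
 * to force completion at the target before returning. */
void blocking_put(const double *buf, int count,
                  int target_rank, MPI_Aint target_disp, MPI_Win win)
{
    /* Assumes MPI_Win_lock_all(0, win) was called earlier by this process. */
    MPI_Put(buf, count, MPI_DOUBLE,
            target_rank, target_disp, count, MPI_DOUBLE, win);
    /* Block until the put is complete at both origin and target. */
    MPI_Win_flush(target_rank, win);
}
```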