Just to follow up on Jeff's comments:
I'm a member of the MPI-3 RMA committee and we are working on improving
the current state of the RMA spec. Right now it's not possible to ask
for local completion of specific RMA operations. Part of the current
RMA proposal is an extension that would allow this.
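To give a rough idea of the direction this is going (sketch only; request-based
operations such as MPI_Rput are part of the MPI-3 proposal, not MPI-2.2, and the
window, displacement and types below are just placeholders):

    #include <mpi.h>

    /* Sketch: requires an MPI-3 library and a passive-target epoch. */
    void put_with_local_completion(const double *buf, int count, int target,
                                   MPI_Aint disp, MPI_Win win)
    {
        MPI_Request req;

        MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
        MPI_Rput(buf, count, MPI_DOUBLE, target, disp, count, MPI_DOUBLE,
                 win, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE); /* local completion only:
                                              buf may now be reused        */
        MPI_Win_unlock(target, win);       /* completion at the target too */
    }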
I personally find the entire MPI one-sided chapter to be incredibly confusing
and subject to arbitrary interpretation. I have consistently advised people
not to use it since the late '90s.
That being said, the MPI one-sided chapter is being overhauled in the MPI-3
forum; the standardization process is still in progress.
But that is what surprises me. Indeed the scenario I described can be
implemented using two-sided communication, but it seems not to be possible
when using one-sided communication.
Additionally, the MPI 2.2 standard describes on page 356 the matching rules
for post and start, and for complete and wait.
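To make sure we are talking about the same pattern, this is roughly how I read
that pairing (sketch only; counts, ranks and displacements are invented):

    #include <mpi.h>

    /* origin side: access epoch */
    void origin(MPI_Group targets, double *buf, MPI_Win win)
    {
        MPI_Win_start(targets, 0, win);   /* matches MPI_Win_post on target */
        MPI_Put(buf, 100, MPI_DOUBLE, /*target rank*/ 1, 0,
                100, MPI_DOUBLE, win);
        MPI_Win_complete(win);            /* completes ALL puts of the epoch
                                             at the origin, not one of them */
    }

    /* target side: exposure epoch */
    void target(MPI_Group origins, MPI_Win win)
    {
        MPI_Win_post(origins, 0, win);    /* matches MPI_Win_start  */
        MPI_Win_wait(win);                /* matches MPI_Win_complete */
    }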
Hi Toon,
Can you use non-blocking send/recv? It sounds like this will give you
the completion semantics you want.
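Something along these lines (untested sketch; the task size, tag and rank
layout are just placeholders):

    #include <mpi.h>

    #define TASK_LEN 1024

    void master(double task_buf[2][TASK_LEN])
    {
        MPI_Request req[2] = { MPI_REQUEST_NULL, MPI_REQUEST_NULL };

        for (int t = 0; t < 100; ++t) {
            int slave = 1 + (t % 2);   /* ranks 1 and 2 act as slaves */
            int slot  = t % 2;         /* one send buffer per slave   */

            /* Blocks until the previous send from this slot completed
             * locally, i.e. the buffer may be refilled. */
            MPI_Wait(&req[slot], MPI_STATUS_IGNORE);

            /* ... fill task_buf[slot] with the next task ... */

            MPI_Isend(task_buf[slot], TASK_LEN, MPI_DOUBLE, slave,
                      /*tag*/ 0, MPI_COMM_WORLD, &req[slot]);
        }
        MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
    }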
Best,
~Jim.
On 2/24/11 6:07 AM, Toon Knapen wrote:
In that case, I have a small question concerning design:
Suppose task-based parallelism where one node (master) distributes
work/tasks to 2 other nodes (slaves) by means of an MPI_Put. The master
allocates 2 buffers locally in which it will store all necessary data that
is needed by the slave to perform its task.
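Roughly what I have in mind on the master side (window creation, task size and
displacements are omitted or invented here):

    #include <mpi.h>

    #define TASK_LEN 1024

    void master(double task_buf[2][TASK_LEN], MPI_Group slaves, MPI_Win win)
    {
        for (int t = 0; t < 100; t += 2) {
            /* ... fill task_buf[0] for slave 1, task_buf[1] for slave 2 ... */

            MPI_Win_start(slaves, 0, win);
            MPI_Put(task_buf[0], TASK_LEN, MPI_DOUBLE, /*rank*/ 1, 0,
                    TASK_LEN, MPI_DOUBLE, win);
            MPI_Put(task_buf[1], TASK_LEN, MPI_DOUBLE, /*rank*/ 2, 0,
                    TASK_LEN, MPI_DOUBLE, win);
            MPI_Win_complete(win);  /* only tells me that *both* puts are
                                       locally done; there is no handle for
                                       an individual put/buffer            */
        }
    }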
On Feb 18, 2011, at 8:59 AM, Toon Knapen wrote:
(Probably this issue has been discussed at length before, but unfortunately I
did not find any threads (on this site or anywhere else) on this topic; if
you are able to provide me with links to earlier discussions, please do not
hesitate.)
Is there an alternative to MPI_Win_complete that signals the local completion
of a specific MPI_Put, so that the corresponding buffer can be reused?