I totally agree with Dave here. Moreover, following the logic Jeff laid out there would be no correct solution at all: if one chooses to first wait on the receive requests instead, that can also deadlock, because the send requests might not be progressed.
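To make that concrete, here is a minimal compilable sketch (mine, not from the thread) of the non-blocking exchange under discussion: receives posted first, then sends, then a wait on the send requests before the receive requests. The two-rank setup, tag, and payload are made-up illustration values; the point is that, per the progress rule Dave cites below, the first MPI_Waitall must also progress the already-posted receives, so neither wait order can deadlock.

----✂----
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, peer;
    int sendbuf = 42, recvbuf = -1;      /* arbitrary payload */
    MPI_Request send_req, recv_req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;                     /* assumes exactly two ranks */

    /* Post the receive before the send to minimize unexpected messages. */
    MPI_Irecv(&recvbuf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &recv_req);
    MPI_Isend(&sendbuf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &send_req);

    /* Waiting on the send first is still safe: while blocked here, the
       implementation must also progress the pending receive. */
    MPI_Waitall(1, &send_req, MPI_STATUSES_IGNORE);
    MPI_Waitall(1, &recv_req, MPI_STATUSES_IGNORE);

    printf("rank %d got %d from rank %d\n", rank, recvbuf, peer);
    MPI_Finalize();
    return 0;
}
----✂----

Built with mpicc and run on two ranks (e.g. mpirun -np 2 ./a.out), it completes with either ordering of the two MPI_Waitall calls.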
As a side note, posting the receive requests first minimizes the potential for unexpected messages.

George.

On Fri, Jan 9, 2015 at 12:31 PM, Dave Goodell (dgoodell) <dgood...@cisco.com> wrote:

> On Jan 9, 2015, at 7:46 AM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:
>
> > Yes, I know examples 3.8/3.9 are blocking examples.
> >
> > But it's morally the same as:
> >
> > MPI_WAITALL(send_requests...)
> > MPI_WAITALL(recv_requests...)
> >
> > Strictly speaking, that can deadlock, too.
> >
> > In reality, it has far less chance of deadlocking than examples 3.8 and 3.9 (because you're likely within the general progression engine, and the implementation will progress both the send and receive requests while in the first WAITALL).
> >
> > But still, it would be valid for an implementation to *only* progress the send requests -- and NOT the receive requests -- while in the first WAITALL. Which makes it functionally equivalent to examples 3.8/3.9.
>
> That's not true. The implementation is required to make progress on all outstanding requests (assuming they can be progressed). The following should not deadlock:
>
> ----✂----
> for (...) MPI_Isend(...)
> for (...) MPI_Irecv(...)
> MPI_Waitall(send_requests...)
> MPI_Waitall(recv_requests...)
> ----✂----
>
> -Dave
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post: http://www.open-mpi.org/community/lists/users/2015/01/26154.php
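For contrast, the blocking exchange that the examples 3.8/3.9 discussion refers to looks roughly like the sketch below (my reconstruction, not the standard's text): both ranks call MPI_Send before MPI_Recv, so completion depends on the implementation buffering the outgoing message, and a sufficiently large COUNT can deadlock. COUNT and the two-rank setup are arbitrary illustration values.

----✂----
#include <mpi.h>
#include <stdlib.h>

#define COUNT (1 << 24)   /* arbitrary "large" message: 16M ints */

int main(int argc, char **argv)
{
    int rank, peer;
    int *sendbuf = calloc(COUNT, sizeof *sendbuf);
    int *recvbuf = calloc(COUNT, sizeof *recvbuf);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;                      /* assumes exactly two ranks */

    /* Both ranks send first: correct only if the implementation buffers
       the message; otherwise both MPI_Send calls block and neither rank
       ever reaches its MPI_Recv. */
    MPI_Send(sendbuf, COUNT, MPI_INT, peer, 0, MPI_COMM_WORLD);
    MPI_Recv(recvbuf, COUNT, MPI_INT, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
----✂----

This is the unsafe pattern the non-blocking version above avoids: once the sends and receives are all posted as requests, no wait order on them can deadlock.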