Hi Eugene
You're right, it is a blocking send, so the buffer can be reused
after MPI_Send returns.
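
[For the record, this is the kind of blocking ring pass I have in mind;
a toy sketch of my own, not Ole's actual program, and it assumes at
least two ranks:]

#include <mpi.h>
#include <stdio.h>

#define N 5000   /* message size mentioned in the thread: 5000 doubles */

int main(int argc, char **argv)
{
    double buf[N];
    int rank, size, next, prev, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    next = (rank + 1) % size;
    prev = (rank + size - 1) % size;

    if (rank == 0) {
        for (i = 0; i < N; i++)
            buf[i] = (double)i;
        /* Blocking send: once it returns, buf is safe to overwrite,
           so the same buffer can be reused for the receive below. */
        MPI_Send(buf, N, MPI_DOUBLE, next, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, N, MPI_DOUBLE, prev, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    } else {
        /* Everyone else: receive from the left neighbor, then
           forward the same buffer to the right neighbor. */
        MPI_Recv(buf, N, MPI_DOUBLE, prev, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(buf, N, MPI_DOUBLE, next, 0, MPI_COMM_WORLD);
    }

    printf("rank %d passed the buffer along\n", rank);

    MPI_Finalize();
    return 0;
}

[Something like "mpicc ring.c -o ring && mpirun -np 4 ./ring" should
just forward the buffer around once.]
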
My bad, I only read your answer to Sébastien and Ole
after I posted mine.
Could MPI run out of [internal] buffers to hold the messages, perhaps?
The messages aren't that big anyway [5000 doubles].
Could MPI behave differently regarding internal
buffering when communication is intra-node vs. across the network?
[It works intra-node, according to Ole's posting.]
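
[If it were an eager-limit issue, one way to check would be something
like the commands below; I am assuming the parameter names haven't
changed across Open MPI versions:

ompi_info --param btl sm  | grep eager_limit
ompi_info --param btl tcp | grep eager_limit

5000 doubles is 40000 bytes, so whether the message fits under the
eager limit may well depend on the transport.]
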
I suppose Ole rebuilt OpenMPI on his newly installed Ubuntu.
Gus Correa
Eugene Loh wrote:
I'm missing the point on the buffer re-use. It seems to me that the
sample program passes some buffer around in a ring. Each process
receives the buffer with a blocking receive and then forwards it with a
blocking send. The blocking send does not return until the send buffer
is safe to reuse.
On 9/19/2011 7:37 AM, Gus Correa wrote:
You could try the examples/connectivity.c program in the
OpenMPI source tree, to test if everything is alright.
It also hints at how to solve the buffer re-use issue
that Sébastien [rightfully] pointed out [i.e., declare separate
buffers for MPI_Send and MPI_Recv].
Sébastien Boisvert wrote:
Is it safe to re-use the same buffer (variable A) for MPI_Send and
MPI_Recv, given that MPI_Send may be eager depending on
the MCA parameters?