Hi Ole, Eugene

For what it is worth, I tried Ole's program here,
as Devendra Rai had done before.
I ran it across two nodes, with a total of 16 processes.
I tried MCA parameters for openib (InfiniBand),
then for tcp on Gigabit Ethernet.
Both work.
I am using Open MPI 1.4.3 compiled with GCC 4.1.2 on CentOS 5.2.
Thanks.

Gus Correa

Gus Correa wrote:
Hi Eugene

You're right, it is a blocking send, so the buffer can be reused
after MPI_Send returns.
My bad, I only read your answer to Sebastien and Ole
after I posted mine.

Could MPI run out of [internal] buffers to hold the messages, perhaps?
The messages aren't that big anyway [5000 doubles, about 40 KB].
Could MPI behave differently regarding internal
buffering when communication is intra-node vs. across the network?
[It works intra-node, according to Ole's posting.]
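
To illustrate what I mean by running out of internal buffers, here is a
sketch of my own [not Ole's code] where both ranks send before they
receive. Below the eager limit of the BTL in use, the library can buffer
the message internally and MPI_Send returns right away; above it, each
send waits for the matching receive and the run hangs. Since the sm,
openib, and tcp BTLs have different eager limits, the same code can
behave differently intra-node and across the network.

/* Hypothetical unsafe exchange, not Ole's program: both ranks call
 * MPI_Send before MPI_Recv.  Whether the sends return before the
 * receives are posted depends on the BTL's eager limit
 * (btl_sm_eager_limit, btl_openib_eager_limit, btl_tcp_eager_limit). */
#include <mpi.h>
#include <stdio.h>

#define N 5000   /* 5000 doubles, the size mentioned in the thread */

int main(int argc, char **argv)
{
    int rank, size, other, i;
    double sendbuf[N], recvbuf[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {                      /* the sketch assumes 2 ranks */
        if (rank == 0) fprintf(stderr, "run with exactly 2 processes\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    other = 1 - rank;
    for (i = 0; i < N; i++) sendbuf[i] = rank;

    /* Below the eager limit these sends may complete from internal
     * buffering; above it, both ranks block here and never reach the
     * receives. */
    MPI_Send(sendbuf, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD);
    MPI_Recv(recvbuf, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);

    printf("rank %d done\n", rank);
    MPI_Finalize();
    return 0;
}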

I suppose Ole rebuilt Open MPI on his newly installed Ubuntu.

Gus Correa


Eugene Loh wrote:
I'm missing the point on the buffer re-use. It seems to me that the sample program passes some buffer around in a ring. Each process receives the buffer with a blocking receive and then forwards it with a blocking send. The blocking send does not return until the send buffer is safe to reuse.
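
Concretely, the ring I have in mind looks roughly like this (a sketch
with placeholder names and the 5000-double count from the thread, not
Ole's actual code): rank 0 injects the buffer, everyone else receives it
and forwards it, and each MPI_Send returns only once its buffer is safe
to reuse.

#include <mpi.h>
#include <stdio.h>

#define N 5000   /* 5000 doubles, as in the thread */

int main(int argc, char **argv)
{
    int rank, size, next, prev, i;
    double buf[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    next = (rank + 1) % size;
    prev = (rank + size - 1) % size;

    if (rank == 0) {
        for (i = 0; i < N; i++) buf[i] = (double)i;
        /* MPI_Send returns only when buf is safe to reuse ... */
        MPI_Send(buf, N, MPI_DOUBLE, next, 0, MPI_COMM_WORLD);
        /* ... so receiving into the same buffer afterwards is fine. */
        MPI_Recv(buf, N, MPI_DOUBLE, prev, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    } else {
        MPI_Recv(buf, N, MPI_DOUBLE, prev, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(buf, N, MPI_DOUBLE, next, 0, MPI_COMM_WORLD);
    }

    printf("rank %d passed the buffer along the ring\n", rank);
    MPI_Finalize();
    return 0;
}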

On 9/19/2011 7:37 AM, Gus Correa wrote:
You could try the examples/connectivity.c program in the
Open MPI source tree, to test whether everything is all right.
It also hints at how to solve the buffer re-use issue
that Sebastien [rightfully] pointed out [i.e., declare separate
buffers for MPI_Send and MPI_Recv].
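
For instance, one way to keep the buffers separate [a sketch of my own,
not a copy of examples/connectivity.c] is to use distinct send and
receive arrays and let MPI_Sendrecv pair up the two sides:

#include <mpi.h>
#include <stdio.h>

#define N 5000

int main(int argc, char **argv)
{
    int rank, size, next, prev, i;
    double sendbuf[N], recvbuf[N];   /* distinct buffers for send and receive */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    next = (rank + 1) % size;
    prev = (rank + size - 1) % size;

    for (i = 0; i < N; i++) sendbuf[i] = rank + i;

    /* MPI_Sendrecv takes separate send and receive buffers and lets
     * the library schedule both sides, so the exchange completes
     * regardless of the eager limit. */
    MPI_Sendrecv(sendbuf, N, MPI_DOUBLE, next, 0,
                 recvbuf, N, MPI_DOUBLE, prev, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d got data from rank %d\n", rank, prev);
    MPI_Finalize();
    return 0;
}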

Sébastien Boisvert wrote:
Is it safe to re-use the same buffer (variable A) for MPI_Send and
MPI_Recv, given that MPI_Send may be eager depending on the MCA parameters?
