This problem is not related to Open MPI; it is related to the way you
use MPI. In fact, there are two problems:
1. Buffered sends copy the data into the attached buffer. In your
case, I think this only adds one more memcpy operation to the
critical path, which might partially explain the dramatic slowdown
(but I don't think it is the main reason). Buffering MPI_PACKED
data seems like a suboptimal approach: you want to keep the critical
path as short as possible and avoid any extra, useless memcpy. Using
a double-buffering technique (which will effectively double the
amount of memory required for your communications) can give you some
benefit; see the first sketch after this list.
2. Once the data is buffered, MPI_Bsend (and MPI_Ibsend) return to
the user application without progressing the communication. With a
few exceptions (depending on the available network, and neither TCP
nor shared memory is one of them), the point-to-point communication
will only be progressed on the next MPI call; the second sketch below
illustrates this. If you look in the MPI standard at what it means,
exactly, to return from a blocking send, you will see that the only
requirement is that the user can safely reuse the send buffer. From
this perspective, the major difference between an MPI_Send and an
MPI_Bsend operation is that MPI_Send returns once the data is
delivered to the NIC (which can then complete the communication
asynchronously), while at the end of MPI_Bsend the data is still in
the application's memory. The only way to get any benefit from
MPI_Bsend is to have a progress thread that takes care of the pending
communications in the background. Such a thread is not enabled by
default in Open MPI.
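To make point 1 concrete, here is a minimal Fortran 90 sketch of the
double-buffering idea (the buffer size, payload, tag, and destination
are placeholders, not taken from your code). Packing directly into
the send buffer and posting an MPI_ISEND avoids the extra copy that
MPI_BSEND makes into the attached buffer:

subroutine send_packed_double_buffered(dest, comm)
  implicit none
  include 'mpif.h'
  integer, intent(in) :: dest, comm
  integer, parameter :: BUFSIZE = 65536            ! placeholder capacity
  character, save :: bufs(BUFSIZE, 2)              ! two alternating buffers
  integer, save :: reqs(2) = (/ MPI_REQUEST_NULL, MPI_REQUEST_NULL /)
  integer, save :: cur = 1
  integer :: position, ierr, status(MPI_STATUS_SIZE)
  integer :: payload(4)                            ! example payload

  ! Wait until the buffer we are about to reuse has drained.
  ! (Waiting on MPI_REQUEST_NULL returns immediately, so the first
  ! call through is not delayed.)
  call MPI_WAIT(reqs(cur), status, ierr)

  ! Pack straight into the send buffer: no extra memcpy on the
  ! critical path.
  position = 0
  payload = 42
  call MPI_PACK(payload, 4, MPI_INTEGER, bufs(1, cur), BUFSIZE, &
                position, comm, ierr)

  ! Nonblocking send; this buffer stays off-limits until the
  ! MPI_WAIT above succeeds on a later call.
  call MPI_ISEND(bufs(1, cur), position, MPI_PACKED, dest, 0, &
                 comm, reqs(cur), ierr)

  ! Alternate between the two buffers on successive calls.
  cur = 3 - cur
end subroutine send_packed_double_buffered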
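And a sketch for point 2, assuming two processes and placeholder
sizes: after MPI_BSEND returns on rank 0, the message may still sit
entirely in the attached buffer, and over TCP or shared memory it
only moves forward when rank 0 re-enters the MPI library (here via
MPI_IPROBE, but any MPI call will do):

program bsend_progress
  implicit none
  include 'mpif.h'
  integer, parameter :: NDATA = 1024
  integer :: ierr, rank, bufsize, step
  integer :: status(MPI_STATUS_SIZE)
  integer :: payload(NDATA)
  logical :: flag
  character, allocatable :: attach_buf(:)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  ! The attached buffer must hold the packed message plus the
  ! per-message envelope overhead, MPI_BSEND_OVERHEAD.
  call MPI_PACK_SIZE(NDATA, MPI_INTEGER, MPI_COMM_WORLD, bufsize, ierr)
  bufsize = bufsize + MPI_BSEND_OVERHEAD
  allocate(attach_buf(bufsize))
  call MPI_BUFFER_ATTACH(attach_buf, bufsize, ierr)

  if (rank == 0) then
     payload = 1
     call MPI_BSEND(payload, NDATA, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, ierr)
     ! MPI_BSEND has returned, but the data may only have been copied
     ! into attach_buf. Without a progress thread the send advances
     ! only when we call back into the library:
     do step = 1, 100
        ! ... compute ...
        call MPI_IPROBE(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &
                        flag, status, ierr)
     end do
  else if (rank == 1) then
     call MPI_RECV(payload, NDATA, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, &
                   status, ierr)
  end if

  call MPI_BUFFER_DETACH(attach_buf, bufsize, ierr)
  call MPI_FINALIZE(ierr)
end program bsend_progress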
Thanks,
george.
On Mar 22, 2007, at 5:18 PM, Michael wrote:
Is there a known issue with buffered sends in Open MPI 1.1.4?
I changed a single send, which is called thousands of times, from
MPI_SEND (& MPI_ISEND) to MPI_BSEND (& MPI_IBSEND), and my Fortran 90
code slowed down by a factor of 10.
I've looked at several references and I can't see where I'm making a
mistake. The MPI_SEND is for MPI_PACKED data, so its first
parameter is an allocated character array. I also allocated a
character array for the buffer passed to MPI_BUFFER_ATTACH.
One reference gives a model implementation that uses MPI_PACKED
inside MPI_BSEND, and I was wondering if this could be the problem,
i.e., packing packed data?
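Roughly, the pattern is this (an illustrative sketch with made-up
names and sizes, not my actual code):

program pack_then_bsend
  implicit none
  include 'mpif.h'
  integer, parameter :: NVALS = 100
  integer :: ierr, rank, position, packsize, attsize
  integer :: values(NVALS)
  integer :: status(MPI_STATUS_SIZE)
  character, allocatable :: packbuf(:), attach_buf(:)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  ! Size the pack buffer, and the attached buffer with the extra
  ! MPI_BSEND_OVERHEAD bytes the standard requires.
  call MPI_PACK_SIZE(NVALS, MPI_INTEGER, MPI_COMM_WORLD, packsize, ierr)
  allocate(packbuf(packsize))
  attsize = packsize + MPI_BSEND_OVERHEAD
  allocate(attach_buf(attsize))
  call MPI_BUFFER_ATTACH(attach_buf, attsize, ierr)

  if (rank == 0) then
     values = 7
     ! First copy: values -> packbuf.
     position = 0
     call MPI_PACK(values, NVALS, MPI_INTEGER, packbuf, packsize, &
                   position, MPI_COMM_WORLD, ierr)
     ! MPI_BSEND then copies packbuf into attach_buf before returning.
     call MPI_BSEND(packbuf, position, MPI_PACKED, 1, 0, &
                    MPI_COMM_WORLD, ierr)
  else if (rank == 1) then
     call MPI_RECV(values, NVALS, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, &
                   status, ierr)
  end if

  call MPI_BUFFER_DETACH(attach_buf, attsize, ierr)
  call MPI_FINALIZE(ierr)
end program pack_then_bsend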
Michael
P.S. I have to use Open MPI 1.1.4 to maintain compatibility with a
major HPC center.