A few points in addition to what has already been said:

1. You can always post a receive of size N and still match an incoming message of any size <= N. You can use this fact to pre-post a receive of size N, where N is large enough for the header plus a medium-sized message. If the message is short, it fits entirely within N and you're done. If the message size is greater than N, you can still send the first "eager" part of the message along with the header, and then send the remaining (size - N) bytes in a second message after that. So short/medium messages still complete in 1 network message, while long messages effectively require 2. And for long messages, the transfer itself already takes a long time, so the overhead of a 2nd MPI message is negligible compared to the overall time.

Note that this is pretty much what most MPI implementations do under the covers, anyway (including Open MPI).

2. There is a dormant-but-will-be-resurrected proposal in front of the MPI-3 Forum right now to do exactly what you want: "MPI receive and allocate a buffer big enough for however big the incoming message is." But even if that proposal passes, it'll likely be a while before it shows up in MPI implementations. :-\


On Jun 4, 2009, at 10:27 AM, Neil Ludban wrote:

> Date: Thu, 4 Jun 2009 11:14:16 +1000
> From: Lars Andersson <lars...@gmail.com>
> Subject: [OMPI users] Receiving MPI messages of unknown size
> To: us...@open-mpi.org
>
> When using blocking message passing, I have simply solved the problem
> by first sending a small, fixed-size header containing the size of the
> rest of the data, which is sent in the following MPI message. When using
> non-blocking message passing, this doesn't seem to be such a good
> idea, since we can't post the main data transfer until we have received
> the message header... It seems to take away most of the advantages of
> non-blocking I/O in the first place.

If enough of your messages are small enough, a medium-sized message with
a fixed header and variable data eliminates most of the second-message
overhead.
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



--
Jeff Squyres
Cisco Systems
