The patch is in 1.2.6 and beyond.
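(Quick sanity check: ompi_info prints the installed Open MPI version
near the top of its output, so something like

   shell$ ompi_info | grep "Open MPI:"

will tell you whether you already have 1.2.6 or later.)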
It's not really a serialization issue -- it's an "early completion"
optimization, meaning that as soon as the underlying network stack
indicates that the buffer has been copied, OMPI marks the request as
complete and returns. But the data may not actually have been pushed
out on the network wire yet (so to speak) -- it may still require
additional API-driven progress before the message actually departs for
the peer. While it may sound counterintuitive, this is actually an
acceptable compromise/optimization for MPI applications that dip into
the MPI layer frequently -- they'll naturally progress anything that
has been queued up but not fully sent yet. Disabling early completion
means that OMPI won't mark the request as complete until the message
requires no further progression from OMPI to be transmitted to the
peer (e.g., the network hardware can completely take over the
progression).
Hence, in your case, it looks serialized because you put in a big
sleep(). If you called other MPI functions instead of sleep, it
wouldn't appear as serialized.
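For example -- just a sketch, not something from your program -- if
the 5-second wait were a loop that periodically dips into the MPI
layer instead of a bare sleep(5), OMPI would get the chance to finish
pushing the broadcast out (MPI_Iprobe is used here only as a cheap,
non-blocking way to enter the library; usleep() needs <unistd.h>):

   int flag;
   MPI_Status status;
   double t0 = MPI_Wtime();
   while (MPI_Wtime() - t0 < 5.0) {
       /* any MPI call will do -- each one runs OMPI's progress engine */
       MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD,
                  &flag, &status);
       usleep(10000);   /* 10 ms */
   }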
Make sense? (yes, I know it's a fine line distinction ;-) )
OMPI v1.3 internally differentiates between "early completion" and
"out on the wire" so that it can automatically tell the difference
(i.e., we changed our message progression engine to recognize the
difference). This change was seen as too big to port back to the
v1.2 series, so the compromise was to put the "disable early
completion" flag in the v1.2 series.
On Sep 17, 2008, at 12:31 PM, Gregory D Abram wrote:
Wow. I am indeed on IB.
So a program that calls an MPI_Bcast, then does a bunch of setup
work that should be done in parallel before re-synchronizing, in
fact serializes the setup work? I see it's not quite that bad -- if I
run my little program on 5 nodes, I get 0 immediately, 1, 2, and 4
after 5 seconds, and 3 after 10 seconds, revealing, I guess, the tree
distribution.
Ticket 1224 isn't terribly clear - is this patch already in 1.2.6 or
1.2.7, or do I have to download source, patch and build?
Greg
Jeff Squyres <jsquy...@cisco.com>
Sent by: users-boun...@open-mpi.org
09/17/08 11:55 AM
Please respond to: Open MPI Users <us...@open-mpi.org>
To: Open MPI Users <us...@open-mpi.org>
Subject: Re: [OMPI users] Odd MPI_Bcast behavior
Are you using IB, perchance?
We have an "early completion" optimization in the 1.2 series that can
cause this kind of behavior. For apps that dip into the MPI layer
frequently, it doesn't matter. But for those that do not, it can
cause delays like this. See
http://www.open-mpi.org/faq/?category=openfabrics#v1.2-use-early-completion
for a few more details.
If you're not using IB, let us know.
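(If you're not sure, ompi_info will show whether the OpenFabrics
support was built in -- the openib BTL shows up in its component
list, e.g.:

   shell$ ompi_info | grep openib

If nothing is listed, you're not going through the openib BTL and the
FAQ entry above doesn't apply.)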
On Sep 17, 2008, at 10:34 AM, Gregory D Abram wrote:
> I have a little program which initializes, calls MPI_Bcast, prints a
> message, waits five seconds, and finalizes. I sure thought that each
> participating process would print the message immediately, then all
> would wait and exit -- that's what happens with mvapich 1.0.0. On
> OpenMPI 1.2.5, though, I get the message immediately from proc 0,
> then 5 seconds later, from proc 1, and then 5 seconds later, it
> exits -- as if MPI_Finalize on proc 0 flushed the MPI_Bcast. If I add
> an MPI_Barrier after the MPI_Bcast, it works as I'd expect. Is this
> behavior correct? If so, I have a bunch of code to change in order
> to work correctly on OpenMPI.
>
> Greg
>
> Here's the code:
>
> #include <stdlib.h>
> #include <stdio.h>
> #include <unistd.h>   /* gethostname(), sleep() */
> #include <mpi.h>
>
> int main(int argc, char *argv[])
> {
> char hostname[256]; int r, s;
> MPI_Init(&argc, &argv);
>
> gethostname(hostname, sizeof(hostname));
>
> MPI_Comm_rank(MPI_COMM_WORLD, &r);
> MPI_Comm_size(MPI_COMM_WORLD, &s);
>
> fprintf(stderr, "%d of %d: %s\n", r, s, hostname);
>
> int i = 99999;
> /* broadcast sizeof(int) bytes from rank 0 */
> MPI_Bcast(&i, sizeof(i), MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD);
> // MPI_Barrier(MPI_COMM_WORLD);
>
> fprintf(stderr, "%d: got it\n", r);
>
> sleep(5);
>
> MPI_Finalize();
> return 0;
> }
>
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
--
Jeff Squyres
Cisco Systems