Are you using IB, perchance?
We have an "early completion" optimization in the 1.2 series that can
cause this kind of behavior. For apps that dip into the MPI layer
frequently, it doesn't matter. But for those that do not dip into the
MPI layer frequently, it can cause delays like this. See http://www.open-mpi.org/faq/?category=openfabrics#v1.2-use-early-completion
for a few more details.
If you're not using IB, let us know.
On Sep 17, 2008, at 10:34 AM, Gregory D Abram wrote:
I have a little program which initializes, calls MPI_Bcast, prints a
message, waits five seconds, and finalizes. I sure thought that each
participating process would print the message immediately, then all
would wait and exit - that's what happens with mvapich 1.0.0. On
OpenMPI 1.2.5, though, I get the message immediately from proc 0,
then 5 seconds later from proc 1, and then 5 seconds later it
exits - as if MPI_Finalize on proc 0 flushed the MPI_Bcast. If I add
an MPI_Barrier after the MPI_Bcast, it works as I'd expect. Is this
behavior correct? If so, I have a bunch of code to change in order
to work correctly on OpenMPI.
Greg
Here's the code:
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>   /* for gethostname() and sleep() */
#include <mpi.h>

int main(int argc, char *argv[])
{
  char hostname[256];
  int r, s;

  MPI_Init(&argc, &argv);
  gethostname(hostname, sizeof(hostname));
  MPI_Comm_rank(MPI_COMM_WORLD, &r);
  MPI_Comm_size(MPI_COMM_WORLD, &s);
  fprintf(stderr, "%d of %d: %s\n", r, s, hostname);

  int i = 99999;
  /* broadcast the raw bytes of i from rank 0 to everyone */
  MPI_Bcast(&i, sizeof(i), MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD);
  // MPI_Barrier(MPI_COMM_WORLD);
  fprintf(stderr, "%d: got it\n", r);

  sleep(5);
  MPI_Finalize();
  return 0;
}
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
--
Jeff Squyres
Cisco Systems