It isn't terribly clear - is this patch already in 1.2.6 or
1.2.7, or do I have to download the source, patch, and build?
Greg
Jeff Squyres ---09/17/2008 12:03:21 PM---Are you using IB, perchance?
Jeff Squyres, sent by users-boun...@open-mpi.org, 09/17/08 11:55 AM
Please respond to Open MPI Users
Wow. I am indeed on IB.
So a program that calls an MPI_Bcast, then does a bunch of setup work that
should be done in parallel before re-synchronizing, in fact serializes the
setup work? I see it's not quite that bad - if I run my little program on 5
nodes, I get 0 immediately, then 1, 2 and 4 after 5 seconds.
I guess this must depend on what BTL you're using. If I run all
processes on the same node, I get the behavior you expect. So, are you
running the processes on the same node or on different nodes, and, if
different, over TCP or IB?
Gregory D Abram wrote:
I have a little program which initializes, [...]
Are you using IB, perchance?
We have an "early completion" optimization in the 1.2 series that can
cause this kind of behavior. For apps that dip into the MPI layer
frequently, it doesn't matter. But for those that do not dip into the
MPI layer frequently, it can cause delays like this.
I have a little program which initializes, calls MPI_Bcast, prints a
message, waits five seconds, and finalizes. I sure thought that each
participating process would print the message immediately, then all would
wait and exit - that's what happens with mvapich 1.0.0. On OpenMPI 1.2.5,
though, I