As usual, Dick is much more eloquent than me. :-)
He also correctly pointed out to me in an off-list mail that in my
first reply, I casually used the internal term "blocking progress" and
probably sowed some of the initial seeds of confusion in this thread
(because "blocking" has a specific meaning in MPI).
I hope the following helps, but maybe I'm just repeating myself and Dick.
Let's say you're stuck in an MPI_Recv, MPI_Bcast, or MPI_Barrier call
waiting on someone else. You want to free up the CPU for more
productive purposes. There are basically two cases:
1) If you want to free the CPU up ...
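For reference, Open MPI does have an MCA parameter, mpi_yield_when_idle, that puts the progress engine in "degraded" mode so a process stuck in a blocking call yields the processor between polls instead of spinning flat out. A typical invocation (the rank count and program name here are just placeholders) would be:

  mpirun --mca mpi_yield_when_idle 1 -np 16 ./my_app

Note that the process still polls; this only helps when something else actually wants the CPU, it does not make the wait truly idle.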
Vincent,
1) Assume you are running an MPI program which has 16 tasks in
MPI_COMM_WORLD, you have 16 dedicated CPUs, and each task is
single-threaded. (A task is a distinct process; a process can contain one
or more threads.) This is the most common traditional model. In this
model, when a task makes ...
This program is 100% correct from an MPI perspective. However, in Open
MPI (and, I think, most other MPI implementations), a collective
communication will drain most of the CPU resources, just like all
blocking functions do.
Now I will answer your original post. Using non-blocking
communication ...
OK, let's take the simple example here; I might have used the wrong terms,
and I apologize for that.
While the rank 0 process is sleeping, the other ones are in the bcast
waiting for data:
program test
use mpi
implicit none
integer :: mpi_wsize, mpi_rank, mpi_err
integer :: data
call mpi_init(mpi_err)
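The preview stops at mpi_init; a runnable completion consistent with the description above could look like this (the 10-second sleep and the value 42 are illustrative assumptions, and sleep() is a common compiler extension rather than standard Fortran):

program test
  use mpi
  implicit none
  integer :: mpi_wsize, mpi_rank, mpi_err
  integer :: data

  call mpi_init(mpi_err)
  call mpi_comm_size(MPI_COMM_WORLD, mpi_wsize, mpi_err)
  call mpi_comm_rank(MPI_COMM_WORLD, mpi_rank, mpi_err)

  data = 42                           ! placeholder payload to broadcast
  if (mpi_rank == 0) call sleep(10)   ! root delays while the others sit in the bcast

  call mpi_bcast(data, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, mpi_err)
  call mpi_finalize(mpi_err)
end program test

Running this with a few ranks and watching top shows the behaviour being discussed: the ranks waiting in the bcast each keep a CPU busy until rank 0 arrives.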
On Sep 3, 2008, at 6:11 PM, Vincent Rotival wrote:
Eugene,
No, what I'd like is that when doing something like
call mpi_bcast(data, 1, MPI_INTEGER, 0, ...)
the program continues AFTER the Bcast is completed (so no control is
returned to the user), but while the threads with rank > 0 are waiting in
the Bcast they are not taking CPU resources.
I hope it is ...
Vincent Rotival wrote:
The solution I retained was for the main thread to Isend the data
separately to each of the other threads, which use Irecv plus a loop on
MPI_Test to check for completion of the Irecv. It might be dirty, but it
works much better than using Bcast.
Thanks for the clarification.
But this st ...
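For concreteness, a minimal sketch of the Isend/Irecv + MPI_Test workaround Vincent describes above (the message tag, the one-second back-off, and the program name are illustrative assumptions; sleep() is a compiler extension, and a real code would likely use a much shorter delay):

program poll_instead_of_bcast
  use mpi
  implicit none
  integer :: mpi_wsize, mpi_rank, mpi_err, i, data, req
  integer, allocatable :: reqs(:)
  logical :: done

  call mpi_init(mpi_err)
  call mpi_comm_size(MPI_COMM_WORLD, mpi_wsize, mpi_err)
  call mpi_comm_rank(MPI_COMM_WORLD, mpi_rank, mpi_err)

  if (mpi_rank == 0) then
     data = 42                           ! placeholder payload
     allocate(reqs(mpi_wsize - 1))
     do i = 1, mpi_wsize - 1             ! root sends to every other rank individually
        call mpi_isend(data, 1, MPI_INTEGER, i, 0, MPI_COMM_WORLD, reqs(i), mpi_err)
     end do
     call mpi_waitall(mpi_wsize - 1, reqs, MPI_STATUSES_IGNORE, mpi_err)
  else
     call mpi_irecv(data, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, req, mpi_err)
     done = .false.
     do while (.not. done)               ! poll, but give the CPU back between polls
        call mpi_test(req, done, MPI_STATUS_IGNORE, mpi_err)
        if (.not. done) call sleep(1)    ! non-standard extension; back-off length is arbitrary
     end do
  end if

  call mpi_finalize(mpi_err)
end program poll_instead_of_bcast

The explicit back-off is what frees the CPU: the receiving ranks spend most of the wait asleep instead of spinning inside a blocking Bcast. The price is up to one poll interval of extra latency, so the delay should be tuned to the application.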
On Sep 2, 2008, at 7:25 PM, Vincent Rotival wrote:
I think I already read some comments on this issue, but I'd like to
know if the latest versions of Open MPI have managed to solve it. I am
now running 1.2.5.
If I run an MPI program with synchronization routines (e.g.
MPI_Barrier, MPI_Bcast, ...), ...