On Sep 4, 2008, at 4:35 PM, Eugene Loh wrote:
There are many alternatives to polling hard. One is to yield the
CPU if someone else is asking for it. Again, Open MPI has some
support for this today with the "mpi_yield_when_idle" variable.
Right? Might not be all of what someone wants, bu
Jeff Squyres wrote:
OMPI currently polls for message passing progress. While you're in
MPI_BCAST, it's quite possible/ likely that OMPI will poll hard until
the BCAST is done. It is possible that a future version of OMPI will
use a hybrid polling+non- polling approach for progress, such th
p; Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363
users-boun...@open-mpi.org wrote on 09/03/2008 01:11:00 PM:
> [image removed]
>
> Re: [OMPI users] CPU burning in Wait state
>
> Vincent Rotival
>
> t
I hope the following helps, but maybe I'm just repeating myself and Dick.
Let's say you're stuck in an MPI_Recv, MPI_Bcast, or MPI_Barrier call
waiting on someone else. You want to free up the CPU for more
productive purposes. There are basically two cases:
1) If you want to free the CPU u
-- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363
users-boun...@open-mpi.org wrote on 09/03/2008 01:11:00 PM:
> [image removed]
>
> Re: [OMPI users] CPU burning in Wait state
>
> Vincent Rotival
>
> to:
>
> Open MPI Users
>
> 09/03/2008
This program is 100% correct from MPI perspective. However, in Open
MPI (and I think most of the others MPI), a collective communication
is something that will drain most of the resources, similar to all
blocking functions.
Now I will answer to your original post. Using non blocking
commu
Ok let's take the simple example here, I might have use wrong terms and
I apologize for it
While the rank 0 process is sleeping the other ones are in bcast waiting
for data
program test
use mpi
implicit none
integer :: mpi_wsize, mpi_rank, mpi_err
integer :: data
call mpi_init(mpi_er
On Sep 3, 2008, at 6:11 PM, Vincent Rotival wrote:
Eugene,
No what I'd like is that when doing something like
call mpi_bcast(data, 1, MPI_INTEGER, 0, .)
the program continues AFTER the Bcast is completed (so no control
returned to user), but while threads with rank > 0 are waiting in
Eugene,
No what I'd like is that when doing something like
call mpi_bcast(data, 1, MPI_INTEGER, 0, .)
the program continues AFTER the Bcast is completed (so no control
returned to user), but while threads with rank > 0 are waiting in Bcast
they are not taking CPU resources
I hope it is
Vincent Rotival wrote:
The solution I retained was for the main thread to isend data
separately to each other threads that are using Irecv + loop on
mpi_test to test the finish of the Irecv. It mught be dirty but works
much better than using Bcast
Thanks for the clarification.
But this st
Eugene Loh wrote:
Jeff Squyres wrote:
On Sep 2, 2008, at 7:25 PM, Vincent Rotival wrote:
I think I already read some comments on this issue, but I'd like to
know of latest versions of OpenMPI have managed to solve it. I am
now running 1.2.5
If I run a MPI program with synchronization r
Jeff Squyres wrote:
On Sep 2, 2008, at 7:25 PM, Vincent Rotival wrote:
I think I already read some comments on this issue, but I'd like to
know of latest versions of OpenMPI have managed to solve it. I am
now running 1.2.5
If I run a MPI program with synchronization routines (e.g.
MPI_b
On Sep 2, 2008, at 7:25 PM, Vincent Rotival wrote:
I think I already read some comments on this issue, but I'd like to
know of latest versions of OpenMPI have managed to solve it. I am
now running 1.2.5
If I run a MPI program with synchronization routines (e.g.
MPI_barrier, MPI_bcast...),
Dear all
I think I already read some comments on this issue, but I'd like to know
of latest versions of OpenMPI have managed to solve it. I am now running
1.2.5
If I run a MPI program with synchronization routines (e.g. MPI_barrier,
MPI_bcast...), all threads waiting for data are still burni
14 matches
Mail list logo