Shaun Jackman wrote:
On Tue, 2009-03-24 at 07:03 -0800, Eugene Loh wrote:
I'm not sure I understand this suggestion, so I'll say it the way I understand it. Would it be possible for each process to send an "all done" message to each of its neighbors? Conversely, each process would poll its neighbors for messages, …
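As a concrete reading of that suggestion, here is a minimal, self-contained C sketch in which each rank sends an "all done" message to its neighbours and polls until it has heard from each of them. The ring topology (left/right neighbours) is an assumption for illustration only; a real application would use its actual edge list.

#include <mpi.h>
#define TAG_DONE 1

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Assumed ring topology: two neighbours per rank. */
    int left  = (rank + size - 1) % size;
    int right = (rank + 1) % size;
    int done = 1, heard = 0;
    MPI_Request req[2];

    /* Tell both neighbours we are done. */
    MPI_Isend(&done, 1, MPI_INT, left,  TAG_DONE, MPI_COMM_WORLD, &req[0]);
    MPI_Isend(&done, 1, MPI_INT, right, TAG_DONE, MPI_COMM_WORLD, &req[1]);

    /* Poll until we have heard "done" from both neighbours. */
    while (heard < 2) {
        int flag;
        MPI_Status st;
        MPI_Iprobe(MPI_ANY_SOURCE, TAG_DONE, MPI_COMM_WORLD, &flag, &st);
        if (flag) {
            int msg;
            MPI_Recv(&msg, 1, MPI_INT, st.MPI_SOURCE, TAG_DONE,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            heard++;
        }
    }
    MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
    MPI_Finalize();
    return 0;
}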
On Tue, 2009-03-24 at 07:03 -0800, Eugene Loh wrote:
> > Perhaps there is a better way of accomplishing the same thing; however, MPI_Barrier synchronises all processes, so it is potentially a lot more heavyweight than it needs to be. In this example you only need to synchronise with your neighbours …
Ashley Pittman wrote:
On 23 Mar 2009, at 23:36, Shaun Jackman wrote:
loop {
    MPI_Ibsend (for every edge of every leaf node)
    MPI_Barrier
    MPI_Iprobe/MPI_Recv (until no messages pending)
    MPI_Allreduce (number of nodes removed)
} until (no nodes removed by any node)
Previously, I attempted to use a single MPI_Allreduce w…
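For reference, a C rendering of that loop might look like the sketch below. The three application hooks (post_leaf_edge_sends, apply_removal, count_removed) are hypothetical placeholders for the graph code, not anything from the thread, and must be supplied by the application. Note also the caveat raised later in the thread: the barrier only guarantees every rank has posted its sends, not that the messages have arrived.

#include <mpi.h>

/* Hypothetical application hooks (placeholders, to be supplied):     */
int  post_leaf_edge_sends(MPI_Comm comm); /* MPI_Ibsend per leaf edge */
void apply_removal(int msg);              /* update the local subgraph */
int  count_removed(void);                 /* leaves removed this round */

void prune_leaves(MPI_Comm comm)
{
    int removed_total;
    do {
        post_leaf_edge_sends(comm);  /* MPI_Ibsend for every leaf edge */
        MPI_Barrier(comm);           /* all sends posted; NOT all arrived */

        /* Drain whatever has arrived so far. */
        int flag;
        MPI_Status st;
        MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &flag, &st);
        while (flag) {
            int msg;
            MPI_Recv(&msg, 1, MPI_INT, st.MPI_SOURCE, st.MPI_TAG,
                     comm, MPI_STATUS_IGNORE);
            apply_removal(msg);
            MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &flag, &st);
        }

        int removed_local = count_removed();
        MPI_Allreduce(&removed_local, &removed_total, 1, MPI_INT,
                      MPI_SUM, comm);
    } while (removed_total > 0);  /* stop when no rank removed a node */
}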
Hi Ralph,
Thanks for your response. My problem is removing all leaf nodes from a directed graph, which is distributed among a number of processes. Each process iterates over its portion of the graph, and if a node is a leaf (indegree(n) == 0 || outdegree(n) == 0), it removes the node (which i…
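A sketch of that local sweep, using a hypothetical node representation that carries just the two degree counts and a liveness flag:

/* Hypothetical node type; any representation with per-node
   indegree/outdegree counts would do. */
typedef struct { int indegree, outdegree, alive; } node;

int remove_local_leaves(node *nodes, int n)
{
    int removed = 0;
    for (int i = 0; i < n; i++) {
        if (nodes[i].alive &&
            (nodes[i].indegree == 0 || nodes[i].outdegree == 0)) {
            nodes[i].alive = 0;  /* mark the leaf removed */
            removed++;           /* later summed via MPI_Allreduce */
        }
    }
    return removed;
}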
I think perhaps you folks are all caught up a tad too much in the standard and not reading the intent of someone's question... :-)
I believe the original question was concerned with ensuring that all procs had completed MPI_Allreduce before his algorithm attempted other operations. As you f…
Unfortunately even the MPI_Barrier doesn't guarantee a synchronous exit on all processes. There is no such thing in MPI, and there is no way to implement such a synchronization primitive in general (if one takes into account metrics such as performance or scalability). In this particular co…
Just one point to emphasize - Eugene said it, but many times people don't fully grasp the implication.
On an MPI_Allreduce, the algorithm requires that all processes -enter- the call before anyone can exit.
It does -not- require that they all exit at the same time.
So if you want to synchr…
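A small runnable example of what that guarantee does and does not give you (the per-rank contribution is arbitrary, chosen just for illustration):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, local, global;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local = rank;  /* arbitrary per-rank contribution */
    MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    /* Safe to rely on here: every rank has -entered- the Allreduce,
       since the sum could not be computed otherwise.  Not safe to
       rely on: that any other rank has already -exited- the call. */
    printf("rank %d: sum = %d\n", rank, global);

    MPI_Finalize();
    return 0;
}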
Shaun Jackman wrote:
> I've just read in the Open MPI documentation [1]
That's the MPI spec, actually.
> that collective operations, such as MPI_Allreduce, may synchronize, but do not necessarily synchronize. My algorithm requires a collective operation and synchronization; is there a better (m…