Hi Ralph,
Thanks for your response. My problem is removing all leaf nodes from a
directed graph, which is distributed among a number of processes. Each
process iterates over its portion of the graph, and if a node is a
leaf (indegree(n) == 0 || outdegree(n) == 0), it removes the node
(which i
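As a point of reference, one local pruning pass might look like the sketch below. It is purely illustrative and not from Shaun's code; indegree[], outdegree[], alive[] and n_local are hypothetical names for whatever the real graph representation provides:

    /* Sketch of one local leaf-removal pass.  The arrays and n_local are
     * hypothetical; a real implementation must also tell the owners of
     * remote neighbours to update their degree counts. */
    int remove_local_leaves(int n_local, int *indegree, int *outdegree,
                            int *alive)
    {
        int removed = 0;
        for (int i = 0; i < n_local; ++i) {
            if (alive[i] && (indegree[i] == 0 || outdegree[i] == 0)) {
                alive[i] = 0;      /* drop the leaf locally             */
                removed = 1;       /* remember that the graph changed   */
                /* ...decrement the degrees of i's neighbours here...   */
            }
        }
        return removed;
    }

The hard part, and where the synchronization question comes from, is propagating the degree updates for neighbours owned by other ranks.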
I think perhaps you folks are all caught up a tad too much in the
standard and not reading the intent of someone's question... :-)
I believe the original question was concerned with ensuring that all
procs had completed MPI_Allreduce before his algorithm attempted other
operations. As you f
On 23 Mar 2009, at 21:11, Ralph Castain wrote:
Just one point to emphasize - Eugene said it, but many times people
don't fully grasp the implication.
On an MPI_Allreduce, the algorithm requires that all processes -enter-
the call before anyone can exit.
It does -not- require that they all exit at the same time.
There is no synchronization operation in MPI that promises all tasks will
exit at the same time. For MPI_Barrier they will exit as close to the same
time as the implementation can reasonably support, but as long as the
application is distributed and there are delays in the interconnect, it is
not p
Unfortunately even MPI_Barrier doesn't guarantee a synchronous
exit on all processes. There is no such thing in MPI, and there is
no way to implement such a synchronization primitive in general (if
one takes into account metrics such as performance or scalability).
In this particular co
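A tiny experiment makes this concrete. The sketch below is purely illustrative (not from the thread): every rank records MPI_Wtime() right after leaving a barrier and rank 0 reports the spread. Unless MPI_WTIME_IS_GLOBAL is set, the spread also includes clock offset between nodes, but it still shows the exits are not simultaneous:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t_exit = MPI_Wtime();   /* moment this rank left the barrier */

        double t_min, t_max;
        MPI_Reduce(&t_exit, &t_min, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);
        MPI_Reduce(&t_exit, &t_max, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("barrier exit spread: %g seconds\n", t_max - t_min);

        MPI_Finalize();
        return 0;
    }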
Thanks that works for me.
Rene
On Mon, 2009-03-23 at 19:40, Ralph Castain wrote:
> Ah - guess the VampirTrace guys missed those. In the interim, you can
> disable that part of the code by adding
>
> --enable-contrib-no-build=vt
>
> to your configure line
>
> Ralph
>
>
> On Mar 23, 2009, at 1:34 PM, Rene Salmon wrote:
Just one point to emphasize - Eugene said it, but many times people
don't fully grasp the implication.
On an MPI_Allreduce, the algorithm requires that all processes -enter-
the call before anyone can exit.
It does -not- require that they all exit at the same time.
So if you want to synchr
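In code, the guarantee and its limit look roughly like the following illustrative sketch (not from the original posts; the local value is just a stand-in):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int local = rank + 1;          /* stand-in for a locally computed value */
        int global = 0;
        MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        /* Safe: every rank has ENTERED the Allreduce and contributed its
         * value, otherwise 'global' could not have been formed here.      */
        printf("rank %d sees sum %d\n", rank, global);

        /* Not safe: assuming every other rank has already RETURNED from
         * the Allreduce, i.e. reached its own printf above.               */
        MPI_Finalize();
        return 0;
    }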
Shaun Jackman wrote:
I've just read in the Open MPI documentation [1]
That's the MPI spec, actually.
that collective operations, such as MPI_Allreduce, may synchronize,
but do not necessarily synchronize. My algorithm requires a collective
operation and synchronization; is there a better (more efficient?) method
than simply calling MPI_Allreduce
I've just read in the Open MPI documentation [1] that collective
operations, such as MPI_Allreduce, may synchronize, but do not
necessarily synchronize. My algorithm requires a collective operation
and synchronization; is there a better (more efficient?) method than
simply calling MPI_Allreduce
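For what the algorithm seems to need, one common pattern is to let the reduction result itself act as the synchronization point. Below is a minimal sketch under the assumption that a global "did anything change" flag is enough; do_one_local_pass() is a hypothetical placeholder:

    #include <mpi.h>

    int do_one_local_pass(void);   /* hypothetical: returns 1 if this rank
                                      removed anything this pass, else 0  */

    void prune_until_stable(void)
    {
        int any_changed = 1;
        while (any_changed) {
            int local_changed = do_one_local_pass();
            /* No rank can leave the Allreduce until every rank has entered
             * it and contributed local_changed, so the loop condition is
             * globally consistent without an extra MPI_Barrier.            */
            MPI_Allreduce(&local_changed, &any_changed, 1, MPI_INT, MPI_LOR,
                          MPI_COMM_WORLD);
        }
    }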
Hi!
Sorry for taking up this old thread, but I think the solution is not yet
satisfactory.
To summarize the problem: OpenMPI has a plugin architecture. The plugins
rely on the fact that the OpenMPI library is loaded into the global
namespace, so that its symbols are accessible to the plugins. If the mpi lib
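For readers who hit the same problem, the point is usually illustrated with the dlopen() flags; the sketch below is my own, and the library name libmpi.so is an assumption that may differ per installation:

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* With RTLD_LOCAL (the usual default) libmpi's symbols stay private,
         * so the components Open MPI later dlopen()s cannot resolve them.
         * RTLD_GLOBAL makes the symbols visible to those plugins.          */
        void *h = dlopen("libmpi.so", RTLD_NOW | RTLD_GLOBAL);
        if (!h) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }
        /* ... look up the MPI entry points with dlsym() and run as usual ... */
        dlclose(h);
        return 0;
    }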
Ah - guess the VampirTrace guys missed those. In the interim, you can
disable that part of the code by adding
--enable-contrib-no-build=vt
to your configure line
Ralph
On Mar 23, 2009, at 1:34 PM, Rene Salmon wrote:
Hi,
In the release notes for openmpi-1.3.1 there was this:
- Fix a few places where we used PATH_MAX instead of OMPI_PATH_MAX,
Hi,
In the release notes for openmpi-1.3.1 there was this:
- Fix a few places where we used PATH_MAX instead of OMPI_PATH_MAX,
leading to compile problems on some platforms. Thanks to Andrea Iob
for the bug report.
I guess maybe all the places where PATH_MAX appears did not get
replaced.
One thing you might want to try is blowing away that prefix dir and
reinstalling OMPI 1.3.1. I'm not confident that "make uninstall" does
an adequate job of cleaning things out. The problem is that there are
major differences between 1.2.x and 1.3.x, and the uninstall may well
miss some things.
Hi!
Ralph Castain wrote:
I regularly run jobs like that on 1.3.1 - it has no desire to use ssh to
start anything. On a local host such as this command uses, all mpiexec
does is fork/exec the procs.
That sounds strange. I'm just going back and forth between OpenMPI 1.2.9
and OpenMPI 1.3.1 by
I regularly run jobs like that on 1.3.1 - it has no desire to use ssh
to start anything. On a local host such as this command uses, all
mpiexec does is fork/exec the procs.
It sounds like something strange is going on in your environment that
makes OMPI think it is launching on a remote host.
Hello!
I've tried to find anything on this on the mailing list or anywhere
else, but I wasn't able to.
In OpenMPI 1.2.x, I was able to simply run
mpiexec -n 2 hostname
on my Dual core machine without any problems. All MPI tasks inherited
the environment of the calling shell, and no
Hi,
Here I am again with questions about MPI_Barrier. I did some
benchmarking of MPI_Barrier and wondered whether OpenMPI's
implementation automatically calls the tuned version of MPI_Barrier,
e.g. a tree algorithm, when the number of nodes exceeds 4.
Any thoughts are welcomed. :D
Shan
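For reference, a minimal timing loop like the one below (my own sketch, not Shan's benchmark) can be run at several process counts to see where the per-barrier cost changes as the algorithm switches:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int iters = 10000;
        MPI_Barrier(MPI_COMM_WORLD);            /* align the start */
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; ++i)
            MPI_Barrier(MPI_COMM_WORLD);
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("%d procs: %.2f us per MPI_Barrier\n",
                   size, 1e6 * (t1 - t0) / iters);

        MPI_Finalize();
        return 0;
    }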