Re: [OMPI users] Collective operations and synchronization

2009-03-23 Thread Shaun Jackman
Hi Ralph, Thanks for your response. My problem is removing all leaf nodes from a directed graph, which is distributed among a number of processes. Each process iterates over its portion of the graph, and if a node is a leaf (indegree(n) == 0 || outdegree(n) == 0), it removes the node (which i

Re: [OMPI users] Collective operations and synchronization

2009-03-23 Thread Ralph Castain
I think perhaps you folks are all caught up a tad too much in the standard and not reading the intent of someone's question... :-) I believe the original question was concerned with ensuring that all procs had completed MPI_Allreduce before his algorithm attempted other operations. As you f

Re: [OMPI users] Collective operations and synchronization

2009-03-23 Thread Ashley Pittman
On 23 Mar 2009, at 21:11, Ralph Castain wrote: Just one point to emphasize - Eugene said it, but many times people don't fully grasp the implication. On an MPI_Allreduce, the algorithm requires that all processes -enter- the call before anyone can exit. It does -not- require that they all e

Re: [OMPI users] Collective operations and synchronization

2009-03-23 Thread Richard Treumann
There is no synchronization operation in MPI that promises all tasks will exit at the same time. For MPI_Barrier they will exit as close to the same time as the implementation can reasonably support but as long as the application is distributed and there are delays in the interconnect, it is not p

Re: [OMPI users] Collective operations and synchronization

2009-03-23 Thread George Bosilca
Unfortunately even the MPI_Barrier doesn't guarantee a synchronous exit on all processes. There is no such thing in MPI, and there is no way to implement such a synchronization primitive in general (if one takes into account metrics such as performance or scalability). In this particular co

Re: [OMPI users] PATH_MAX error with compiling openmpi 1.3.1 with intel compilers

2009-03-23 Thread Rene Salmon
Thanks that works for me. Rene On Mon, 2009-03-23 at 19:40 +, Ralph Castain wrote: > Ah - guess the VampirTrace guys missed those. In the interim, you can > disable that part of the code by adding > > --enable-contrib-no-build=vt > > to your configure line > > Ralph > > > On Mar 23, 2

Re: [OMPI users] Collective operations and synchronization

2009-03-23 Thread Ralph Castain
Just one point to emphasize - Eugene said it, but many times people don't fully grasp the implication. On an MPI_Allreduce, the algorithm requires that all processes -enter- the call before anyone can exit. It does -not- require that they all exit at the same time. So if you want to synchr

Re: [OMPI users] Collective operations and synchronization

2009-03-23 Thread Eugene Loh
Shaun Jackman wrote: "I've just read in the Open MPI documentation [1]" (that's the MPI spec, actually) "that collective operations, such as MPI_Allreduce, may synchronize, but do not necessarily synchronize. My algorithm requires a collective operation and synchronization; is there a better (m

[OMPI users] Collective operations and synchronization

2009-03-23 Thread Shaun Jackman
I've just read in the Open MPI documentation [1] that collective operations, such as MPI_Allreduce, may synchronize, but do not necessarily synchronize. My algorithm requires a collective operation and synchronization; is there a better (more efficient?) method than simply calling MPI_Allreduce

[OMPI users] dlopening openmpi libs (was: Re: Problems in 1.3 loading shared libs when using VampirServer)

2009-03-23 Thread Olaf Lenz
Hi! Sorry for taking up this old thread, but I think the solution is not yet satisfactory. To summarize the problem: OpenMPI has a plugin architecture. The plugins rely on the fact that the OpenMPI library is loaded into the global namespace, so that its symbols are accessible to the plugins. If the mpi lib

Re: [OMPI users] PATH_MAX error with compiling openmpi 1.3.1 with intel compilers

2009-03-23 Thread Ralph Castain
Ah - guess the VampirTrace guys missed those. In the interim, you can disable that part of the code by adding --enable-contrib-no-build=vt to your configure line Ralph On Mar 23, 2009, at 1:34 PM, Rene Salmon wrote: Hi, In the release notes for openmpi-1.3.1 there was this: - Fix a few

[OMPI users] PATH_MAX error with compiling openmpi 1.3.1 with intel compilers

2009-03-23 Thread Rene Salmon
Hi, In the release notes for openmpi-1.3.1 there was this: - Fix a few places where we used PATH_MAX instead of OMPI_PATH_MAX, leading to compile problems on some platforms. Thanks to Andrea Iob for the bug report. I guess maybe all the places where PATH_MAX appears did not get replac

Re: [OMPI users] mpirun/exec requires ssh?

2009-03-23 Thread Ralph Castain
One thing you might want to try is blowing away that prefix dir and reinstalling OMPI 1.3.1. I'm not confident that "make uninstall" does an adequate job of cleaning things out. The problem is that there are major differences between 1.2.x and 1.3.x, and the uninstall may well miss some thi

Re: [OMPI users] mpirun/exec requires ssh?

2009-03-23 Thread Olaf Lenz
Hi! Ralph Castain wrote: I regularly run jobs like that on 1.3.1 - it has no desire to use ssh to start anything. On a local host such as this command uses, all mpiexec does is fork/exec the procs. That sounds strange. I'm just going back and forth between OpenMPI 1.2.9 and OpenMPI 1.3.1 by

Re: [OMPI users] mpirun/exec requires ssh?

2009-03-23 Thread Ralph Castain
I regularly run jobs like that on 1.3.1 - it has no desire to use ssh to start anything. On a local host such as this command uses, all mpiexec does is fork/exec the procs. It sounds like something strange is going on in your environment that makes OMPI think it is launching on a remote hos

[OMPI users] mpirun/exec requires ssh?

2009-03-23 Thread Olaf Lenz
Hello! I've tried to find anything on this on the mailing list or anywhere else, but I wasn't able to. In OpenMPI 1.2.x, I was able to simply run mpiexec -n 2 hostname on my dual-core machine without any problems. All MPI tasks inherited the environment of the calling shell, and no

[OMPI users] Does OpenMPI's MPI_Barrier automatically call the tuned version?

2009-03-23 Thread Shanyuan Gao
Hi, here I am again with questions about MPI_Barrier. I did some benchmarking of MPI_Barrier and wondered whether OpenMPI's implementation automatically calls the tuned version of MPI_Barrier, e.g. a tree algorithm, when the number of nodes exceeds 4? Any thoughts are welcome. :D Shan