On Sat, Jun 11, 2011 at 5:17 PM, Ole Kliemann <ole-ompi-2...@mail.plastictree.net> wrote:
> On Sat, Jun 11, 2011 at 07:24:24AM -0600, Ralph Castain wrote:
> > Oh my - that is such an old version! Any reason for using it instead of
> > something more recent?
>
> I'm using the cluster of the university where I work and I'm not the
> admin. So I'm going with what is installed there.

Provided your account is available on all the nodes of the cluster
(commonly through a shared filesystem such as NFS), you can easily install
and use a more recent version of OpenMPI under your home directory. From
the top of the unpacked OpenMPI source tree:

  mkdir -p ${HOME}/ompi-1.5.3 && ./configure --prefix=${HOME}/ompi-1.5.3
  make
  make install

You should not forget to modify your "PATH" and "LD_LIBRARY_PATH"
environment variables in your ".bash_profile" (a rough sketch of those two
lines follows at the end of this message).

> It's the first time I'm using MPI. Before I complain to the admins about
> old versions or anything else, I'd like to check whether my code is
> actually okay with regard to the MPI specification.
>
> > On Jun 11, 2011, at 8:43 AM, Ole Kliemann wrote:
> >
> > > Hi everyone!
> > >
> > > I'm trying to use MPI on a cluster running OpenMPI 1.2.4 and starting
> > > processes through PBSPro_11.0.2.110766. I've been running into a
> > > couple of performance and deadlock problems and would like to check
> > > whether I'm making a mistake.
> > >
> > > One of the deadlocks I managed to boil down to the attached example.
> > > I run it on 8 cores. It usually deadlocks with all except one process
> > > showing
> > >
> > >   start barrier
> > >
> > > as their last output. The one process out of order shows:
> > >
> > >   start getting local
> > >
> > > My question at this point is simply whether this is expected
> > > behaviour of OpenMPI.
> > >
> > > Thanks in advance!
> > > Ole
> > > <mpi_barrier.cc>
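P.S. As mentioned above, the ".bash_profile" additions could look roughly
like the lines below. This is only a sketch, assuming the ${HOME}/ompi-1.5.3
prefix from the configure step; adjust the paths if you pick a different
prefix.

  # ~/.bash_profile -- put the locally installed OpenMPI first on the search paths
  export PATH=${HOME}/ompi-1.5.3/bin:${PATH}
  export LD_LIBRARY_PATH=${HOME}/ompi-1.5.3/lib:${LD_LIBRARY_PATH}

After logging in again (or sourcing the file), "which mpirun" and
"ompi_info" should report the 1.5.3 installation rather than the
system-wide 1.2.4.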