I'm using mpirun and the nodes are all on the same machine (an 8-CPU box
with an Intel i7). The core size is unlimited:
ulimit -a
core file size (blocks, -c) unlimited
David
On Fri, 2010-08-13 at 13:47 -0400, Jeff Squyres wrote:
> On Aug 13, 2010, at 1:18 PM, David Ronis wrote:
On Aug 13, 2010, at 1:18 PM, David Ronis wrote:
> Second, coredumpsize is unlimited, and indeed I DO get core dumps when
> I'm running a single-processor version.
What launcher are you using underneath Open MPI?
You might want to make sure that the underlying launcher actually sets the
coredump limit.
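One way to take the launcher out of the equation entirely (a sketch, not from the thread; ./my_app stands in for the real binary) is to raise the limit inside the launched processes themselves:

  mpirun -np 8 bash -c 'ulimit -c unlimited; exec ./my_app'   # ./my_app is a placeholder

With exec, the application replaces the wrapper shell and inherits the raised core limit regardless of what the remote daemon was started with.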
Thanks to all who replied.
First, I'm running Open MPI 1.4.2.
Second, coredumpsize is unlimited, and indeed I DO get core dumps when
I'm running a single-processor version. Third, the problem isn't
stopping the program (MPI_Abort does that just fine); rather, it's
getting a coredump. According t
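A quick sanity test along these lines (a sketch; whether a core actually appears also depends on the kernel's core_pattern setting) is to make the launched processes die on a fatal signal and then look for core files:

  mpirun -np 2 bash -c 'ulimit -c unlimited; kill -ABRT $$'
  ls core*

If cores show up here but not from the real application, the limit is being lowered somewhere between mpirun and the application itself.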
On 08/12/10 21:53, Jed Brown wrote:
Or OMPI_CC=icc-xx.y mpicc ...
If we enable a different set of run-time library paths for the Intel
compilers than those used to build OMPI when we compile and execute the
MPI app, these new run-time libs will be accessible to the OMPI libs to
run against instead.
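For reference, the override being discussed looks roughly like this (the compiler name, output name, and Intel library path are placeholders, not anyone's actual setup):

  OMPI_CC=icc mpicc -O2 -o my_app my_app.c
  export LD_LIBRARY_PATH=/opt/intel/lib/intel64:$LD_LIBRARY_PATH   # placeholder Intel runtime path
  mpirun -np 8 ./my_app

OMPI_CC only swaps the compiler that the mpicc wrapper invokes; the run-time linker still has to be able to find both the Intel and the Open MPI libraries when the job starts.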
Nope. I probably won't get to it for a while. I'll let you know if I do.
On Aug 13, 2010, at 12:17 PM, wrote:
> OK, I will do that.
>
> But did you try this program on a system where the latest trunk is
> installed? Were you successful in checkpointing?
>
> - Ananda
> -----Original Message-----
OK, I will do that.
But did you try this program on a system where the latest trunk is
installed? Were you successful in checkpointing?
- Ananda
-----Original Message-----
Message: 9
Date: Fri, 13 Aug 2010 10:21:29 -0400
From: Joshua Hursey
Subject: Re: [OMPI
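For context, the checkpoint/restart workflow in the 1.3/1.4-era releases looks roughly like this (a sketch only; the application name, process count, PID, and snapshot handle are placeholders):

  mpirun -np 8 -am ft-enable-cr ./my_app &      # placeholders: app name and -np count
  ompi-checkpoint <PID-of-mpirun>
  ompi-restart <global-snapshot-handle>

ompi-checkpoint prints the snapshot handle that ompi-restart later consumes.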
Josh,
I have stack traces of all 8 Python processes from when I observed the hang after
the checkpoint completed successfully. They are in the attached document. Please
see if these stack traces provide any clue.
Thanks
Ananda
From: Ananda Babu Mudar (WT01 - Energy an
Hi Sunita,
My guess is that you are picking up the wrong mpiexec
because of the way you set your PATH.
What do you get from "which mpiexec"?
Try *prepending* the Open MPI path to the existing PATH,
instead of appending it (that's what you did with the LD_LIBRARY_PATH):
export PATH=/home/sunitap/soft
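For example (the install prefix below is a placeholder, not the poster's actual path):

  export PATH=$HOME/software/openmpi-1.4.1/bin:$PATH                   # placeholder prefix
  export LD_LIBRARY_PATH=$HOME/software/openmpi-1.4.1/lib:$LD_LIBRARY_PATH
  which mpiexec
  ompi_info | grep "Open MPI:"

After that, "which mpiexec" should point into the Open MPI install rather than at whatever MPI the distribution shipped.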
You might want to make sure that this .bashrc is both the same and is
executed properly upon both interactive and non-interactive logins on all the
systems that you are running on.
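A quick way to test the non-interactive case (the hostname is a placeholder) is to run the lookup through ssh, which starts a non-interactive shell:

  ssh node01 'which mpirun; echo $LD_LIBRARY_PATH'   # node01 is a placeholder

If this prints a different mpirun, or an empty LD_LIBRARY_PATH, then .bashrc (or whatever it sources) is not being read for non-interactive logins.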
On Aug 13, 2010, at 1:57 AM, sun...@chem.iitb.ac.in wrote:
> Dear Open-mpi users,
>
> I installed openmpi-1.4.
I probably won't have an opportunity to work on reproducing this on the 1.4.2.
The trunk has a bunch of bug fixes that probably will not be backported to the
1.4 series (things have changed too much since that branch). So I would suggest
trying the 1.5 series.
-- Josh
On Aug 13, 2010, at 10:12
Josh,
I am having problems compiling the sources from the latest trunk. It
complains that libgomp.spec is missing even though that file exists on my
system. I will see if I have to change any other environment variables
to have a successful compilation. I will keep you posted.
BTW, were you successful
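One way to see which libgomp.spec the compiler actually finds is its -print-file-name option; if that just echoes the bare name back, pointing LIBRARY_PATH at the directory that holds the file sometimes helps (the directory shown is a placeholder):

  gcc -print-file-name=libgomp.spec
  export LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/4.4:$LIBRARY_PATH   # placeholder directory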
sun...@chem.iitb.ac.in wrote:
Dear Open-mpi users,
I installed openmpi-1.4.1 in my user area and then set the path for
openmpi in the .bashrc file as follows. However, I am still getting the following
error message whenever I start the parallel molecular dynamics
simulation using GROMACS. So every
Hello Sunita,
What Linux distribution is this?
On Fri, Aug 13, 2010 at 1:57 AM, wrote:
> Dear Open-mpi users,
>
> I installed openmpi-1.4.1 in my user area and then set the path for
> openmpi in the .bashrc file as follows. However, I am still getting the following
> error message whenever I start
Dear Open-mpi users,
I installed openmpi-1.4.1 in my user area and then set the path for
openmpi in the .bashrc file as follows. However, I am still getting the following
error message whenever I start the parallel molecular dynamics
simulation using GROMACS. So every time I start the MD job, I n
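Independent of the PATH settings, it is worth confirming which MPI the GROMACS binary itself resolves at run time (the binary name varies by install; mdrun_mpi is a common one):

  which mpirun
  ldd $(which mdrun_mpi) | grep -i mpi   # binary name may differ on your install

If the libmpi line points somewhere other than the openmpi-1.4.1 install, GROMACS was linked against a different MPI and needs to be run with that MPI's mpirun, or rebuilt.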