Hi,
I'm not sure whether this problem is with SLURM or OpenMPI, but the stack
traces (below) point to an issue within OpenMPI.
Whenever I try to launch an MPI job within SLURM, mpirun immediately
crashes with a segmentation fault -- but only if the machine that SLURM allocated to MPI is
different from the one
Dear OpenMPI users and developers,
I'm using OpenMPI 1.4.3 and the Intel compiler. My simple application requires 3
command-line arguments to work. If I use the following command:
mpirun -np 2 ./a.out a b "c d"
It works well.
Debugging my application with Totalview:
mpirun -np 2 --debug ./a.out a b "c d"
Ar
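For reference, a tiny argv printer along the following lines makes it easy to see exactly what each rank receives on its command line (argdump.c is a hypothetical stand-in for the poster's a.out, not code from this thread):

    /* argdump.c - hypothetical stand-in for a.out: print the command-line
     * arguments each rank actually receives, so quoting problems show up
     * immediately. Build with: mpicc argdump.c -o argdump */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, i;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        for (i = 1; i < argc; i++)
            printf("rank %d: argv[%d] = '%s'\n", rank, i, argv[i]);
        MPI_Finalize();
        return 0;
    }

Run as mpirun -np 2 ./argdump a b "c d"; each rank should report three arguments, with "c d" arriving as a single argv entry.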
Hi,
On 27.01.2011 at 09:48, Gabriele Fatigati wrote:
> Dear OpenMPI users and developers,
>
> I'm using OpenMPI 1.4.3 and the Intel compiler. My simple application requires 3
> command-line arguments to work. If I use the following command:
>
> mpirun -np 2 ./a.out a b "c d"
>
> It works well.
>
> Debugging
Mm,
doing as you suggest the output is:
a
b
"c
d"
and not:
a
b
"c d"
2011/1/27 Reuti
> Hi,
>
> On 27.01.2011 at 09:48, Gabriele Fatigati wrote:
>
> > Dear OpenMPI users and developers,
> >
> > I'm using OpenMPI 1.4.3 and the Intel compiler. My simple application requires
> 3 command-line arguments to work
On 27.01.2011 at 10:32, Gabriele Fatigati wrote:
> Mm,
>
> doing as you suggest the output is:
>
> a
> b
> "c
> d"
Whoa - your application runs fine without the debugger - so I don't
think that it's a problem with `mpirun` per se.
The same happens with single quotes inside double quotes
The problem is in how mpirun scans the input parameters when Totalview is invoked.
Something goes wrong in the middle :(
2011/1/27 Reuti
> On 27.01.2011 at 10:32, Gabriele Fatigati wrote:
>
> > Mm,
> >
> > doing as you suggest the output is:
> >
> > a
> > b
> > "c
> > d"
>
> Whoa - your application
The problem is that mpirun regenerates itself to exec a command of "totalview
mpirun ", and the quotes are lost in the process.
Just start your debugged job with "totalview mpirun ..." and it should work
fine.
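To illustrate the failure mode Ralph describes (a sketch only, not the actual OMPI re-exec code): the shell has already stripped the quotes by the time mpirun parses argv, so naively joining argv back into one flat command string loses the information that "c d" was a single argument:

    /* join_sketch.c - illustrative only, not Open MPI source. Flattening
     * argv into a single command string without re-quoting drops the
     * grouping of "c d", because the shell consumed the quotes before
     * mpirun ever started. */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        char cmd[4096] = "totalview mpirun";
        int i;
        for (i = 1; i < argc; i++) {   /* naive join: no re-quoting */
            strcat(cmd, " ");
            strcat(cmd, argv[i]);
        }
        /* ./join_sketch a b "c d"  ->  totalview mpirun a b c d */
        printf("%s\n", cmd);
        return 0;
    }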
On Jan 27, 2011, at 3:00 AM, Gabriele Fatigati wrote:
> The problem is how mpiru
The command
"totalview mpirun..."
starts debugging mpirun, not my executable :(
The code shown is related to main.c of OpenMPI.
2011/1/27 Ralph Castain
> The problem is that mpirun regenerates itself to exec a command of
> "totalview mpirun ", and the quotes are lost in the process.
>
I found the code in OMPI that is dropping the quoting.
Specifically: it *is* OMPI that is dropping your quoting / splitting "foo bar"
into 2 arguments when re-execing totalview.
Let me see if I can gin up a patch...
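The general shape of such a fix (again only a sketch, under the assumption that the re-exec builds a flat command line; this is not Jeff's actual patch) is to re-quote any argument containing whitespace before appending it:

    /* requote_sketch.c - illustrative only, not the Open MPI patch.
     * Wrap any argument that contains whitespace in double quotes before
     * putting it back on a flat command line, so "c d" survives the
     * re-exec as one argument. */
    #include <stdio.h>
    #include <string.h>

    static void append_arg(char *cmd, size_t size, const char *arg)
    {
        int needs_quotes = (strpbrk(arg, " \t") != NULL);
        strncat(cmd, needs_quotes ? " \"" : " ", size - strlen(cmd) - 1);
        strncat(cmd, arg, size - strlen(cmd) - 1);
        if (needs_quotes)
            strncat(cmd, "\"", size - strlen(cmd) - 1);
    }

    int main(int argc, char **argv)
    {
        char cmd[4096] = "totalview mpirun";
        int i;
        for (i = 1; i < argc; i++)
            append_arg(cmd, sizeof(cmd), argv[i]);
        /* ./requote_sketch a b "c d"  ->  totalview mpirun a b "c d" */
        printf("%s\n", cmd);
        return 0;
    }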
On Jan 27, 2011, at 7:42 AM, Ralph Castain wrote:
> The problem is that mpi
Ok Jeff,
tell me where the code is and I'll try to fix it.
Thanks a lot.
2011/1/27 Jeff Squyres
> I found the code in OMPI that is dropping the quoting.
>
> Specifically: it *is* OMPI that is dropping your quoting / splitting "foo
> bar" into 2 arguments when re-execing totalview.
>
> Let m
Hi,
I was wondering what support Open MPI has for allowing a job to
continue running when one or more processes in the job die
unexpectedly. Is there a special mpirun flag for this? Are there other ways?
It seems obvious that collectives will fail once a process dies, but
would it be possible to create
The current version of Open MPI does not support continued operation of an MPI
application after process failure within a job. If a process dies, so will the
MPI job. Note that this is true of many MPI implementations out there at the
moment.
At Oak Ridge National Laboratory, we are working on
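For completeness, about the most a portable application can do today is ask MPI to return error codes instead of aborting, via the standard error handler interface; this does not make the job survive a dead process, it only changes how the failure is reported, and the exact behaviour is implementation dependent (a minimal sketch):

    /* errhandler.c - minimal sketch: request that MPI errors be returned
     * to the caller instead of aborting. This does NOT make Open MPI
     * 1.4/1.5 keep running after a process dies; at best the surviving
     * ranks see an error code before the runtime tears the job down. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, rc;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* default is MPI_ERRORS_ARE_FATAL; switch to MPI_ERRORS_RETURN */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        rc = MPI_Barrier(MPI_COMM_WORLD);
        if (rc != MPI_SUCCESS)
            fprintf(stderr, "rank %d: barrier failed, error %d\n", rank, rc);

        MPI_Finalize();
        return 0;
    }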
On 27.01.2011 at 15:23, Joshua Hursey wrote:
> The current version of Open MPI does not support continued operation of an
> MPI application after process failure within a job. If a process dies, so
> will the MPI job. Note that this is true of many MPI implementations out
> there at the moment
On Jan 27, 2011, at 7:47 AM, Reuti wrote:
> On 27.01.2011 at 15:23, Joshua Hursey wrote:
>
>> The current version of Open MPI does not support continued operation of an
>> MPI application after process failure within a job. If a process dies, so
>> will the MPI job. Note that this is true of
On Jan 27, 2011, at 9:47 AM, Reuti wrote:
> On 27.01.2011 at 15:23, Joshua Hursey wrote:
>
>> The current version of Open MPI does not support continued operation of an
>> MPI application after process failure within a job. If a process dies, so
>> will the MPI job. Note that this is true of
On 27.01.2011 at 16:10, Joshua Hursey wrote:
>
> On Jan 27, 2011, at 9:47 AM, Reuti wrote:
>
>> On 27.01.2011 at 15:23, Joshua Hursey wrote:
>>
>>> The current version of Open MPI does not support continued operation of an
>>> MPI application after process failure within a job. If a process
I did my patch against the development trunk; could you try the attached patch
to a trunk nightly tarball and see if that works for you?
If it does, I can provide patches for v1.4 and v1.5 (the code moved a bit
between these 3 versions, so I would need to adapt the patches a little).
On Jan 2
Just touting around for any experiences with the following
combination (if it's already out there somewhere?) ahead
of fully spec'ing out a required software stack:
Mellanox Connect-X HCAs talking through
a Voltaire ISR4036 IB QDR switch
RHEL (yep, not the usual NetBSD!)
OFED (built with