> ... interested in the
> underlying cause.
>
> Especially as the example in Open MPI's FAQ lists -np to start with
> GridEngine integration, it should have hit other users too.
>
> -- Reuti
>
>
>> Regards,
>> Eloi
>>
>>
> Regards,
> Eloi
>
>
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Ralph Castain
> Sent: Tuesday, April 10, 2012 16:43
> To: Open MPI Users
> Subject: Re: [OMPI users] sge tight integration leads to bad allocation
> This might be of interest to Reuti and you: it seems that we cannot
> reproduce the problem anymore if we don't provide the "-np N" option on the
> orterun command line. Of course, we need to launch a few other runs to be
> really sure, because the allocation error was not always observable.
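For reference, a minimal submission script of the kind discussed here could look as
follows (PE name, slot count and binary path are placeholders, not the real setup);
with tight integration, leaving out "-np" lets orterun take the slot count and host
list directly from what SGE granted:

  #!/bin/sh
  #$ -N alloc_test
  #$ -pe orte 8            # parallel environment name and slot count are site-specific
  #$ -cwd
  # Tight integration: mpirun/orterun reads the granted allocation from SGE,
  # so neither -np nor a hostfile is needed:
  mpirun ./my_app
  # The variant that showed the bad allocation passed the slot count explicitly:
  # mpirun -np $NSLOTS ./my_app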
Could well be a bug in OMPI - I can take a look, though it may be awhile before
I get to it. Have you tried one of the 1.5 series releases?
On Apr 10, 2012, at 3:42 AM, Eloi Gaudry wrote:
Thx. This is the allocation which is also confirmed by the Open MPI output.
[eg: ] exactly, but not the one used afterwards by Open MPI
- Was the application compiled with the same version of Open MPI?
[eg: ] yes, version 1.4.4 for all
- Does the application start something on its own besides the
On 06.04.2012 at 12:17 Eloi Gaudry wrote:
> - Can you please post while it's running the relevant lines from:
> ps -e f --cols=500
> (f w/o -) from both machines.
> It's allocated between the nodes more like in a round-robin fashion.
> [eg: ] I'll try to do this tomorrow, as soon as some slots become free.
> Thanks for your feedback Reuti
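For anyone checking the same thing: with a tight integration, the `ps -e f` forest on
the node where the job script runs should show mpirun (and hence the local ranks and
the `qrsh -inherit` calls that start orted on the other nodes) hanging off the SGE
shepherd, not off sshd. A rough, illustrative tree (PIDs, paths and names are made up):

  sge_execd
   \_ sge_shepherd-<jobid>
       \_ /bin/sh /var/spool/sge/<node>/job_scripts/<jobid>
           \_ mpirun ./my_app
               \_ ./my_app                              (local ranks)
               \_ qrsh -inherit <other_node> orted ...  (daemon on the other node)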
On 05.04.2012 at 18:58 Eloi Gaudry wrote:
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Reuti
Sent: Thursday, April 5, 2012 18:41
To: Open MPI Users
Subject: Re: [OMPI users] sge tight integration leads to bad allocation
On 05.04.2012 at 17:55 Eloi Gaudry wrote:
>
>> Here is the allocation info retrieved from `qstat -g t` for the related job:
>
> For me the output of `qstat -g t` shows MASTER and SLAVE entries but no
> variables. Is there any wrapper defined for `qstat` to reformat the output
> (or a ~/.sge_qstat defined)?
>
> [eg: ] sorry, I forgot about
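To illustrate what is meant by MASTER and SLAVE entries: a plain (unwrapped)
`qstat -g t` for a tightly integrated parallel job lists the granted slots per queue
instance, marked MASTER or SLAVE in the last column (details depend on the PE
configuration). A simplified, made-up example for a 4-slot job over two nodes
(job-ID, user, queue and host names are invented):

  job-ID  prior    name     user  state  submit/start at      queue         master
  ---------------------------------------------------------------------------------
    1234  0.55500  my_job   eg    r      04/05/2012 17:40:02  all.q@node01  MASTER
    1234  0.55500  my_job   eg    r      04/05/2012 17:40:02  all.q@node01  SLAVE
    1234  0.55500  my_job   eg    r      04/05/2012 17:40:02  all.q@node02  SLAVE
    1234  0.55500  my_job   eg    r      04/05/2012 17:40:02  all.q@node02  SLAVE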
On 03.04.2012 at 17:24 Eloi Gaudry wrote:
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Reuti
Sent: Tuesday, April 3, 2012 17:13
To: Open MPI Users
Subject: Re: [OMPI users] sge tight integration leads to bad allocation
On 03.04.2012 at 16:59 Eloi Gaudry wrote:
> Hi Reuti,
> ... Num cores/socket: 4
> Daemon: [[54347,0],2]  Daemon launched: False
> Num slots: 1  Slots in use: 1
> Num slots allocated: 1  Max slots: 0
> Username on node: NULL
> Num procs: 1  Next node_rank: 1
> Data for proc: [[54347,1],2]
> Pid: 0  Local rank: 0  Node rank: 0
> State: 0 ...
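The per-node data quoted above looks like mpirun's verbose map output; for reference,
the allocation and the map can be printed with options along these lines (as available
in the 1.4 series):

  mpirun --display-allocation --display-map ./my_app
  # --display-devel-map prints the more detailed per-node/per-proc data shown above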
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Reuti
Sent: Tuesday, April 3, 2012 16:24
To: Open MPI Users
Subject: Re: [OMPI users] sge tight integration leads to bad allocation
Hi,
On 03.04.2012 at 16:12 Eloi Gaudry wrote:
> Thanks for your feedback.
> No, this is the other way around: the "reserved" slots on all nodes ...
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Ralph Castain
Sent: Tuesday, April 3, 2012 15:58
To: Open MPI Users
Subject: Re: [OMPI users] sge tight integration leads to bad allocation
I'm afraid there isn't enough info here to help. Are you saying you only
allocated one slot/node, so the two slots on charlie is in error?
... the issue observed is somehow
different from the one you mentioned here.
Regards,
Eloi
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Tom Bryan
Sent: Tuesday, April 3, 2012 15:49
To: Open MPI Users
Subject: Re: [OMPI users] sge tight integration leads to bad allocation
I'm afraid there isn't enough info here to help. Are you saying you only
allocated one slot/node, so the two slots on charlie is in error?
Sent from my iPad
On Apr 3, 2012, at 6:23 AM, "Eloi Gaudry" wrote:
> Hi,
>
> I’ve observed a strange behavior during rank allocation on a distributed run
How are you launching the application?
I had an app that did a Spawn_multiple with tight SGE integration, and
there was a difference in behavior depending on whether or not an app was
launched via mpiexec. I'm not sure whether it's the same issue as you're
seeing, but Reuti describes the problem
Hi,
I've observed a strange behavior during rank allocation on a distributed run
scheduled and submitted using SGE (Son of Grid Engine 8.0.0d) and Open MPI 1.4.4.
Briefly, there is a one-slot difference between the allocated ranks/slots for SGE and
for Open MPI. The issue here is that one node becomes oversubscribed.
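A quick way to compare the two views (what SGE granted vs. what Open MPI ends up
using) is to dump the PE hostfile from the job script and let mpirun print its own
allocation; a sketch (binary name is a placeholder):

  # inside the SGE job script:
  echo "--- SGE allocation (PE_HOSTFILE) ---"
  cat $PE_HOSTFILE       # one line per host: hostname, slot count, queue, processor range
  echo "--- Open MPI's view ---"
  mpirun --display-allocation --display-map ./my_app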