-----Original Message-----
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Ralph Castain
Sent: Friday, June 06, 2014 3:03 PM
To: Open MPI Users
Subject: Re: [OMPI users] Determining what parameters a scheduler passes to
OpenMPI

It's possible that you are hitting a bug - not sure how much the
cpus-per-proc option has been [...] Version 1.6 (i.e. prior to 1.6.1) [...]
> [...] scheduler is passing along the correct slot count #s (16 and 8, resp.).
>
> Am I running into a bug w/ OpenMPI 1.6?
>
> --john
>
>
> -----Original Message-----
> From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Ralph Castain
> Sent: Friday, June 06, 2014 1:30 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] Determining what parameters a scheduler passes to
> OpenMPI
On Jun 6, 2014, at 10:24 AM, Gus Correa wrote:

> On 06/06/2014 01:05 PM, Ralph Castain wrote:
>> You can always add --display-allocation to the cmd line to see what
>> we thought w[...]

>>>> [...] and the hostfile /home/sasso/TEST/hosts.file contains 24 entries (the
>>>> first 16 being host node0001 and the last 8 being node0002), it
>>>> appears that 24 MPI tasks try to start on node0001 instead of getting
>>>> distributed as 16 on node0001 and 8 on node0002 [...]
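As a quick sanity check of what a hostfile like the one described actually implies, the per-node entry counts can be tallied directly. This is a minimal sketch: the 16/8 split and node names are taken from the thread, the file name `hosts.file` is illustrative, and the final mpirun invocation is shown only as a comment since its output depends on the installation.

```shell
# Recreate a hostfile of the kind described: 16 entries for node0001,
# then 8 for node0002 (one line per slot).
for i in $(seq 1 16); do echo node0001; done >  hosts.file
for i in $(seq 1 8);  do echo node0002; done >> hosts.file

# Tally the per-node slot counts the file implies.
sort hosts.file | uniq -c
#  16 node0001
#   8 node0002

# To see what Open MPI believes it was allocated, add --display-allocation
# to the actual launch, e.g.:
#   mpirun --display-allocation -np 24 -hostfile hosts.file ./your_app
```

If the tally already disagrees with what the scheduler granted, the problem is in the hostfile rather than in Open MPI's mapping.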
-----Original Message-----
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Ralph Castain
Sent: Friday, June 06, 2014 12:31 PM
To: Open MPI Users
Subject: Re: [OMPI users] Determining what parameters a scheduler passes to
OpenMPI
We currently only get the node and slots/node info from PBS - we don't get any
task placement info at all. We then use the mpirun cmd options and built-in
mappers to map the tasks to the nodes.
I suppose we could do more integration in that regard, but haven't really seen
a reason to do so [...]
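The node and slots/node information described here comes, under PBS/Torque, from the job's node file ($PBS_NODEFILE), which lists one hostname per allocated slot. Below is a minimal sketch of how such a file reduces to a slots-per-node map; the node names match this thread, and the reduction is an illustration of the idea, not Open MPI's actual TM-integration code.

```python
from collections import Counter

def slots_per_node(nodefile_lines):
    """Reduce PBS_NODEFILE-style content (one hostname per line,
    one line per allocated slot) to a slots-per-node map."""
    return dict(Counter(line.strip() for line in nodefile_lines if line.strip()))

# The allocation discussed in the thread: 16 slots on node0001, 8 on node0002.
lines = ["node0001"] * 16 + ["node0002"] * 8
print(slots_per_node(lines))  # {'node0001': 16, 'node0002': 8}
```

Comparing such a tally against the output of --display-allocation shows whether Open MPI received the slot counts the scheduler intended.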
For the PBS scheduler, and using a build of OpenMPI 1.6 built against the PBS
include files and libraries, is there a way to determine (perhaps via some
debugging flags passed to mpirun) what job placement parameters are passed
from the PBS scheduler to OpenMPI? In particular, I am talking about task
placement [...]