You have the right idea, but there are quite a few more
options. It misses map_cpu, rank, plus the NUMA-based options:
rank_ldom, map_ldom, and mask_ldom. See the srun man pages for
documentation.
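
To make that list concrete, here is a minimal sketch of the kind of check
under discussion: it scans SLURM_CPU_BIND for every binding selector srun
can record, not just "none" and mask_cpu. The helper name is made up for
illustration; this is not Open MPI's actual implementation.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative helper (not Open MPI code): report whether srun
     * recorded an explicit binding request in SLURM_CPU_BIND. */
    static int user_specified_binding(void)
    {
        const char *bind = getenv("SLURM_CPU_BIND");
        if (bind == NULL || *bind == '\0')
            return 0;               /* nothing recorded at all */

        /* All the selectors named above, NUMA-based ones included. */
        static const char *selectors[] = {
            "none", "map_cpu", "mask_cpu", "rank",
            "rank_ldom", "map_ldom", "mask_ldom"
        };
        for (size_t i = 0; i < sizeof selectors / sizeof selectors[0]; i++)
            if (strstr(bind, selectors[i]) != NULL)
                return 1;
        return 0;
    }

    int main(void)
    {
        printf("binding specified: %s\n",
               user_specified_binding() ? "yes" : "no");
        return 0;
    }

Note that, as the rest of the thread shows, matching mask_cpu alone cannot
distinguish an explicit user mask from the default one Slurm reports, which
is what the mask-value check discussed below is for.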

From: Riebs, Andy
Sent: Thursday, October 27, 2016 1:53 PM
To: users@lists.open-mpi.org
Subject: Re: [OMPI users] Slurm binding not propagated to MPI jobs

Hi Ralph,

I haven't played around in this code, so I'll flip the question
over to the Slurm list, and report back here when I learn
anything.

Cheers
Andy

On 10/27/2016 01:44 PM, r...@open-mpi.org wrote:

Sigh - of course it wouldn’t be simple :-(

All right, let’s suppose we look for SLURM_CPU_BIND:

* if it includes the word “none”, then we know the user specified that they
don’t want us to bind
* if it includes the word mask_cpu, then we have to check the value of that
option.
* If it is all
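
A rough sketch of that last step, assuming the mask can be read from the
mask_cpu: suffix of SLURM_CPU_BIND in the form shown in the output quoted
below; an illustration only, not Open MPI's actual logic.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative check: given SLURM_CPU_BIND such as
     * "quiet,mask_cpu:0xff", extract the hex mask and report whether it
     * covers all ncpus CPUs, i.e. whether the "binding" constrains
     * nothing. */
    static int mask_covers_all_cpus(const char *cpu_bind, int ncpus)
    {
        const char *p = strstr(cpu_bind, "mask_cpu:");
        if (p == NULL)
            return 0;
        unsigned long long mask =
            strtoull(p + strlen("mask_cpu:"), NULL, 16);
        unsigned long long full =
            (ncpus >= 64) ? ~0ULL : ((1ULL << ncpus) - 1);
        return (mask & full) == full;
    }

    int main(void)
    {
        printf("%d\n", mask_covers_all_cpus("quiet,mask_cpu:0xff", 8)); /* 1 */
        printf("%d\n", mask_covers_all_cpus("quiet,mask_cpu:0x0f", 8)); /* 0 */
        return 0;
    }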

Yes, they still exist:

$ srun --ntasks-per-node=2 -N1 env | grep BIND | sort -u
SLURM_CPU_BIND_LIST=0x
SLURM_CPU_BIND=quiet,mask_cpu:0x
SLURM_CPU_BIND_TYPE=mask_cpu:
SLURM_CPU_BIND_VERBOSE=quiet

Here are the relevant Slurm configuration options
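
For reference, a small sketch that expands hex masks like the (truncated)
SLURM_CPU_BIND_LIST value above into CPU numbers. The assumption that the
list holds comma-separated per-task masks is mine; check the srun man page
before relying on it.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Expand one hex CPU mask (e.g. "0x3") into the CPU ids it covers. */
    static void print_cpus(const char *hexmask)
    {
        unsigned long long mask = strtoull(hexmask, NULL, 16);
        printf("%s ->", hexmask);
        for (int cpu = 0; cpu < 64; cpu++)
            if (mask & (1ULL << cpu))
                printf(" %d", cpu);
        printf("\n");
    }

    int main(void)
    {
        /* Assumed format: one mask per task, comma-separated. */
        char list[] = "0x3,0xc";
        for (char *tok = strtok(list, ","); tok; tok = strtok(NULL, ","))
            print_cpus(tok);
        return 0;
    }
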
And if there is no --cpu_bind on the cmd line? Do these not exist?

On Oct 27, 2016, at 10:14 AM, Andy Riebs wrote:

Hi Ralph,

I think I've found the magic keys...

$ srun --ntasks-per-node=2 -N1 --cpu_bind=none env | grep BIND
SLURM_CPU_BIND_VERBOSE=quiet
SLURM_CPU_BIND_TYPE=none
SLURM_CPU_BIND_LIST=
SLURM_CPU_BIND=quiet,none

Hey Andy,

Is there a SLURM envar that would tell us the binding option from the srun cmd
line? We automatically bind when direct launched due to user complaints of poor
performance if we don’t. If the user specifies a binding option, then we detect
that we were already bound and don’t do it.

Hi All,

We are running Open MPI version 1.10.2, built with support for Slurm
version 16.05.0. When a user specifies "--cpu_bind=none", MPI tries to
bind by core, which segv's if there are more processes than cores.

The user reports:

What I found is that
% srun --ntasks-per-node=8 --cpu_bind