Dear all,

we currently see a change in the default behavior of a job step.
On our old cluster (Slurm 20.11.9) a job step takes all the resources of my
allocation.
rotscher@tauruslogin5:~> salloc --partition=interactive --nodes=1 --ntasks=1 --cpus-per-task=24 --hint=nomultithread
salloc: Pending job allocation 37851810
salloc: job 37851810 queued and waiting for resources
salloc: job 37851810 has been allocated resources
salloc: Granted job allocation 37851810
salloc: Waiting for resource configuration
salloc: Nodes taurusi6605 are ready for job
bash-4.2$ srun numactl -show
policy: default
preferred node: current
physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
cpubind: 0 1
nodebind: 0 1
membind: 0 1

If I run the same command on our new cluster, the job step takes only 1 core
instead of all of them, without any further parameter.
[rotscher@login1 ~]$ salloc --nodes=1 --ntasks=1 --cpus-per-task=24 --hint=nomultithread
salloc: Pending job allocation 9197
salloc: job 9197 queued and waiting for resources
salloc: job 9197 has been allocated resources
salloc: Granted job allocation 9197
salloc: Waiting for resource configuration
salloc: Nodes n1601 are ready for job
[rotscher@login1 ~]$ srun numactl -show
policy: default
preferred node: current
physcpubind: 0
cpubind: 0
nodebind: 0
membind: 0 1 2 3 4 5 6 7
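
The same can be seen without numactl, e.g. by printing the step task's allowed
CPU list (just a sketch for illustration, not part of the test above):

[rotscher@login1 ~]$ srun grep Cpus_allowed_list /proc/self/status

which should report only a single core if the binding is what numactl shows.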

If I add the parameter "-c 24" to the job step, it also takes all the
resources, but the step should do that by default.
[rotscher@login1 ~]$ srun -c 24 numactl -show
policy: default
preferred node: current
physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
cpubind: 0 1
nodebind: 0 1
membind: 0 1 2 3 4 5 6 7
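
To avoid hard-coding the 24, the value from the allocation can be passed
along; a sketch, assuming salloc still exports SLURM_CPUS_PER_TASK into the
allocation shell:

[rotscher@login1 ~]$ srun -c "$SLURM_CPUS_PER_TASK" numactl -show

But that only works around the issue; the step should pick up the allocation
by itself, as it did on the old cluster.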

I searched the slurm.conf documentation, the mailing list and also the
changelog, but found no reference to a matching parameter.
Does anyone of you know this behavior and how to change it?

Best wishes,
Danny
