Hi Nick

I'm curious: what makes you think the job is using all CPU cores once it is running?
Could you share the output of the following 'ps' command while the job is running?

ps -p <pid> -L -o pid,tid,psr,pcpu

Execute it on the compute node where your job is running. 
<pid> is the process id of the job/task. 
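
In case it helps, one way to find the PID on the node (a sketch; 'scontrol
listpids' prints the process ids belonging to a job on the node where it is
executed; 12345 stands in for your actual job id):

  # on the compute node; 12345 is a placeholder job id
  scontrol listpids 12345
  # then, for each PID it reports:
  ps -p <pid> -L -o pid,tid,psr,pcpu   # one row per thread; psr = CPU the thread last ran on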

Thank you.

-----Original Message-----
From: slurm-users <slurm-users-boun...@lists.schedmd.com> On Behalf Of DENOU, 
MEHDI
Sent: Friday, September 21, 2018 1:17 PM
To: Slurm User Community List <slurm-users@lists.schedmd.com>; slurm-users 
<slurm-us...@schedmd.com>
Subject: Re: [slurm-users] Job allocating more CPUs than requested

Hello Nick,

What is the result with only -n 1?
Could you provide your slurm.conf?

A lot of parameters are involved in the allocation process. The choice between
a few cores and a whole node depends mostly on "SelectType" and on "Shared" in
the partition definition.
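
For illustration only, a sketch with made-up node and partition names (not
taken from your cluster): with SelectType=select/linear a job is always handed
whole nodes, while select/cons_res with CR_Core allocates individual cores:

  # slurm.conf excerpt (illustrative values)
  SelectType=select/cons_res
  SelectTypeParameters=CR_Core
  # "Shared" (renamed OverSubscribe in newer Slurm) is set per partition
  PartitionName=debug Nodes=node[01-04] Default=YES Shared=YES State=UP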

Regards,

-----Original Message-----
From: slurm-users <slurm-users-boun...@lists.schedmd.com> On Behalf Of Nicolas 
Bock
Sent: Friday, September 21, 2018 6:54 PM
To: slurm-users <slurm-us...@schedmd.com>
Subject: [slurm-users] Job allocating more CPUs than requested

Hi,

A job run with

  sbatch --ntasks=1 \
    --ntasks-per-node=1 \
    --cpus-per-task=1 \
    --ntasks-per-core=1 \
    --sockets-per-node=1 \
    --cores-per-socket=1 \
    --threads-per-core=1

shows as requesting 1 CPU while in the queue, but is then allocated all CPU
cores once it is running. Why is that?
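
For reference, one way to see what Slurm actually granted, assuming a
placeholder job id of 12345:

  scontrol show job 12345 | grep -i numcpus   # the matching line shows the CPU count actually allocated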

Any suggestions would be greatly appreciated,

Nick


