Your nodes are hyperthreaded (ThreadsPerCore=2).  With SelectType=select/cons_res 
and SelectTypeParameters=CR_Core, Slurm allocates whole cores, so a job is always 
given _all threads_ of each core it is assigned.  Each 1-CPU request is therefore 
handed both threads of its core, which is why scontrol reports NumCPUs=2; and since 
the node has only 8 physical cores (2 sockets x 4 cores), no more than 8 such jobs 
can run at once even though 16 CPUs are advertised.
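

You can see this on the node itself.  Something like the following (output 
trimmed; the values are what I'd expect with your 8 single-CPU jobs running, 
not copied from your system) should show each 1-CPU job charged a full core:


$ scontrol show node moria | grep -o 'CPUAlloc=[0-9]*'
CPUAlloc=16        # 8 single-CPU jobs x 2 threads/core, so the node is "full"
$ squeue -t PD     # and the 9th 1-CPU job sits pending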


On our development-partition nodes we configure the threads as cores, e.g.


NodeName=moria CPUs=16 Boards=1 SocketsPerBoard=2 CoresPerSocket=8 ThreadsPerCore=1


to force Slurm to schedule the threads separately.
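

If you make that change (update slurm.conf on all hosts and restart slurmctld 
and the node's slurmd), a quick sanity check would look something like this; 
the job ID is just a placeholder and the exact output layout varies by Slurm 
version:


$ scontrol show node moria | grep -E 'CPUTot|ThreadsPerCore'
# expect CPUTot=16 with ThreadsPerCore=1
$ scontrol show job <jobid> | grep NumCPUs
NumNodes=1 NumCPUs=1 NumTasks=1 CPUs/Task=1   # a 1-CPU request now consumes just one CPU


After that, 16 single-CPU jobs should be able to run concurrently.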



> On Feb 7, 2019, at 12:10 PM, Xiang Gao <qasdfgtyu...@gmail.com> wrote:
> 
> Hi All,
> 
> We configured Slurm on a server with 8 GPUs and 16 CPUs and want to use Slurm 
> to schedule both CPU and GPU jobs. We observed unexpected behavior: although 
> there are 16 CPUs, Slurm only schedules 8 jobs to run, even when some of the 
> jobs are not asking for any GPU. If I inspect the details with 
> `scontrol show job`, I see something strange for jobs that ask for just 1 
> CPU:
> 
> NumNodes=1 NumCPUs=2 NumTasks=1 CPUs/Task=1
> 
> If I understand these concepts correctly, with the number of nodes being 1, 
> the number of tasks 1, and the number of CPUs per task 1, there is in 
> principle no way the final number of CPUs can be 2. I'm not sure whether I 
> misunderstand the concepts, misconfigured Slurm, or whether this is a bug, so 
> I'm asking for help.
> 
> The relevant parts of the config are:
> 
> # COMPUTE NODES
> NodeName=moria CPUs=16 Boards=1 SocketsPerBoard=2 CoresPerSocket=4 ThreadsPerCore=2 RealMemory=120000 Gres=gpu:gtx1080ti:2,gpu:titanv:3,gpu:v100:1,gpu:gp100:2 State=UNKNOWN
> PartitionName=queue Nodes=moria Default=YES MaxTime=INFINITE State=UP
> 
> # SCHEDULING  
> FastSchedule=1 
> SchedulerType=sched/backfill 
> GresTypes=gpu 
> SelectType=select/cons_res 
> SelectTypeParameters=CR_Core
> 
> Best,
> Xiang Gao


::::::::::::::::::::::::::::::::::::::::::::::::::::::
Jeffrey T. Frey, Ph.D.
Systems Programmer V / HPC Management
Network & Systems Services / College of Engineering
University of Delaware, Newark DE  19716
Office: (302) 831-6034  Mobile: (302) 419-4976
::::::::::::::::::::::::::::::::::::::::::::::::::::::



