Hi all,

We configured Slurm on a server with 8 GPUs and 16 CPUs and want to use Slurm to schedule both CPU and GPU jobs. We are seeing unexpected behavior: although there are 16 CPUs, Slurm only schedules 8 jobs to run, even when some of the waiting jobs do not ask for any GPU.
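To make the scenario concrete, here is a minimal sketch of the kind of CPU-only submission I mean; the job name and the `srun hostname` payload are just placeholders, not our actual workload:

```bash
#!/bin/bash
#SBATCH --job-name=cpu-only-test   # illustrative name only
#SBATCH --partition=queue          # the partition from our slurm.conf
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
# Note: no --gres=gpu:... line, so the job does not request any GPU.

srun hostname   # placeholder payload standing in for our CPU-only work
```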
If I inspect the detailed information with `scontrol show job`, I see something strange on some jobs that ask for just 1 CPU:

```
NumNodes=1 NumCPUs=2 NumTasks=1 CPUs/Task=1
```

If I understand these concepts correctly, with the number of nodes being 1, the number of tasks being 1, and the number of CPUs per task being 1, there should be no way for the final number of CPUs to be 2. I am not sure whether I am misunderstanding the concepts, have misconfigured Slurm, or whether this is a bug, so I am asking for help here. The relevant parts of our configuration are:

```
# COMPUTE NODES
NodeName=moria CPUs=16 Boards=1 SocketsPerBoard=2 CoresPerSocket=4 ThreadsPerCore=2 RealMemory=120000 Gres=gpu:gtx1080ti:2,gpu:titanv:3,gpu:v100:1,gpu:gp100:2 State=UNKNOWN
PartitionName=queue Nodes=moria Default=YES MaxTime=INFINITE State=UP

# SCHEDULING
FastSchedule=1
SchedulerType=sched/backfill
GresTypes=gpu
SelectType=select/cons_res
SelectTypeParameters=CR_Core
```

Best,
Xiang Gao