Re: [slurm-users] Limit concurrent gpu resources

2019-04-24 Thread Prentice Bisbal
Here's how we handle this here: create a separate partition named debug that also contains that node. Give the debug partition a very short time limit, say 30-60 minutes: long enough for debugging, but too short to do any real work. Make the priority of the debug partition much higher than the regular partition's. A minimal sketch of that layout is below.
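A minimal slurm.conf sketch of that layout; the node name, CPU count, and exact limits here are assumptions for illustration, not taken from the original post:

# slurm.conf -- two partitions sharing the same GPU node.
# "debug": short MaxTime, higher PriorityTier so its jobs are scheduled first.
# "gpu":   the normal partition for full-length work.
NodeName=gpu01 Gres=gpu:8 CPUs=32 State=UNKNOWN
PartitionName=debug Nodes=gpu01 MaxTime=00:30:00   PriorityTier=10 Default=NO  State=UP
PartitionName=gpu   Nodes=gpu01 MaxTime=7-00:00:00 PriorityTier=1  Default=YES State=UP

A short job submitted with "sbatch -p debug --gres=gpu:1 ..." would then be considered ahead of pending jobs in the lower-tier partition on the same node.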

Re: [slurm-users] Limit concurrent gpu resources

2019-04-24 Thread Renfro, Michael
We put a ‘gpu’ QOS on all our GPU partitions, and limit jobs per user to 8 (our GPU capacity) via MaxJobsPerUser. Extra jobs get blocked, allowing other users to queue jobs ahead of the extras.

# sacctmgr show qos gpu format=name,maxjobspu
      Name MaxJobsPU
---------- ---------
       gpu         8
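A hedged sketch of how such a QOS could be created and attached; the partition and node names are assumptions, and the limit of 8 comes from the message above:

# create the QOS and cap jobs per user at the node's GPU count
sacctmgr add qos gpu
sacctmgr modify qos gpu set MaxJobsPerUser=8

# slurm.conf: attach the QOS to the GPU partition so the limit applies there;
# QOS limits are only enforced if AccountingStorageEnforce includes limits,qos
PartitionName=gpu Nodes=gpu01 QOS=gpu State=UP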

[slurm-users] Limit concurrent gpu resources

2019-04-24 Thread Mike Cammilleri
Hi everyone, We have a single node with 8 GPUs. Users often pile up lots of pending jobs and use all 8 at the same time, so a user who just wants to run a short debug job and needs one of the GPUs has to wait too long for a GPU to free up. Is there a way, with gres.conf or some other mechanism, to limit how many GPUs are in use concurrently so that one stays free for short debug jobs?
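For reference, a minimal sketch of the GRES setup such a node would use (device paths and node name are assumptions); the replies above suggest handling the scheduling side with a QOS limit or a separate debug partition rather than gres.conf itself:

# gres.conf on the 8-GPU node (device paths assumed)
Name=gpu File=/dev/nvidia[0-7]

# slurm.conf entries for the node
GresTypes=gpu
NodeName=gpu01 Gres=gpu:8 State=UNKNOWN

# a short debug job would then request a single GPU, e.g.:
#   sbatch --gres=gpu:1 --time=00:30:00 debug_job.sh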