You could limit the resources with a QOS. The limits are not per node, but
you have some options:
https://slurm.schedmd.com/qos.html#limits
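For example (the QOS name and limit values below are only placeholders), a
per-user TRES cap can be created with sacctmgr and then attached to a
partition:

    # create a QOS and cap GPUs/CPUs per user (example values)
    sacctmgr add qos gpucap
    sacctmgr modify qos gpucap set MaxTRESPerUser=gres/gpu=4,cpu=64
    # then reference it from slurm.conf, e.g.:
    # PartitionName=longjobs ... QOS=gpucap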
Otherwise you could just enforce the limits per partition and put weight
on the nodes, so that the CPU nodes are allocated before the GPU nodes.
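As a minimal sketch of the weight approach (node names and CPU counts are
made up, adjust to your cluster): the scheduler picks nodes with the lowest
Weight first, so something like

    NodeName=cpu[01-21] CPUs=32 Weight=1
    NodeName=gpu[01-12] CPUs=32 Gres=gpu:4 Weight=100

in slurm.conf lets jobs land on the CPU-only nodes before the GPU nodes.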
Have you checked t
Dear all,
we are using SLURM 18.08.6; we have 12 nodes with 4 x GPUs each and 21
CPU-only nodes. We have 3 partitions (sketched in slurm.conf terms after
the list):
gpu: only GPU nodes,
cpu: only CPU nodes,
longjobs: all nodes.
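Roughly, in slurm.conf terms (node names here are only illustrative):

    PartitionName=gpu      Nodes=gpu[01-12]            PriorityTier=10
    PartitionName=cpu      Nodes=cpu[01-21]            PriorityTier=10
    PartitionName=longjobs Nodes=gpu[01-12],cpu[01-21] PriorityTier=1 PreemptMode=SUSPEND
    # cluster-wide: PreemptType=preempt/partition_prio PreemptMode=SUSPEND,GANG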
Jobs in longjobs have the lowest priority and can be preempted to
suspend. Our goal is to allow using GP