For the first question: you should be able to define each node's core count, 
hyperthreading, and other hardware details in slurm.conf. That lets Slurm 
schedule (well-behaved) tasks onto each node without anything getting overloaded.
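
For example (a minimal sketch; the hostnames, core counts, and memory sizes 
below are made up, and running "slurmd -C" on each node will print the real 
values for you):

    # Hypothetical node definitions in slurm.conf -- adjust to your hardware.
    NodeName=node01 CPUs=8  Sockets=1 CoresPerSocket=4 ThreadsPerCore=2 RealMemory=16000
    NodeName=node02 CPUs=16 Sockets=2 CoresPerSocket=4 ThreadsPerCore=2 RealMemory=32000
    PartitionName=main Nodes=node01,node02 Default=YES State=UP

    # Schedule at core granularity so several jobs can share a node
    # (assumes you want per-core rather than whole-node allocation):
    SelectType=select/cons_tres
    SelectTypeParameters=CR_Core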

For the second question, about jobs that aren't well-behaved (e.g. a job that 
requests 1 CPU but starts multiple parallel threads or MPI processes), you'll 
also want to set up cgroups to constrain each job's processes to its share of 
the node, so a 1-core job that starts N threads will end up with each thread 
getting a 1/N share of a CPU.
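
A minimal sketch of that setup, assuming your Slurm build includes the cgroup 
plugins (the memory line is optional and just illustrative):

    # In slurm.conf -- hand process tracking and task confinement to cgroups:
    ProctrackType=proctrack/cgroup
    TaskPlugin=task/cgroup

    # In cgroup.conf (lives alongside slurm.conf):
    ConstrainCores=yes       # pin each job's processes to its allocated cores
    ConstrainRAMSpace=yes    # optional: also enforce the job's memory request

With ConstrainCores=yes, the 1-core/N-thread job above gets pinned to its 
single allocated core and the kernel time-slices its threads there, instead of 
the job spilling onto cores that belong to other jobs.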

On Jan 28, 2020, at 6:12 AM, zz <anand6...@gmail.com> wrote:

Hi,

I am testing Slurm for a small cluster, and I just want to know whether there 
is any way I could set a maximum job limit per node. I have nodes with 
different specs running under the same QOS. Please ignore this if it is a 
stupid question.

I would also like to know what will happen when a process running on a 
dual-core system requires, say, 4 cores at some step.

Thanks
