On Saturday, 20 October 2018 9:57:16 AM AEDT Noam Bernstein wrote:

> If not, is there another way to do this?
You can use --exclusive for jobs that want whole nodes. You will likely also
want to use:

    SelectTypeParameters=CR_Core_Memory,CR_ONE_TASK_PER_CORE

to ensure jobs are given one core (with all of its associated threads) per
task. Also set DefMemPerCPU so that jobs that forget to ask for memory still
get a default amount of RAM per core (there's a rough sketch of all this at
the end of this mail).

> And however we achieve this, how does slurm decide what order to assign
> nodes to jobs in the presence of jobs that don't take entire nodes. If we
> have two 16-core nodes and two 8-task jobs, are they going to be packed
> into a single node, or each on its own node (leaving no free node for
> another 16-task job that requires an entire node)?

As long as you don't use CR_LLN (least loaded node) as your select parameter
and you don't use pack_serial_at_end in SchedulerParameters, then Slurm (I
believe) is meant to use a best-fit algorithm, so those two 8-task jobs
should end up packed onto the same node.

However, what can still happen when you have lots of variable-size jobs with
very different walltimes is that you start off with a nicely packed system,
but holes then open up as jobs finish. So hopefully you'll have a nice mix
of job sizes that will fit those holes.

All the best,
Chris
-- 
 Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
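
P.S. In case it's useful, here's a minimal slurm.conf sketch of the above.
It assumes the cons_res select plugin, and the DefMemPerCPU value is just a
placeholder you'd tune to your nodes' RAM-per-core ratio:

    # Treat cores and memory as the consumable resources, and hand out
    # whole cores (with all their threads), one per task.
    SelectType=select/cons_res
    SelectTypeParameters=CR_Core_Memory,CR_ONE_TASK_PER_CORE

    # Default memory (in MB) per allocated core, for jobs that don't
    # request any. 4000 is a made-up number, not a recommendation.
    DefMemPerCPU=4000

    # Things to avoid if you want best-fit packing:
    #  - CR_LLN in SelectTypeParameters
    #  - pack_serial_at_end in SchedulerParameters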
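
P.P.S. On the submission side a whole-node job then looks something like
this (16 tasks to match the 16-core nodes in your example; the program name
is made up):

    #!/bin/bash
    #SBATCH --ntasks=16
    #SBATCH --exclusive    # take the whole node
    srun ./my_mpi_app

and you can double-check what your cluster is actually configured with via:

    scontrol show config | egrep 'SelectType|SchedulerParameters|DefMemPerCPU'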