On 25/08/17 03:03, Patrick Goetz wrote:

> 1. When users submit (say) 8 long running single core jobs, it doesn't
> appear that Slurm attempts to consolidate them on a single node (each of
> our nodes can accommodate 16 tasks).

How much memory have you configured for your nodes, and how much memory
are these single-CPU jobs requesting?

That's one thing that can make Slurm start jobs on other nodes.
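
For context, a rough sketch of where those numbers live - the node
names and values below are made up for illustration, not taken from
your site:

  # slurm.conf (example values only)
  NodeName=node[01-08] CPUs=16 RealMemory=64000
  DefMemPerCPU=4000    # default per-CPU memory if a job doesn't ask

  # in a job script
  #SBATCH --ntasks=1
  #SBATCH --mem=8000

With those numbers, eight single-CPU jobs each asking for 8000MB fill
a node's 64000MB of RealMemory, so the next job has to start on
another node even though half the cores are still free.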

You can also tell it to pack single-CPU jobs onto nodes at one end of
the cluster with this SchedulerParameters option:

pack_serial_at_end
    If used with the select/cons_res plugin then put serial jobs at
    the end of the available nodes rather than using a best fit
    algorithm. This may reduce resource fragmentation for some
    workloads.
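
For reference, that's one of the SchedulerParameters set in slurm.conf,
so the relevant lines would look something like this (the select plugin
settings here are only an example, keep whatever you already have):

  # slurm.conf
  SelectType=select/cons_res
  SelectTypeParameters=CR_Core_Memory
  SchedulerParameters=pack_serial_at_end

A "scontrol reconfigure" should be enough to pick up a change to
SchedulerParameters without restarting slurmctld.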

cheers,
Chris
-- 
 Christopher Samuel        Senior Systems Administrator
 Melbourne Bioinformatics - The University of Melbourne
 Email: [email protected] Phone: +61 (0)3 903 55545
