I see - yes, to clarify, we are specifying memory for each of these jobs, and there is enough memory on the nodes for both types of jobs to be running simultaneously.
On Fri, Nov 1, 2019 at 1:59 PM Brian Andrus <toomuc...@gmail.com> wrote:

> I ask if you are specifying it, because if not, slurm will assume a job
> will use all the memory available.
>
> So without specifying, your big job gets allocated 100% of the memory so
> nothing could be sent to the node. Same if you don't specify for the
> little jobs. It would want 100%, but if anything is running there, 100%
> is not available as far as slurm is concerned.
>
> Brian
>
> On 11/1/2019 10:52 AM, c b wrote:
>
> > yes, there is enough memory for each of these jobs, and there is enough
> > memory to run the high resource and low resource jobs at the same time.
> >
> > On Fri, Nov 1, 2019 at 1:37 PM Brian Andrus <toomuc...@gmail.com> wrote:
> >
> >> Are you specifying memory for each of the jobs?
> >>
> >> Can't run a small job if there isn't enough memory available for it.
> >>
> >> Brian Andrus
> >>
> >> On 11/1/2019 7:42 AM, c b wrote:
> >>
> >> I have:
> >> SelectType=select/cons_res
> >> SelectTypeParameters=CR_CPU_Memory
> >>
> >> On Fri, Nov 1, 2019 at 10:39 AM Mark Hahn <h...@mcmaster.ca> wrote:
> >>
> >>> > In theory, these small jobs could slip in and run alongside the
> >>> > large jobs,
> >>>
> >>> what are your SelectType and SelectTypeParameters settings?
> >>> ExclusiveUser=YES on partitions?
> >>>
> >>> regards, mark hahn.
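A minimal sketch of what "specifying memory for each job" looks like at submit time, assuming jobs are submitted with sbatch; the script names and sizes here are illustrative, not from the thread:

```shell
# With SelectTypeParameters=CR_CPU_Memory, Slurm treats memory as a
# consumable resource. A job that does not request memory explicitly
# (and with no DefMemPerNode/DefMemPerCPU set) is allocated the node's
# full memory, which prevents small jobs from backfilling alongside it.

# High-resource job: request only the memory it actually needs.
sbatch --mem=100G --cpus-per-task=16 big_job.sh

# Low-resource jobs: with explicit, smaller requests they can be
# scheduled on the same node's leftover memory and CPUs.
sbatch --mem=2G --cpus-per-task=1 small_job.sh
```

With both requests explicit, the scheduler can pack the small jobs onto nodes already running a large job, as long as the sum of requested memory and CPUs fits on the node.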