Hi Rodrigo,
We had indeed overlooked this. The problem is that, in general, our jobs
need more than 2 days of resources, which is why we set the wall time in
our batch scripts to the maximum allowed by the partition.
One thing we could try is to set the wall time to ~46h for the "l
Hi Jeremy,
If all jobs have the same time limit, backfill is impossible. The
documentation says: "Effectiveness of backfill scheduling is dependent upon
users specifying job time limits, otherwise all jobs will have the same
time limit and backfilling is impossible". I don't know how to overcome that.
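To illustrate the point from the documentation: backfill can only slot a small job in front of a waiting large one if it knows the small job will finish in time, which requires per-job limits below the partition maximum. A hedged sketch of such a batch script (the partition name, times, and application are made up for illustration):

```shell
#!/bin/bash
#SBATCH --job-name=short-task
#SBATCH --partition=normal      # hypothetical partition name
#SBATCH --nodes=1
#SBATCH --time=04:00:00         # well below the partition max, so backfill can
                                # fit this job into idle time in front of a
                                # pending multi-node job

srun ./my_app                   # hypothetical application
```

With distinct --time values across jobs, the backfill scheduler can compute when resources will actually free up, instead of assuming every job runs to the partition limit.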
Hi Jérémy,
On Wednesday, 12 January 2022 at 16:59, Jérémy Lapierre
wrote:
> Hi To all slurm users,
>
> We have the following issue: jobs with highest priority are pending
> forever with "Resources" reason. More specifically, the jobs pending
> forever ask for 2 full nodes but all other jobs from other users
Hi Jeremy,
I had a similar behavior a long time ago, and I decided to set
SchedulerType=sched/builtin so that X nodes would empty of jobs and that
high-priority job requesting more than one node could run. It is not ideal,
but the cluster has low load, so a user who requests more than one node
doesn't delay t
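For reference, the scheduler choice is a single line in slurm.conf. A minimal sketch, with the backfill alternative shown commented out for comparison (the SchedulerParameters values are illustrative; bf_window is in minutes and, per the slurm.conf documentation, should cover the longest allowed time limit):

```shell
# slurm.conf sketch: strict priority-order scheduling, no backfill
SchedulerType=sched/builtin

# Alternative kept for comparison: backfill with a wider planning window
#SchedulerType=sched/backfill
#SchedulerParameters=bf_window=2880,bf_continue
```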
Hi to all Slurm users,
We have the following issue: jobs with highest priority are pending
forever with "Resources" reason. More specifically, the jobs pending
forever ask for 2 full nodes but all other jobs from other users
(running or pending) need only 1/4 of a node, then pending jobs ask
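The symptom described above can be inspected with standard squeue queries, for example:

```shell
# List pending jobs with their state reason (e.g. Resources, Priority)
squeue --state=PENDING --format="%.10i %.9P %.8u %.2t %.10M %.6D %R"

# Show the scheduler's expected start times for pending jobs
squeue --start
```

If the expected start time of the 2-node job keeps slipping while the 1/4-node jobs start around it, that points at the backfill/time-limit interaction discussed elsewhere in this thread.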