Hi Tina,
We have an additional partition with a partition QOS that raises the limits and
allows short jobs to run over the limits when nodes are idle. On submission to
the standard partitions, we automatically add that extra partition via a
job_submit plugin.
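
For reference, a minimal job_submit/lua sketch of that idea (the partition
names here are placeholders, and the raised limits themselves sit on the extra
partition's QOS, i.e. the QOS= option on its PartitionName line in slurm.conf):

    -- job_submit.lua: append an extra "overflow" partition to jobs that were
    -- submitted to one of the standard partitions
    local standard = { ["cpu"] = true, ["highmem"] = true }  -- placeholder names
    local extra    = "overflow"                               -- placeholder name

    function slurm_job_submit(job_desc, part_list, submit_uid)
        -- only handles single-partition submissions, for brevity
        if job_desc.partition ~= nil and standard[job_desc.partition] then
            -- Slurm accepts a comma-separated partition list; the job starts
            -- in whichever listed partition can run it first
            job_desc.partition = job_desc.partition .. "," .. extra
        end
        return slurm.SUCCESS
    end

    function slurm_job_modify(job_desc, job_rec, part_list, submit_uid)
        return slurm.SUCCESS
    end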

Best,
Andreas 

> On 04.09.2019 at 18:55, Christopher Benjamin Coffey <chris.cof...@nau.edu> wrote:
> 
> Hi Tina,
> 
> I think you could just have a qos called "override" that has no limits, or 
> maybe just high limits. Then, just modify the job's qos to be "override" with 
> scontrol. Based on your setup, you may also have to update the job's account 
> to an "override" type account with no limits.
> 
> We do this from time to time.
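> For example, something like this (the names and job id are placeholders, and 
> depending on your config you may not need the sacctmgr association step):
> 
>    sacctmgr add qos override
>    sacctmgr modify user name=jdoe set qos+=override   # let the association use it
>    scontrol update jobid=123456 qos=override
>    # if the account is the limiting factor, move the job as well:
>    scontrol update jobid=123456 account=unlimited_acct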
> 
> Best,
> Chris
> 
> —
> Christopher Coffey
> High-Performance Computing
> Northern Arizona University
> 928-523-1167
> 
> 
> On 9/2/19, 12:47 PM, "slurm-users on behalf of Tina Fora" 
> <slurm-users-boun...@lists.schedmd.com on behalf of tf...@riseup.net> wrote:
> 
>    Hello,
> 
>    Is there a way to force a job to run that is being held back with the
>    reason QOSGrpCpuLimit? This comes from the QOS limits we have in place.
>    For the most part this works great, but every once in a while we have
>    idle nodes and I'd like to force the job to run.
> 
>    Tina
> 