Hello.
I'm running Slurm 18.08.1 and have configured limits for our users using QOS.
The limits are set on the default QOS, and most users belong to it.
# sacctmgr show qos
Name  Priority  GraceTime  Preempt  PreemptMode  Flags  UsageThres  UsageFactor  GrpTRES  GrpTRESMins  GrpTRESRunMin  GrpJobs  GrpSubmit
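
For reference, the limits were defined with sacctmgr roughly along these lines. This is only a sketch -- the QOS name "normal", the user name and the numbers are placeholders, not our actual values:

  # placeholder QOS name and limit values, shown only to illustrate the setup
  sacctmgr modify qos normal set GrpTRES=cpu=512 GrpJobs=200 MaxSubmitJobsPerUser=50
  # make that QOS the default for a user (placeholder user name)
  sacctmgr modify user where name=someuser set DefaultQOS=normal

QOS limits are only enforced when AccountingStorageEnforce in slurm.conf includes them, e.g. AccountingStorageEnforce=associations,limits,qos.
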
On 5/22/19 6:34 AM, Aravindh Sampathkumar wrote:
> Nothing has changed recently, and today I noticed that the QOS limits
> which were working until now have silently stopped working. A user was
> able to submit enough jobs to saturate the cluster single-handedly,
> annoying other users.

Can you check
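
A couple of quick checks that show whether QOS enforcement is still active -- these are standard commands, and the user name below is a placeholder:

  # confirm slurm.conf still enforces limits/QOS
  scontrol show config | grep AccountingStorageEnforce
  # confirm the QOS still carries the limits
  sacctmgr show qos format=Name,Priority,GrpTRES,GrpJobs,GrpSubmit
  # confirm the user is still associated with that QOS (placeholder user name)
  sacctmgr show assoc where user=someuser format=User,Account,Partition,QOS,DefaultQOS
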
All,
So I am experiencing great frustration with associations and the
performance of slurmdbd with a MariaDB backend.
A simple example: I have a user with access to 4 partitions, each with
the same 1200 account codes.
I want to retire two of the partitions, but there is no simple way
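
To give a sense of the scale, 4 partitions x 1200 accounts is roughly 4800 association records for that one user. Listing and pruning them looks something like the following -- the user and partition names are placeholders, and the delete relies on the partition-based association spec from the sacctmgr man page:

  # list every association that user holds (placeholder user name)
  sacctmgr show assoc where user=someuser format=Cluster,Account,User,Partition
  # drop the associations tied to a partition being retired (placeholder names)
  sacctmgr delete user where name=someuser partition=oldpart
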