support RawUsage = 253632 s, i.e. 4227 min
>>> If the QOS "support" RawUsage > GrpTRESMins, Slurm should prevent any job
>>> from starting for this account, if it works as expected.
>>> 2) Run the benchmark to check that the GrpTRESMins limit is enforced
>>> against the QOS RawUsage
>> toto@login1:~/TEST$ sbatch TRESMIN.slurm
>> Submitted batch job 3687
>> toto@login1:~/TEST$ squeue
>> JOBID ADMIN_COMM MIN_MEMOR SUBMIT_TIME PRIORITY PARTITION QOS USER STATE TIME_LIMIT TIME NODES ...
>> [output truncated: job 3687 RUNNING, TIME 0:02, 1 node, submitted 2022-06-30T19:36:42]
The job is running even though the QOS "support" RawUsage exceeds GrpTRESMins.
Is there anything wrong with my control process that invalidates the result?
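For reference, a sketch of the commands behind this check (the QOS name
"support" and the limit value come from my test setup and are placeholders):

  # set a GrpTRESMins limit on the QOS, lower than its accumulated usage
  sacctmgr modify qos support set GrpTRESMins=cpu=4000
  # read back the accumulated raw usage of the QOS
  scontrol -o show assoc_mgr qos=support flags=qos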
Thanks
Gérard
[ http://www.cines.fr/ ]
> De: "gerard gil"
> À: "Slurm-users"
Hi Miguel,
>If I understood you correctly, your goal is to limit the number of minutes
>each project can run. By associating each project with a Slurm account that
>has a NoDecay QoS, you will achieve your goal.
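For concreteness, a sketch of the setup I understand you to mean (the QOS name
"projqos", the account name "myproject", and the limit value are placeholders):

  # one QOS per project, with usage that never decays
  sacctmgr add qos projqos
  sacctmgr modify qos projqos set Flags=NoDecay GrpTRESMins=cpu=4227
  # tie the project account to that QOS
  sacctmgr add account myproject
  sacctmgr modify account name=myproject set QOS=projqos DefaultQOS=projqos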
Here is what I want to do:
"All jobs submitted to an account regardless th
Hi Miguel,
OK, I didn't know this command.
I'm not sure I understand how it works with regard to my goal.
I used the following command, inspired by the one you gave me, and I obtain a
UsageRaw for each QOS.
scontrol -o show assoc_mgr -accounts=myaccount Users=" "
Do I have to sum up all QO
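To show what I mean by summing, a sketch, assuming the per-QOS records from
assoc_mgr expose a UsageRaw=<seconds> field (the account name is from my test;
the filtering and the pipeline are only illustrative):

  scontrol -o show assoc_mgr accounts=myaccount flags=qos \
    | grep -o 'UsageRaw=[0-9.]*' \
    | awk -F= '{sum += $2} END {print sum, "seconds in total"}'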
Hi Miguel,
I modified my test configuration to evaluate the effect of NoDecay.
I modified all QOS adding NoDecay Flag.
toto@login1:~/TEST$ sacctmgr show QOS
Name Priority GraceTime Preempt PreemptExemptTime PreemptMode Flags UsageThres UsageFactor GrpTRES GrpTRESMins GrpTRESRunMin GrpJobs ...
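For the record, this is the kind of command I ran on each QOS (the QOS name is
a placeholder; note that set Flags= replaces the QOS's existing flag list):

  sacctmgr modify qos projqos set Flags=NoDecay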
Hi Miguel,
Good !!
I'll try these options on all existing QOS and see if everything works as
expected.
I'll let you know the results.
Thanks a lot
Best,
Gérard
----- Original Message -----
> From: "Miguel Oliveira"
> To: "Slurm-users"
> Cc: "slurm-users"
> Sent: Friday, 24 June 2022 14:07:
Hi Miguel,
> Why not? You can have multiple QoSs and you have other techniques to change
> priorities according to your policies.
Does this answer my question?
"If all configured QOS use NoDecay, we can take advantage of the FairShare
priority with Decay and all jobs GrpTRESRaw with NoDecay ?"
Hi Miguel,
It sounds good!
But does it mean you have to request this "NoDecay" QOS to benefit from the
fairshare priority?
Does this also mean that if all the QOS we use are created with NoDecay, we can
take advantage of the FairShare priority while NoDecay lets all jobs count
against the GrpTRESMins limit?
Hi Ole and B/H,
Thanks for your answers.
You're right B/H, and since I tuned the TRESBillingWeights option to count only
CPUs, in my case: number of reserved cores = "TRES billing cost".
You're right again, I forgot the PriorityDecayHalfLife parameter, which is also
used by the fairshare Multifactor Priority.
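For anyone following along, both settings live in slurm.conf; a sketch with
illustrative values (the half-life, partition name, and weights are not mine):

  # fairshare usage halves every 7 days
  PriorityDecayHalfLife=7-0
  # bill only CPUs, so billing = number of reserved cores
  PartitionName=compute TRESBillingWeights="CPU=1.0"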
Hi,
a new strange behaviour.
I'm using the sshare command to get the current values of GrpTRESRaw and
GrpTRESMins.
> toto@login1:~/TEST$ sshare -A myproject -u " " -o account,user,GrpTRESRaw%80,GrpTRESMins
> Account User GrpTRESRaw GrpTRESMins
> ---------- ---------- [output truncated]
Hello,
I am using Slurm 19.05 and I am trying to figure out how the CPU GrpTRESRaw is
calculated for a job.
I would like to use GrpTRESMins to limit a project to an allotted amount of
hours.
Compared with the limit process as defined in the documentation, my tests show
some strange results.
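A worked example of what I expect, under my reading of the documentation (the
numbers are hypothetical):

  # a job on 4 CPUs running for 60 minutes should add
  # 4 * 60 = 240 cpu-minutes to the account's GrpTRESRaw,
  # which is then compared against GrpTRESMins=cpu=<limit>
  sshare -A myproject -u " " -o Account,User,GrpTRESRaw%80,GrpTRESMins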