Hello all,

I am encountering some unexpected behavior where the jobs (queued and running) of one specific user have negative NICE values and therefore an increased priority. The user is not privileged in any way and cannot explicitly set a negative nice value, e.g. by adding "--nice=-INT". There is also no QoS that would allow this (is that even possible?). The cluster uses the "priority/multifactor" plugin with weights set for Age, Fairshare and JobSize.
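For reference, this is roughly how I verified both points (run as that user and as root respectively; the script name and the value are just placeholders):

[user@login ~]$ sbatch --nice=-100 job.sh   # rejected with a permission error
[root@mgmt ~]# sacctmgr show qos format=Name,Priority,Flags

As far as I can tell there is no nice-related QoS option at all, so I only looked at Priority and Flags.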

This is the only user on the whole cluster for whom this occurs. From what I can tell, he/she is not doing anything out of the ordinary. However, in the job scripts the user does set a nice value of "0". The user also uses a "strategy" of submitting the same job to multiple partitions; as soon as one of these jobs starts, all other jobs with the same job name are put on hold. A rough sketch of that workflow follows below.
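Reconstructed from the job scripts, it looks roughly like this (the partition names, the job name, and the hold mechanism are my guesses, not taken verbatim from the user):

#!/bin/bash
# Submit the same script to several partitions under one job name.
for p in partA partB partC; do
    sbatch --partition="$p" --job-name=myjob --nice=0 job.sh
done

# Once one of them starts, the remaining pending copies with the
# same name are put on hold, presumably via something like:
scontrol hold JobName=myjob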

Does anyone have an idea how this could happen? Does Slurm internally adjust the NICE values in certain situations? (I searched the sources but couldn't find anything that would suggest this).

The Slurm version is 23.02.1.

Example squeue output:

[root@mgmt ~]# squeue -u USERID -O JobID,Nice
JOBID               NICE
14846760            -5202
14846766            -8988
14913146            -13758
14917361            -15103


Any hints are appreciated.

Kind regards
Sebastian
