Thanks Paul for taking the time to look further into this. You are indeed
correct: adding a default mode (which is then overridden by each
partition's setting) keeps Slurm happy with that configuration. Moreover
(after restarting the daemons, etc. per the documentation) everything seems
to be working.
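For the archives, the pattern looks roughly like this (the partition names,
node lists and PriorityTier values below are just placeholders, and I'm
assuming PreemptType=preempt/partition_prio here since that is the plugin
for priority-based preemption between partitions):

    # Cluster-wide defaults; the global PreemptMode line is what was missing
    PreemptType=preempt/partition_prio
    PreemptMode=CANCEL

    # Each partition then overrides the default as needed
    PartitionName=normal   Nodes=node[01-10] PriorityTier=10 PreemptMode=OFF    Default=YES
    PartitionName=scavenge Nodes=node[01-10] PriorityTier=1  PreemptMode=CANCEL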
My concern was your config inadvertently having that line commented out
and then seeing problems. If it wasn't, then no worries at this point.
We run preempt/partition_prio on our cluster and have a mix of
partitions using PreemptMode=OFF and PreemptMode=REQUEUE, so I know that
combination works.
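Schematically (with made-up partition names) that is:

    PreemptType=preempt/partition_prio
    PreemptMode=REQUEUE
    PartitionName=requeue_part  Nodes=ALL PriorityTier=1  PreemptMode=REQUEUE
    PartitionName=priority_part Nodes=ALL PriorityTier=10 PreemptMode=OFF

Jobs in the low-PriorityTier partition get requeued when the high-PriorityTier
partition needs the resources, while jobs in the PreemptMode=OFF partition are
never preempted.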
Thanks Paul,
I don't understand what you mean by having a typo somewhere. I mean, that
configuration works just fine right now, whereas if I add the commented-out
line, any Slurm command will just abort with the error "PreemptType and
PreemptMode values incompatible". So, assuming there is a typo, I can't see
where it would be.
At least in the example you are showing, you have PreemptType commented
out, which means it will fall back to the default. PreemptMode=CANCEL should
work; I don't see anything in the documentation that indicates it
wouldn't. So I suspect you have a typo somewhere in your conf.
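If the controller is up with the config you think it is running, something like

    scontrol show config | grep -i preempt

will show the PreemptMode and PreemptType values actually in effect, which is
a quick way to catch a stray or shadowed setting.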
-Paul Edmon-
On 1/11/
I would like to add a preemptable queue to our cluster. Actually, I already
have. We simply want jobs submitted to that queue to be preempted if there are
no resources available for jobs in other (high-priority) queues.
Conceptually very simple, no conditionals, no choices, just what I wrote.
However I