Hi,
I'm going to have the following situation on my hands, and I would like to ask
for some suggestions on how to solve it in Slurm. We're talking about a cluster
that is not yet operational, so there are no legacy configs that need to be
taken into account.
I have two sets of physical nodes.
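For illustration only, since the exact hardware details don't matter for the
question, think of the two sets as something like this in slurm.conf (node
names, counts and sizes are placeholders):

  # first set of physical nodes
  NodeName=nodeA[001-100] CPUs=64 RealMemory=256000 State=UNKNOWN
  # second set of physical nodes
  NodeName=nodeB[001-050] CPUs=64 RealMemory=512000 State=UNKNOWN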
Hi,
I'm currently testing an approach similar to the example by Loris.
Why consider preemption? Because, in the original example, if the cluster is
saturated with long-running jobs (running for, say, two weeks), short jobs
should still be able to start right away.
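Concretely, I'm experimenting with partition-based preemption along these
lines (only a sketch: the partition names, node lists and limits are
placeholders, and whether REQUEUE or SUSPEND,GANG is the better PreemptMode is
exactly what I'm still testing):

  # slurm.conf (excerpt)
  PreemptType=preempt/partition_prio
  PreemptMode=REQUEUE

  # both partitions cover the same nodes; the higher PriorityTier of
  # "short" lets its jobs preempt jobs running in "long"
  PartitionName=long  Nodes=ALL PriorityTier=1 MaxTime=14-00:00:00 State=UP
  PartitionName=short Nodes=ALL PriorityTier=2 MaxTime=04:00:00 PreemptMode=OFF State=UP

With REQUEUE the long jobs have to be requeue-able (e.g. submitted with
--requeue); otherwise SUSPEND,GANG would probably be the gentler choice.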
Best,
Thomas
Hi,
we're currently testing possible Slurm configurations on a test system.
Eventually, the cluster is going to serve ~1000 users.
Some users are going to run lots of short jobs (a couple of minutes up to
~4 h), while others run jobs that last for days or weeks. I would like to set
things up so that short jobs can still start promptly even when the cluster is
full of long-running jobs.
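To make the two job classes concrete, I'm currently imagining something like
two overlapping partitions (partition names and limits are placeholders, and
I'm open to entirely different approaches):

  # short jobs: a couple of minutes up to ~4 hours
  PartitionName=short Nodes=ALL Default=YES MaxTime=04:00:00 State=UP
  # long jobs: up to a few weeks
  PartitionName=long  Nodes=ALL MaxTime=21-00:00:00 State=UP

The open question is how to make sure the short jobs don't starve when the
long partition fills the machine.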
Hi,
sorry, I had written an email earlier, but it apparently didn't go through.
Götz was right: slurm.epilog.clean was the problem. There was a bug in there;
I fixed it, and now it works.
Best,
Thomas
Hi,
so, I figured out that I can give some users priority access to a specific
amount of TRES by creating a QOS with the GrpTRESMins limit and the
DenyOnLimit,NoDecay flags. This works nicely.
However, I would also like to know how much of this budget has already been
consumed, and I have not yet found a way to query that.
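For reference, this is roughly what I set up, together with the command I've
been using to peek at the consumption; the QOS name, the limit and the user
are placeholders, and I'm not sure this is the intended way to read it off:

  # create the QOS with a non-decaying TRES budget
  sacctmgr add qos hiprio
  sacctmgr modify qos hiprio set GrpTRESMins=cpu=1000000 Flags=DenyOnLimit,NoDecay

  # give the QOS to the users who should get the priority budget
  sacctmgr modify user someuser set qos+=hiprio

  # dump slurmctld's in-memory limits/usage for that QOS
  scontrol show assoc_mgr flags=qos qos=hiprio

As far as I can tell, the assoc_mgr output shows the GrpTRESMins limit
together with the minutes already used, but I'd be happy to hear whether
sreport or sshare is the nicer way to get at the same number.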