You may find that reconfiguring the partition to have a QoS of 'normal' will
result in the GPU limit being applied, as intended. This is set in the
partition configuration in slurm.conf.
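For reference, a minimal sketch of what that partition line might look like in slurm.conf (the partition and node names here are only placeholders, adjust for your site):

  PartitionName=gpu Nodes=node[01-10] Default=NO MaxTime=INFINITE State=UP QOS=normal

With a QOS attached to the partition this way, the per-user GPU limits defined on the 'normal' QOS should be enforced for jobs running in that partition.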
Killian
On Thu, 7 May 2020 at 18:25, Theis, Thomas <thomas.th...@teledyne.com> wrote:
On Wed, 6 May 2020 at 23:44, Theis, Thomas <thomas.th...@teledyne.com> wrote:
Still have the same issue when I
On Wed, 6 May 2020 at 04:53, Theis, Thomas <thomas.th...@teledyne.com> wrote:
Hey Killian,
I tried to
with generic GRES, it's worth a read!
Killian
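For anyone following along, a minimal sketch of the kind of generic GRES setup being referred to, with node names, GPU counts and device paths as placeholders:

  # slurm.conf (other node parameters omitted)
  GresTypes=gpu
  NodeName=node[01-10] Gres=gpu:4

  # gres.conf on each GPU node
  Name=gpu File=/dev/nvidia[0-3]

Jobs would then request GPUs with something like 'sbatch --gres=gpu:1', which is what any per-user or per-job GPU limits count against.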
On Thu, 23 Apr 2020 at 18:19, Theis, Thomas <thomas.th...@teledyne.com> wrote:
Hi everyone,
First message: I am trying to find a good way, or multiple ways, to limit the
usage of jobs per node or the use of GPUs per node, without blocking a user
from submitting them.
Example: We have 10 nodes, each with 4 GPUs, in a partition. We allow a team of
6 people to submit jobs to any or all
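A hedged sketch of the kind of per-user cap being asked about here, assuming AccountingStorageEnforce includes 'limits' in slurm.conf, and with the numbers below as illustrations only:

  sacctmgr modify qos where name=normal set MaxTRESPerUser=gres/gpu=4
  sacctmgr modify qos where name=normal set MaxJobsPerUser=4

With QOS limits like these, jobs over the cap are normally left pending rather than rejected at submission, which fits the "without blocking a user from submitting them" requirement; adding the DenyOnLimit flag to the QOS would turn that into an outright rejection instead.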