Hi,
sorry, I had written an email but it apparently didn't go through
Götz was right: slurm.epilog.clean was the problem. There was a bug in it;
I fixed it and now it works.
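In case it helps anyone else: slurm.epilog.clean is the example epilog script that ships with Slurm, hooked in via the Epilog parameter in slurm.conf, roughly like this (the path is just an example, adjust to your install):

    Epilog=/etc/slurm/slurm.epilog.clean

slurmd runs the epilog on every allocated node when a job finishes, and a non-zero exit code drains the node, so a bug in that script can quickly take nodes out of service.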
Best,
Thomas
I am running a GPU cluster where nodes are mostly off to save on
electricity. I have run into the problem that if I set
'MinTRES=gres/gpu=1' in the QoS for user-account associations, waking up
nodes on-demand stops working for these users. Jobs are allocated on all
running nodes but if a user s
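For reference, a minimum-TRES limit like the one described above is normally set on the QoS with sacctmgr; the QoS name below is only a placeholder:

    sacctmgr modify qos gpu_qos set MinTRESPerJob=gres/gpu=1

With that QoS attached to the user-account associations, every job submitted under it must request at least one GPU.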
We have these cards in some sd650v1 servers.
You get two nodes in a 1U configuration, but they are attached; you can only pull
both out of the rack at once.
Ours are slightly older, so we only have 1x 1Gb on-board per server, plus 1x
200Gb HDR port on the B server, which provides a “virtual” 200G
Hello,
This information can be found in the output of "scontrol show assoc_mgr
qos=".
best regards
Maciej Pawlik
Wed, 28 Feb 2024 at 16:04, thomas.hartmann--- via slurm-users <
slurm-users@lists.schedmd.com> wrote:
> Hi,
> so, I figured out that I can give some users priority access for
Thanks a lot!
On 01.03.24 at 20:58, Maciej Pawlik via slurm-users wrote: