Yes, dynamic DNS.
On Tue, Oct 25, 2022 at 2:17 PM Meaden, Xand wrote:
> The nodes are being removed as they aren't resolving in DNS anymore; are
> you using a dynamic system where only active hosts' names resolve?
>
> Xand
>
The nodes are being removed as they aren't resolving in DNS anymore; are you
using a dynamic system where only active hosts' names resolve?
Xand
From: slurm-users on behalf of Joe Teumer
Sent: Tuesday, October 25, 2022 7:42:16 PM
To: slurm-us...@schedmd.com
Subject:
We noticed that the slurm controller will remove nodes that it cannot reach.
How can this be disabled?
We would like to see the nodes marked down/drain instead of the controller
removing the nodes from sinfo.
/var/log/slurm/slurmctld.log
[2022-10-25T13:10:01.500] debug: Log file re-opened
[2022-1
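For reference, a rough sketch of the two things discussed above: checking whether a dropped node's name still resolves, and pinning node addresses in slurm.conf so slurmctld does not depend on dynamic DNS. Node names, addresses and hardware values below are hypothetical, and whether the slurm.conf part applies depends on how the nodes are defined (static vs. cloud/dynamic).

# On the controller host, check whether a dropped node still resolves
# and what slurmctld currently reports for it ("node01" is hypothetical):
getent hosts node01
scontrol show node node01

# slurm.conf sketch: an explicit NodeAddr means slurmctld does not have to
# look the node names up in DNS (all values here are placeholders):
NodeName=node[01-04] NodeAddr=10.0.0.[1-4] CPUs=32 State=UNKNOWN

# If these are cloud/power-saved nodes, this keeps powered-down nodes
# visible in sinfo rather than hidden:
PrivateData=cloud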
A very helpful reply, thank you!
For your "special testing config", do you just mean the
slurm.conf/gres.conf/*.conf files? So when you want to test a new version of
slurm, you replace the conf files and then restart all of the daemons?
Rob
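For reference, the swap-and-restart flow described above usually looks roughly like the sketch below; the paths and systemd unit names are assumptions for a typical install (the config directory may be /etc/slurm-llnl on some distributions).

# Stage the test configuration (file names and paths are hypothetical):
cp slurm.conf.testing /etc/slurm/slurm.conf
cp gres.conf.testing /etc/slurm/gres.conf

# Restart the controller, then slurmd on each compute node (e.g. via pdsh/clush):
systemctl restart slurmctld
systemctl restart slurmd

# Many slurm.conf-only changes can instead be picked up without a restart:
scontrol reconfigure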
Please ignore the question - the option SchedulerParameters=salloc_wait_nodes
solves the issue (a minimal sketch follows the quoted message below).
Kind regards
Gizo
> Hello,
>
> it seems that in a cluster configured for power saving, salloc does not wait
> until the nodes
> assigned to the job recover from the power down state and go back
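A minimal sketch of the setting mentioned above: salloc_wait_nodes is added to the comma-separated SchedulerParameters list in slurm.conf so that salloc waits for the allocated nodes to be ready before returning.

# In slurm.conf (append to any existing comma-separated SchedulerParameters list):
SchedulerParameters=salloc_wait_nodes

# Apply the change; a reconfigure is normally enough:
scontrol reconfigure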
"Groner, Rob" writes:
> I'm wondering OVERALL if the test suite is supposed to work on ANY
> working slurm system. I could not find any documentation on how the
> slurm configuration and nodes were required to be set up in order for
> the test to work; no indication that the test suite requires