OK, thanks for the explanation.
On Fri, 27 Sep 2019 at 15:38, Steffen Grunewald <
steffen.grunew...@aei.mpg.de> wrote:
> On Fri, 2019-09-27 at 14:58:40 +0200, Rafał Kędziorski wrote:
> > On Fri, 27 Sep 2019 at 13:50, Steffen Grunewald <
> > stef
On Fri, 27 Sep 2019 at 13:50, Steffen Grunewald <
steffen.grunew...@aei.mpg.de> wrote:
> On Fri, 2019-09-27 at 11:19:16 +0200, Juergen Salk wrote:
> > Hi Rafał,
> >
> > you may try setting `ReturnToService=2` in slurm.conf.
> >
> > Best regards
> > Jürgen
>
> Caveat: A spontaneously rebooted node will be returned to service
> automatically, so you may not notice a node that is failing and
> rebooting on its own.
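The suggested change is a single line in slurm.conf on the controller (the path below is the usual Debian location and may differ on your install; slurmctld needs a restart or "scontrol reconfigure" to pick it up):

```
# /etc/slurm-llnl/slurm.conf (location may vary)
# ReturnToService controls what happens when a DOWN node registers again:
#   0 = stay DOWN until an admin resumes it (default)
#   1 = return to service only if DOWN for being non-responsive
#   2 = return to service after any valid registration,
#       including an unexpected reboot
ReturnToService=2
```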
On Fri, 27 Sep 2019 at 08:43, Henkel, Andreas <
hen...@uni-mainz.de> wrote:
> Hi Rafal,
>
> How do you restart the nodes? If you don't use `scontrol reboot`,
> Slurm doesn't expect the nodes to reboot, so you see that reason in
> those cases.
>
> Best
> Andreas
>
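A minimal sketch of the reboot Andreas refers to (the node name is a guess based on the cluster description below; available options vary slightly between Slurm versions):

```
# Tell slurmctld the reboot is intentional: the node is rebooted once
# it is idle, and is not marked down when it registers again.
scontrol reboot ASAP pi-4-node1

# By contrast, a plain "shutdown -r now" on the node itself is what
# Slurm treats as an unexpected failure.
```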
Hi,
I'm working with slurm-wlm 18.08.5-2 on a Raspberry Pi cluster:
- 1 Pi 4 as manager
- 4 Pi 4 nodes
This works fine, but after every restart of the nodes I get this:
cluster@pi-manager:~ $ sinfo
PARTITION   AVAIL  TIMELIMIT  NODES  STATE  NODELIST
devcluster*    up   infinite      4   down  pi-4-n
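For nodes that are already stuck in the down state after such a restart, the usual manual recovery (nodelist guessed, since the sinfo output above is truncated) looks like this:

```
# Show the reason Slurm recorded for marking the nodes down:
sinfo -R

# Clear the DOWN state and return the nodes to service:
scontrol update NodeName=pi-4-node[1-4] State=RESUME
```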