Hi Steffen,
We are using Slurm on Debian Stretch at SURFsara on our LISA cluster.
We've been using the Debian Slurm packages (
https://salsa.debian.org/hpc-team/slurm-wlm) with a couple of patches,
although we're looking into a different option now.
Anyway, the daemons probably won't start because they're …
Kind regards,
Martijn Kruiten
Hi Fabio,
My guess is that you can (partly) solve this by using the correct state
in slurm.conf. Either CLOUD or FUTURE might be what you're looking for.
See `man slurm.conf`.
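To illustrate, a minimal sketch of what such a node definition could look like in slurm.conf (the node names and sizes here are made up):

    # Hypothetical node definition; CLOUD keeps the node out of service
    # until its slurmd registers, FUTURE hides it until you change its state.
    NodeName=node[001-004] CPUs=16 RealMemory=64000 State=CLOUD

With FUTURE you would later bring a node in with something like `scontrol update NodeName=node001 State=RESUME`.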
Kind regards,
Martijn Kruiten
On Fri, 2019-05-17 at 09:17, Verzelloni Fabio wrote:
> Hello,
> I have …

… a couple of nodes that were already idle to begin with. The
RebootProgram is /sbin/reboot, so nothing out of the ordinary.
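For reference, the relevant pieces look roughly like this (the node list is only an example):

    # slurm.conf
    RebootProgram=/sbin/reboot

    # asking Slurm to reboot nodes once they are idle:
    scontrol reboot node[001-002]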
Best regards,
Martijn Kruiten
--
| System Programmer | SURFsara | Science Park 140 | 1098 XG Amsterdam |
| T +31 6 20043417 | martijn.krui...@surfsara.nl | www.surfsara.nl |
We pinpointed it to `ConstrainDevices=yes` in cgroup.conf. The solution
was to add `/dev/*` to cgroup_allowed_devices_file.conf; we did not
have anything in there before. We're now looking into the specific
device that pmi2 needs.
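For context, a sketch of the two files as they stand now (the path is the one we use; your packaging may put it elsewhere):

    # cgroup.conf
    ConstrainDevices=yes
    AllowedDevicesFile=/etc/slurm/cgroup_allowed_devices_file.conf

    # cgroup_allowed_devices_file.conf
    /dev/null
    /dev/urandom
    /dev/zero
    /dev/*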
Martijn Kruiten
On Thu, 2018-11-01 at 18:48 +0100, Bas van der … wrote:
… they are running 1 or 4 jobs on it, because with
ExclusiveUser=YES all the resources of that node are occupied either way.
Of course we could do some post-processing on the standard accounting
output, but is there a smarter way to approach this?
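For instance, the raw data we would be post-processing comes from something along these lines (sacct options as documented in its man page):

    sacct -a -X --parsable2 \
          --format=JobID,User,NodeList,AllocNodes,Start,End

and we would then have to sum up per-node wall time ourselves, which is exactly the step we would like to avoid.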
Regards,
Martijn Kruiten
--
| System Programmer | SURFsara | Science Park 140 | 1098 XG Amsterdam |
| T +31 6 20043417 | martijn.krui...@surfsara.nl | www.surfsara.nl |