We are pleased to announce the availability of Slurm release 22.05.0.
To highlight some new features in 22.05:
- Support for dynamic node addition and removal
(https://slurm.schedmd.com/dynamic_nodes.html)
- Support for native Linux cgroup v2 operation
- Newly added plugins to support HPE Slingshot networks
Hi Ole
I only added the OverSubscribe option because things didn't work without it -
though in fact, it appears not to have made any difference.
I thought the RealMemory option just meant not to offer any jobs to a node that
didn't have AT LEAST that amount of RAM.
My large node has more than 64GB RAM (
Hi Jake,
Firstly, which Slurm version and which OS do you use?
Next, try simplifying by removing the OverSubscribe configuration. Read
the slurm.conf manual page about OverSubscribe; it looks a bit tricky.
The RealMemory=1000 setting is extremely low and might prevent jobs from
starting! Run "slurmd -C" on the node to see the hardware that Slurm detects.
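For reference, RealMemory in slurm.conf is the memory (in MiB) that the
scheduler believes the node has, so RealMemory=1000 advertises only about
1 GB. A minimal sketch, with hypothetical node names and values:

```
# slurm.conf fragment (hypothetical names/values):
# RealMemory is in MiB; a job requesting more memory than this
# will never be scheduled on the node.
NodeName=compute001 CPUs=16 RealMemory=64000 State=UNKNOWN
PartitionName=main Nodes=compute001 Default=YES MaxTime=INFINITE State=UP
```

The values reported by "slurmd -C" on the node itself are a safe starting
point for the NodeName line.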
Hi
I am just building my first Slurm setup and have got everything running - well,
almost.
I have a two-node configuration. All of my setup exists on a single Hyper-V
server, and I have divided up the resources to create my VMs.
One node I will use for heavy-duty work; this is called compute001.
O
On 26/05/2022 11:48, Diego Zuccato wrote:
Still can't
export TMPDIR=...
from the TaskProlog script. Surely I'm missing something important. Maybe
TaskProlog is called as a subshell? In that case it can't alter the caller's
env... But IIUC someone made it work, and that confuses me...
Seems I finall
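The subshell intuition is right: TaskProlog runs as a separate process, so a
plain `export` inside it is lost when it exits. Per the slurm.conf(5) man
page, slurmstepd instead parses the script's standard output, and a line of
the form "export NAME=value" sets NAME in the task's environment. A minimal
sketch (the /scratch path and job-ID naming are hypothetical):

```shell
#!/bin/bash
# Hypothetical TaskProlog sketch: slurmstepd reads this script's
# stdout; printing "export NAME=value" injects NAME into the task's
# environment. Changing this shell's own env has no effect on the task.
tmpdir_line="export TMPDIR=/scratch/job_${SLURM_JOB_ID:-demo}"
echo "$tmpdir_line"
```

Point TaskProlog= in slurm.conf at a script like this; the job's processes
then see the TMPDIR value, while the script's own environment is discarded.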
On 25/05/2022 14:42, Mark Dixon wrote:
https://slurm.schedmd.com/faq.html#tmpfs_jobcontainer
https://slurm.schedmd.com/job_container.conf.html
I would be interested in hearing how well it works - it's so buried in
the documentation that unfortunately I didn't see it until after I
rolled a
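For anyone else finding this thread: the two links above describe the
job_container/tmpfs plugin, which gives each job a private, namespaced /tmp
that is cleaned up at job end. A minimal configuration sketch, assuming a
hypothetical /local/scratch base path on the compute nodes:

```
# slurm.conf fragment:
JobContainerType=job_container/tmpfs
PrologFlags=Contain

# job_container.conf fragment (/local/scratch is a hypothetical path):
AutoBasePath=true
BasePath=/local/scratch
```

See job_container.conf(5) for the full parameter list; PrologFlags=Contain
is required for the plugin to take effect.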