Hi there,
We were excited to find the job_container/tmpfs plugin, which neatly allows
us to provide local scratch space and ensures that /dev/shm
gets cleaned up after a job finishes. Unfortunately, we found that it
does not play nicely with autofs, which we use to provide networked
project and s
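
For reference, a minimal sketch of a job_container.conf that may work around
the autofs clash, assuming a recent Slurm release that supports the Shared
option (the BasePath below is illustrative, not a recommendation):

###
# job_container.conf - illustrative sketch only
###
AutoBasePath=true
# BasePath is a hypothetical local path for per-job namespaces
BasePath=/var/spool/slurm/containers
# Shared=true uses shared mount propagation so autofs-triggered mounts
# made outside the job still appear inside the job's namespace;
# check your Slurm version's job_container.conf man page before relying on it
Shared=true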
Hi Cristóbal,
I would guess you need to set up a cgroup.conf file along these lines:
###
# Slurm cgroup support configuration file
###
ConstrainRAMSpace=yes
ConstrainSwapSpace=yes
AllowedRAMSpace=100
AllowedSwapSpace=0
MaxRAMPercent=100
MaxSwapPercent=0
#ConstrainDevices=yes
MemorySwappiness=0
TaskAffinity=no
Cgrou
Hi Slurm community,
Recently we found a small problem triggered by one of our jobs. We have
MaxMemPerNode=532000 set for our compute nodes in the slurm.conf file;
however, a job that started with mem=65536 was able, after
hours of execution, to grow its memory usage durin
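
If memory is only checked at scheduling time, a running job can grow past its
request unless enforcement is enabled on the nodes. A minimal sketch of the
slurm.conf pieces involved in cgroup-based enforcement (assuming cgroup
support is available on the compute nodes; values are illustrative):

###
# slurm.conf fragment - illustrative sketch only
###
ProctrackType=proctrack/cgroup
TaskPlugin=task/cgroup
JobAcctGatherType=jobacct_gather/cgroup
# Combined with ConstrainRAMSpace=yes in cgroup.conf, the job's memory is
# capped at its --mem request by the kernel, rather than only being used
# as a scheduling hint.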