We've actually been patching the Slurm cgroup plug-in to enable configurable
per-node and per-partition swap settings. For example, on a node X with 64
cores, a job gets (N_core,job / N_core,tot) * 8 GiB added to its physical RAM
limit, where 8 GiB is some fraction of the total swap available. It's still n
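The proportional share described above could be sketched as follows (the function name and the 8 GiB pool size are illustrative assumptions, not part of the actual plug-in patch):

```python
def swap_allowance_gib(job_cores: int, node_cores: int,
                       swap_pool_gib: float = 8.0) -> float:
    """Swap (GiB) added to a job's physical RAM limit, proportional
    to the job's share of the node's cores: (N_core,job / N_core,tot) * pool."""
    if not 0 < job_cores <= node_cores:
        raise ValueError("job_cores must be in (0, node_cores]")
    return (job_cores / node_cores) * swap_pool_gib

# A 16-core job on a 64-core node gets a quarter of the 8 GiB pool:
print(swap_allowance_gib(16, 64))  # → 2.0
```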
Hi Eg.
if you are using cgroups (as it seems you are, if I read your other post
correctly), these two lines in your cgroup.conf should do the trick:
ConstrainSwapSpace=yes
AllowedSwapSpace=0
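For context, those two lines would sit in a cgroup.conf roughly like the sketch below; the surrounding parameters are an assumption about a typical setup, not from the original post:

```
# Hypothetical cgroup.conf sketch; only the last two lines are from the post.
CgroupAutomount=yes
ConstrainRAMSpace=yes     # also constrain physical RAM (assumed desired)
ConstrainSwapSpace=yes
AllowedSwapSpace=0        # percent of allowed RAM added as swap; 0 = none extra
```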
Regards,
Hermann
PS: BTW, we are planning *not* to use this setting, as right now we are
looking into allowing job