Hi Juergen,

On Fri, Jul 12, 2019 at 03:21:31PM +0200, Juergen Salk wrote:

> Dear all,
>
> I have configured pam_slurm_adopt in our Slurm test environment by
> following the corresponding documentation:
>
> https://slurm.schedmd.com/pam_slurm_adopt.html
>
> I've set `PrologFlags=contain´ in slurm.conf and also have task/cgroup
> enabled along with task/affinity (i.e.
> `TaskPlugin=task/affinity,task/cgroup´).
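(For reference for others following along: the PAM side of this setup
is usually a single account line in /etc/pam.d/sshd, and memory
confinement additionally requires telling the cgroup plugin to
constrain RAM in cgroup.conf. The snippet below is a rough sketch, not
a drop-in config -- module order and the exact options are
site-specific:

    # /etc/pam.d/sshd (fragment) -- place relative to your other
    # account modules as appropriate for your distribution
    account    required     pam_slurm_adopt.so

    # cgroup.conf -- task/cgroup only enforces memory limits
    # when asked to:
    ConstrainCores=yes
    ConstrainRAMSpace=yes

Now, on to your actual question.)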
<snip>

> Thus, the ssh session seems to be totally unconstrained by cgroups in
> terms of memory usage. In fact, I was able to launch a test
> application from the interactive ssh session that consumed almost all
> of the memory on that node. That's obviously undesirable for a shared
> user environment with jobs from different users running side by side
> on one node at the same time.
>
> I suppose this is nevertheless the expected behavior and just the way
> it is when using pam_slurm_adopt to restrict access to the compute
> nodes? Is that right? Or did I miss something obvious?

I think we opened an issue for this at

https://bugs.schedmd.com/show_bug.cgi?id=5920

with a proposed fix. That was against Slurm 17.11, but SchedMD promised
they'd pick it up for inclusion in 19.05.x. It does require some
changes to the Slurm code, which are here (for 17.11):

https://github.com/hpcugent/slurm/pull/28/files

I hope this helps you out a bit.

Regards,

--
Andy
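P.S. A quick way to see whether an adopted session is actually
memory-constrained is to inspect its cgroup from inside the ssh
session itself. The paths below are illustrative: they assume cgroup
v1 mounted at the default location (see CgroupMountpoint in
cgroup.conf), and <jobid> of course depends on the job the session was
adopted into:

    # Which cgroups did this shell land in? With PrologFlags=contain,
    # adopted processes should show up under the job's extern step.
    cat /proc/self/cgroup

    # The memory limit applied to the job (cgroup v1 layout):
    cat /sys/fs/cgroup/memory/slurm/uid_$(id -u)/job_<jobid>/memory.limit_in_bytes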