The traditional way of limiting users' memory use is to set a per-process 
address-space ("as") limit in /etc/security/limits.conf and also limit the 
number of processes.  The total amount of memory that may be used is then the 
address-space limit multiplied by the number of processes, which isn't a 
good bound.  For example, if a user is compiling C++ software, a g++ process 
can take quite a lot of RAM, and there will often be plenty of shell scripts 
etc running as well.  The RAM requirement of a g++ process multiplied by the 
number of processes reasonably needed for a couple of login sessions and all 
the build scripts may be more than you want to allocate to that user.  As an 
aside, I'm thinking of how I killed some Unix servers while doing assignments 
for Unix Systems Programming when I was at university.  :-#
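For reference, the traditional setup looks something like this (the group 
name and the numbers are illustrative values I've picked for the example, 
not recommendations):

```
# /etc/security/limits.conf -- illustrative values only
# "as" caps each process's address space in KiB; "nproc" caps the
# number of processes.  Worst-case use is roughly as * nproc, which
# here is 512MiB * 100 = ~50GiB for the group as a whole.
@users   hard   as      524288
@users   hard   nproc   100
```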

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=860410

An additional problem is a bug in recent versions of sshd: the limits.conf 
settings get applied to the sshd process itself.

https://manpages.debian.org/testing/systemd/logind.conf.5.en.html

When you use systemd, systemd-logind creates a new cgroup named 
user-$UID.slice for each logged-in user.

# cat \ 
/sys/fs/cgroup/memory/user.slice/user-506.slice/memory.max_usage_in_bytes
99999744

http://tinyurl.com/mhjb8ct

I've set max_usage_in_bytes to 100M (see the above Red Hat URL for an 
explanation of this).  But it doesn't seem to work: I've written a test 
program that allocates memory and touches it via memset(), and it gets to 
the ulimit setting without being stopped by the cgroup limit.

Any suggestion on how to get this going properly?

The next problem of course will be having systemd-logind set the limit when it 
creates the cgroup.  Any suggestions on that will be appreciated.
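One avenue that might be worth trying (an assumption on my part -- I haven't 
verified which systemd versions support drop-ins on the user-.slice template): 
a drop-in file that applies a memory limit to every user slice logind creates:

```
# /etc/systemd/system/user-.slice.d/50-memory.conf -- hypothetical drop-in,
# assumes a systemd new enough to accept drop-ins on template slices
[Slice]
# cgroup v1 setting; on cgroup v2 the equivalent is MemoryMax=
MemoryLimit=100M
```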

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/
_______________________________________________
luv-main mailing list
[email protected]
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main
