On Tue, 2023-05-16 at 09:16 +0100, Stuart Henderson wrote:
> The strategy is that the sysadmin should configure datasize limits so
> that processes hit memory allocation failures if they try to
> overreach.
> Defaults are setup with typical use-cases and machines in mind but you
> might know better and adjust.
If you read the original report: the person was not looking for perfect
behavior when running out of memory. Instead, the attempt was to create an
OOM situation in which something very visible happens. Whether that is a
carefully thought-out strategy to kill the least necessary process, or the
OS overwriting your boot disk in revenge, doesn't matter here. Even after
countless cp commands the system did not run out of memory, and whatever
would have happened in that case never materialized.

This supports my original proposition: memory doesn't get leaked; instead,
the RAM is put to use intelligently (e.g. as cache) to improve system
performance.

> The kernel doesn't cope particularly well if you actually run out of
> memory. Long delays, deadlocks, panics are likely. Yes bugs, but they
> are difficult ones, and the above strategy (i.e. use the system's built
> in protection mechanisms for userland processes) is not a bad one.
>
> (I understand that even on Linux with "OOM killer" it is often still
> advisable to reboot when possible after triggering it.)