For what it's worth, my research group attacked essentially this exact problem quite some time ago. We built a modified Linux kernel that we called Redline that was impervious to fork bombs, malloc bombs, and so on. No process could take down the system, much less an unprivileged one. I think some of the ideas we described back then would be worth adopting or adapting today (the code is of course hopelessly out of date; we published our paper on this at OSDI 2008).
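(For the curious: the kind of unprivileged "bomb" we're talking about takes only a few lines of C. The sketch below is purely illustrative, not our actual demo code, and the 1 MiB allocation size is arbitrary; on a stock kernel it will quickly make the machine unresponsive, so run it only in a disposable VM, if at all.)

    /*
     * Illustrative combined fork/malloc bomb (NOT the Redline demo code).
     * Every process keeps forking, and each iteration leaks and touches
     * 1 MiB, so the process table and physical memory both fill up fast.
     */
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        for (;;) {
            fork();                      /* every process forks again: exponential growth */
            char *p = malloc(1 << 20);   /* grab 1 MiB ...                                 */
            if (p)
                memset(p, 1, 1 << 20);   /* ... and touch it so the pages are resident     */
        }
    }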
We had a demo where we would run two identical systems, side by side, with the same workloads (a number of videos playing simultaneously), but with one running Redline and the other running stock Linux. We would launch a fork/malloc bomb on both. The Redline system barely hiccuped. The stock Linux kernel would freeze and become totally unresponsive (or panic). It was a great demo, but also a pain, since we invariably had to restart the stock Linux box :).

Redline: first class support for interactivity in commodity operating systems

While modern workloads are increasingly interactive and resource-intensive (e.g., graphical user interfaces, browsers, and multimedia players), current operating systems have not kept up. These operating systems, which evolved from core designs that date to the 1970s and 1980s, provide good support for batch and command-line applications, but their ad hoc attempts to handle interactive workloads are poor. Their best-effort, priority-based schedulers provide no bounds on delays, and their resource managers (e.g., memory managers and disk I/O schedulers) are mostly oblivious to response time requirements. Pressure on any one of these resources can significantly degrade application responsiveness. We present Redline, a system that brings first-class support for interactive applications to commodity operating systems. Redline works with unaltered applications and standard APIs. It uses lightweight specifications to orchestrate memory and disk I/O management so that they serve the needs of interactive applications. Unlike real-time systems that treat specifications as strict requirements and thus pessimistically limit system utilization, Redline dynamically adapts to recent load, maximizing responsiveness and system utilization. We show that Redline delivers responsiveness to interactive applications even in the face of extreme workloads including fork bombs, memory bombs, and bursty, large disk I/O requests, reducing application pauses by up to two orders of magnitude.

Paper here: https://www.usenix.org/legacy/events/osdi08/tech/full_papers/yang/yang.pdf

And links to code here: https://emeryberger.com/research/redline/

There has been some recent follow-on work in this direction: see this work out of Remzi and Andrea's lab at Wisconsin:
http://pages.cs.wisc.edu/~remzi/Classes/739/Fall2016/Papers/splitio-sosp15.pdf

-- emery

--
Professor Emery Berger
College of Information and Computer Sciences
University of Massachusetts Amherst
www.emeryberger.org, @emeryberger

On Sun, Aug 18, 2019 at 2:53 PM Chris Murphy <li...@colorremedies.com> wrote:
> On Sun, Aug 18, 2019 at 2:55 PM Gordan Bobic <gor...@redsleeve.org> wrote:
> > On Sun, Aug 18, 2019 at 9:07 PM Kevin Kofler <kevin.kof...@chello.at> wrote:
> >> Gordan Bobic wrote:
> >> > Right, but is it better that _everything_ else suffers with more memory
> >> > pressure for the handful of relatively infrequent use cases for which
> >> > ulimit can be used to explicitly raise the limit?
> >>
> >> Well, as I wrote, a lower limit might actually make sense on ARM. But modern
> >> x86 computers have gigabytes of RAM, so 1 MiB is ridiculously small there.
> >> So this would have to be an architecture-specific setting for ARM.
> >
> > That may be so, but this thread started off with memory pressure also being
> > an issue for regular desktop x86 use.
>
> I think optimizations like this are worth doing, and compile-time defaults
> should get smarter about doing them; such optimizations have a lot of
> intrinsic value.
> But in any case, I think it's fair to say that we're in very broad agreement
> that no matter what options get used or what optimizations do or don't
> happen, unprivileged processes should not be able to effectively take down
> the system. That to me is really incredible to discover.
>
> Everything else: no swap at all and tolerate abrupt and random oom-killer
> killoffs, double the swap or use /dev/zram, or use 1/4 RAM for swap, or
> throw a metric f ton of RAM at it, all of those are different ways of
> dodging a cannon ball. Dodging the problem doesn't actually fix the problem.
> If your dodge doesn't work out, you get hit by a cannon ball. Not OK. It's
> an unprivileged task! I'm aghast.
>
> --
> Chris Murphy
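(A side note on the ulimit discussion quoted above: the shell's ulimit builtin is just a front end to setrlimit(2), so a launcher, shell, or service manager can impose the same caps per process. A minimal sketch, with purely illustrative values, of a process lowering its own limits before running risky work:

    /*
     * Minimal sketch: cap address space and process count with setrlimit(2).
     * The values are purely illustrative. An unprivileged process can lower
     * its limits freely; raising them again is bounded by the hard limit.
     */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit as_lim = { .rlim_cur = 512UL << 20, .rlim_max = 512UL << 20 }; /* 512 MiB address space  */
        struct rlimit nproc  = { .rlim_cur = 64, .rlim_max = 64 };                   /* <= 64 processes per UID */

        if (setrlimit(RLIMIT_AS, &as_lim) != 0)
            perror("setrlimit(RLIMIT_AS)");
        if (setrlimit(RLIMIT_NPROC, &nproc) != 0)
            perror("setrlimit(RLIMIT_NPROC)");

        /* ... exec or run the bursty/untrusted work here ... */
        return 0;
    }

Of course, as the thread argues, per-process caps like these are opt-in workarounds; Redline-style admission control lives in the kernel and applies to everything.)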
_______________________________________________
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org