Hi,

On Mon, Jul 06, 2009 at 05:02:45PM +0200, Arne Babenhauserheide wrote:
> On Monday, 6 July 2009 15:01:39, Da Zheng wrote:
> > What should the kernel do if it is out of resources?
>
> I'd say it should kill the offending process.
>
> Or if info about the offender isn't available, just kill all who
> request more of the resource, until it has enough free resources again
> :)
>
> That's a bit not-nice for applications, but it will likely get the
> offender quite early, because that one will most often request new
> resources.

Congratulations: you just found the fundamental shortcoming of the
Hurd/Mach architecture! ;-)

The problem is that in the multiserver architecture, we often have
server processes allocating resources on behalf of client processes.
The kernel has no clue who is really responsible for the resource
usage.

Note that this problem actually exists in other systems as well. The X
server is a typical example: when clients allocate a lot of pixmaps
(hi, Firefox), the memory usage of the server grows, and the kernel has
no clue whom to blame. It will either kill the X server, as the process
consuming the most memory (that used to happen with older Linux
versions), or spare the X server because it's running as root (which
seems to be the default behaviour nowadays, from what I heard), and
kill some random other processes instead...

In a multiserver system the situation is much worse though, of course,
as we have the client/server design applied everywhere.

As I said before, I do think that the situation could be somewhat
mitigated by introducing ugly fixed limits on various kinds of resource
usage. A *proper* fix, on the other hand, requires a way to attribute
all resource usage to the clients -- either by avoiding server-side
allocation altogether, or by keeping track somehow of on whose behalf
allocations happen. Either requires very fundamental changes to some
low-level mechanisms. Note that this was the major motivation behind
the Hurd/L4 port...

-antrik-
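As an illustration of the second option -- keeping track of on whose behalf
allocations happen -- here is a minimal, purely hypothetical C sketch of a
server that charges every allocation to the client it serves. The client IDs,
the fixed per-client quota, and the function names are invented for the
example; they are not part of any actual Hurd/Mach interface.

    /* Hypothetical sketch: a server that attributes every allocation to
     * the requesting client instead of to itself.  client_id, CLIENT_QUOTA
     * and server_alloc() are made-up names for illustration only. */
    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_CLIENTS   16
    #define CLIENT_QUOTA  (1 << 20)   /* arbitrary 1 MiB limit per client */

    static size_t charged[MAX_CLIENTS];   /* bytes attributed to each client */

    /* Allocate on behalf of a client; refuse once its quota is exhausted.
     * (The release path, crediting memory back on free, is omitted here.) */
    void *server_alloc(int client_id, size_t size)
    {
        if (client_id < 0 || client_id >= MAX_CLIENTS)
            return NULL;
        if (charged[client_id] + size > CLIENT_QUOTA)
            return NULL;               /* the *client* is out of resources */
        void *p = malloc(size);
        if (p)
            charged[client_id] += size;
        return p;
    }

    int main(void)
    {
        /* Client 3 requests a buffer; the cost lands on its own account,
         * not on the server's. */
        void *buf = server_alloc(3, 4096);
        printf("client 3 now charged with %zu bytes (%s)\n",
               charged[3], buf ? "granted" : "refused");
        free(buf);
        return 0;
    }

With accounting along these lines, an out-of-resources policy could target
the client whose account is exhausted rather than the server that happens to
be doing the allocating.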