On Fri, Dec 20, 2024 at 5:17 PM Diego Nieto Cid <dnie...@gmail.com> wrote:
>
> > Also make sure to avoid limiting the kernel's own maps.
>
> Oh right, I need to check for the kernel map. Even though the default
> means no limit, it may be nice to check at the enforcing point whether
> the allocation happens against the kernel map or not.
In case you don't realize, there's more than one kernel map. kernel_map
is the main one, but there are also ipc_kernel_map and device_io_map,
which are submaps of kernel_map, and others could potentially be added.
The way to check whether a map belongs to the kernel is
vm_map_pmap(map) == kernel_pmap.

> To be honest, I'm trying to make the zzuf testsuite pass. Currently it
> fails on a test that exhausts memory [1] when the driver calls it with
> the memory limited to 256M [2][3].
>
> [1] https://github.com/samhocevar/zzuf/blob/master/test/bug-memory.c
> [2] https://github.com/samhocevar/zzuf/blob/master/test/check-zzuf-M-max-memory#L41
> [3] https://github.com/samhocevar/zzuf/blob/master/src/myfork.c#L261

Hm, so that code doesn't exactly seem to care that it's address space
size that it's limiting, as long as it's some way to limit memory usage.

> > do we actually want this limit?
>
> Hrm, don't know :( I gathered from here [4] that it's something we'd
> like to have. But I may have misunderstood Samuel on that.
>
> [4] https://lists.gnu.org/archive/html/bug-hurd/2024-12/msg00133.html

I cannot speak for Samuel of course :) but that too sounds like we'd
want to put limits on memory usage and not on address space. On the
other hand, I can see how implementing address space size limiting is
easy, while tracking used physical memory is a lot less simple. To throw
an idea out there, I wonder if the latter could be approximated by
keeping track of the number of pages that had to be faulted in, but
you'd need to somehow decrease this value back when memory is being
unmapped.

> > "Because Unix has it" should not be, by itself, considered enough of
> > a reason to bring something into Mach.
>
> It was just easier to do in Mach, where the address space size was
> already accounted for.
>
> But I guess it could be enforced in glibc by accounting for the total
> memory allocated by brk(), mmap() or mremap() calls.
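To illustrate the check I mean, here is a tiny userland model in plain C. To be clear: the struct layout and the map_enter_allowed() helper are invented for illustration, not the real GNU Mach declarations; only the vm_map_pmap(map) == kernel_pmap idea is the actual mechanism.

```c
#include <assert.h>
#include <stddef.h>

/* Userland model only -- NOT actual GNU Mach code.  The names mirror
 * Mach concepts (vm_map, pmap), but the layouts are illustrative. */

typedef struct pmap { int dummy; } pmap_t;

typedef struct vm_map {
    pmap_t *pmap;        /* what vm_map_pmap(map) would return */
    size_t  size;        /* currently mapped bytes */
    size_t  size_limit;  /* 0 = no limit (the default) */
} vm_map_t;

static pmap_t kernel_pmap_storage;
static pmap_t *const kernel_pmap = &kernel_pmap_storage;

#define vm_map_pmap(map) ((map)->pmap)

/* Would allocating `len` more bytes in `map` be allowed?  kernel_map
 * and all of its submaps (ipc_kernel_map, device_io_map, ...) share
 * kernel_pmap, so one pmap comparison exempts every kernel map. */
static int map_enter_allowed(const vm_map_t *map, size_t len)
{
    if (vm_map_pmap(map) == kernel_pmap)
        return 1;                   /* never limit the kernel's own maps */
    if (map->size_limit == 0)
        return 1;                   /* default: no limit */
    return map->size + len <= map->size_limit;
}
```

The point being that the single pmap comparison covers submaps too, so you don't have to enumerate kernel_map, ipc_kernel_map, device_io_map and whatever gets added later.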
No, indeed, if we are to have an implementation of RLIMIT_AS in the
Hurd, Mach is where it makes most sense to put it, since it's in charge
of managing memory, and it can actually enforce things rather than just
making Unix-level calls fail.

What I meant is I'd also like new Mach features to make sense from
Mach's own perspective, rather than being justified by the Hurd's need
to implement various warts of the Unix APIs. Consider that Mach was
designed to have more than just Unix hosted on top of it, and that it
was maintained by a separate group of people from the GNU Hurd
developers. The Hurd even has a complete proc server, which basically
keeps track of various additional task attributes (such as PID and
parent) that give Mach tasks the semantics of Unix processes; it
would've been simpler to add these into Mach directly, but very
fortunately that was not done, although there were proposals to that
end [0].

[0]: https://lists.gnu.org/archive/html/bug-hurd/2013-09/msg00024.html

GNU Mach is basically a version of Mach that is, first of all,
maintained and not abandoned; second, complies with GNU standards wrt
things like the build system; and third, yes, has a few new API-visible
features over the original Mach. But these features (protected
payloads, memory object proxies, new task notifications, gsync,
thread_terminate_release, ...) are well-justified from Mach's own
perspective.

Now, limiting the amount of memory available to a task (or a group of
tasks) is of course also very much justified and logical. I'm saying:
let's think about the best design for this feature, and then, if it
ends up being aligned with a feature that Unix has, we can expose it as
that Unix feature at the Hurd/glibc level. But it may end up looking
more like cgroup's memory.max than like RLIMIT_AS, for example.

On the other hand, resource accounting is famously not properly doable
in Mach: OSF Mach at least had ledgers (which were never functional
AFAIK); ours doesn't even do that.
Welp, I guess that does sound discouraging :|

Sergey