On Tue, 1 Apr 2025 at 21:03, Christian König <christian.koe...@amd.com> wrote:
>
> On 31.03.25 22:43, Dave Airlie wrote:
> > On Tue, 11 Mar 2025 at 00:26, Maxime Ripard <mrip...@kernel.org> wrote:
> >> Hi,
> >>
> >> On Mon, Mar 10, 2025 at 03:16:53PM +0100, Christian König wrote:
> >>> [Adding Ben since we are currently in the middle of a discussion
> >>> regarding exactly that problem]
> >>>
> >>> Just for my understanding before I deep dive into the code: this uses
> >>> a separate dmem cgroup and does not account against memcg, doesn't it?
> >> Yes. The main rationale being that it doesn't always make sense to
> >> register against memcg: a lot of devices are going to allocate from
> >> dedicated chunks of memory that are either carved out from the main
> >> memory allocator, or not under Linux supervision at all.
> >>
> >> And if there's no way to make it consistent across drivers, it's not
> >> the right tool.
> >>
> > While I agree on that, if a user can cause a device driver to allocate
> > memory that is also memory that memcg accounts, then we have to
> > interface with memcg to account that memory.
>
> This assumes that memcg should be in control of device-driver-allocated
> memory, which in some cases is intentionally not done.
>
> E.g. a server application which allocates buffers on behalf of clients
> gets a nasty denial-of-service problem if we suddenly start to account
> those buffers.

Yes, we definitely need the ability to transfer an allocation between
cgroups for this case.
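To make that concrete, here's roughly the shape I'd imagine, as a purely
hypothetical sketch: none of these helpers exist in the dmem controller
today, and all of the names and signatures below are made up.

/*
 * Hypothetical: move the dmem charge for a buffer from the cgroup that
 * allocated it (e.g. a compositor allocating on behalf of a client) to
 * the cgroup that actually owns it.
 */
int dmem_cgroup_transfer_charge(struct dmem_cgroup_region *region,
                                struct dmem_cgroup *from,
                                struct dmem_cgroup *to, u64 size)
{
        int ret;

        /* Charge the receiver first so its limit is enforced... */
        ret = dmem_cgroup_try_charge_for(to, region, size);
        if (ret)
                return ret;

        /* ...then drop the charge from the original allocator. */
        dmem_cgroup_uncharge_for(from, region, size);
        return 0;
}

Charging the destination before uncharging the source means the transfer
fails cleanly against the receiver's limit rather than ever leaving the
buffer unaccounted.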
> That was one of the reasons why my OOM killer improvement patches never
> landed (e.g. you could trivially kill X/Wayland or systemd with that).
>
> > The pathological case would be a single application wanting to use 90%
> > of RAM for device allocations, freeing it all, then using 90% of RAM
> > for normal usage. How to create a policy that would allow that with
> > dmem and memcg is difficult, since if you say you can do 90% on both
> > then the user can easily OOM the system.
>
> Yeah, completely agree.
>
> That's why the GTT size limit we already have per device and the global
> 50% TTM limit don't work as expected. People also didn't like those
> limits, and because of that we even have flags to circumvent them, see
> AMDGPU_GEM_CREATE_PREEMPTIBLE and TTM_TT_FLAG_EXTERNAL.
>
> Another problem is when, and to which process, we account things when
> eviction happens. For example, process A wants to use VRAM that process B
> currently occupies. In this case we would give both processes a mix of
> VRAM and system memory, but how do we account that?
>
> If we account to process B, then process A can fail because of process
> B's memcg limit. That creates a situation which is absolutely not
> traceable for a system administrator.
>
> But process A never asked for system memory in the first place, so we
> can't account the memory to it either, or otherwise we make the process
> responsible for things it didn't do.
>
> There are good arguments for all solutions, and there are a couple of
> blockers which rule out one solution or another for a certain use case.
> To summarize, I think the whole situation is a complete mess.
>
> Maybe there is no single solution and we need to make it somehow
> configurable?

My feeling is that we can't solve the VRAM eviction problem super
effectively, but it's also probably not going to be a major common case. I
don't think we should double-account memcg/dmem just in case we have to
evict all of a user's dmem at some point. Maybe if there was some kind of
soft memcg limit, accounted but not enforced, it might be useful for
tracking evictions. But yes, we can't have A allocating memory causing B to
fall over because we evict memory into B's memcg space and B fails to
allocate the next time it tries, and we can't have A fail in that case
either.

For the UMA GPU case, where there is no device memory or eviction problem,
perhaps we could add a configurable option to just say "account all
allocations done by this process in memcg", and state that yes, you can
work around it with allocation servers or whatever, but the behaviour for
well-behaved things is at least somewhat defined.

Dave.
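P.S. To sketch what that UMA option could look like, purely illustratively
(the names and signatures below are simplified or invented, not the real
dmem/memcg APIs):

/*
 * Hypothetical: when the opt-in knob is set, charge system-memory-backed
 * GEM allocations to the allocating process's memcg in addition to dmem.
 */
static int drm_gem_charge_pages(struct drm_device *dev, u64 size)
{
        int ret;

        ret = dmem_cgroup_try_charge(dev->dmem_region, size);
        if (ret)
                return ret;

        if (dev->account_in_memcg) {
                ret = mem_cgroup_charge_gem(current, size);
                if (ret) {
                        dmem_cgroup_uncharge(dev->dmem_region, size);
                        return ret;
                }
        }
        return 0;
}

Well-behaved processes would then see one coherent number in
memory.current, while anything funnelling allocations through a server
process still escapes, as above, but at least in a defined way.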