On 2022-May-18, Joe Conway wrote:

> On 5/18/22 11:11, Alvaro Herrera wrote:
> > Apparently, if the cgroup goes over the "high" limit, the processes are
> > *throttled*.  Then if the group goes over the "max" limit, OOM-killer is
> > invoked.
>
> You may be misinterpreting "throttle" in this context. From [1]:
>
>     The memory.high boundary on the other hand can be set
>     much more conservatively. When hit, it throttles
>     allocations by forcing them into direct reclaim to
>     work off the excess, but it never invokes the OOM
>     killer.

Well, that means the backend processes don't do their expected task
(process some query) but instead they have to do "direct reclaim".  I
don't know what that is, but it sounds like we'd need to add
Linux-specific code in order for this to fix anything.  And what would
we do in such a situation anyway?  Seems like our best hope would be to
get malloc() to return NULL and have the resulting transaction abort
free enough memory that things in other backends can continue to run.

*If* there is a way to have cgroups make Postgres do that, then that
would be useful enough.
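For concreteness, the two limits under discussion are just files in the
cgroup v2 hierarchy, settable from userspace.  A minimal C sketch of
how they get written (the group path /sys/fs/cgroup/postgres and the
set_cgroup_limit() helper are made up for illustration):

    #include <stdio.h>

    /* Write a value such as "8G" or "max" into one of a cgroup's
     * memory control files, e.g. memory.high or memory.max. */
    static int
    set_cgroup_limit(const char *cgroup_dir, const char *knob,
                     const char *value)
    {
        char  path[4096];
        FILE *f;

        snprintf(path, sizeof(path), "%s/%s", cgroup_dir, knob);
        f = fopen(path, "w");
        if (f == NULL)
            return -1;
        fprintf(f, "%s\n", value);
        return fclose(f);
    }

    int
    main(void)
    {
        const char *dir = "/sys/fs/cgroup/postgres";   /* hypothetical */

        /* Above memory.high, allocations are throttled into direct
         * reclaim; above memory.max, the OOM killer is invoked. */
        if (set_cgroup_limit(dir, "memory.high", "8G") != 0 ||
            set_cgroup_limit(dir, "memory.max", "9G") != 0)
            perror("setting cgroup limit");
        return 0;
    }

Note that neither knob turns "over the limit" into a malloc() failure
in the offending process.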
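For what it's worth, here is the failure path I mean, sketched in C.
alloc_or_error() and oom_error() are stand-ins invented here; in a real
backend the report would go through ereport(ERROR, ...), which unwinds
to transaction abort and releases the backend's memory contexts rather
than exiting the process:

    #include <stdio.h>
    #include <stdlib.h>

    static void
    oom_error(size_t size)
    {
        /* Stand-in for ereport(ERROR, ...): report out-of-memory and
         * bail out.  In Postgres this would be a non-local exit that
         * aborts the transaction and frees its memory, not a process
         * exit. */
        fprintf(stderr, "ERROR: out of memory on request of size %zu\n",
                size);
        exit(1);
    }

    static void *
    alloc_or_error(size_t size)
    {
        void *p = malloc(size);

        /* This is the NULL return we would like the kernel to give us
         * when the limit is hit, instead of an OOM kill. */
        if (p == NULL)
            oom_error(size);
        return p;
    }

    int
    main(void)
    {
        char *buf = alloc_or_error(1024 * 1024);

        buf[0] = '\0';
        free(buf);
        return 0;
    }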
> > So ditch cgroups.
>
> You cannot ditch cgroups if you are running in a container.  And in
> fact most non-container installations these days are also running in
> a cgroup under systemd.

I just meant that the cgroup abstraction doesn't offer any interfaces
that we can use to improve this, not that we would be running without
them.

-- 
Álvaro Herrera               Breisgau, Deutschland  —  https://www.EnterpriseDB.com/
"How strange it is to find the words "Perl" and "saner" in such close
proximity, with no apparent sense of irony.  I doubt that Larry himself
could have managed it."         (ncm, http://lwn.net/Articles/174769/)