On 2025-04-07 15:21, Joe Conway wrote:
On 4/5/25 07:53, Ancoron Luciferis wrote:
I've been investigating this topic every now and then but to this day
have not come to a setup that consistently leads to a PostgreSQL backend
process receiving an allocation error instead of being killed externally
by the OOM killer.

Why is this a problem for me? Because while applications are accessing
their DBs (multiple services, each with its own DB, some high-frequency),
the whole server goes into recovery and kills all backends/connections.

While my applications are written to tolerate that, it also means that
during that time, especially for the high-frequency apps, events pile up,
which leads to a burst as soon as connectivity is restored. This in
turn causes peaks in resource usage in other places (event store,
in-memory buffers in the apps, ...), which sometimes triggers a whole
series of OOM killer events, just because some analytics query
went overboard.

Ideally, I'd find a configuration that only terminates one backend but
leaves the others working.

I am wondering whether there is any way to receive a real ENOMEM inside
a cgroup as soon as I try to allocate beyond its memory.max, instead of
relying on the OOM killer.
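
To illustrate what I'm after, here is a minimal allocate-and-touch sketch
(the 64 MiB step size is arbitrary): with vm.overcommit_memory=2 on the
host, malloc() eventually returns NULL with errno ENOMEM and the error
branch below runs; inside a cgroup that reaches memory.max, the process
just gets SIGKILLed while touching the pages and never sees an error.

/*
 * alloc_probe.c -- minimal sketch of the behavior I'm after:
 * allocate and touch memory in 64 MiB steps and handle a clean
 * ENOMEM. Under strict host-level overcommit the error branch
 * runs; under a cgroup memory.max limit it never does.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const size_t chunk = 64UL * 1024 * 1024;
    size_t total = 0;

    for (;;)
    {
        void *p = malloc(chunk);
        if (p == NULL)
        {
            /* the "real ENOMEM": a chance to back off gracefully */
            fprintf(stderr, "malloc failed at %zu MiB: %s\n",
                    total >> 20, strerror(errno));
            return 0;
        }
        memset(p, 0xA5, chunk);  /* touch the pages so they are charged */
        total += chunk;
        fprintf(stderr, "allocated %zu MiB\n", total >> 20);
    }
}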

I know the recommendation is to have vm.overcommit_memory set to 2, but
then that affects all workloads on the host, including critical infra
like the kubelet, CNI, CSI, monitoring, ...

I have already gone through and tested the obvious:

https://www.postgresql.org/docs/current/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT

Importantly, vm.overcommit_memory set to 2 only matters when memory is constrained at the host level.

As soon as you are running in a cgroup with a hard memory limit, vm.overcommit_memory is irrelevant.

You can have terabytes of free memory on the host, but if cgroup memory usage exceeds memory.limit_in_bytes (cgroup v1) or memory.max (cgroup v2), the OOM killer will pick the process in the cgroup with the highest oom_score and whack it.

Unfortunately there is no equivalent to vm.overcommit_memory within the cgroup.
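
About the best you can do from inside (or beside) the cgroup is look at
the damage after the fact -- the memory.events counters tell you how often
the limit was hit and the OOM killer invoked. A rough sketch (the cgroup
path is just a placeholder for whatever your setup uses):

/*
 * oom_events.c -- sketch: dump a cgroup v2 memory.events file, which
 * counts how often the limit was hit ("max") and how often the OOM
 * killer was invoked ("oom", "oom_kill"). Path is a placeholder.
 */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/fs/cgroup/postgres.slice/memory.events", "r");
    if (f == NULL)
    {
        perror("fopen memory.events");
        return 1;
    }

    char key[64];
    unsigned long val;
    /* lines look like: "low 0", "high 12", "max 3", "oom 1", "oom_kill 1" */
    while (fscanf(f, "%63s %lu", key, &val) == 2)
        printf("%s = %lu\n", key, val);

    fclose(f);
    return 0;
}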

And yes, I know that Linux cgroups v2 memory.max is not an actual hard
limit:

https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html#memory-interface-files

Read that again -- memory.max *is* a hard limit (same as memory.limit_in_bytes in cgroup v1).

   "memory.max

     A read-write single value file which exists on non-root cgroups. The
     default is “max”.

     Memory usage hard limit. This is the main mechanism to limit memory
     usage of a cgroup. If a cgroup’s memory usage reaches this limit and
     can’t be reduced, the OOM killer is invoked in the cgroup."

Yes, I know it says "hard limit", but any app can still go beyond it (maybe that's on me for assuming a "hard limit" implies an actual error when trying to exceed it). The OOM killer will kick in eventually, but not in any way that a process inside the cgroup could prevent. So there is no signal the app could react to, nothing saying "hey, you just went beyond what you're allowed, please adjust before I kill you".



If you want a soft limit use memory.high.

   "memory.high

     A read-write single value file which exists on non-root cgroups. The
     default is “max”.

     Memory usage throttle limit. If a cgroup’s usage goes over the high
     boundary, the processes of the cgroup are throttled and put under
     heavy reclaim pressure.

     Going over the high limit never invokes the OOM killer and under
     extreme conditions the limit may be breached. The high limit should
     be used in scenarios where an external process monitors the limited
     cgroup to alleviate heavy reclaim pressure."

You want to be using memory.high rather than memory.max.
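
Something like the following is all it takes (just a sketch -- the cgroup
path and the 4G value are placeholders; outside of Kubernetes a plain
"echo 4G > .../memory.high" does the same thing):

/*
 * set_high.c -- sketch: put a soft limit on a cgroup by writing its
 * memory.high file. The cgroup path and the 4G value are placeholders;
 * memory.max can stay in place as a (higher) backstop.
 */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/fs/cgroup/postgres.slice/memory.high", "w");
    if (f == NULL)
    {
        perror("fopen memory.high");
        return 1;
    }

    /* above this usage the cgroup is throttled and reclaimed, not killed */
    if (fputs("4G\n", f) == EOF || fclose(f) == EOF)
    {
        perror("write memory.high");
        return 1;
    }

    return 0;
}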

Hm, so solely relying on reclaim? I think that will just put the whole cgroup into ultra-slow mode and not actually prevent excessive memory allocation. While this may work out just fine for the PostgreSQL instance, it will certainly affect the other workloads on the same node (which in my case means more PG instances).

I also don't see a way to even try this out in a Kubernetes environment, since there doesn't seem to be any workload manifest field for setting it.


Also, I don't know what Kubernetes recommends these days, but it used to require you to disable swap. In more recent versions of Kubernetes you can run with swap enabled, but I have no idea what the default is -- make sure you run with swap enabled.

Yes, this is what I wanna try out next.


The combination of some swap being available, and the throttling under heavy reclaim will likely mitigate your problems.
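
And if you do want a signal you can react to -- the "external process
monitors the limited cgroup" part of the doc above -- a PSI trigger on the
cgroup's memory.pressure file is one way to get it. Again just a sketch,
with a placeholder cgroup path and arbitrary thresholds:

/*
 * psi_watch.c -- sketch: register a PSI trigger on a cgroup's
 * memory.pressure file and wake up whenever tasks in the cgroup
 * stalled on memory for more than 100ms within a 1s window.
 */
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char trig[] = "some 100000 1000000";  /* 100ms stall per 1s */
    struct pollfd fds;

    fds.fd = open("/sys/fs/cgroup/postgres.slice/memory.pressure",
                  O_RDWR | O_NONBLOCK);
    if (fds.fd < 0)
    {
        perror("open memory.pressure");
        return 1;
    }
    if (write(fds.fd, trig, strlen(trig) + 1) < 0)
    {
        perror("write PSI trigger");
        return 1;
    }

    fds.events = POLLPRI;
    for (;;)
    {
        if (poll(&fds, 1, -1) < 0)
        {
            perror("poll");
            return 1;
        }
        if (fds.revents & POLLERR)
        {
            fprintf(stderr, "cgroup went away\n");
            return 1;
        }
        if (fds.revents & POLLPRI)
        {
            /* here: shed load, cancel a query, page an operator, ... */
            fprintf(stderr, "heavy reclaim pressure in cgroup\n");
        }
    }
}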


Thank you for your insights; I have something to think about.

Cheers,

        Ancoron



