On Sun, 23 May 2021 at 13:10, Sergio Belkin (<seb...@gmail.com>)
wrote:

>
>
> On Sun, 23 May 2021 at 12:58, Neal Gompa (<ngomp...@gmail.com>)
> wrote:
>
>> On Sun, May 23, 2021 at 11:54 AM Sergio Belkin <seb...@gmail.com> wrote:
>> >
>> > Hi,
>> > I was reading the systemd-oomd documentation and it says:
>> > «More precisely, only cgroups with memory.oom.group set to 1 and leaf
>> cgroup nodes are eligible candidates.»
>> > (https://www.freedesktop.org/software/systemd/man/systemd-oomd.html)
>> >
>> > However I haven't found any "memory.oom.group" file set to 1:
>> >
>> > sudo find /sys -name "memory.oom.group" -exec grep -v '^0$'  '{}' \; |
>> wc -l
>> > 0
>> >
>> > So, should I set memory.oom.group to 1?
>> >
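>> > (For reference, memory.oom.group is a plain cgroup v2 file, so it can be
>> > read, or set by hand for a quick experiment, on a single leaf cgroup; the
>> > user@1000.service path below is only an example, not taken from this box:
>> >
>> > cat /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/memory.oom.group
>> > echo 1 | sudo tee /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/memory.oom.group
>> >
>> > though anything written there by hand does not persist across reboots.)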
>>
>> Do you have systemd-oomd-defaults installed? That's where the oomd
>> configuration is stored.
>>
>>
>>
>> --
>>
> Hi Neal
>
> rpm -qil systemd-oomd-defaults
> Name        : systemd-oomd-defaults
> Version     : 248.3
> Release     : 1.fc34
> Architecture: x86_64
> Install Date: Thu 20 May 2021 06:06:11
> Group       : Unspecified
> Size        : 145
> License     : LGPLv2+
> Signature   : RSA/SHA256, Sat 15 May 2021 17:50:23, Key ID 1161ae6945719a39
> Source RPM  : systemd-248.3-1.fc34.src.rpm
> Build Date  : Sat 15 May 2021 14:10:24
> Build Host  : buildvm-x86-09.iad2.fedoraproject.org
> Packager    : Fedora Project
> Vendor      : Fedora Project
> URL         : https://www.freedesktop.org/wiki/Software/systemd
> Bug URL     : https://bugz.fedoraproject.org/systemd
> Summary     : Configuration files for systemd-oomd
> Description :
> A set of drop-in files for systemd units to enable action from
> systemd-oomd,
> a userspace out-of-memory (OOM) killer.
> /usr/lib/systemd/oomd.conf.d
> /usr/lib/systemd/oomd.conf.d/10-oomd-defaults.conf
> /usr/lib/systemd/system/-.slice.d/10-oomd-root-slice-defaults.conf
> /usr/lib/systemd/system/user@.service.d/10-oomd-user-service-defaults.conf
>
> And:
>
> cat /usr/lib/systemd/oomd.conf.d/10-oomd-defaults.conf
> /usr/lib/systemd/system/-.slice.d/10-oomd-root-slice-defaults.conf
> /usr/lib/systemd/system/user@.service.d/10-oomd-user-service-defaults.conf
> [OOM]
> DefaultMemoryPressureDurationSec=20s
> [Slice]
> ManagedOOMSwap=kill
> [Service]
> ManagedOOMMemoryPressure=kill
> ManagedOOMMemoryPressureLimit=50%
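>
> (If one of these defaults ever needed changing, my understanding is that a
> local drop-in under /etc would take precedence over the packaged one; the
> file name and the 40% value below are purely illustrative:
>
> sudo mkdir -p /etc/systemd/system/user@.service.d
> sudo tee /etc/systemd/system/user@.service.d/99-oomd-local.conf <<'EOF'
> [Service]
> ManagedOOMMemoryPressureLimit=40%
> EOF
> sudo systemctl daemon-reload
>
> I haven't changed anything here, though.)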
>
> Just in case:
> systemd-analyze cat-config /etc/systemd/oomd.conf  | egrep -v '^$|#'
> [OOM]
> [OOM]
> DefaultMemoryPressureDurationSec=20s
>
> It's still not clear to me whether systemd-oomd is really being enforced :)
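>
> (The only quick sanity check I know of is that the daemon is active and
> monitoring something, e.g.:
>
> systemctl is-active systemd-oomd
> oomctl
>
> but that alone doesn't show whether it would ever act.)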
>
> --
> --
> Sergio Belkin
> LPIC-2 Certified - http://www.lpi.org
>

I was looking at https://testdays.fedoraproject.org/events/105

Swap Based Killing worked for me, but "Memory Pressure Based Killing" didn't
(stress-ng is not killed by systemd-oomd when following
https://fedoraproject.org/wiki/QA:Testcase_Memory_Pressure_Based_Killing#How_to_test
):

May 23 13:46:34 munster.belkin.home kernel: Timer invoked oom-killer:
gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
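
Note that "Timer invoked oom-killer" above is the kernel's own OOM killer
stepping in, not systemd-oomd. Any kill performed by systemd-oomd itself
should show up in its own journal:

journalctl -b -u systemd-oomd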

This is the oomctl output:
Dry Run: no
Swap Used Limit: 90.00%
Default Memory Pressure Limit: 60.00%
Default Memory Pressure Duration: 20s
System Context:
        Swap: Used: 6.0G Total: 7.9G
Swap Monitored CGroups:
Memory Pressure Monitored CGroups:
        Path: /user.slice/user-1000.slice/user@1000.service
                Memory Pressure Limit: 50.00%
                Pressure: Avg10: 0.00 Avg60: 0.00 Avg300: 1.83 Total: 51s
                Current Memory Usage: 5.9G
                Memory Min: 0B
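
Since the path above is the cgroup systemd-oomd is watching, its raw PSI
numbers can also be followed directly while the stress test runs, e.g.:

watch -n1 cat /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/memory.pressure

In the snapshot above the Avg10/Avg60 figures are still at 0.00, which
suggests the pressure never stayed above that cgroup's 50% limit for the 20s
window while I was looking.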

Any ideas?

-- 
--
Sergio Belkin
LPIC-2 Certified - http://www.lpi.org