[ceph-users] Re: OSD swapping on Pacific

2021-08-16 Thread Alexander Sporleder
Thanks. I found that in the release notes of 14.2.22: "This release sets bluefs_buffered_io to true by default to improve performance for metadata heavy workloads. Enabling this option has been reported to occasionally cause excessive kernel swapping under certain workloads. Currently, the most
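The default mentioned in those release notes can be inspected and overridden at runtime with the `ceph config` interface. A minimal sketch (requires a running cluster; `osd.0` is just an example daemon id):

```shell
# Show the configured default for all OSDs (true since 14.2.22 / 15.2.13)
ceph config get osd bluefs_buffered_io

# Show what one specific running daemon is actually using
ceph config show osd.0 bluefs_buffered_io

# Disable it cluster-wide if buffered BlueFS I/O turns out to drive the
# swapping; note the release notes warn this may slow metadata-heavy workloads
ceph config set osd bluefs_buffered_io false
```

OSDs pick up the change without a restart for this option in recent releases, but verifying with `ceph config show` afterwards is a good habit.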

[ceph-users] Re: OSD swapping on Pacific

2021-08-16 Thread i...@z1storage.com
Hi, Global swappiness and per-cgroup swappiness are managed separately. When you change vm.swappiness sysctl, only /sys/fs/cgroup/memory/memory.swappiness changes, but not memory.swappiness of the services under separate slices (like system.slice where ceph services are running). Check https:
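On a cgroup-v1 host the mismatch described above can be seen directly. A sketch assuming systemd's default hierarchy, where ceph services live under `system.slice` (exact paths vary by distro and unit layout):

```shell
# Global value, i.e. what sysctl vm.swappiness reports
cat /proc/sys/vm/swappiness

# Per-cgroup value for system.slice; this does NOT automatically
# follow changes made via the vm.swappiness sysctl
cat /sys/fs/cgroup/memory/system.slice/memory.swappiness

# Lower it for that slice explicitly (as root)
echo 10 > /sys/fs/cgroup/memory/system.slice/memory.swappiness
```

Note that cgroup v2 dropped `memory.swappiness` entirely, so this only applies to hosts still on the v1 memory controller.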

[ceph-users] Re: OSD swapping on Pacific

2021-08-16 Thread David Caro
Found an option that seems to have caused trouble in the past: `bluefs_buffered_io`. It has been disabled/enabled by default a couple of times (disabled in v15.2.2, enabled in v15.2.13) and appears to have a big effect on performance and swapping behavior, so it might be a lead. On 08/16 14:10


[ceph-users] Re: OSD swapping on Pacific

2021-08-16 Thread Alexander Sporleder
Hello David, Unfortunately "vm.swappiness" does not change the behavior. Tweaks on the container side (--memory-swappiness and --memory-swap) might make sense, but I did not find any Ceph-related suggestion. On Monday, 2021-08-16 at 13:52 +0200, David Caro wrote: > Afaik the swapping beha
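For containerized OSDs those engine flags would look roughly as below. A sketch only: the image name is a placeholder, and the memory values must match the OSD's actual footprint (e.g. `osd_memory_target` plus headroom):

```shell
# --memory-swappiness=0 tells the kernel to avoid swapping this container's
# anonymous pages; setting --memory-swap equal to --memory disables swap
# for the container entirely
docker run -d \
  --memory=4g \
  --memory-swap=4g \
  --memory-swappiness=0 \
  my-ceph-osd-image   # placeholder image name
```

Podman accepts the same flags; for cephadm-deployed clusters these would have to be injected into the generated unit/container options rather than passed by hand.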

[ceph-users] Re: OSD swapping on Pacific

2021-08-16 Thread David Caro
Afaik the swapping behavior is controlled by the kernel; there might be some tweaks on the container engine side, but you might want to start by lowering the kernel's 'vm.swappiness': https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/pe
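The sysctl knob mentioned above can be checked and lowered like this (run as root; 10 is an illustrative value, not a Ceph recommendation):

```shell
# Inspect the current value (the kernel default is usually 60)
sysctl vm.swappiness

# Lower it at runtime; lower values make the kernel less eager to
# swap anonymous pages out in favor of page cache
sysctl -w vm.swappiness=10

# Persist across reboots
echo 'vm.swappiness = 10' > /etc/sysctl.d/90-swappiness.conf
sysctl --system
```

As noted later in this thread, on cgroup-v1 systems this sysctl does not propagate to per-slice `memory.swappiness` values, so it may have no visible effect on services running under their own slices.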