The OOM-killer is on the rampage and striking down hapless OSDs when
the cluster is under heavy client IO.

The osd_memory_target does not seem to be enforced as a hard limit; is this intentional?

root@cnx-11:~# ceph-conf --show-config|fgrep osd_memory_target
osd_memory_target = 4294967296
osd_memory_target_cgroup_limit_ratio = 0.800000
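
For what it is worth, my understanding (happy to be corrected) is that
osd_memory_target is only a soft target that the BlueStore cache autotuner
shrinks towards, not a hard cap, and that osd_memory_target_cgroup_limit_ratio
only comes into play when the OSD runs under a cgroup memory limit. To
double-check what a running daemon has actually picked up (osd.0 here is
just an example id):

root@cnx-11:~# ceph config get osd.0 osd_memory_target
root@cnx-11:~# ceph daemon osd.0 config get osd_memory_target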

root@cnx-31:~# pmap 4327|fgrep total
 total          6794892K
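
One caveat I am not sure about: plain pmap sums the mapped address space
(virtual size), so the total above may overstate what is actually resident.
To compare like for like against the 4G target I have also been looking at
RSS and the OSD's own mempool accounting (the PID is the one above,
osd.<id> is a placeholder):

root@cnx-31:~# pmap -x 4327 | tail -1        # total line includes an RSS column
root@cnx-31:~# grep VmRSS /proc/4327/status
root@cnx-31:~# ceph daemon osd.<id> dump_mempools   # bluestore cache / pglog breakdown
root@cnx-31:~# ceph tell osd.<id> heap stats        # tcmalloc heap usage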

Are there any tips for keeping OSD memory consumption under control?

The hosts involved have 128 GB or 192 GB of memory and 12 SATA OSDs each, so
even at 4 GB per OSD there should be a large amount of free memory.
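
Rough numbers for a 128 GB host, assuming all 12 OSDs behave like the one
above:

  12 x 4 GiB (target)            = 48 GiB
  12 x ~6.5 GiB (observed above) = ~78 GiB

so even at the observed size there should still be roughly 50 GB left over
for page cache and everything else, which is what makes the OOM kills so
surprising.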