Re: [ceph-users] out of memory bluestore osds

2019-08-08 Thread Jaime Ibar
Hi Mark,

thanks a lot for your explanation and clarification. Adjusting osd_memory_target to fit in our systems did the trick.

Jaime

On 07/08/2019 14:09, Mark Nelson wrote:
> Hi Jaime, we only use the cache size parameters now if you've disabled autotuning. With autotuning we adjust the cache size on the fly to try and keep the mapped process memory under the osd_memory_target. [...]
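A minimal sketch of that kind of adjustment on Luminous, assuming a 2 GiB target; the value Jaime actually chose isn't given in the thread, and depending on the release the injection may only take effect after an OSD restart:

    # ceph.conf on each OSD host (persists across restarts)
    [osd]
    # 2 GiB per OSD; size this to fit your RAM
    osd_memory_target = 2147483648

    # try to apply to the running OSDs without a restart
    ceph tell osd.* injectargs '--osd_memory_target=2147483648'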

Re: [ceph-users] out of memory bluestore osds

2019-08-07 Thread Mark Nelson
Hi Jaime,

we only use the cache size parameters now if you've disabled autotuning. With autotuning we adjust the cache size on the fly to try and keep the mapped process memory under the osd_memory_target. You can set a lower memory target than default, though you will have far less cache [...]
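A quick way to check what the autotuner is working with on a live OSD is the admin socket; osd.0 below is just an example id:

    # current target and autotune flag for one OSD
    ceph daemon osd.0 config get osd_memory_target
    ceph daemon osd.0 config get bluestore_cache_autotune

    # per-pool memory accounting, to see where the mapped memory goes
    ceph daemon osd.0 dump_mempools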

[ceph-users] out of memory bluestore osds

2019-08-07 Thread Jaime Ibar
Hi all,

we run a Ceph Luminous 12.2.12 cluster, 7 OSD servers with 12x4TB disks each. Recently we redeployed the OSDs of one of them using the bluestore backend; however, after this, we're facing Out of memory errors (invoked oom-killer) and the OS kills one of the ceph-osd processes. The OSD is restarted [...]
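For scale: the default osd_memory_target for BlueStore is 4 GiB, so 12 OSDs can map roughly 48 GiB for the ceph-osd daemons alone, before the OS and anything else is counted. A rough per-OSD budget, using a hypothetical 32 GiB host (the actual RAM of these servers isn't stated in the thread):

    osd_memory_target ~= (total RAM - OS/other headroom) / number of OSDs
                      ~= (32 GiB - 8 GiB) / 12
                      ~= 2 GiB per OSD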