I manage an older cluster of several Ceph nodes, each with 128 GB of RAM and 36
OSDs of 8 TB each.

The cluster is used only for archival purposes, so performance is not very important.

The cluster had been running fine for a long time on Ceph Luminous.

Last week I updated it to Debian 10 and Ceph Nautilus.

Now I can see that the memory usage of each OSD slowly grows to about 4 GB, and
once the system has no memory left the kernel OOM-kills processes.

I have already configured osd_memory_target = 1073741824.
This helps for a few hours, but then memory usage grows from 1 GB back to 4 GB
per OSD.
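
For reference, this is roughly how I set it (a minimal sketch; I used the
ceph.conf route, but the runtime "ceph config set" command should be
equivalent on Nautilus):

  # /etc/ceph/ceph.conf on each OSD host
  [osd]
      # ask each OSD to aim for ~1 GiB of RAM; this is a best-effort target
      # that mainly shrinks the BlueStore caches, not a hard limit
      osd_memory_target = 1073741824

  # or cluster-wide at runtime via the config database:
  ceph config set osd osd_memory_target 1073741824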

Any ideas what I can do to further limit OSD memory usage?

It would be good to keep this hardware running a while longer without upgrading
the RAM in all the OSD machines.

Any ideas?

Thanks
  Christoph