[ceph-users] Re: Out of Memory after Upgrading to Nautilus

2021-05-06 Thread Didier GAZEN
Hi Christoph, I am currently using Nautilus on a ceph cluster with osd_memory_target defined in ceph.conf on each node. By running: ceph config get osd.40 osd_memory_target you get the default value for the osd_memory_target parameter (4294967296 for Nautilus). If you change the ceph.conf…
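A minimal sketch of the check Didier describes, assuming osd.40 (the example OSD id from this thread) and the standard Nautilus tooling:

```shell
# Query the centralized config database for one OSD.
# With nothing set there, Nautilus reports the built-in default:
ceph config get osd.40 osd_memory_target
# -> 4294967296  (4 GiB, the Nautilus default)

# Note: a value placed only in /etc/ceph/ceph.conf on the OSD host,
# e.g.
#   [osd]
#   osd_memory_target = 1073741824
# changes what the daemon reads at startup, but `ceph config get`
# keeps reporting the value known to the monitors' config store.
```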

[ceph-users] Re: Out of Memory after Upgrading to Nautilus

2021-05-06 Thread Christoph Adomeit
It looks like I have solved the issue. I tried: ceph.conf [osd] osd_memory_target = 1073741824, then systemctl restart ceph-osd.target. When I run ceph config get osd.40 osd_memory_target it returns: 4294967296, so this did not work. Next I tried: ceph tell osd.* injectargs '--osd_memory_targ…
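The preview is cut off, but the two standard ways to apply the value at runtime in Nautilus can be sketched as follows (the injectargs line matches what Christoph started to type; the config-set line is the usual persistent alternative, not shown in the truncated message):

```shell
# Inject the value into all running OSDs at once.
# Takes effect immediately, but is lost on daemon restart:
ceph tell osd.* injectargs '--osd_memory_target=1073741824'

# Persistent alternative: store the value in the monitors'
# centralized config database so restarts keep it:
ceph config set osd osd_memory_target 1073741824
```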

[ceph-users] Re: Out of Memory after Upgrading to Nautilus

2021-05-05 Thread Mark Nelson
=== Frank Schilder, AIT Risø Campus, Bygning 109, rum S14. From: Mark Nelson Sent: 05 May 2021 17:15:50 To: ceph-users@ceph.io Subject: [ceph-users] Re: Out of Memory after Upgrading to Nautilus. Hi Christoph, 1GB per OSD is tough! The osd memory ta…

[ceph-users] Re: Out of Memory after Upgrading to Nautilus

2021-05-05 Thread Joachim Kraftmayer
Hi Christoph, can you send me the ceph config set ... command you used and/or the ceph config dump output? Regards, Joachim. Clyso GmbH, Homepage: https://www.clyso.com. On 05.05.2021 at 16:30, Christoph Adomeit wrote: I manage a historical cluster of several ceph nodes, each with 128 GB R…
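The diagnostics Joachim asks for can be gathered with the standard commands below (osd.40 is reused from the rest of the thread as the example daemon; run the admin-socket command on the host where that OSD lives):

```shell
# Everything currently stored in the centralized config database:
ceph config dump

# What a specific running daemon actually uses, which may differ
# from the config database if ceph.conf or injectargs set it:
ceph daemon osd.40 config show | grep osd_memory_target
```

Comparing the two outputs shows whether a setting was applied centrally, locally via ceph.conf, or only injected at runtime.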

[ceph-users] Re: Out of Memory after Upgrading to Nautilus

2021-05-05 Thread Mark Nelson
Hi Christoph, 1GB per OSD is tough! The osd memory target only shrinks the size of the caches but can't control things like osd map size, pg log length, rocksdb wal buffers, etc. It's a "best effort" algorithm that tries to fit the OSD's mapped memory into that target, but on its own it doesn't re…
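To see the "best effort" behaviour Mark describes, the OSD's actual memory consumers can be inspected through the admin socket (a sketch; osd.40 is the example daemon from this thread, and the commands must run on its host):

```shell
# Break down memory by internal pool (bluestore caches, pg log,
# osdmaps, etc.) -- the parts the memory target can and cannot trim:
ceph daemon osd.40 dump_mempools

# tcmalloc's view of the heap, including memory freed by the OSD
# but not yet returned to the OS:
ceph tell osd.40 heap stats
```

If dump_mempools already exceeds the target without the caches, the target cannot be honored, which matches Mark's point about 1GB being too tight.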