[ceph-users] Re: Memory usage of OSD

2020-05-14 Thread Rafał Wądołowski
Mark, good news! Adam, if you need more information or debugging help, feel free to contact me on IRC: xelexin. I can confirm that this issue exists in Luminous (12.2.12). Regards, Rafał Wądołowski CloudFerro sp. z o.o. ul. Fabryczna 5A 00-446 Warszawa www.cloudferro.com
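
For reference, a quick way to check which BlueStore allocator a live OSD is using is its admin socket (a sketch; `osd.0` is a placeholder OSD id):

    # Query the running allocator for one OSD via the admin socket
    ceph daemon osd.0 config get bluestore_allocator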

[ceph-users] Re: Memory usage of OSD

2020-05-13 Thread Rafał Wądołowski
mess. One more thing: we are running the stupid allocator. Right now I am decreasing osd_memory_target to 3 GiB and will wait to see if the RAM problem occurs again. Regards, Rafał Wądołowski From: Mark Nelson Sent: Wednesday, May 13, 2020 3:30 PM To: ceph-users@ceph.io Subject
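
For anyone following along, a sketch of how the memory target can be lowered at runtime (the value is in bytes, so 3 GiB = 3221225472; `osd.*` targets all OSDs on the cluster):

    # Lower the per-OSD memory target to 3 GiB at runtime (value in bytes)
    ceph tell osd.* injectargs '--osd_memory_target 3221225472'

    # To persist it across restarts, also set it in ceph.conf under [osd]:
    # osd_memory_target = 3221225472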

[ceph-users] Memory usage of OSD

2020-05-12 Thread Rafał Wądołowski
ckContents Does it mean that most of the RAM is used by RocksDB? How can I take a deeper look into memory usage? Regards, Rafał Wądołowski
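
Two admin-socket commands can help answer this kind of question (a sketch; `osd.0` is a placeholder, and heap stats require an OSD built against tcmalloc):

    # Break down OSD memory by mempool, including bluestore cache and rocksdb
    ceph daemon osd.0 dump_mempools

    # If the OSD uses tcmalloc, heap statistics are also available
    ceph tell osd.0 heap stats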

[ceph-users] Re: leftover: spilled over 128 KiB metadata after adding db device

2020-03-03 Thread Rafał Wądołowski
Stefan, What version are you running? You wrote "Ceph automatically started to migrate all data from the hdd to the ssd db device"; is that normal auto-compaction, or has Ceph developed a trigger to do it? Best Regards, Rafał Wądołowski
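
For context, a sketch of how spillover can be inspected and a compaction triggered by hand (`osd.0` is a placeholder; availability of the bluefs stats command varies by release):

    # Spilled-over metadata shows up as a health warning
    ceph health detail

    # Show BlueFS usage per device for one OSD
    ceph daemon osd.0 bluefs stats

    # Trigger a manual RocksDB compaction, which can move metadata back to the DB device
    ceph daemon osd.0 compact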

[ceph-users] Re: Migrating/Realocating ceph cluster

2020-02-19 Thread Rafał Wądołowski
Yeah, I saw your thread; the problem is more complicated due to the size of the cluster... I'm trying to figure out the best solution, one that will minimize both downtime and migration time. Best Regards, Rafał Wądołowski On 19.02.2020 14:23, Marc Roos wrote: > I asked the same not so long ag
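
If the hardware is moved physically, one commonly used safeguard is to quiesce recovery around the move (a sketch, not a full runbook):

    # Keep the cluster from marking OSDs out or rebalancing while hosts are offline
    ceph osd set noout
    ceph osd set norebalance

    # ... power down, relocate, power up ...

    ceph osd unset norebalance
    ceph osd unset noout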

[ceph-users] Migrating/Realocating ceph cluster

2020-02-19 Thread Rafał Wądołowski
Maybe somebody has experience with migrating between data centers? Any new ideas? Thoughts? Every comment will be helpful -- Regards, Rafał Wądołowski