Mark, good news!
Adam, if you need any more information or debugging output, feel free to contact me on
IRC: xelexin
I can confirm that this issue exists in Luminous (12.2.12).
Regards,
Rafał Wądołowski
CloudFerro sp. z o.o.
ul. Fabryczna 5A
00-446 Warszawa
www.cloudferro.com
One more thing: we are running the stupid allocator. Right now I am decreasing
osd_memory_target to 3 GiB and will wait to see whether the RAM problem occurs again.
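For reference, this is roughly how I am applying the change (just a sketch, assuming the
usual ceph.conf / injectargs mechanisms; the value is 3 GiB expressed in bytes):

  # ceph.conf, [osd] section
  osd_memory_target = 3221225472

  # or at runtime, without restarting the OSDs
  ceph tell osd.* injectargs '--osd_memory_target=3221225472'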
Regards,
Rafał Wądołowski
From: Mark Nelson
Sent: Wednesday, May 13, 2020 3:30 PM
To: ceph-users@ceph.io
Subject
Does it mean that most of the RAM is used by RocksDB?
How can I take a deeper look into the memory usage?
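I guess the admin-socket mempool dump is the place to start (assuming the OSD admin
sockets are reachable; osd.<id> below is just a placeholder), e.g.:

  # per-OSD breakdown of the memory pools (bluestore cache, pglog, buffers, ...)
  ceph daemon osd.<id> dump_mempools

  # tcmalloc heap statistics, if the OSDs are built with tcmalloc
  ceph tell osd.<id> heap stats

but I am not sure how to map that onto the RSS we are seeing.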
Regards,
Rafał Wądołowski
___
Stefan,
What version are you running? You wrote "Ceph automatically started to
migrate all data from the hdd to the ssd db device"; is that normal auto
compaction, or has Ceph developed a trigger to do it?
Best Regards,
Rafał Wądołowski
___
Yeah,
I saw your thread; the problem is more complicated due to the size of the
cluster... I'm trying to figure out the best solution, one which will
minimize the downtime and the migration time.
Best Regards,
Rafał Wądołowski
On 19.02.2020 14:23, Marc Roos wrote:
> I asked the same not so long ago ...
Maybe somebody has experience with migration between DCs?
Any new ideas? Thoughts?
Every comment will be helpful.
--
Regards,
Rafał Wądołowski