My server provider usually does infrastructure maintenance and planned
downtimes on a per-datacenter-building granularity, and thus I have a Ceph
cluster with that set as the "datacenter" failure domain in CRUSH.
However, it now has a planned maintenance that affects two buildings
simultaneously.
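For context, a CRUSH rule using the datacenter failure domain described above typically contains a chooseleaf step over type datacenter, along these lines (a sketch only; the rule name and id are assumptions, not taken from the poster's cluster):

```
rule replicated_dc {
    id 1
    type replicated
    step take default
    step chooseleaf firstn 0 type datacenter
    step emit
}
```

With such a rule, each replica lands in a distinct datacenter bucket, which is exactly why two buildings going down at once is a problem.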
Hi Tyler,
To be honest we don't have anything set by ourselves regarding compaction and
rocksdb:
When I check the admin socket with ceph daemon on both the NVMe and the SSD
OSDs, compaction on start is at its default of false:
"mon_compact_on_start": "false",
"osd_compact_on_start": "false",
RocksDB is also at its default:
bluestore_rocks
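For anyone who wants to script this check across many OSDs, the admin-socket output is JSON, so the values are easy to pull out programmatically. A minimal sketch (the sample string mirrors the quoted output rather than a live cluster; in practice you would feed it the output of `ceph daemon osd.<id> config get osd_compact_on_start`):

```python
import json

# Sample of what the admin socket returns for this option.
# Note the value comes back as the string "false", not a JSON boolean.
sample = '{"osd_compact_on_start": "false"}'

settings = json.loads(sample)
compact_on_start = settings["osd_compact_on_start"] == "true"
print(compact_on_start)  # False here, matching the defaults quoted above
```

The string-vs-boolean distinction matters: a naive truthiness check on the string "false" would evaluate as true, so compare against "true" explicitly.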
On Sun, Nov 3, 2024 at 1:28 AM Szabo, Istvan (Agoda)
wrote:
> Hi,
>
> I'm updating from octopus to quincy and all in our cluster when index pool
> recovery kicks off, cluster operation drops to 1%, slow ops comes non-stop.
> The recovery takes 1-2 hours/nodes.
>
> What I can see is that the iowait on the
Hi Joachim,
Thank you for sharing the information; I appreciate it. I have successfully
activated the OSD after trimming, and it's a really useful tool.
Regards
Mosharaf Hossain
Manager, Product Development
Bangladesh Online (BOL)
Level 8, SAM Tower, Plot 4, Road 22, Gulshan 1, Dhaka 1212, Bangladesh