[ceph-users] Setting temporary CRUSH "constraint" for planned cross-datacenter downtime

2024-11-03 Thread Niklas Hambüchen
My server provider usually performs infrastructure maintenance and planned downtime at per-datacenter-building granularity, so I run my Ceph cluster with "datacenter" as the failure domain in CRUSH. However, there is now a planned maintenance that affects two buildings simultaneously…
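A common way to ride out a planned, short maintenance window without triggering rebalancing is to flag the affected OSDs so they are not marked out. A minimal sketch, assuming the affected buildings are CRUSH buckets named datacenter-a and datacenter-b (placeholder names, not from the original post):

```shell
# Before the maintenance window: prevent OSDs under the affected CRUSH
# buckets from being marked "out" (bucket names are placeholders).
ceph osd set-group noout datacenter-a datacenter-b

# Coarser alternative: flag the whole cluster.
# ceph osd set noout

# After maintenance, once all OSDs are back up:
ceph osd unset-group noout datacenter-a datacenter-b
```

Note this prevents re-replication during the window, so data in the two affected buildings is unavailable (and at reduced redundancy) for its duration; it does not make CRUSH place extra copies elsewhere beforehand.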

[ceph-users] Re: Slow ops during index pool recovery causes cluster performance drop to 1%

2024-11-03 Thread Szabo, Istvan (Agoda)
Hi Tyler, To be honest we haven't set anything ourselves regarding compaction and RocksDB. When I check the admin socket with ceph daemon, both the NVMe and SSD OSDs have the defaults of false for compaction: "mon_compact_on_start": "false", "osd_compact_on_start": "false". RocksDB is also at the default: bluestore_rocks…
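The check described above can be reproduced over the daemons' admin sockets. A sketch, assuming osd.0 is a local OSD and the monitor is named after the short hostname (both placeholders):

```shell
# Query compaction-related settings from a running OSD's admin socket:
ceph daemon osd.0 config get osd_compact_on_start

# Same check on the local monitor:
ceph daemon mon.$(hostname -s) config get mon_compact_on_start

# Dump the RocksDB tuning string currently in effect for BlueStore:
ceph daemon osd.0 config get bluestore_rocksdb_options
```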

[ceph-users] Re: Slow ops during index pool recovery causes cluster performance drop to 1%

2024-11-03 Thread Tyler Stachecki
On Sun, Nov 3, 2024 at 1:28 AM Szabo, Istvan (Agoda) wrote: > Hi, > > I'm updating from Octopus to Quincy, and everywhere in our cluster, when index pool > recovery kicks off, cluster operation drops to 1% and slow ops come non-stop. > The recovery takes 1-2 hours per node. > > What I can see is the iowait on the…
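When recovery starves client I/O like this, the usual first step is to throttle recovery and backfill. A hedged sketch (typical option names; in Quincy the mClock scheduler may ignore the manual limits unless overridden):

```shell
# Classic throttles: limit concurrent backfills and recovery ops per OSD.
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1

# On Quincy's default mClock scheduler, prefer client I/O instead:
ceph config set osd osd_mclock_profile high_client_ops
```

These trade longer recovery time for better client latency; revert them once the index pool has finished recovering.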

[ceph-users] Re: Assistance Required: Ceph OSD Out of Memory (OOM) Issue

2024-11-03 Thread Md Mosharaf Hossain
Hi Joachim Thank you for sharing the information. I appreciate it. I have successfully activated the OSD after trimming, and it's a really useful tool." Regards Mosharaf Hossain Manager, Product Development Bangladesh Online (BOL) Level 8, SAM Tower, Plot 4, Road 22, Gulshan 1, Dhaka 1212, Bangla