[ceph-users] Re: quincy v17.2.8 QE Validation status

2024-11-04 Thread Ilya Dryomov
On Sat, Nov 2, 2024 at 4:21 PM Yuri Weinstein wrote:
>
> Ilya,
>
> rbd rerunning
>
> https://github.com/ceph/ceph/pull/60586/ merged and cherry-picked into
> quincy-release

rbd and krbd approved based on additional reruns:
https://pulpito.ceph.com/dis-2024-11-04_17:34:41-rbd-quincy-release-distr

[ceph-users] Re: Setting temporary CRUSH "constraint" for planned cross-datacenter downtime

2024-11-04 Thread Niklas Hambüchen
Hi Joachim,

I'm currently looking for the general methodology and whether it's possible
without rebalancing everything. But of course I'd also appreciate tips
directly for my deployment; here is the info:

Ceph 18, simple 3-replication (osd_pool_default_size = 3, default CRUSH rules Ceph creates fo
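For concreteness, a minimal sketch of the rule-swap approach this thread is about, assuming two datacenter buckets named dc1 and dc2 in the CRUSH map, a pool named mypool, and that dc2 is the site going down (all names illustrative). Note that switching rules does trigger data movement, which is exactly the rebalancing the question hopes to avoid:

    # Temporary replicated rule rooted in the surviving datacenter,
    # spreading replicas across hosts within it
    ceph osd crush rule create-replicated tmp-dc1-only dc1 host

    # Point the pool at the temporary rule for the duration of the downtime
    ceph osd pool set mypool crush_rule tmp-dc1-only

    # After the downtime, revert to the original rule, then drop the temporary one
    ceph osd pool set mypool crush_rule replicated_rule
    ceph osd crush rule rm tmp-dc1-only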

[ceph-users] Re: Setting temporary CRUSH "constraint" for planned cross-datacenter downtime

2024-11-04 Thread Joachim Kraftmayer
Hi Niklas,

For a correct answer you need to provide more details about your failure
domains, the available DCs, the replication size, the crushmap and the
CRUSH rules.

Joachim

joachim.kraftma...@clyso.com
www.clyso.com
Hohenzollernstr. 27, 80801 Munich
Utting | HR: Augsburg | HRB: 25866
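A minimal sketch of how those details can be gathered with the standard Ceph CLI (output file names are placeholders):

    ceph osd tree              # bucket hierarchy / failure domains, incl. DC buckets
    ceph osd pool ls detail    # per-pool replication size and crush_rule
    ceph osd crush rule dump   # the CRUSH rules in JSON form
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt   # decompiled, human-readable crushmap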

[ceph-users] Re: Slow ops during index pool recovery causes cluster performance drop to 1%

2024-11-04 Thread Frédéric Nass
Hi Istvan,

Is your upgraded cluster using the wpq or mclock scheduler? (ceph tell osd.X
config show | grep osd_op_queue)

Maybe your OSDs set their osd_mclock_max_capacity_iops_* capacity too low on
start (ceph config dump | grep osd_mclock_max_capacity_iops), limiting their
performance. You might w
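A minimal sketch of the checks mentioned above plus one possible remediation, assuming osd.0 and an SSD-backed OSD (the OSD id and the 5000 IOPS figure are placeholders, not recommendations):

    # Which scheduler is active, and what capacity mclock recorded
    ceph tell osd.0 config show | grep osd_op_queue
    ceph config dump | grep osd_mclock_max_capacity_iops

    # If a recorded value is clearly too low for the drive, either remove it
    # so it is re-measured on the next OSD restart, or set it explicitly
    ceph config rm osd.0 osd_mclock_max_capacity_iops_ssd
    ceph config set osd.0 osd_mclock_max_capacity_iops_ssd 5000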