We know very little about the whole cluster, can you add the usual
information like 'ceph -s' and 'ceph osd df tree'? Scrubbing has
nothing to do with the undersized PGs. Is the balancer and/or
autoscaler on? Please also add 'ceph balancer status' and 'ceph osd
pool autoscale-status'.
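For example, the output of these commands should cover it:

  ceph -s
  ceph osd df tree
  ceph balancer status
  ceph osd pool autoscale-status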
Thanks
Hi, the system is still backfilling and still has the same PG in a degraded state.
I can see that the percentage of degraded objects is stuck;
it has not dropped below 0.010% for days.
Is the backfilling connected to the degraded objects?
Does the system have to finish backfilling before the degraded objects can recover?
[WRN] PG_
You may want to consider disabling deep scrubs and scrubs while attempting to
complete a backfill operation.
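If you decide to do that, the cluster-wide flags would be set and cleared roughly like this:

  ceph osd set noscrub
  ceph osd set nodeep-scrub
  # and once the backfill has finished:
  ceph osd unset noscrub
  ceph osd unset nodeep-scrub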
On Tue, Apr 18, 2023, at 01:46, Eugen Block wrote:
I didn't mean you should split your PGs now, that won't help because
there is already backfilling going on. I would revert the pg_num
changes (since nothing actually happened yet there's no big risk) and
wait for the backfill to finish. You don't seem to have inactive PGs
so it shouldn't be
Thanks, I tried to change pg_num and pgp_num to a higher value, but the PG count
does not increase.
  data:
    pools:   8 pools, 1085 pgs
    objects: 242.28M objects, 177 TiB
    usage:   553 TiB used, 521 TiB / 1.0 PiB avail
    pgs:     635281/726849381 objects degraded (0.087%)
             91498351/7268
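For reference, reverting the pg_num/pgp_num change as Eugen suggested would look
something like the following; 'mypool' and the value 512 are only placeholders for
your actual pool name and previous pg_num:

  ceph osd pool set mypool pg_num 512
  ceph osd pool set mypool pgp_num 512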
Hi
You can use the script available at
https://github.com/TheJJ/ceph-balancer/blob/master/placementoptimizer.py to
check the backfill status and PG states, and also to cancel
backfills via upmap. To view the movement status of all PGs in the
backfilling state, you can execute the command
Hi,
your cluster is in backfilling state, maybe just wait for the backfill
to finish? What is 'ceph -s' reporting? The PG could be backfilling to
a different OSD as well. You could query the PG to see more details
('ceph pg 8.2a6 query').
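If the full query output is too verbose, something like the following narrows it
down; the JSON key names are assumptions based on recent Ceph releases and may
differ slightly on yours:

  ceph pg 8.2a6 query | jq '{state: .state, up: .up, acting: .acting, recovery_state: .recovery_state}'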
By the way, the PGs you show are huge (around 174 GB