We know very little about the cluster; can you add the usual information, like the output of 'ceph -s' and 'ceph osd df tree'? Scrubbing has nothing to do with undersized PGs. Is the balancer and/or autoscaler on? Please also add 'ceph balancer status' and 'ceph osd pool autoscale-status'.
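
For convenience, these are all standard read-only commands that should be safe to run on any admin node (nothing here is specific to your cluster except the PG id, taken from your warning below):

  # Overall cluster state, health warnings and recovery progress
  ceph -s
  # Per-OSD utilization laid out along the CRUSH tree
  ceph osd df tree
  # Whether the balancer is enabled and in which mode
  ceph balancer status
  # Autoscaler recommendations per pool
  ceph osd pool autoscale-status
  # The affected PG's own view (acting set, recovery/backfill state)
  ceph pg 8.283 query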
Thanks,
Eugen

Quoting xadhoo...@gmail.com:
Hi, the system is still backfilling and still has the same PG degraded. The percentage of degraded objects appears stuck: it has not dropped below 0.010% for days.
Is the backfilling connected to the degraded objects?
Does the system have to finish backfilling before the degraded PG can recover?

[WRN] PG_DEGRADED: Degraded data redundancy: 84469/826401567 objects degraded (0.010%), 1 pg degraded, 1 pg undersized
    pg 8.283 is stuck undersized for 92m, current state active+undersized+degraded+remapped+backfilling, last acting [17,59]
And stopping the scrub led to inconsistent PGs.

Thanks for any help.
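
A note for readers of the archive: how the scrub was stopped isn't stated above. The standard way to pause scrubbing cluster-wide is via the OSD flags, sketched here on that assumption:

  # Pause new (deep-)scrubs cluster-wide
  ceph osd set noscrub
  ceph osd set nodeep-scrub
  # Re-enable scrubbing afterwards
  ceph osd unset noscrub
  ceph osd unset nodeep-scrub

Inconsistent PGs are reported when a scrub finds mismatching replicas; the flags above only pause future scrubs, they don't create inconsistencies.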
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
