Dear Eugen, dear Joachim,

Thanks for your feedback and input. The number of stuck PGs stays about the 
same, around 30 in total, give or take one. From what I see in the health 
detail, most of the PGs sit on the same OSD, and most of them are listed under 
both scrub and deep-scrub.
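
For reference, this is roughly how I look at which PGs are affected and which 
OSDs they sit on (a minimal sketch; the dump column layout may differ between 
releases):

    # which PGs are behind the "not (deep-)scrubbed in time" warning
    ceph health detail

    # per-PG table incl. the ACTING set and the last (deep-)scrub stamps
    ceph pg dump pgs | less -S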

Most of them have not been scrubbed since the end of August …
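
To see the stamps, I sort the PGs by their last deep-scrub time, along these 
lines (assumes jq is installed; field names as in the Reef "pg dump" JSON):

    ceph pg dump pgs -f json 2>/dev/null \
        | jq -r '.pg_stats[] | [.pgid, .last_deep_scrub_stamp, (.acting | map(tostring) | join(","))] | @tsv' \
        | sort -k2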

I’ll look into your links and see what might help.
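
In the meantime I'll kick the scrubs by hand once more and double-check that 
nothing is throttling them, roughly like this (the PG id 2.1f is just a 
made-up example):

    # make sure scrubbing isn't disabled cluster-wide
    ceph osd dump | grep flags          # watch for noscrub / nodeep-scrub

    # how many concurrent scrubs each OSD may run
    ceph config get osd osd_max_scrubs

    # queue a deep scrub for one of the stuck PGs
    ceph pg deep-scrub 2.1f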

        Thanks once more and regards, Götz


> On 23.10.2024 at 08:35, Götz Reinicke <goetz.reini...@filmakademie.de> wrote:
> 
> Hello Ceph Community,
> 
> My cluster was hit by a power outage some months ago. Luckily no data was 
> destroyed, and powering the nodes and services back up went well. 
> 
> But since then, some PGs are still shown as not scrubbed in time. Googling and 
> searching the list turned up some debugging hints, like running "ceph pg 
> deep-scrub" on the PGs or restarting the OSD daemons.
> 
> Nothing "solved" that issue here. I’m on Ceph version 18.2.4 now.
> 
> Is there anything special I can do to get those PGs scrubbed? I like 
> having the cluster health state OK rather than WARN :) Or will time solve the 
> problem once the PGs come around in their regular scrub cycle again?
> 
> 
>       Thanks for hints and suggestions. Best regards, Götz
> 

