[ceph-users] pgs not deep-scrubbed in time

2024-12-18 Thread Jan Kasprzak
Hello, Ceph users, a question/problem related to deep scrubbing: I have an HDD-based Ceph 18 cluster, currently with 34 osds and 600-ish pgs. In order to avoid latency peaks, which apparently correlate with an HDD being 100% busy for several hours during a deep scrub, I wanted to relax the scrub…
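The preview ends mid-sentence; the settings usually relaxed in this situation are the scrub scheduling knobs. A minimal sketch of that kind of tuning on a Reef (Ceph 18) cluster, with illustrative values rather than anything from the original mail:

    # Spread deep scrubs over two weeks instead of the default one week (604800 s)
    ceph config set osd osd_deep_scrub_interval 1209600
    # Allow at most one concurrent scrub per OSD
    ceph config set osd osd_max_scrubs 1
    # Confine scrubbing to off-peak hours, here 22:00-06:00
    ceph config set osd osd_scrub_begin_hour 22
    ceph config set osd osd_scrub_end_hour 6

Note that raising osd_deep_scrub_interval in a way the health check does not see can itself produce the "pgs not deep-scrubbed in time" warning, which is what the other threads below deal with.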

[ceph-users] pgs not deep-scrubbed in time and pgs not scrubbed in time

2024-10-22 Thread Götz Reinicke
Hello Ceph Community, My cluster was hit by a power outage some months ago. Luckily no data was destroyed, and powering up the nodes and services went well. But since then some pgs are still shown as not scrubbed in time. Googling and searching the list showed some debugging hints like "ceph pg …
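The quoted hint is cut off by the archive; the usual debugging steps along those lines look roughly like this (the pg id 2.1f is a placeholder, not from the original mail):

    # Show exactly which pgs are behind on (deep) scrubbing
    ceph health detail
    # Inspect per-pg scrub timestamps
    ceph pg dump pgs | less
    # Manually trigger a scrub / deep scrub of an affected pg
    ceph pg scrub 2.1f
    ceph pg deep-scrub 2.1f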

[ceph-users] PGs not deep-scrubbed in time

2024-09-07 Thread Eugen Block
Hi, I finally managed to take the time to get to the bottom of this infamous health warning. I decided to write it up in a blog post [0] and also contacted Zac to improve the documentation. The short version is: if you want to change the config setting for osd_deep_scrub_interval in general…
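The cut-off sentence presumably leads into the post's conclusion: the warning is not evaluated by the OSDs themselves, so changing osd_deep_scrub_interval only in the osd section of the config does not make it go away. A sketch, assuming that reading:

    # Set the interval globally so the daemons that compute the health
    # warning see the new value as well, not just the OSDs:
    ceph config set global osd_deep_scrub_interval 1209600

The warning threshold also involves mon_warn_pg_not_deep_scrubbed_ratio (default 0.75), which adds slack on top of the configured interval before the warning fires.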

[ceph-users] pgs not deep scrubbed in time - false warning?

2020-08-11 Thread Dirk Sarpe
Hi, for some time now (I think since the upgrade to Nautilus) we have been getting "X pgs not deep scrubbed in time". I deep-scrubbed the pgs when the warning occurred and expected the cluster to recover over time, but no such luck. The warning comes up again and again. In our spinning rust cluster we allow deep scrubbing o…
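Not part of the original mail, but a common cause of a warning that keeps returning despite manual deep scrubs is that the allowed scrub window is too short to cycle through all pgs within the configured interval. The two usual ways out, sketched with illustrative values:

    # Either widen the window in which (deep) scrubs may run, e.g. 19:00-07:00...
    ceph config set osd osd_scrub_begin_hour 19
    ceph config set osd osd_scrub_end_hour 7
    # ...or raise the interval and the warning slack to match the achievable rate
    ceph config set global osd_deep_scrub_interval 2419200
    ceph config set global mon_warn_pg_not_deep_scrubbed_ratio 1.0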