Hello, Ceph users,
a question/problem related to deep scrubbing:
I have an HDD-based Ceph 18 cluster, currently with 34 OSDs and roughly 600 PGs.
In order to avoid latency peaks, which apparently correlate with the HDDs being
100 % busy for several hours during a deep scrub, I wanted to relax the
scrub …
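Concretely, a minimal sketch of the kind of relaxation I have in mind (the
values below are placeholders, not what is currently set):

  # Spread deep scrubs over a longer interval and confine scrubbing to
  # off-peak hours (placeholder values, adjust to your workload).
  ceph config set osd osd_deep_scrub_interval 1209600   # e.g. 14 days instead of 7
  ceph config set osd osd_scrub_begin_hour 22
  ceph config set osd osd_scrub_end_hour 6
  ceph config set osd osd_max_scrubs 1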
Hello Ceph Community,
My cluster was hit by a power outage some months ago. Luckily no data was
destroyed, and powering the nodes and services back up went well.
But since then, some PGs are still shown as not scrubbed in time. Googling and
searching the list turned up some debugging hints like "ceph pg …"
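Roughly, those hints boil down to listing the affected PGs and deep-scrubbing
them by hand (a sketch; the awk field assumes the usual "pg <id> not
deep-scrubbed since <date>" lines in ceph health detail):

  # List the PGs that are behind, then manually trigger a deep scrub for each.
  ceph health detail | grep 'not deep-scrubbed since'
  ceph health detail \
    | awk '/not deep-scrubbed since/ {print $2}' \
    | while read -r pg; do ceph pg deep-scrub "$pg"; done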
Hi,
I finally managed to take the time to get to the bottom of this
infamous health warning. I decided to write it up in a blog post [0]
and also contacted Zac to improve the documentation. The short version
is:
If you want to change the config setting for osd_deep_scrub_interval in
general …
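As a rough sketch (assuming the short version is the commonly reported pitfall
that the mons also need to see the new interval), setting it globally rather
than only for the osd section looks like this, with an example value:

  # Set the interval globally so mons/mgrs use the same value when deciding
  # whether a PG counts as "not deep-scrubbed in time".
  ceph config set global osd_deep_scrub_interval 1209600   # example: 14 days
  # The warning fires roughly after
  # osd_deep_scrub_interval * (1 + mon_warn_pg_not_deep_scrubbed_ratio).
  ceph config get mon mon_warn_pg_not_deep_scrubbed_ratio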
Hi,
for some time now (I think since the upgrade to Nautilus) we have been getting
X pgs not deep scrubbed in time
I deep-scrubbed the PGs when the warning occurred and expected the cluster to
recover over time, but no such luck. The warning comes up again and again.
In our spinning-rust cluster we allow deep scrubbing o…
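To see how far behind the cluster actually is, something like this lists the
oldest deep-scrub stamps (a sketch; the JSON field names are taken from a
recent "ceph pg dump pgs" and may differ between releases):

  # Ten PGs with the oldest deep-scrub timestamps, oldest first.
  ceph pg dump pgs --format json 2>/dev/null \
    | jq -r '.pg_stats[] | "\(.last_deep_scrub_stamp) \(.pgid)"' \
    | sort | head -n 10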