On Mon, Dec 9, 2019 at 11:58 AM Paul Emmerich wrote:
solved it: the warning is of course generated by ceph-mgr and not ceph-mon.
So for my problem that means: should have injected the option in ceph-mgr.
That's why it obviously worked when setting it on the pool...
The solution for you is to simply put the option under [global] and restart
ceph-mgr.
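As a concrete sketch of that fix (the interval value below is illustrative, not taken from this thread): put the option in the [global] section of ceph.conf on the mgr hosts, then restart ceph-mgr so the daemon picks it up:

```ini
[global]
# seconds between deep scrubs; the "not deep-scrubbed in time"
# warning is derived from this value (example value: 4 weeks)
osd_deep_scrub_interval = 2419200
```

Restarting the active mgr (e.g. `systemctl restart ceph-mgr.target` on its host) makes the running daemon see the new value.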
On Mon, Dec 9, 2019 at 5:17 PM Robert LeBlanc wrote:
> I've increased the deep_scrub interval on the OSDs on our Nautilus cluster
> with the following added to the [osd] section:
>
should have read the beginning of your email; you'll need to set the option
on the mons as well because they generate the warning.
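A hedged sketch of injecting the option at runtime rather than via ceph.conf (the value is illustrative; injected settings don't survive a daemon restart, so the ceph.conf change is still needed for persistence):

```shell
# push the new interval into the running mons without restarting them;
# the interval value here is an example, not from this thread
ceph tell mon.* injectargs '--osd_deep_scrub_interval=2419200'
```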
Hi,
nice coincidence that you mention that today; I've just debugged the exact
same problem on a setup where deep_scrub_interval was increased.
The solution was to set the deep_scrub_interval directly on all pools
instead (which was better for this particular setup anyways):
ceph osd pool set <pool> deep_scrub_interval <seconds>
I've increased the deep_scrub interval on the OSDs on our Nautilus cluster
with the following added to the [osd] section:
osd_deep_scrub_interval = 260
And I started seeing
1518 pgs not deep-scrubbed in time
in ceph -s. So I added
mon_warn_pg_not_deep_scrubbed_ratio = 1
since the default
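For intuition about why raising mon_warn_pg_not_deep_scrubbed_ratio quiets the warning, here is a rough model of the check in Python. The formula is an assumption on my part (the authoritative logic lives in ceph-mgr): the warning fires once a PG's last deep scrub is older than osd_deep_scrub_interval × (1 + ratio), so the default ratio of 0.75 gives a grace period of 1.75× the interval, and ratio = 1 stretches it to 2×:

```python
# Rough model of the "pgs not deep-scrubbed in time" health check.
# ASSUMPTION: the warn deadline is deep_scrub_interval scaled by
# (1 + mon_warn_pg_not_deep_scrubbed_ratio); the real logic lives in
# ceph-mgr, so treat this as a sketch, not the exact formula.

def deep_scrub_warn_deadline(deep_scrub_interval_s: float,
                             warn_ratio: float) -> float:
    """Seconds after the last deep scrub before the warning fires."""
    return deep_scrub_interval_s * (1.0 + warn_ratio)

week = 7 * 24 * 3600
# Default ratio 0.75 -> warn after 1.75 intervals
print(deep_scrub_warn_deadline(week, 0.75) / week)  # 1.75
# Ratio raised to 1 (as in the thread) -> warn after 2 intervals
print(deep_scrub_warn_deadline(week, 1.0) / week)   # 2.0
```

The practical upshot: raising either the interval or the ratio pushes the deadline out, which is why the warning appeared after the interval was increased without also adjusting the ratio.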