This is a great illustration of the need for this to be global. Is it
documented that way?

There was a discussion on Slack a couple of weeks ago where someone was
asserting that it should be set at the osd scope, whereas we always use
global - well, ever since we hit the same problem as you a few years ago!
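
For the list archives, a quick sketch of the difference (1209600 is just
14 days expressed in seconds, and osd.0 is an arbitrary example daemon):

    # Only OSD daemons pick this up; the mon keeps using its own
    # value, so it keeps reporting PGs as late:
    ceph config set osd osd_deep_scrub_interval 1209600

    # Global scope: mons, mgrs and OSDs all see the new value:
    ceph config set global osd_deep_scrub_interval 1209600

    # Check where the option is set and what a running daemon
    # actually resolves, without restarting anything:
    ceph config dump | grep deep_scrub_interval
    ceph config show osd.0 osd_deep_scrub_interval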


On 26/5/25 15:21, Michel Jouvin wrote:
> Sorry for the noise, I found the mistake right after sending this
> message. We did a `ceph config set osd osd_deep_scrub_interval` instead
> of a `ceph config set global ...`. As a result only the OSDs saw the
> change. Fixing this, the cluster was back to HEALTH_OK immediately!
> 
> Michel
> 
> On 26/05/2025 at 09:17, Michel Jouvin wrote:
>> Hi,
>>
>> Last week we increased osd_deep_scrub_interval from 10 days to 14 days
>> because we almost always had one PG with a late deep scrub (the PG
>> changing all the time). We did it with `ceph config set ...`. From
>> what we have seen, the deep scrubs are now spread over 14 days (the
>> oldest are 14 days old), meaning that the OSDs took the change into
>> account (without being restarted). But the number of late deep scrubs
>> reported by `ceph -s` is ~700, which is unexpected. Does it mean that
>> the mons (which are in charge of the report, if I am right) have not
>> seen the change (they have not been restarted)?
>>
>> Cheers,
>>
>> Michel
>>
>>

-- 
Gregory Orange

