Sorry for the noise, I found the mistake right after sending this message. We did a `ceph config set osd osd_deep_scrub_interval` instead of a `ceph config set global ...`. As a result, only the OSDs saw the change. After fixing this, the cluster was back to HEALTH_OK immediately!
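For reference, the only difference between the two invocations is the config section the value is stored under, which determines which daemons pick it up. A sketch of both commands (the value 1209600, i.e. 14 days in seconds, is illustrative, not necessarily what we used):

```shell
# What we ran: stores the value under the "osd" section,
# so only OSD daemons apply it.
ceph config set osd osd_deep_scrub_interval 1209600

# What we should have run: stores it under "global",
# so the mons (which generate the health report) see it too.
ceph config set global osd_deep_scrub_interval 1209600

# Check which sections currently carry the setting:
ceph config dump | grep osd_deep_scrub_interval
```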

Michel

On 26/05/2025 at 09:17, Michel Jouvin wrote:
Hi,

Last week we increased osd_deep_scrub_interval from 10 days to 14 days, as we tended to permanently have 1 PG with a late deep scrub (the PG changing all the time). We did it with `ceph config set ...`. From what we have seen, the deep scrubs are now spread over 14 days (the oldest being 14 days old), meaning that the OSDs took this change into account (without being restarted). But the number of late deep scrubs reported by `ceph -s` is ~700, which is unexpected. Does it mean that the mon (which is in charge of this report, if I am right) has not seen the change (the mons have not been restarted)?

Cheers,

Michel


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
