So I have done some further digging. It seems similar to this: Bug #54172: ceph version 16.2.7 PG scrubs not progressing - RADOS - Ceph. Apart from:

1/ I have restarted all OSDs / forced a re-peer and the issue is still there.
2/ Setting noscrub stops the scrubs "appearing".

Checking a PG, it seems it's just stuck in an endless schedule loop:

    "scrubber": {
        "active": false,
        "must_scrub": false,
        "must_deep_scrub": false,
        "must_repair": false,
        "need_auto": false,
        "scrub_reg_stamp": "2024-02-21T13:22:37.654028+0000",
        "schedule": "queued for deep scrub"
On 24 Feb 2024, at 0:45, ash...@amerrick.co.uk wrote:

Have just upgraded a cluster from 17.2.7 to 18.2.1. Everything is working as expected apart from the number of scrubs & deep scrubs, which is bouncing all over the place every second. I have the value set to 1 per OSD, but currently the cluster reckons one moment it's doing 60+ scrubs, then a second later this drops to 40, then back to 70. If I check the live Ceph logs I can see that every second it's reporting multiple PGs starting either a scrub or deep scrub; it does not look like these are actually running, as it isn't having a negative effect on the cluster's performance. Is this something to be expected off the back of the upgrade, and should it sort itself out?

A sample of the logs:
2024-02-24T00:41:20.055401+0000 osd.54 (osd.54) 3160 : cluster 0 12.9a deep-scrub starts
2024-02-24T00:41:19.658144+0000 osd.41 (osd.41) 4103 : cluster 0 12.cd deep-scrub starts
2024-02-24T00:41:19.823910+0000 osd.33 (osd.33) 5625 : cluster 0 12.ae deep-scrub starts
2024-02-24T00:41:19.846736+0000 osd.65 (osd.65) 3947 : cluster 0 12.53 deep-scrub starts
2024-02-24T00:41:20.007331+0000 osd.20 (osd.20) 7214 : cluster 0 12.142 scrub starts
2024-02-24T00:41:20.114748+0000 osd.10 (osd.10) 6538 : cluster 0 12.2c deep-scrub starts
2024-02-24T00:41:20.247205+0000 osd.36 (osd.36) 4789 : cluster 0 12.16f deep-scrub starts
2024-02-24T00:41:20.908051+0000 osd.68 (osd.68) 3869 : cluster 0 12.d7 deep-scrub starts
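For what it's worth, this is roughly how I'm watching it (a sketch; the cluster log path may differ under cephadm/containers, and I'm assuming completions show up as "scrub ok" / "deep-scrub ok" lines):

# the per-OSD limit mentioned above
ceph config get osd osd_max_scrubs

# follow the cluster log live and watch the start messages roll in
ceph -w | grep 'scrub starts'

# compare how many scrubs "start" vs how many ever finish
grep -c 'scrub starts' /var/log/ceph/ceph.log
grep -c 'scrub ok' /var/log/ceph/ceph.log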
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
