There’s a table in the docs, in the dev tree I think, that shows when shallow / 
deep scrubs will / won’t run, and situations in which a shallow scrub will be 
promoted to a deep scrub.

I don’t *think* an OSD would run both a shallow and a deep scrub at the same 
time on a given PG, but I’m not positive.

https://docs.ceph.com/en/quincy/rados/configuration/osd-config-ref/#scrubbing
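If you want to see what the cluster is scrubbing right now (and whether any PG 
reports both states at once), something along these lines is enough; the grep 
is just an illustration:

    # STATE shows "scrubbing" for shallow scrubs and "scrubbing+deep" for deep scrubs
    ceph pg dump pgs_brief | grep scrubbing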


With recent releases the mclock OSD scheduler is the default; it endeavors to 
base scheduling on measured IOPS at OSD granularity, with client / recovery / 
balanced profiles that can be selected.  There are still some shortcomings 
today, especially with HDDs and EC, but improvements are in the funnel.
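For reference, the profile is selected with osd_mclock_profile; the documented 
values in recent releases are balanced, high_client_ops, and high_recovery_ops 
(the default has varied by release, so check what yours is running):

    # see what the OSDs are currently using
    ceph config get osd osd_mclock_profile

    # bias the scheduler toward client I/O at the expense of scrub/recovery
    ceph config set osd osd_mclock_profile high_client_ops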

With the classic wpq scheduler one can titrate the scrub options to taste, 
taking network and media performance into account.  HDD clusters, especially 
with EC, may find it advantageous to double the deep scrub interval, and one 
can adjust how many scrubs a given OSD will run at once, the hours/days when 
they may run, etc.  Note that restricting the run window makes it harder to 
fit them all within the configured interval, and a longer interval is like 
only seeing the dentist once a decade, so it’s a tradeoff.  Shallow scrubs are 
pretty lightweight and are usually best left alone; deep scrubs may require 
titration based on your needs.
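As a sketch of the kind of knobs involved -- the option names are the standard 
OSD scrub settings, but the values here are purely illustrative, not 
recommendations for your cluster:

    # double the deep scrub interval from the default of one week (seconds)
    ceph config set osd osd_deep_scrub_interval 1209600

    # limit each OSD to one scrub (shallow or deep) at a time
    ceph config set osd osd_max_scrubs 1

    # only start scrubs between 22:00 and 06:00 local time
    ceph config set osd osd_scrub_begin_hour 22
    ceph config set osd osd_scrub_end_hour 6

    # don't start new scrubs while the load average is above this threshold
    ceph config set osd osd_scrub_load_threshold 0.5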

Arguably if your scrubs are noticeably impacting client performance, your 
architecture may be marginal to begin with.

> On May 15, 2025, at 9:04 AM, Devender Singh <deven...@netskrt.io> wrote:
> 
> Hello All
> 
> Can anyone suggest how to keep scrubs and deep scrubs from impacting IO or
> clients? Also, how can we ensure the two scrub types don’t run at the same
> time, or that deep scrubs have a lower impact than normal scrubs, other than
> by scheduling, and which parameters control this?
> 
> Regards
> Dev
> 
> On Mon, 12 May 2025 at 9:14 AM, Devender Singh <deven...@netskrt.io> wrote:
> 
>> Hello All
>> 
>> 
>> Need some help to tune scrubbing.
>> 
>> 1. How do we keep scrubbing and deep scrubbing from running together, and
>> which one should be started first?
>> 
>>     124   active+clean+remapped
>>      35   active+clean+scrubbing+deep
>>      24   active+clean+scrubbing
>>       4   active+clean+remapped+scrubbing+deep
>> 
>> I have osd_max_scrubs = 1, but IO is still impacted, especially when both
>> are running together.
>> 
>> 
>> 
>> 2. When changing priorities so as not to impact IO, does a higher number
>> mean a lower priority, or does it work differently for the different kinds
>> of priority parameters?
>> 
>> What is the convention for high and low priority in Ceph (is a higher
>> number a lower priority, or a lower number, etc.)?
>> 
>> e.g. osd_requested_scrub_priority - 170 (120 default)
>> 
>> e.g. ceph config set osd osd_scrub_event_cost 8192 (4096 default)
>> 
>> 
>> 
>> Regards
>> Dev

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
