Hi Manuel.

Thanks for your response. We will consider these settings when we enable
deep-scrubbing. For now, I saw this write-up in the Nautilus release notes:

Configuration values mon_warn_not_scrubbed and
mon_warn_not_deep_scrubbed have been renamed. They are now
mon_warn_pg_not_scrubbed_ratio and mon_warn_pg_not_deep_scrubbed_ratio
respectively. This is to clarify that these warnings are related to
pg scrubbing and are a ratio of the related interval. These options
are now enabled by default.

So, we set mon_warn_pg_not_deep_scrubbed_ratio = 0, and after that the cluster
no longer moves to the warning state for PGs not deep-scrubbed.
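
In case it helps others, this is roughly the form of the change (a sketch; the
same value can also be set under [mon] in ceph.conf and the mons restarted):

# stop the monitors warning about PGs not deep-scrubbed in time
# (0 disabled the check in our case)
ceph config set mon mon_warn_pg_not_deep_scrubbed_ratio 0
# confirm the value is in the config database
ceph config dump | grep mon_warn_pg_not_deep_scrubbed_ratio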

Thanks,
Muthu

On Tue, May 14, 2019 at 4:30 PM EDH - Manuel Rios Fernandez <
mrios...@easydatahost.com> wrote:

> Hi Muthu
>
>
>
> We hit the same issue, with nearly 2000 pgs not deep-scrubbed in time.
>
>
>
> We’re manually forcing deep-scrubs with:
>
>
>
> ceph health detail | grep -i not | awk '{print $2}' | while read i; do
> ceph pg deep-scrub ${i}; done
>
>
>
> It launches about 20-30 pgs to be deep-scrubbed at a time. I think you can
> improve it with a sleep of 120 secs between scrubs to avoid overloading your OSDs.
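>
> Roughly what I mean (a sketch, assuming bash and the stock ceph CLI; adjust
> the grep pattern to match your "ceph health detail" output):
>
> # deep-scrub every PG reported as not deep-scrubbed in time,
> # waiting 120 seconds between requests so the OSDs are not overloaded
> ceph health detail | grep -i 'not deep-scrubbed' | awk '{print $2}' | while read pg; do
>     ceph pg deep-scrub ${pg}
>     sleep 120
> done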
>
>
>
> To disable deep-scrub entirely you can use “ceph osd set nodeep-scrub”. You
> can also constrain deep-scrub with a schedule and a load threshold:
>
> # start scrubbing at 22:00
> osd scrub begin hour = 22
> # stop scrubbing at 08:00
> osd scrub end hour = 8
> # allow scrubs only while the system load is below 0.5
> osd scrub load threshold = 0.5
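>
> If you prefer to change these on a running cluster instead of editing
> ceph.conf, something like this should work (a sketch, assuming the
> Nautilus centralized config database):
>
> ceph config set osd osd_scrub_begin_hour 22
> ceph config set osd osd_scrub_end_hour 8
> ceph config set osd osd_scrub_load_threshold 0.5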
>
>
>
> Regards,
>
>
>
> Manuel
>
>
>
>
>
>
>
>
>
> *From:* ceph-users <ceph-users-boun...@lists.ceph.com> *On behalf of *nokia
> ceph
> *Sent:* Tuesday, May 14, 2019 11:44
> *To:* Ceph Users <ceph-users@lists.ceph.com>
> *Subject:* [ceph-users] ceph nautilus deep-scrub health error
>
>
>
> Hi Team,
>
>
>
> After upgrading from Luminous to Nautilus, we see a "654 pgs not
> deep-scrubbed in time" error in ceph status. How can we disable this flag?
> In our setup we have disabled deep-scrubbing due to performance issues.
>
>
>
> Thanks,
>
> Muthu
>
