Hi Alex,

I'm okay with the number of scrubs performed myself; would you expect
tweaking any of those values to let the deep scrubs finish in time?
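
A quick way to check the current values before tweaking anything (a
sketch; osd.0 just stands in for any OSD daemon local to the node):

    # runtime values as one OSD daemon actually sees them
    ceph daemon osd.0 config show | grep scrub
    # or from the centralized config DB (Mimic/Nautilus onwards)
    ceph config get osd osd_deep_scrub_interval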

Thanks,
  Michael

On Wed, 3 Apr 2019 at 10:30, Alexandru Cucu <m...@alexcucu.ro> wrote:

> Hello,
>
> You can increase *osd scrub max interval* and *osd deep scrub
> interval* if you don't need at least one scrub/deep scrub per week.
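>
> For example, something like this would stretch both windows to two
> weeks on Nautilus (values are in seconds; 1209600 = 14 days is only an
> illustrative choice, not a recommendation):
>
>     ceph config set osd osd_scrub_max_interval 1209600
>     ceph config set osd osd_deep_scrub_interval 1209600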
>
> I would also play with *osd max scrubs* and *osd scrub load threshold*
> to allow more scrubbing work to run in parallel, but be careful, as it
> will have a huge impact on performance.
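>
> For instance, to allow two concurrent scrubs per OSD and keep
> scrubbing going under higher load (again a sketch; 2 and 5.0 are only
> illustrative values):
>
>     ceph config set osd osd_max_scrubs 2
>     ceph config set osd osd_scrub_load_threshold 5.0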
>
> ---
> Alex Cucu
>
> On Wed, Apr 3, 2019 at 3:46 PM Michael Sudnick
> <michael.sudn...@gmail.com> wrote:
> >
> > Hello, was on IRC yesterday about this and got some input, but haven't
> > figured out a solution yet. I have a 5-node, 41-OSD cluster which currently
> > has the warning "295 pgs not deep-scrubbed in time". The number slowly
> > increases as deep scrubs happen. In my cluster I'm primarily using 5400 RPM
> > 2.5" disks, and that's my general bottleneck. Processors are 8-core/16-thread
> > Intel® Xeon D-1541s. 8 OSDs per node (one has 9), and each node
> > hosts a MON, MGR and MDS.
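> >
> > To see which PGs are behind, something like this should work (a
> > sketch; the JSON field names are taken from Nautilus and may differ
> > on other releases):
> >
> >     ceph health detail   # lists the PGs tripping the warning
> >     # oldest deep-scrub stamps first:
> >     ceph pg dump -f json 2>/dev/null | \
> >       jq -r '.pg_map.pg_stats[] | "\(.last_deep_scrub_stamp) \(.pgid)"' \
> >       | sort | head
> >     ceph pg deep-scrub <pgid>   # manually kick a specific PG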
> >
> > My CPU usage is low; it's a very low-traffic cluster, just a home lab.
> > CPU usage rarely spikes, peaking around 30%. RAM is fine: each node has
> > 64GiB, and only about 33GiB is used. Network is overkill, 2x1GbE public
> > and 2x10GbE cluster. Disk %util can hit 80% while deep scrubs are
> > happening, so that seems to be my bottleneck.
> >
> > I am running Nautilus 14.2.0. I'd been running fine since release until
> > about 3 days ago, when a disk died and I replaced it.
> >
> > Any suggestions on what I can do? Thank you.
> >
> > -Michael
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com