Furthermore, presuming you are running Jewel or Luminous, you can change
some settings in ceph.conf to mitigate the deep-scrub impact:

osd scrub max interval = 4838400             # force a scrub after 8 weeks at the latest (seconds)
osd scrub min interval = 2419200             # scrub at most once every 4 weeks when load permits (seconds)
osd scrub interval randomize ratio = 1.0     # spread scheduled scrubs out to avoid scrub storms
osd scrub chunk max = 1                      # scrub a single chunk of objects at a time
osd scrub chunk min = 1
osd scrub priority = 1                       # lower the priority of scrub ops versus client I/O (default 5)
osd scrub sleep = 0.1                        # sleep 0.1 s between scrub chunks
osd deep scrub interval = 2419200            # deep-scrub every 4 weeks (seconds)
osd deep scrub stride = 1048576              # read 1 MiB at a time during deep-scrub
osd disk thread ioprio class = idle          # only effective with the CFQ I/O scheduler
osd disk thread ioprio priority = 7
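If you don't want to wait for an OSD restart, most of these should also be
injectable at runtime (option names use underscores there), for example:

ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'
ceph tell osd.* injectargs '--osd_scrub_chunk_min 1 --osd_scrub_chunk_max 1'
ceph tell osd.* injectargs '--osd_deep_scrub_stride 1048576'

As far as I know, injectargs will tell you when a particular option is not
observed at runtime and needs a restart. Injected values also do not survive
a restart, so keep them in ceph.conf as well.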

Kind regards,
Caspar


On Mon, 10 Dec 2018 at 12:06, Vladimir Prokofev <v...@prokofev.me> wrote:

> Hello list.
>
> Deep scrub totally kills cluster performance.
> First of all, it takes several minutes to complete:
> 2018-12-09 01:39:53.857994 7f2d32fde700  0 log_channel(cluster) log [DBG] : 4.75 deep-scrub starts
> 2018-12-09 01:46:30.703473 7f2d32fde700  0 log_channel(cluster) log [DBG] : 4.75 deep-scrub ok
>
> Second, while it runs, it consumes 100% of OSD time[1]. This is on an
> ordinary 7200RPM spinner.
> While this happens, VMs cannot access their disks, and that leads to
> service interruptions.
>
> I disabled scrub and deep-scrub operations for now, and have 2 major
> questions:
>  - can I disable the 'health warning' status for noscrub and nodeep-scrub? I
> thought there was a way to do this, but can't find it. I want my cluster to
> consider itself healthy, so that if any new 'slow requests' or anything else
> pops up, it will change status to 'health warning' again;
>  - is there a way to limit the deep-scrub impact on disk performance, or do I
> just have to go and buy SSDs?
>
> [1] https://imgur.com/a/TKH3uda