Ray,

Do you know the IOPS/BW capacity of the cluster? 16TB HDDs are more
suitable for cold data; if the clients' BW/IOPS load is too high, the
scrubs will never finish.

And if you raise the scrub priority, it will have a significant impact
on the clients.
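
If you do want to experiment with that, the relevant knobs are along
these lines (defaults shown; not a recommendation for your cluster):

    ceph config set osd osd_scrub_priority 5    # priority of scrub ops in the op queue (default 5)
    ceph config set osd osd_scrub_sleep 0.0     # sleep between scrub chunks; raising it throttles scrubs

Raising osd_scrub_priority trades client latency for scrub progress.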

On 3/10/22 9:59 PM, Ray Cunningham wrote:
We have 16 storage servers, each with 16TB HDDs and 2TB SSDs for DB/WAL, so we
are using BlueStore. The system is running Nautilus 14.2.19 at the moment, with
an upgrade scheduled this month. I can't give you a complete ceph config dump
as this is an offline customer system, but I can get answers to specific
questions.

Off the top of my head, we have set:

osd_max_scrubs 20
osd_scrub_auto_repair true
osd_scrub_load_threshold 0.6
We do not limit scrub hours.
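
For reference, we can confirm what the running daemons have applied
with something like:

    ceph config get osd osd_max_scrubs
    ceph daemon osd.0 config show | grep scrub    # via the admin socket on an OSD host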

Thank you,
Ray




-----Original Message-----
From: norman.kern <norman.k...@gmx.com>
Sent: Wednesday, March 9, 2022 7:28 PM
To: Ray Cunningham <ray.cunning...@keepertech.com>
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Scrubbing

Ray,

Can you provide more information about your cluster (hardware and software
configs)?

On 3/10/22 7:40 AM, Ray Cunningham wrote:
   make any difference. Do
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io