Hi,

I used these settings and there are no more slow requests in the cluster.

---------
ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'
ceph tell osd.* injectargs '--osd_scrub_load_threshold 0.3'
ceph tell osd.* injectargs '--osd_scrub_chunk_max 6'
----------

Yes, scrubbing is slower now, but there has been no OSD flapping and no
slow requests!
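
By the way, note that injectargs changes are runtime-only and will not
survive an OSD restart; to keep them persistent, the same values can go
into ceph.conf under [osd], for example:

---------
[osd]
osd_scrub_sleep = 0.1
osd_scrub_load_threshold = 0.3
osd_scrub_chunk_max = 6
---------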

Thanks for all your help!


Karun Josy

On Sun, Jan 28, 2018 at 9:25 PM, David Turner <drakonst...@gmail.com> wrote:

> Use a get with the second syntax to see the currently running config.
>
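> For example (using osd.21 and osd_scrub_sleep from your commands):
>
> -------
> ceph daemon osd.21 config get osd_scrub_sleep
> -------
>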
> On Sun, Jan 28, 2018, 3:41 AM Karun Josy <karunjo...@gmail.com> wrote:
>
>> Hello,
>>
>> Sorry for bringing this up again.
>>
>> What is the proper way to adjust the scrub settings?
>> Can I use injectargs?
>> -------
>> ceph tell osd.* injectargs '--osd_scrub_sleep .1'
>> -------
>>
>> Or do I have to use config set manually on each OSD daemon?
>> ---
>> ceph daemon osd.21 config set osd_scrub_sleep .1
>> ----
>>
>> With both, it shows "(not observed, change may require restart)".
>> So is it not set?
>>
>>
>> Karun Josy
>>
>> On Mon, Jan 15, 2018 at 7:16 AM, shadow_lin <shadow_...@163.com> wrote:
>>
>>> Hi,
>>> You can try adjusting osd_scrub_chunk_min, osd_scrub_chunk_max, and
>>> osd_scrub_sleep (see the example after the option descriptions below).
>>>
>>>
>>> osd scrub sleep
>>>
>>> Description: Time to sleep before scrubbing the next group of chunks.
>>> Increasing this value slows down the whole scrub operation, while client
>>> operations are less impacted.
>>> Type: Float
>>> Default: 0
>>>
>>> osd scrub chunk min
>>>
>>> Description: The minimum number of object store chunks to scrub during a
>>> single operation. Ceph blocks writes to a single chunk during scrub.
>>> Type: 32-bit Integer
>>> Default: 5
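>>>
>>> For example, these can be adjusted at runtime with injectargs (the
>>> values below are only illustrative; tune them for your workload):
>>>
>>> -------
>>> ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'
>>> ceph tell osd.* injectargs '--osd_scrub_chunk_min 1'
>>> ceph tell osd.* injectargs '--osd_scrub_chunk_max 5'
>>> -------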
>>>
>>>
>>> 2018-01-15
>>> ------------------------------
>>> lin.yunfan
>>> ------------------------------
>>>
>>> *From:* Karun Josy <karunjo...@gmail.com>
>>> *Sent:* 2018-01-15 06:53
>>> *Subject:* [ceph-users] Limit deep scrub
>>> *To:* "ceph-users" <ceph-users@lists.ceph.com>
>>> *Cc:*
>>>
>>> Hello,
>>>
>>> It appears that the cluster is having many slow requests while it is
>>> scrubbing and deep scrubbing. Also, sometimes we can see OSDs flapping.
>>>
>>> So we have set the flags noscrub and nodeep-scrub:
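>>>
>>> -------
>>> ceph osd set noscrub
>>> ceph osd set nodeep-scrub
>>> -------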
>>>
>>> When we unset them, 5 PGs start to scrub.
>>> Is there a way to limit it to one at a time? (osd_max_scrubs below is
>>> already 1, but that appears to be a per-OSD limit, not cluster-wide.)
>>>
>>> # ceph daemon osd.35 config show | grep scrub
>>>     "mds_max_scrub_ops_in_progress": "5",
>>>     "mon_scrub_inject_crc_mismatch": "0.000000",
>>>     "mon_scrub_inject_missing_keys": "0.000000",
>>>     "mon_scrub_interval": "86400",
>>>     "mon_scrub_max_keys": "100",
>>>     "mon_scrub_timeout": "300",
>>>     "mon_warn_not_deep_scrubbed": "0",
>>>     "mon_warn_not_scrubbed": "0",
>>>     "osd_debug_scrub_chance_rewrite_digest": "0",
>>>     "osd_deep_scrub_interval": "604800.000000",
>>>     "osd_deep_scrub_randomize_ratio": "0.150000",
>>>     "osd_deep_scrub_stride": "524288",
>>>     "osd_deep_scrub_update_digest_min_age": "7200",
>>>     "osd_max_scrubs": "1",
>>>     "osd_op_queue_mclock_scrub_lim": "0.001000",
>>>     "osd_op_queue_mclock_scrub_res": "0.000000",
>>>     "osd_op_queue_mclock_scrub_wgt": "1.000000",
>>>     "osd_requested_scrub_priority": "120",
>>>     "osd_scrub_auto_repair": "false",
>>>     "osd_scrub_auto_repair_num_errors": "5",
>>>     "osd_scrub_backoff_ratio": "0.660000",
>>>     "osd_scrub_begin_hour": "0",
>>>     "osd_scrub_chunk_max": "25",
>>>     "osd_scrub_chunk_min": "5",
>>>     "osd_scrub_cost": "52428800",
>>>     "osd_scrub_during_recovery": "false",
>>>     "osd_scrub_end_hour": "24",
>>>     "osd_scrub_interval_randomize_ratio": "0.500000",
>>>     "osd_scrub_invalid_stats": "true",
>>>     "osd_scrub_load_threshold": "0.500000",
>>>     "osd_scrub_max_interval": "604800.000000",
>>>     "osd_scrub_min_interval": "86400.000000",
>>>     "osd_scrub_priority": "5",
>>>     "osd_scrub_sleep": "0.000000",
>>>
>>>
>>> Karun
>>>
>>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
