Hi everyone,
Just to make sure everyone reading this thread gets the info, setting
osd_scrub_disable_reservation_queuing to 'true' is a temporary workaround, as
confirmed by Laimis on the tracker [1].
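For anyone wanting to try it, a minimal sketch using the centralized config store (osd.0 below is just an example daemon to verify against):

    # Temporary workaround: disable scrub reservation queuing on all OSDs
    ceph config set osd osd_scrub_disable_reservation_queuing true
    # Verify that a given OSD picked up the setting
    ceph config show osd.0 osd_scrub_disable_reservation_queuing
    # Revert to the default once a fixed release is installed
    ceph config rm osd osd_scrub_disable_reservation_queuing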
Cheers,
Frédéric.
[1] https://tracker.ceph.com/issues/69078
- On 5 Dec 24, at 23:09, Laim
Hi all,
Just came back from this year's Cephalocon and managed to get a quick chat with
Ronen regarding this issue. He had a great presentation [1, 2] on the upcoming
changes to scrubbing in Tentacle as well as some changes already made in the
Squid release.
The primary suspect here is the mclock sch
No, Marc. The recommended value is always the one the devs agreed on at a given
point in time.
Keep it at the defaults.
Frédéric.
From: Marc
Sent: Saturday, 30 November 2024, 22:49
To: Frédéric Nass; Laimis Juzeliūnas
Cc: ceph-users
Subject: RE: Squid: deep scrub issue
So is this recommended for all new Squid clusters?
osd_scrub_chunk_max from 25
>
> To clarify, Squid reduced osd_scrub_chunk_max from 25 to 15 to limit the
> impact on client I/Os, which may have led to increased (deep) scrubbing
> times.
> My advice was to raise this value back to 25 and see the in
Hello Laimis,
To clarify, Squid reduced osd_scrub_chunk_max from 25 to 15 to limit the impact
on client I/Os, which may have led to increased (deep) scrubbing times.
My advice was to raise this value back to 25 and see the influence of this
change. But clearly, this is a more serious matter.
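For reference, raising it back could look like this (a sketch, assuming you
apply it cluster-wide via the config store; adjust to your setup):

    # Check the value currently in effect
    ceph config get osd osd_scrub_chunk_max
    # Raise it back to the pre-Squid default of 25
    ceph config set osd osd_scrub_chunk_max 25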
Thank
Hi Anthony,
No, we don't have any hours set - scrubbing happens at all times. The only thing
we changed from the defaults and kept was increasing osd_max_scrubs to 5 to try
and catch up. Other than that, it was just expanding the window of scrubbing
intervals as 'pgs not deep-scrubbed in time' alerts kept
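For completeness, the changes mentioned map roughly to commands like the
following; the interval value shown is only an illustrative assumption, not our
actual setting:

    # Allow more concurrent scrubs per OSD to catch up (kept at 5)
    ceph config set osd osd_max_scrubs 5
    # Illustrative only: widen the deep-scrub interval to 14 days
    ceph config set osd osd_deep_scrub_interval 1209600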
Hi Frédéric,
Thanks for pointing that out! I see we have 25 set for osd_scrub_chunk_max
(default).
I will try reducing it back to 15 and see if that helps this case.
Regards,
Laimis J.
Hi all, sveikas,
Thanks everyone for the tips and trying to help out!
I've eventually opened a bug tracker issue for the case to get more developers
involved: https://tracker.ceph.com/issues/69078
We tried decreasing osd_scrub_chunk_max from 25 to 15 as per Frédéric's
suggestion, but unfortunately did
Sveikas,
Can you try setting 'ceph config set osd osd_mclock_profile high_recovery_ops'
and see how it affects you?
For some PGs a deep scrub ran for about 20h for me. After I gave it more
priority, 1-2 hours was enough to finish.
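For clarity, a minimal sketch (check the current profile first, then switch):

    # See which mClock profile is currently active
    ceph config get osd osd_mclock_profile
    # Prioritize background (recovery) operations over client I/O
    ceph config set osd osd_mclock_profile high_recovery_ops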
- Original Message -
From: Laimis Juzeliūnas
To
Cc: ceph-users
Subject: [ceph-users] Re: Squid: deep scrub issues
Hi Laimis,
Might be the result of osd_scrub_chunk_max now being 15 instead of the previous
25. See [1] and [2].
Cheers,
Frédéric.
[1] https://tracker.ceph.com/issues/68057
[2] https://github.com/ceph/ceph/pull/59791/commits/0841603023ba53923a986f2fb96ab7105630c9d3
Do you have osd_scrub_begin_hour / osd_scrub_end_hour set? Constraining times
when scrubs can run can result in them piling up.
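For what it's worth, a quick way to check (both options default to 0, which
allows scrubbing around the clock):

    ceph config get osd osd_scrub_begin_hour
    ceph config get osd osd_scrub_end_hour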
Are you saying that an individual PG may take 20+ elapsed days to perform a
deep scrub?
> Might be the result of osd_scrub_chunk_max now being 15 instead of 25
> p
Hi Laimis,
Might be the result of osd_scrub_chunk_max now being 15 instead of the previous
25. See [1] and [2].
Cheers,
Frédéric.
[1] https://tracker.ceph.com/issues/68057
[2] https://github.com/ceph/ceph/pull/59791/commits/0841603023ba53923a986f2fb96ab7105630c9d3
- On 26 Nov 24, at 23:36, L