Backfill proceeds in a make-before-break fashion to safeguard data, because
Ceph is first and foremost about strong consistency. Say you have a 3R
(replicated, size=3) pool and you make a change that moves data around.
For a given PG, Ceph will complete a fourth copy of the data before removing
one of the existing three, which is why usage grows temporarily while data is
on the move.
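A rough way to watch that while it happens, with the usual status commands
(column names vary a bit between releases):

  ceph -s            # misplaced objects and backfilling/backfill_wait PGs
  ceph df            # RAW USED at the top, plus per-pool STORED/USED
  ceph osd df tree   # per-OSD utilisation, to see where the temporary copies sit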
I can confirm that now.
It seems after the backfill completes, usage starts dropping.
Thx.
On Fri, Dec 20, 2024 at 10:46 AM Eugen Block wrote:
> Could you be a little more specific? Do you have any numbers/terminal
> outputs to show? In general, a higher usage is expected (temporarily)
> during backfill.
Could you be a little more specific? Do you have any numbers/terminal
outputs to show? In general, a higher usage is expected (temporarily)
during backfill.
Quoting Rok Jaklič:
After a new rule has been set, is it normal that usage grows
significantly while the number of objects stays pretty much the same?
After a new rule has been set, is it normal that usage grows
significantly while the number of objects stays pretty much the same?
Rok
On Mon, Dec 2, 2024 at 10:45 AM Eugen Block wrote:
> Yes, there will be a lot of data movement. But you can throttle
> backfill (are you on wpq instead of mclock?)
Hi,
may I ask which commands you used to achieve that?
Thank you
On 2 December 2024 at 11:04:19 CET, "Rok Jaklič" wrote:
>Yes, I've disabled mclock until backfill completes.
>
>On Mon, Dec 2, 2024 at 10:45 AM Eugen Block wrote:
>
>> Yes, there will be a lot of data movement. But you can throttle backfill […]
https://docs.ceph.com/en/reef/rados/configuration/mclock-config-ref/#steps-to-modify-mclock-max-backfills-recovery-limits
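For the list archives, the gist of that page boils down to something like this
(Reef-era option names; double-check them against your release):

  ceph config set osd osd_mclock_override_recovery_settings true   # allow changing the limits below
  ceph config set osd osd_max_backfills 1
  ceph config set osd osd_recovery_max_active 1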
On Mon, Dec 2, 2024 at 1:28 PM wrote:
> Hi,
> may i ask which commands did you use to achieve that?
>
> Thank you
>
> On 2 December 2024 at 11:04:19 CET, "Rok Jaklič" wrote:
Yes, I've disabled mclock until backfill completes.
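For anyone finding this in the archives: switching back to the wpq scheduler
is roughly the following sketch (osd_op_queue only takes effect after an OSD
restart):

  ceph config set osd osd_op_queue wpq
  # then restart the OSDs, e.g. systemctl restart ceph-osd@<id> or via your orchestrator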
On Mon, Dec 2, 2024 at 10:45 AM Eugen Block wrote:
> Yes, there will be a lot of data movement. But you can throttle
> backfill (are you on wpq instead of mclock?) and it will slowly drain
> the PGs from SSDs to HDDs to minimize client impact.
Yes, there will be a lot of data movement. But you can throttle
backfill (are you on wpq instead of mclock?) and it will slowly drain
the PGs from SSDs to HDDs to minimize client impact.
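If you stay on wpq, the classic throttles apply directly; a minimal sketch,
with values that are only a conservative starting point:

  ceph config get osd osd_op_queue                 # confirm the scheduler (wpq vs mclock_scheduler)
  ceph config set osd osd_max_backfills 1          # concurrent backfills per OSD
  ceph config set osd osd_recovery_sleep_hdd 0.1   # increase for more pacing if clients still notice it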
Quoting Rok Jaklič:
I didn't have any bad mappings.
I'll wait until the backfill completes, then try to apply the new rules. […]
I didn't have any bad mappings.
I'll wait until the backfill completes, then try to apply the new rules.
Then I can probably expect some recovery to start so it moves
everything from ssd to hdd?
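Afterwards the placement can be spot-checked with something like the
following (mypool is just a placeholder pool name):

  ceph pg ls-by-pool mypool   # PGs of the pool with their up/acting OSD sets
  ceph osd tree               # device class and position of those OSDs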
On Sun, Dec 1, 2024 at 9:36 AM Eugen Block wrote:
> It means that in each of the 1024 attempts, CRUSH was able to find […]
It means that in each of the 1024 attempts, CRUSH was able to find
num-rep OSDs. Those are the OSD IDs in the brackets (like the acting
set). You can then check the IDs (or at least some of them) for their
device class, in case you have doubts (I always do that with a couple
of random sets).
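For reference, the kind of invocation that produces such a mappings.txt, plus
the spot checks just described (the rule ID, replica count and OSD IDs are
only examples):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -i crushmap.bin --test --rule 1 --num-rep 3 --show-mappings > mappings.txt
  crushtool -i crushmap.bin --test --rule 1 --num-rep 3 --show-bad-mappings   # should print nothing
  ceph osd crush get-device-class osd.7 osd.23    # check the class of a few IDs from mappings.txt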
Thx.
Can you explain mappings.txt a little bit?
I assume that each line in mappings.txt shows the result of applying crush
rule 1, with the selected OSDs in the square brackets?
Rok
On Thu, Nov 28, 2024 at 8:53 AM Eugen Block wrote:
> Of course it's possible. You can either change this rule by extracting
> the crushmap, decompiling it […]
Oh right, I always forget the reclassify command! It worked perfectly
last time I used it. Thanks!
Quoting Anthony D'Atri:
Apologies for the empty reply to this I seem to have sent. I blame
my phone :o
This process can be somewhat automated with crushtool’s
reclassification directives […]
Apologies for the empty reply to this I seem to have sent. I blame my phone :o
This process can be somewhat automated with crushtool’s reclassification
directives, which can help avoid omissions or typos (/me whistles innocently):
https://docs.ceph.com/en/latest/rados/operations/crush-map-edits
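A sketch of what that looks like in practice; the root name and the %-ssd
bucket pattern here are assumptions, so adapt them to your actual map:

  ceph osd getcrushmap -o original.bin
  crushtool -i original.bin --reclassify \
      --reclassify-root default hdd \
      --reclassify-bucket %-ssd ssd default \
      -o adjusted.bin
  crushtool -i original.bin --compare adjusted.bin   # check for unexpected mapping changes
  ceph osd setcrushmap -i adjusted.bin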
Of course it's possible. You can either change this rule by extracting
the crushmap, decompiling it, editing the "take" section, compiling it,
and injecting it back into the cluster. Or you can simply create a new
rule with the class hdd specified and set this new rule for your pools.
So the first approach […]
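In terms of commands, roughly (rule, pool and file names are only
placeholders):

  # 1) edit the existing rule by hand
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  #   change the rule's take step, e.g. to "step take default class hdd"
  crushtool -c crushmap.txt -o crushmap-new.bin
  ceph osd setcrushmap -i crushmap-new.bin

  # 2) or create a new hdd-only rule and assign it to the pools
  ceph osd crush rule create-replicated replicated_hdd default host hdd
  ceph osd pool set mypool crush_rule replicated_hdd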