Well, sure, if you want to be all elegant about it ;)
>
>> I ran ceph osd crush reweight 75 1.0 and it started recovering right away
>> 3-4 Gbit/s sustained throughput. I know this is a bandaid, waiting on
>> your guidance on how to adjust the weights above.
>
> Use something like this:
>
> ceph osd set norebalance
> On Feb 26, 2025, at 9:07 AM, Deep Dish wrote:
>
> I appreciate all the tips! And thanks for the observation on weights. I
> don't know how it got to 1 for all OSDs. The cluster has a mixture of 8 and
> 10T drives. Is there a way to automatically readjust them or is this done
> manually in the crush map (decompile/edit/compile)?
On 2/26/25 at 15:07, Deep Dish wrote:
I ran ceph osd crush reweight 75 1.0 and it started recovering right away
3-4 Gbit/s sustained throughput. I know this is a bandaid, waiting on
your guidance on how to adjust the weights above.
Use something like this:
ceph osd set norebalance
ceph osd
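The suggested command list is cut off here. A minimal sketch of what such a sequence might look like, assuming the intent is to pause rebalancing, set each OSD's CRUSH weight back to its device size in TiB (the SIZE column of 'ceph osd df'), then resume; the OSD IDs and weights below are only illustrative:

ceph osd set norebalance
ceph osd crush reweight osd.0 7.15359    # e.g. an 8 TB drive
ceph osd crush reweight osd.1 9.09560    # e.g. a 10 TB drive
# ...repeat for the remaining OSDs...
ceph osd unset norebalance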
I appreciate all the tips! And thanks for the observation on weights. I
don't know how it got to 1 for all OSDs. The cluster has a mixture of 8
and 10T drives. Is there a way to automatically readjust them or is this
done manually in the crush map (decompile/edit/compile)?
I ran ceph osd crush reweight 75 1.0 and it started recovering right away
3-4 Gbit/s sustained throughput. I know this is a bandaid, waiting on your
guidance on how to adjust the weights above.
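For reference, the manual crush-map route mentioned in the question is a crushtool round trip, roughly like the sketch below (file names are just examples); a per-OSD 'ceph osd crush reweight osd.N <weight>' avoids the decompile/edit/compile step entirely:

ceph osd getcrushmap -o crushmap.bin         # export the current CRUSH map
crushtool -d crushmap.bin -o crushmap.txt    # decompile to editable text
# edit the item weights in crushmap.txt, then recompile and inject it:
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new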
Hi,
after one PG finishes backfilling, another PG will start to backfill. You can
raise osd_max_backfills if you want to backfill more PGs at the same time.
backfill_toofull will decrease over time.
Why do you see toofull? Ceph removes the old data only once the new data is in
place. While it hasn't done that yet, it calcula
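A minimal example of raising that knob (the value 3 is only an example; on recent releases the mClock scheduler may ignore it unless osd_mclock_override_recovery_settings is enabled):

ceph config get osd osd_max_backfills      # check the current value
ceph config set osd osd_max_backfills 3    # allow more concurrent backfills per OSD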
On Feb 26, 2025, at 7:47 AM, Deep Dish wrote:
Your parents had quite the sense of humor.
> Hello,
>
> I have an 80 OSD cluster (across 8 nodes). The average utilization across my
> OSDs is ~ 32%.
Average isn’t what factors in here ...
> Recently the cluster had a bad drive, and it was re
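A quick way to look at what does factor in here, namely the fullest OSDs and the cluster's fullness thresholds rather than the average (assuming a standard Ceph CLI):

ceph osd df tree             # per-OSD %USE; backfill_toofull keys off the fullest target OSD, not the mean
ceph osd dump | grep ratio   # shows full_ratio, backfillfull_ratio and nearfull_ratio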
Hi,
Did you change osd.75 weight on purpose?
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META    AVAIL    %USE   VAR   PGS  STATUS
75  hdd    7.15359  1.0       7.2 TiB  4.5 TiB  4.5 TiB  158 MiB  13 GiB  2.6 TiB  63.47  1.96  356  up
Setting it back to 1 with 'ceph osd reweight 75 1' may help.
Regards,
Frédéric.
- On 26 Feb 25, at 13:47, Deep Dish deeepd...@gmail.com wrote:
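Note that 'ceph osd crush reweight' (used earlier in the thread) and 'ceph osd reweight' (suggested here) are different knobs; a rough sketch of the distinction, with values taken from the osd df line above:

ceph osd crush reweight osd.75 7.15359    # CRUSH weight: normally the device size in TiB
ceph osd reweight 75 1                    # override reweight: a 0..1 factor applied on top of the CRUSH weight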
So the one thing that sticks out straight away is OSD.75 and it having a
different weight to all the other devices.
Why is this?
Is this the drive that was replaced?
Darren
> On 26 Feb 2025, at 12:47, Deep Dish wrote:
>
> Hello,
>
> I have an 80 OSD cluster (across 8 nodes). The average utilization across
> my OSDs is ~ 32%.