Don’t use legacy override reweights. When you have the upmap balancer 
enabled, they confuse it.
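A quick way to check for them (a sketch; osd.12 is a hypothetical example):

    # Any OSD whose REWEIGHT column shows < 1.00000 still carries a
    # legacy override reweight
    ceph osd df tree

    # Reset the override back to 1.0 and let the upmap balancer take over
    ceph osd reweight 12 1.0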

Also look for `rados bench` leftovers: benchmark objects written with 
--no-cleanup stay in the pool and count toward usage.
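Something like this should find and remove them (pool name is a placeholder):

    # Objects written by `rados bench` are named benchmark_data_*
    rados -p <pool> ls | grep benchmark_data | head

    # `rados cleanup` deletes them; --prefix matches the bench object names
    rados -p <pool> cleanup --prefix benchmark_data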

> On May 5, 2025, at 6:01 AM, Yunus Emre Sarıpınar 
> <yunusemresaripi...@gmail.com> wrote:
> 
> I have a Ceph cluster that was created on Nautilus and later upgraded to Octopus.
> 
> I had 24 nodes and added 8 new nodes to the cluster. The balancer is 
> enabled in upmap mode. I increased my PG count from 8192 to 16384.
> 
> I had to set reweight 0.8 on the new OSDs to deal with nearly full OSDs (there 
> was only 1 TB left because of the skewed balance).
> 
> Now the cluster health is OK and the distribution is balanced by my manual fix.
> 
>   data:
>     pools:   4 pools, 16545 pgs
>     objects: 173.32M objects, 82 TiB
>     usage:   436 TiB used, 235 TiB / 671 TiB avail
>     pgs:     15930 active+clean
>              579   active+clean+snaptrim_wait
>              36    active+clean+snaptrim
> 
> Before adding the 8 nodes, the usage was 215 TiB. I made the change about 
> two months ago and the usage still hasn't decreased.
> 
> Why did the usage double, and how can I fix it?
> 