A while ago - before the ceph balancer existed, probably on Jewel - we
had a bunch of disks with different reweights to help control PG
distribution. We upgraded to Luminous. All our disks are the same, so we
set them all back to 1.0 and let them fill accordingly, then ran the
balancer about 4-5 times, letting each run finish. I also talked to some
guys on IRC about taking the remaining non-1.0 reweighted OSDs and
setting them back to 1.0. I went from a standard deviation of 2+ to 0.5.
Awesome.
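For what it's worth, the standard deviation quoted above is over per-OSD utilization. A small sketch of how you might compute it yourself from `ceph osd df --format json` output (the `nodes`/`utilization` field names are assumed from that JSON shape, and the sample numbers here are invented for illustration):

```python
import json
import statistics

# Sample in the shape of `ceph osd df --format json` output (assumed
# field names; the utilization percentages are invented for illustration).
osd_df_json = json.dumps({
    "nodes": [
        {"id": 0, "utilization": 61.2, "reweight": 1.0},
        {"id": 1, "utilization": 60.7, "reweight": 1.0},
        {"id": 2, "utilization": 61.9, "reweight": 0.85},
    ]
})

def utilization_stddev(raw: str) -> float:
    """Sample standard deviation of per-OSD utilization (percent)."""
    nodes = json.loads(raw)["nodes"]
    return statistics.stdev(n["utilization"] for n in nodes)

print(round(utilization_stddev(osd_df_json), 3))
```

Watching this number fall after each balancer run is a reasonable way to tell when the cluster has converged.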
On Wed, Feb 26, 2020 at 10:08 AM shubjero wrote:
>
> Right, but should I be proactively returning any reweighted OSD's that
> are not 1.0 back to 1.0?
Right, but should I be proactively returning any reweighted OSD's that
are not 1.0 back to 1.0?
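One way to answer that for a given cluster is to list the OSDs whose reweight is not 1.0; a sketch against the JSON shape of `ceph osd df --format json` (field names assumed, sample values invented):

```python
import json

# Sample in the shape of `ceph osd df --format json` output (assumed
# field names; the values are invented for illustration).
osd_df_json = json.dumps({
    "nodes": [
        {"id": 0, "name": "osd.0", "reweight": 1.0},
        {"id": 1, "name": "osd.1", "reweight": 0.9},
        {"id": 2, "name": "osd.2", "reweight": 0.85},
    ]
})

def reweighted_osds(raw: str) -> list:
    """Return (name, reweight) pairs for OSDs whose reweight != 1.0."""
    nodes = json.loads(raw)["nodes"]
    return [(n["name"], n["reweight"]) for n in nodes if n["reweight"] != 1.0]

for name, rw in reweighted_osds(osd_df_json):
    # Each of these is a candidate for `ceph osd reweight <id> 1.0`
    # before letting the upmap balancer take over.
    print(f"{name} reweight={rw}")
```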
On Wed, Feb 26, 2020 at 3:36 AM Konstantin Shalygin wrote:
>
> On 2/26/20 3:40 AM, shubjero wrote:
> > I'm running a Ceph Mimic cluster 13.2.6 and we use the ceph-balancer
> > in upmap mode. This cluster is fairly old and pre-Mimic we used to set
> > osd reweights to balance the standard deviation of the cluster.
On 2/26/20 3:40 AM, shubjero wrote:
I'm running a Ceph Mimic cluster 13.2.6 and we use the ceph-balancer
in upmap mode. This cluster is fairly old and pre-Mimic we used to set
osd reweights to balance the standard deviation of the cluster. Since
moving to Mimic about 9 months ago I enabled the ce