A while ago - before the ceph balancer existed, probably on Jewel - we had a
bunch of disks with different reweights to help control PG distribution.
We then upgraded to Luminous.
All our disks are the same, so we set the reweights back to 1.0 and let the
disks fill accordingly.
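
If you want to find the stragglers first, ceph osd df prints a REWEIGHT
column, and resetting one is a single command (the osd id 12 below is just a
made-up example):

    ceph osd df               # look for REWEIGHT values other than 1.00000
    ceph osd reweight 12 1.0  # reset that OSD's reweight override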
 
Then we ran the balancer about 4-5 times, letting each run finish before
starting the next. It worked great, though it took a while.
Note that when the balancer kicks off it can move a lot of data and touch a
lot of objects.
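
For reference, the manual plan workflow in the Luminous-era CLI looks roughly
like this (a sketch, not necessarily the exact sequence we ran; "myplan" is
an arbitrary plan name):

    ceph osd set-require-min-compat-client luminous  # upmap needs luminous+ clients
    ceph balancer mode upmap
    ceph balancer optimize myplan    # compute a plan
    ceph balancer eval myplan        # score it before committing
    ceph balancer execute myplan     # apply, then watch ceph -s until clean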
 
We're currently using it to help evacuate and redeploy hosts.
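
A minimal sketch of one way to drain an OSD before redeploying its host
(osd.12 is a made-up example id):

    ceph osd crush reweight osd.12 0   # zero the CRUSH weight; data drains off
    # wait until all PGs are active+clean, then stop and redeploy the OSD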
 
HTH  Joe

>>> shubjero <shubj...@gmail.com> 2/28/2020 11:43 AM >>>
I talked to some guys on IRC about going back to the OSDs with non-1
reweights and setting them back to 1.

I went from a standard deviation of 2+ to 0.5.
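
For anyone wondering where to read that number: ceph osd df reports a STDDEV
on its summary line.

    ceph osd df | tail -1   # last line shows MIN/MAX VAR and STDDEV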

Awesome.

On Wed, Feb 26, 2020 at 10:08 AM shubjero <shubj...@gmail.com> wrote:
>
> Right, but should I be proactively returning any reweighted OSDs that
> are not 1.0000 to 1.0000?
>
> On Wed, Feb 26, 2020 at 3:36 AM Konstantin Shalygin <k0...@k0ste.ru> wrote:
> >
> > On 2/26/20 3:40 AM, shubjero wrote:
> > > I'm running a Ceph Mimic cluster (13.2.6) and we use the ceph-balancer
> > > in upmap mode. This cluster is fairly old, and pre-Mimic we used to set
> > > OSD reweights to balance out the standard deviation of the cluster. Since
> > > moving to Mimic about 9 months ago I enabled the ceph-balancer in upmap
> > > mode and let it do its thing, but I did not think about setting the
> > > previously modified reweights back to 1.00000 (not sure if this is fine
> > > or whether it would have been best practice?)
> > >
> > > Does the ceph-balancer in upmap mode manage the OSD reweights
> > > dynamically? I'm just wondering if I need to proactively go back and set
> > > all non-1.00000 reweights to 1.00000.
> >
> > The balancer in upmap mode should always work on non-reweighted (i.e.
> > 1.0000) OSDs.
> >
> > k
> >
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
