ceph osd reweight-by-utilization <percentage> needs another argument to do
anything.  The recommended starting value is 120.  Run it again with lower
and lower values until you're happy.  The value is a percentage threshold,
and I'm not sure what happens if you go below 100.  If you get into
trouble with this (too much backfilling causing problems), you can use
ceph osd reweight <osdid> 1 to go back to "normal"; just look at ceph osd
tree to see which osds were reweighted.
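
For example, a session might look like this (osd id 12 is just a
placeholder; use whatever ids ceph osd tree shows as reweighted on your
cluster):

    # start gently, then tighten on later passes
    ceph osd reweight-by-utilization 120
    ceph osd reweight-by-utilization 110
    # see which osds now have a REWEIGHT below 1
    ceph osd tree
    # if backfill is hurting clients, put one back to normal
    ceph osd reweight 12 1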

Bear in mind that reweight-by-utilization adjusts the osd weight, which is
not a permanent value.  In/out events will reset this weight.
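
For instance (osd 12 is a placeholder again, and this is how I understand
the firefly behavior):

    ceph osd out 12   # drops the osd's reweight to 0
    ceph osd in 12    # brings it back at 1, discarding any earlier reweight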

But that's ok, because you don't need the reweight to last very long.  Even
if you get it perfectly balanced, you're going to be at ~75%.  I order more
hardware when I hit 70% utilization.  Once you start adding hardware, the
data distribution will change, so any permanent weights you set will
probably be wrong.


If you do want the weights to be permanent, you should look at ceph osd
crush reweight <osdid> <weight>.  This permanently changes the weight in
the crush map, and it's not affected by in/out events.  Bear in mind that
you'll probably have to revisit all of these weights anytime your cluster
changes.  Also note that this <weight> is different from the one used by
ceph osd reweight: by convention it's the disk size in TiB.  I recommend
making small changes to all over- and under-utilized disks, then
re-evaluating after each pass.
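
As a rough sketch (osd names and weights here are placeholders; a 4 TB
disk works out to roughly 3.64 TiB):

    # nudge an overfull osd down a little from its ~3.64 baseline
    ceph osd crush reweight osd.12 3.5
    # nudge an underfull one up a little
    ceph osd crush reweight osd.7 3.8
    # confirm the new crush weights, then let backfill settle
    ceph osd tree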


On Wed, Apr 22, 2015 at 4:12 AM, Stefan Priebe - Profihost AG <
s.pri...@profihost.ag> wrote:

> Hello,
>
> I have heavily unbalanced OSDs.
>
> Some are at 61% usage and some at 86%.
>
> Which is 372G free space vs 136G free space.
>
> All are up and weighted at 1.
>
> I'm running firefly with tunables set to optimal and hashpspool 1.
>
> Also, reweight-by-utilization does nothing.
>
> # ceph osd reweight-by-utilization
> no change: average_util: 0.714381, overload_util: 0.857257. overloaded
> osds: (none)
>
> Stefan