On Wed, Apr 22, 2015 at 7:12 AM, Stefan Priebe - Profihost AG
<s.pri...@profihost.ag> wrote:
> Also a reweight-by-utilization does nothing.

As a fellow sufferer from this issue, mostly what I can offer you is
sympathy rather than actual help.  However, this may be beneficial:

By default, reweight-by-utilization only adjusts OSDs that are 20%
above the average utilization.  That is really too conservative in
our case, especially for smaller OSDs.  It also doesn't help when the
problem isn't a couple of OSDs way above average, but rather some
OSDs way below it.

Try:

# ceph osd reweight-by-utilization 110

or possibly even:

# ceph osd reweight-by-utilization 105

This should give more helpful results.
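
To check whether it actually moved data in the right direction, look
at per-OSD utilization before and after.  Just as a sketch: if I
remember right, ceph osd df only exists on Hammer and later, and on
older releases the osdstat section near the end of ceph pg dump shows
the same per-OSD kb used/avail numbers:

# ceph osd df
# ceph pg dump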

If you still have problems after running that (say it consistently
fixes osd.1 but pushes osd.2's utilization too high while leaving
osd.3 mostly empty), you may have to start assigning reweights by
hand, as sketched below.
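
If it comes to that, the knob is ceph osd reweight, which takes the
OSD number and a weight between 0 and 1.  The numbers below are
purely illustrative, continuing the hypothetical osd.2/osd.3 example
above; you'd pick values based on your own utilization:

# ceph osd reweight 2 0.85
# ceph osd reweight 3 1.0

Note that this is the temporary override reweight (the same value
reweight-by-utilization adjusts), not the CRUSH weight; the CRUSH
weight is changed separately with ceph osd crush reweight and
normally reflects the disk size.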

Also, you didn't mention it explicitly, but if this cluster predates
0.80.9 you may need to set:

ceph osd crush set-tunable straw_calc_version 1

This is supposed to (eventually) cut down on the amount of migration
that happens when you reweight, which matters if you're stuck
reweighting often.  See here under V0.80.9 FIREFLY for details:

http://ceph.com/docs/master/release-notes/
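
If you want to confirm what the cluster is actually using before and
after flipping it, ceph osd crush show-tunables dumps the current
tunable values; on releases that know about straw_calc_version it
should appear in that output (the exact set of fields varies by
version):

# ceph osd crush show-tunables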

Good luck!
