Greetings,
You need to set the following configuration option under [osd] in your
ceph.conf file for your new OSDs.
[osd]
osd_crush_initial_weight = 0
This will ensure your new OSDs come up with a 0 crush weight, thus preventing
the automatic rebalance that you see occurring.
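For example (a sketch; osd.16 below is a hypothetical id for a newly added disk), you can verify that the OSD joined with a CRUSH weight of 0 and later raise the weight by hand when you are ready for it to take data:

  ceph osd tree | grep osd.16          # WEIGHT column should show 0 for the new OSD
  ceph osd crush reweight osd.16 0.5   # raise it, typically in steps, toward the disk size in TB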
Good luck,
On Fri, 23 Nov 2018 at 11:08, Marco Gaiarin wrote:
> Reading the Ceph docs led me to believe that 'ceph osd reweight' and 'ceph osd crush
> reweight' were roughly the same, the first being effectively 'temporary'
> and expressed as a percentage (0-1), while the second is 'permanent' and
> expressed, normally, as
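(For reference, the two commands differ like this in practice; the OSD id and weights below are placeholders:)

  ceph osd reweight 2 0.95               # override weight, a 0-1 factor, shown in the REWEIGHT column of 'ceph osd tree'
  ceph osd crush reweight osd.2 1.81999  # CRUSH weight, normally the disk size in TB, shown in the WEIGHT column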
Hello, Paweł Sadowski!
On that day you wrote...
> This is most probably due to the big difference in weights between your hosts (the
> new one has a 20x lower weight than the old ones), which in combination with the
> straw algorithm is a 'known' issue.
Ok. I've reweighted that disk back to '1' and st
Hello, Paweł Sadowski!
On that day you wrote...
> Exactly, your 'new' OSDs have weights 1.81999 (osd.12, osd.13) and 0.90999
> (osd.14, osd.15). As Jarek pointed out, you should add them using
> 'osd crush initial weight = 0'
> and then use
> 'ceph osd crush reweight osd.x 0.05'
> to slowly incr
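(A sketch of that gradual ramp-up, assuming osd.12 with a target CRUSH weight of 1.81999; let the cluster settle, e.g. back to HEALTH_OK, between steps:)

  ceph osd crush reweight osd.12 0.05
  ceph osd crush reweight osd.12 0.10
  ...continue in small increments...
  ceph osd crush reweight osd.12 1.81999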
On 11/22/18 6:12 PM, Marco Gaiarin wrote:
Hello, Paweł Sadowski!
On that day you wrote...
From your osd tree it looks like you used 'ceph osd reweight'.
Yes, and I supposed I was also doing the right thing!
Now, I've tried to lower the to-be-decommissioned OSD, using:
ceph osd reweight 2 0.95
l
Hello, Paweł Sadowski!
On that day you wrote...
> From your osd tree it looks like you used 'ceph osd reweight'.
Yes, and I supposed I was also doing the right thing!
Now, I've tried to lower the to-be-decommissioned OSD, using:
ceph osd reweight 2 0.95
leading to an osd map tree like:
root@blackp
On 11/22/18 12:22 PM, Jarek wrote:
> On Thu, 22 Nov 2018 12:05:12 +0100
> Marco Gaiarin wrote:
>
>> Hello, Paweł Sadowski!
>> On that day you wrote...
>>
>>> We did similar changes many times and it always behaved as
>>> expected.
>>
>> Ok. Good.
>>
>>> Can you show your crushmap/ceph osd tre
Hello, Zongyou Yao!
On that day you wrote...
> The reason for the rebalance is that you are using the straw algorithm. If you switch
> to straw2, no data will be moved.
I'm still on hammer, so:
http://docs.ceph.com/docs/hammer/rados/operations/crush-map/
it seems there's no 'straw2'...
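(For reference, a sketch of how buckets are usually switched to straw2 by editing the crush map, assuming a release and tunables level that support straw2; note the switch itself can still move a small amount of data:)

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # edit crushmap.txt and change 'alg straw' to 'alg straw2' in each bucket
  crushtool -c crushmap.txt -o crushmap-new.bin
  ceph osd setcrushmap -i crushmap-new.bin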
Subject: [ceph-users] New OSD with weight 0, rebalance still happen...
On Thu, 22 Nov 2018 12:05:12 +0100
Marco Gaiarin wrote:
> Hello, Paweł Sadowski!
> On that day you wrote...
>
> > We did similar changes many times and it always behaved as
> > expected.
>
> Ok. Good.
>
> > Can you show your crushmap/ceph osd tree?
>
> Sure!
>
> root@blackpanther:~# c
Hello, Paweł Sadowski!
On that day you wrote...
> We did similar changes many times and it always behaved as expected.
Ok. Good.
> Can you show your crushmap/ceph osd tree?
Sure!
root@blackpanther:~# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 2
Hi Marco,
On 11/22/18 9:22 AM, Marco Gaiarin wrote:
>
> ...
> But, despite the fact that the weight is zero, a rebalance happens, and the
> percentage of data rebalanced seems 'weighted' to the size of the new disk (e.g.,
> I had circa 18TB of space, I added a 2TB disk and roughly 10% of the
> data started to rebal
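(A rough sanity check of that figure, assuming the new disk ends up at its full CRUSH weight: 2 TB / (18 TB + 2 TB) = 0.10, i.e. about 10% of the data would map to the new OSD, matching the observed rebalance.)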