I have used the gentle reweight script many times in the past. More
recently, though, I expanded one cluster from 334 to 1114 OSDs by simply
changing the CRUSH weight of 100 OSDs at a time. Once all PGs from those
100 were stable and backfilling, I added another hundred. I stopped at 500
and let the backfill finish, then repeated the process for the last 500
drives, and it was finished in a weekend without any complaints.
Don't forget to adjust your PG count for the new OSDs once rebalancing is
done.
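
For anyone who wants to do the same, a minimal sketch of one batch plus the
follow-up PG bump (the OSD IDs, CRUSH weight, pool name and pg_num target
below are placeholders, and it assumes the new OSDs were created with an
initial CRUSH weight of 0):

    # bring one batch of 100 new OSDs up to their full CRUSH weight
    for osd in $(seq 334 433); do
        ceph osd crush reweight osd.$osd 9.09569
    done

    # once rebalancing is done, raise the PG count on the affected pool;
    # mimic has no pg autoscaler, so bump pg_num and pgp_num yourself
    ceph osd pool set <pool> pg_num 4096
    ceph osd pool set <pool> pgp_num 4096

Remember that pgp_num has to follow pg_num, or the new PGs will not
actually move any data.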

-Brett

On Sun, Jun 23, 2019, 2:51 PM <c...@elchaka.de> wrote:

> Hello,
>
> I would advise using this script from Dan:
>
> https://github.com/cernceph/ceph-scripts/blob/master/tools/ceph-gentle-reweight
>
> I have used it many times and it works great - also if you want to drain
> the OSDs.
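>
> For anyone who has not used it: the repo can be cloned as below, and I
> would read the script itself (or its built-in help) for the current
> options before pointing it at a production cluster:
>
>     # fetch the ceph-scripts repo and review the tool before running it
>     git clone https://github.com/cernceph/ceph-scripts.git
>     less ceph-scripts/tools/ceph-gentle-reweight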
>
> Hth
> Mehmet
>
> On 30 May 2019 at 22:59:05 CEST, Michel Raabe <rmic...@devnu11.net> wrote:
>>
>> Hi Mike,
>>
>> On 30.05.19 02:00, Mike Cave wrote:
>>
>>> I’d like as little friction for the cluster as possible, as it is in
>>> heavy use right now.
>>>
>>> I’m running mimic (13.2.5) on CentOS.
>>>
>>> Any suggestions on best practices for this?
>>>
>>
>> You can limit recovery, for example via:
>>
>> * osd max backfills
>> * osd recovery max active
>> * osd recovery sleep
>>
>> It will slow down the rebalance but will not hurt the users too much.
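>>
>> A minimal sketch of setting those at runtime (the values are examples
>> only - tune them for your hardware and revert them once the rebalance
>> has finished):
>>
>>     ceph tell osd.* injectargs '--osd-max-backfills 1'
>>     ceph tell osd.* injectargs '--osd-recovery-max-active 1'
>>     ceph tell osd.* injectargs '--osd-recovery-sleep 0.1'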
>>
>>
>> Michel.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
