+1 on that. We are going to add 384 OSDs next week to a 2K+ cluster. The proposed solution really works well!
Kaspar
On 24 July 2019 at 21:06, Paul Emmerich <paul.emmer...@croit.io> wrote:
+1 on adding them all at the same time. All these methods that gradually increase the weight aren't really necessary in newer releases of Ceph.

Paul

--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Wed, Jul 24, 2019 at 8:59 PM Reed Dier <reed.d...@focusvq.com> wrote:

Just chiming in to say that this too has been my preferred method for adding [large numbers of] OSDs:
- Set the norebalance and nobackfill flags.
- Create all the OSDs, and verify everything looks good.
- Make sure my max_backfills and recovery_max_active are as expected.
- Make sure everything has peered.
- Unset the flags and let it run.
One CRUSH map change, one data movement.

Reed
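For reference, a minimal sketch of that sequence with the stock ceph CLI (osd_max_backfills and osd_recovery_max_active are the standard throttle option names; 'ceph config get' assumes the centralized config database from Mimic or newer, adjust for older releases):

    # pause data movement before creating the new OSDs
    ceph osd set norebalance
    ceph osd set nobackfill

    # confirm the recovery throttles are what you expect
    ceph config get osd osd_max_backfills
    ceph config get osd osd_recovery_max_active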
That works, but with newer releases I've been doing this:
- Make sure cluster is HEALTH_OK
- Set the 'norebalance' flag (and usually nobackfill)
- Add all the OSDs
- Wait for the PGs to peer; I usually wait a few minutes
- Remove the norebalance and nobackfill flags
- Wait for HEALTH_OK
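Roughly the same flow as plain commands, for anyone who wants it spelled out (a sketch only; how the OSDs themselves get created depends on your deployment tooling, so that step is just a comment here):

    ceph health                  # should be HEALTH_OK before starting
    ceph osd set norebalance
    ceph osd set nobackfill
    # ... create/add all the new OSDs with your usual tooling ...
    ceph pg stat                 # wait until all PGs have peered
    ceph osd unset nobackfill
    ceph osd unset norebalance
    ceph health                  # backfill runs; wait for HEALTH_OK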
Wido
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com