Hello David,

Can you help me with the steps/procedure to uninstall Ceph storage from an OpenStack environment?
Regards,
Gaurav Goyal

On Tue, Aug 2, 2016 at 11:57 AM, Gaurav Goyal <er.gauravgo...@gmail.com> wrote:

> Hello David,
>
> Thanks a lot for the detailed information! This is going to help me.
>
> Regards,
> Gaurav Goyal
>
> On Tue, Aug 2, 2016 at 11:46 AM, David Turner <david.tur...@storagecraft.com> wrote:
>
>> I'm going to assume you know how to add and remove storage:
>> http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-osds/. The
>> only other part of this process is reweighting the old OSDs in the CRUSH
>> map to a new weight of 0.0:
>> http://docs.ceph.com/docs/master/rados/operations/crush-map/.
>>
>> I would recommend setting the nobackfill and norecover flags:
>>
>>     ceph osd set nobackfill
>>     ceph osd set norecover
>>
>> Next, add all of the new OSDs according to the Ceph docs, then reweight
>> the old OSDs to 0.0:
>>
>>     ceph osd crush reweight osd.1 0.0
>>
>> Once you have all of that set, unset nobackfill and norecover:
>>
>>     ceph osd unset nobackfill
>>     ceph osd unset norecover
>>
>> Wait until all of the backfilling finishes, then remove the old SAN
>> OSDs as per the Ceph docs.
>>
>> There is a thread on this mailing list about the benefits of weighting
>> OSDs to 0.0 instead of just removing them. The biggest thing you gain
>> from doing it this way is that you can remove multiple nodes/OSDs at the
>> same time without having degraded objects, and especially without losing
>> objects.
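The drain procedure quoted above can be sketched as a single script. This is a dry-run sketch, not a definitive implementation: the OSD ids are hypothetical placeholders for your old SAN OSDs, and the `run` helper only prints each command so you can review the sequence before executing it for real (swap `echo` for direct execution when you are ready).

```shell
#!/bin/sh
# Dry-run sketch of draining old OSDs by reweighting to 0.0.
# OSD_IDS is a hypothetical list of the old SAN OSDs; replace with your own.
OSD_IDS="1 2 3"

# Helper: print the command instead of running it (remove 'echo "+"' to go live).
run() { echo "+ $*"; }

# 1. Pause data movement while the cluster map changes.
run ceph osd set nobackfill
run ceph osd set norecover

# 2. (Add the new OSDs here, per the add-or-rm-osds docs.)

# 3. Reweight each old OSD to 0.0 so CRUSH stops mapping data to it.
for id in $OSD_IDS; do
  run ceph osd crush reweight "osd.$id" 0.0
done

# 4. Let backfill/recovery proceed; data migrates off the old OSDs.
run ceph osd unset nobackfill
run ceph osd unset norecover

# 5. After backfilling finishes, remove the old OSDs per the Ceph docs.
```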
>>
>> --
>> David Turner | Cloud Operations Engineer | StorageCraft Technology
>> Corporation <https://storagecraft.com>
>> 380 Data Drive Suite 300 | Draper | Utah | 84020
>> Office: 801.871.2760 | Mobile: 385.224.2943
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com