[ceph-users] Re: Reducing ceph cluster size in half

2022-02-22 Thread Jason Borden
Thank you Matt, Etienne, and Frank for your great advice. I'm going to set up a small test cluster to familiarize myself with the process before making the change on my production environment. Thank you all again, I really appreciate it!

Jason

On 2022-02-21 17:58, Jason Borden wrote: > Hi all

[ceph-users] Re: Reducing ceph cluster size in half

2022-02-21 Thread Etienne Menguy
Hi,

There are different ways, but I would:
- Change the weight (crush weight, not reweight) of the OSDs I want to remove to 0
- Wait for the cluster to return to health
- Stop the OSDs I want to remove
- If the data are OK, remove the OSDs from the crushmap

There is no reason stopping the OSDs should impact your service, as they hold no data; it's
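The steps above might look something like this on the command line (OSD ID `10` and the exact sequence are illustrative, not from the original message):

```shell
# Drain the OSD by setting its CRUSH weight (not the reweight) to 0
ceph osd crush reweight osd.10 0

# Wait for backfill to finish and the cluster to report healthy
ceph -s

# Stop the now-empty OSD (run on the host that serves it)
systemctl stop ceph-osd@10

# Optionally confirm it is safe to remove before touching the crushmap
ceph osd safe-to-destroy 10

# Remove it from the CRUSH map, delete its auth key, and drop it from the cluster
ceph osd crush remove osd.10
ceph auth del osd.10
ceph osd rm 10
```

These commands assume a systemd-managed non-containerized deployment; cephadm or Rook clusters stop OSDs differently.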

[ceph-users] Re: Reducing ceph cluster size in half

2022-02-21 Thread Matt Vandermeulen
This might be easiest to think about in two steps: draining hosts, and doing a PG merge. You can do it in either order (though thinking about it, doing the merge first will give you more cluster-wide resources to do it faster). Draining the hosts can be done in a few ways, too. If you want t
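The PG-merge half of the suggestion above can be sketched as follows (the pool name `mypool` and the target count of 64 are placeholders; decreasing `pg_num` requires Nautilus or later):

```shell
# Check the pool's current PG count
ceph osd pool get mypool pg_num

# Lower it; the mgr merges PGs gradually in the background
ceph osd pool set mypool pg_num 64

# Alternatively, let the autoscaler pick a target for the smaller cluster:
# ceph osd pool set mypool pg_autoscale_mode on

# Watch merge progress alongside cluster health
ceph -s
```

Merging happens incrementally, so it is worth letting it complete before starting the host drain if you take the merge-first ordering.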