I've done something similar. I used a process like this:

ceph osd set noout        # don't mark down OSDs "out" (avoids data re-mapping)
ceph osd set nodown       # don't mark unresponsive OSDs "down"
ceph osd set nobackfill   # pause backfill
ceph osd set norebalance  # pause rebalancing of misplaced PGs
ceph osd set norecover    # pause recovery

Then I did my work to manually remove/destroy the OSDs I was replacing,
brought the replacements online, and unset all of those options. Then the
I/O world collapsed for a little while as the new OSDs were backfilled.
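
In case it helps, the per-OSD part of that work looked roughly like the
sketch below. I'm going from memory, and the OSD ids (0-2 here) and
systemd unit names are placeholders for whatever the node being rebuilt
actually hosts, so adjust to your cluster:

ceph osd out 0                             # repeat for each OSD on the node
systemctl stop ceph-osd@0                  # run on the node itself
ceph osd destroy 0 --yes-i-really-mean-it  # keeps the OSD id for reuse
# ... recreate the OSDs on the rebuilt node, then clear the flags:
ceph osd unset noout
ceph osd unset nodown
ceph osd unset nobackfill
ceph osd unset norebalance
ceph osd unset norecover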

Some of those might be redundant and/or unnecessary. I'm not a ceph expert.
Do this at your own risk. Etc.

jonathan

On Wed, Jan 9, 2019 at 7:58 AM Mosi Thaunot <pourlesma...@gmail.com> wrote:

> Hello,
>
> I have a cluster of 3 nodes with 3 OSDs per node (so 9 OSDs in total) and
> replication set to 3 (so each node has a copy).
>
> For some reason, I would like to recreate node 1. What I have done:
> 1. out the 3 OSDs of node 1, stop them, then destroy them (almost at the
> same time)
> 2. recreate node 1 and add the 3 new OSDs
>
> My problem is that after step 1, I had to wait for backfilling to complete
> (to get only active+clean+remapped and active+undersized+degraded PGs),
> and then wait again after step 2 for the cluster to become healthy.
>
> Could I avoid the wait after step 1? What should I do then? I was thinking:
> - set noout on the OSDs
> - out/stop/destroy the 3 OSDs of node 1 (at the same time)
> - reinstall node 1 (I have a copy of all the configuration files) and add
> the 3 new OSDs
>
> Would that work ?
>
> Thanks and regards,
> Mosi


-- 
Jonathan Woytek
http://www.dryrose.com
KB3HOZ
PGP:  462C 5F50 144D 6B09 3B65  FCE8 C1DC DEC4 E8B6 AABC
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
