> On Oct 19, 2024, at 2:47 PM, Shain Miley <smi...@npr.org> wrote:
> 
> We are running octopus but will be upgrading to reef or squid in the next few 
> weeks.  As part of that upgrade I am planning on switching over to using 
> cephadm as well.
> 
> Part of what I am doing right now is going through and replacing old drives 
> and removing some of our oldest nodes and replacing them with new ones…then I 
> will convert the rest of the filestore osd over to bluestore so that I can 
> upgrade.
>  
> One other question based on your suggestion below…my typical process of 
> removing or replacing an osd involves the following:
> 
> ceph osd crush reweight osd.id 0.0
> ceph osd out osd.id
> service ceph stop osd.id
> ceph osd crush remove osd.id
> ceph auth del osd.id
> ceph osd rm id
>  
> Does `ceph osd destroy` do something other than the last 3 commands above or 
> am I just doing the same thing using multiple commands?  If I need to start 
> issuing the destroy command as well I can.
> 

I don’t recall if it will stop the service if running, but it does leave the 
OSD in the CRUSH map marked as ‘destroyed’.  I *think* it leaves the auth but 
I’m not sure.
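
For what it's worth, the replacement flow with `ceph osd destroy` can be sketched as below. This is a hedged sketch, not a tested procedure: `OSD_ID` is a placeholder, and the `run` wrapper only echoes each command as a dry run — drop the `echo` to execute against a real cluster.

```shell
#!/bin/sh
# Sketch of replacing an OSD with `ceph osd destroy` instead of the
# crush-remove / auth-del / osd-rm trio. OSD_ID is illustrative.
OSD_ID=12

# Dry-run wrapper: prints each command instead of executing it.
# Remove the `echo` to actually run the commands.
run() { echo "$@"; }

run ceph osd crush reweight osd.$OSD_ID 0.0          # drain data off the OSD
run ceph osd out osd.$OSD_ID                         # stop mapping new PGs to it
run service ceph stop osd.$OSD_ID                    # stop the daemon
run ceph osd destroy $OSD_ID --yes-i-really-mean-it  # mark 'destroyed' in CRUSH
# The OSD id stays in the CRUSH map marked 'destroyed', so the
# replacement disk can reuse it and the CRUSH topology is preserved.
```

The practical difference from the six-command sequence quoted above is that `destroy` keeps the id reserved for the new drive rather than removing it from the CRUSH map outright.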
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io