Hi, 
when I want to keep an OSD ID, this is what I do (up to now): 

... 
ceph osd destroy <osd id> --yes-i-really-mean-it 
... replace disk ... 
[ ceph-volume lvm zap --destroy /dev/<new disk> ] 
ceph-volume lvm prepare --bluestore --osd-id <osd id> --data /dev/<new disk> 
    [ --block.db /dev/<some volume group>/<some logical volume> ] 
    [ --block.wal /dev/<some other volume group>/<some other logical volume> ] 
... retrieve the <osd fsid>, e.g. with ceph-volume lvm list ... 
ceph-volume lvm activate --bluestore <osd id> <osd fsid> 
... 

So far that seems to work fine. 
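For illustration, a hypothetical run could look like this (the OSD id 7, /dev/sdk and the db/wal volume group / logical volume names are all made up): 

ceph osd destroy 7 --yes-i-really-mean-it 
ceph-volume lvm zap --destroy /dev/sdk 
ceph-volume lvm prepare --bluestore --osd-id 7 --data /dev/sdk --block.db vg_db/db-7 --block.wal vg_wal/wal-7 
ceph-volume lvm list 
ceph-volume lvm activate --bluestore 7 <osd fsid reported by lvm list> 

Note that --osd-id only reuses an id that still exists in the cluster in the "destroyed" state, which is exactly what ceph osd destroy leaves behind. 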
cheers, toBias 


From: "Nicola Mori" <m...@fi.infn.it> 
To: "Janne Johansson" <icepic...@gmail.com> 
Cc: "ceph-users" <ceph-users@ceph.io> 
Sent: Wednesday, 11 December, 2024 11:44:57 
Subject: [ceph-users] Re: Correct way to replace working OSD disk keeping the 
same OSD ID 

Thanks for your insight. So if I remove an OSD without --replace, its ID 
won't be reused when I, e.g., add a new host with new disks? Even if I 
completely remove it from the cluster? I'm asking because I maintain a 
failure log per OSD, and I'd like to avoid an OSD ID that was previously 
used on host A ending up on host B at some point. 
thanks again, 

Nicola 

