Hi,

I don't know if the Ceph version is relevant here, but I could undo that quite quickly in my small test cluster (Octopus, native, no Docker). After the OSD was marked as "destroyed" I recreated the auth caps for that OSD_ID (marking it as destroyed removes the cephx key etc.), changed the keyring in /var/lib/ceph/osd/ceph-1/keyring to reflect that, and restarted the OSD; now it's up and in again. Is the OSD in your case actually up and running?
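
For reference, on my test cluster the steps looked roughly like this (osd.1 as the example id, matching the path above; the caps shown are the usual OSD profile caps, check "ceph auth ls" for what your other OSDs actually use):

    # recreate the cephx key and caps that were dropped when the OSD was destroyed
    ceph auth add osd.1 mon 'allow profile osd' mgr 'allow profile osd' osd 'allow *'
    # export the new key into the OSD's keyring file
    ceph auth get osd.1 -o /var/lib/ceph/osd/ceph-1/keyring
    # restart the daemon (non-containerized deployment)
    systemctl restart ceph-osd@1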

Regards,
Eugen


Quoting Michael Fladischer <mich...@fladi.at>:

Hi,

I accidentally destroyed the wrong OSD in my cluster. It is now marked as "destroyed", but the HDD is still there and the data was not touched, AFAICT. I was able to activate it again using ceph-volume lvm activate and I can mark the OSD as "in", but its status is not changing from "destroyed".
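
Roughly what I did (the ids below are placeholders, the real values come from "ceph-volume lvm list"):

    # find the OSD id and fsid of the inactive OSD
    ceph-volume lvm list
    # activate it again
    ceph-volume lvm activate <osd-id> <osd-fsid>
    # mark it in
    ceph osd in <osd-id>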

Is there a way to unmark it so I can reintegrate it into the cluster?

Regards,
Michael

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
