> remove osd.x
>
> ceph osd rm osd.x
>
> ceph auth del osd.x
>
> maybe "wipefs -a /dev/sdxxx" or dd if=/dev/zero of=dev/sdxx count=1
> bs=1m ...
>
>
> Then you should be able to deploy the disk again with the tool that you
> used originally. The disk should
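
For anyone following the thread, a fuller remove-and-redeploy sequence might
look like the sketch below; osd.23 and /dev/sdc are placeholders, and the
redeploy step depends on whether the OSDs were originally created with
ceph-volume or cephadm:

    # take the OSD out of service and stop its daemon
    ceph osd out osd.23
    systemctl stop ceph-osd@23

    # remove it from the CRUSH map, delete its auth key, remove the OSD id
    ceph osd crush remove osd.23
    ceph auth del osd.23
    ceph osd rm osd.23

    # wipe the old data so the disk can be reused
    ceph-volume lvm zap /dev/sdc --destroy
    # or: wipefs -a /dev/sdc

    # redeploy with the original tool, e.g. ceph-volume:
    ceph-volume lvm create --data /dev/sdc
    # or, on a cephadm-managed cluster:
    # ceph orch daemon add osd <hostname>:/dev/sdc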
We are running a small Ceph cluster with two nodes. Our failureDomain is
set to host to have the data replicated between the two hosts. The
other night one host crashed hard and three OSDs won't recover with
either
debug 2021-01-13T08:13:17.855+ 7f9bfbd6ef40 -1 osd.23 0 OSD::init()
: unable to r
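
As a first diagnostic step it may help to see what the cluster and the
failing OSD report; a rough sketch, assuming osd.23 and standard systemd
units (nothing here is specific to this cluster):

    # overall cluster and OSD state
    ceph -s
    ceph osd tree

    # full startup error for the failing OSD
    journalctl -u ceph-osd@23 --no-pager | tail -n 50

    # check that ceph-volume still sees the OSD's volumes after the crash
    ceph-volume lvm list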