OK, so just an update that the recovery did finally complete, and I am pretty 
sure that the "inconsistent" PGs were ones the failed OSD was part of.  
Running 'ceph pg repair' sorted them out, along with the 600+ "scrub errors" 
I had.

I was able to remove the OSD from the cluster, and am now just awaiting a 
replacement drive.  The cluster is showing healthy again.
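
In case it helps anyone searching the archives later, the removal itself was 
the standard sequence, roughly (assuming a systemd deployment; OSD ID 12 is 
a placeholder):

    # mark the OSD out, stop the daemon, then remove it from the cluster
    ceph osd out 12
    systemctl stop ceph-osd@12
    ceph osd purge 12 --yes-i-really-mean-it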

Related question: the OSD had its DB/WAL on a partition on an SSD.  Would I 
just "zap" that partition the way I would a whole drive, so it's available 
for reuse when I replace the HDD, or is there another method for "reclaiming" 
a DB/WAL partition?
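
Concretely, I'm wondering whether something like the following is the right 
approach (the device path is just a placeholder for my actual DB/WAL 
partition):

    # wipe only the DB/WAL partition, leaving the rest of the SSD alone
    ceph-volume lvm zap /dev/sdb1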