On 24/06/2013 20:27, Dave Spano wrote:
If you remove the OSD after it fails from the cluster and the
crushmap, the cluster will automatically re-assign that number to the
new OSD when you run ceph osd create with no arguments.
OK - although if you're going to label a disk something like "osd1",
you obviously need to know in advance what OSD number it will get.
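For what it's worth, the removal-then-recreate flow described above can be
sketched roughly as follows. This is only an illustrative outline, not a
tested procedure; the OSD id (1 here) is an example, and you'd run it on a
cluster where osd.1 has already failed:

```shell
# Mark the failed OSD out so data starts re-replicating elsewhere
ceph osd out 1

# Remove it from the CRUSH map so it no longer receives placements
ceph osd crush remove osd.1

# Delete its authentication key
ceph auth del osd.1

# Remove the OSD from the cluster entirely, freeing its id
ceph osd rm 1

# With the id freed, an argument-less create reuses the lowest free
# number - i.e. the replacement disk comes back as osd.1
ceph osd create
```

The last step is the behaviour Dave describes: because id 1 was fully
removed, `ceph osd create` hands it back, so a disk pre-labelled "osd1"
ends up matching.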
Here's my procedure for manually adding OSDs.
That's very useful, thank you. I don't yet have a single definitive
document to hand to operations saying "here's how you replace a failed
disk", but perhaps I can assemble one from this information and then
test it.
Regards,
Brian.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com