On Tue, 13 Aug 2013, Dmitry Postrigan wrote:
> 
> I just got my small Ceph cluster running. I run 6 OSDs on the same server to 
> basically replace mdraid.
> 
> I have tried to simulate a hard drive (OSD) failure: removed the OSD 
> (out+stop), zapped it, and then
> prepared and activated it. It worked, but I ended up with one extra OSD (and 
> the old one still showing in the ceph -w output).
> I guess this is not how I am supposed to do it?

It is.  You can remove the old entry with 'ceph osd crush rm N' and/or 
'ceph osd rm N', or just leave it there.
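
For illustration, assuming the stale entry is osd.3 (the id is just a 
placeholder for whichever OSD you replaced), that cleanup would look 
something like:

    ceph osd crush rm osd.3    # drop the old entry from the CRUSH map
    ceph osd rm 3              # remove the old id from the cluster map

If the old OSD's cephx key is still registered, 'ceph auth del osd.3' 
can be run as well to drop it.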

> Documentation recommends manually editing the configuration, however, there 
> are no osd entries in my /etc/ceph/ceph.conf

That's old info; where did you read it so we can adjust the docs?

Thanks!
sage

> So what would be the best way to replace a failed OSD?
> 
> Dmitry
> 
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
