Along the lines of this thread, if I have OSD(s) on rotational HDD(s), but have 
the journal(s) going to an SSD, I am curious about the best procedure for 
replacing the SSD should it fail.
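One possible procedure, sketched as a dry-run script (it echoes the commands instead of executing them, so you can review before running anything). This assumes FileStore journals, placeholder OSD ids, and the pre-BlueStore tooling; adjust for your cluster:

```shell
#!/bin/sh
# Dry-run sketch: flush and rebuild the journals of the OSDs that
# used the failed SSD. OSD_IDS is a placeholder -- substitute the
# ids of the OSDs whose journals lived on that SSD.
OSD_IDS="3 4 5"
run() { echo "$@"; }                    # echo instead of executing (dry run)

run ceph osd set noout                  # keep CRUSH from rebalancing while OSDs are down
for id in $OSD_IDS; do
    run service ceph stop osd.$id       # stop the OSD cleanly
    run ceph-osd -i $id --flush-journal # drain pending writes into the object store
done
# ...physically replace the SSD, recreate the journal partitions, and
# point the 'journal' symlink under /var/lib/ceph/osd/ceph-$id at them...
for id in $OSD_IDS; do
    run ceph-osd -i $id --mkjournal     # initialize a fresh journal on the new SSD
    run service ceph start osd.$id
done
run ceph osd unset noout
```

If the SSD fails without warning, the flush step is obviously not possible; in that case the OSDs' journals are lost and the usual answer is to rebuild those OSDs from scratch rather than just remaking the journal.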

-Joe

From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Scottix
Sent: Wednesday, October 02, 2013 10:37 AM
To: Andy Paluch
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Newbie question

I'm actually looking for a similar answer. If 1 OSD = 1 HDD, then in Dumpling it 
will relocate the data for me after the timeout, which is great. But if I just want 
to replace the OSD with an unformatted new HDD, what is the procedure?

One method that has worked for me is to remove the OSD from the CRUSH map and then 
re-add the drive to the cluster. This works, but it seems like a lot of overhead 
just to replace a single drive. Is there a better way to do this?
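For reference, the remove-and-re-add method above can be sketched as a dry-run script (it only echoes the commands; the OSD id and device name are placeholders):

```shell
#!/bin/sh
# Dry-run sketch of replacing a single failed OSD disk by removing it
# from the cluster and adding the new disk as a fresh OSD.
OSD_ID=12                               # placeholder: id of the dead OSD
NEW_DEV=/dev/sdf                        # placeholder: replacement disk
run() { echo "$@"; }                    # echo instead of executing (dry run)

run ceph osd out $OSD_ID                # mark out; recovery re-replicates its PGs
run service ceph stop osd.$OSD_ID       # stop the daemon if it is still running
run ceph osd crush remove osd.$OSD_ID   # drop it from the CRUSH map
run ceph auth del osd.$OSD_ID           # remove its cephx key
run ceph osd rm $OSD_ID                 # delete the OSD record itself
# ...swap the physical drive, then build a fresh OSD on it:
run ceph-disk prepare $NEW_DEV          # partition, mkfs, and register a new OSD
run ceph-disk activate ${NEW_DEV}1      # start the new OSD and join the cluster
```

Whether the replacement gets the old id back depends on the order of operations; `ceph osd create` hands out the lowest free id, so removing the old OSD first usually means the new disk reuses it.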

On Wed, Oct 2, 2013 at 8:10 AM, Andy Paluch 
<a...@webguyz.net<mailto:a...@webguyz.net>> wrote:
What happens when a drive goes bad in Ceph and has to be replaced (at the 
physical level)? In the RAID world you pop out the bad disk, stick a new 
one in, and the controller takes care of getting it back into the system. From 
what I've been reading so far, it's probably going to be a mess to do this with 
Ceph and involve a lot of low-level Linux tweaking to remove and replace the disk 
that failed. I'm not a big Linux guy, so I was wondering if anyone can point to any 
docs on how to recover from a bad disk in a Ceph node.

Thanks


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
Follow Me: @Scottix<http://www.twitter.com/scottix>
http://about.me/scottix
scot...@gmail.com<mailto:scot...@gmail.com>