Re: [ceph-users] How do you replace an OSD?

2013-08-13 Thread Dmitry Postrigan
>> I have tried to simulate a hard drive (OSD) failure: removed the OSD (out+stop), zapped it, and then prepared and activated it. It worked, but I ended up with one extra OSD (and the old one still showing in the ceph -w output). I guess this is not how I am supposed to do it?
> It
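The reply is cut off in the archive. For context, a minimal sketch of the replacement sequence as it looked at the time: the old OSD id lingers until it is also removed from the CRUSH map, the auth database, and the OSD map, which is why zapping and re-preparing the disk alone produced an extra OSD. The id osd.5 and the device /dev/sdf below are placeholders, not taken from the thread.

    # take the failed OSD out of the data distribution and stop its daemon
    ceph osd out osd.5
    service ceph stop osd.5

    # remove the old id entirely; skipping these steps is what leaves a
    # stale OSD behind and makes the re-prepared disk show up as a new one
    ceph osd crush remove osd.5
    ceph auth del osd.5
    ceph osd rm 5

    # wipe and re-create the OSD on the replacement disk
    ceph-disk zap /dev/sdf
    ceph-disk prepare /dev/sdf
    ceph-disk activate /dev/sdf1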

Re: [ceph-users] Ceph instead of RAID

2013-08-13 Thread Dmitry Postrigan
>> I am currently installing some backup servers with 6x3TB drives in them. I played with RAID-10 but I was not impressed at all with how it performs during a recovery.
>>
>> Anyway, I thought what if instead of RAID-10 I use ceph? All 6 disks will be local, so I could simply create

Re: [ceph-users] Ceph instead of RAID

2013-08-13 Thread Dmitry Postrigan
>> This will be a single server configuration, the goal is to replace mdraid, hence I tried to use localhost (nothing more will be added to the cluster). Are you saying it will be less fault tolerant than a RAID-10?
> Ceph is a distributed object store. If you stay within a single machine
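The reply is truncated, but the usual point here is that Ceph's default CRUSH rule places replicas on different hosts, so a single-machine cluster needs its failure domain lowered to individual OSDs, and even then it only survives disk failures, not loss of the host itself. A minimal ceph.conf sketch of that setting, assuming a freshly created cluster (option names are the stock ones from that era; the values are illustrative):

    [global]
        # spread replicas across OSDs (disks) instead of hosts, otherwise a
        # one-host cluster can never satisfy the default placement rule
        osd crush chooseleaf type = 0

        # keep two copies of each object, roughly the redundancy of RAID-10
        osd pool default size = 2
        osd pool default min size = 1

These options only shape the default CRUSH rule when the cluster is first created; on an existing cluster the CRUSH map has to be edited, or a new rule created, to the same effect.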

[ceph-users] How do you replace an OSD?

2013-08-12 Thread Dmitry Postrigan
I just got my small Ceph cluster running. I run 6 OSDs on the same server to basically replace mdraid. I have tried to simulate a hard drive (OSD) failure: removed the OSD (out+stop), zapped it, and then prepared and activated it. It worked, but I ended up with one extra OSD (and the old one still showing in the ceph -w output). I guess this is not how I am supposed to do it?

Re: [ceph-users] Ceph instead of RAID

2013-08-12 Thread Dmitry Postrigan
> On 08/12/2013 06:49 PM, Dmitry Postrigan wrote:
>> Hello community,
>>
>> I am currently installing some backup servers with 6x3TB drives in them. I played with RAID-10 but I was not impressed at all with how it performs during a recovery.

[ceph-users] Ceph instead of RAID

2013-08-12 Thread Dmitry Postrigan
Hello community,

I am currently installing some backup servers with 6x3TB drives in them. I played with RAID-10 but I was not impressed at all with how it performs during a recovery.

Anyway, I thought what if instead of RAID-10 I use ceph? All 6 disks will be local, so I could simply create 6 local OSDs
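The preview is cut off here. Assuming the intent was simply six OSDs on the one machine, a rough sketch of how that looked with ceph-deploy at the time follows; the hostname backup1 and the device names are placeholders, not taken from the thread.

    # one monitor plus six OSDs, all on the local machine
    ceph-deploy new backup1
    ceph-deploy mon create backup1
    ceph-deploy gatherkeys backup1
    for dev in sdb sdc sdd sde sdf sdg; do
        ceph-deploy osd create backup1:/dev/$dev
    done

Compared with an mdraid RAID-10 rebuild, recovery after a disk failure then only re-replicates the objects that lived on the failed OSD rather than resynchronising an entire mirror.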