>> I have tried to simulate a hard drive (OSD) failure: removed the OSD
>> (out+stop), zapped it, and then prepared and activated it. It worked, but
>> I ended up with one extra OSD (and the old one still showing in the
>> ceph -w output).
>> I guess this is not how I am supposed to do it?
> It
>> I am currently installing some backup servers with 6x3TB drives in them.
>> I played with RAID-10 but I was not impressed at all with how it performs
>> during a recovery.
>>
>> Anyway, I thought: what if, instead of RAID-10, I use Ceph? All 6 disks
>> will be local, so I could simply create 6 local OSDs ...
>>
>> This will be a single server configuration, the goal is to replace mdraid,
>> hence I tried to use localhost (nothing more will be added to the cluster).
>> Are you saying it will be less fault tolerant than a RAID-10?
> Ceph is a distributed object store. If you stay within a single machine ...
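
For reference, the usual caveat with a one-host cluster is that the stock
CRUSH rule separates replicas by host, so on a single host the placement
groups tend to stay degraded until the rule is changed to separate replicas
by OSD instead. A rough sketch on an already-running cluster (the pool name
"rbd" is just the stock pool and stands in for whatever pool is actually
used):

    # dump, decompile and edit the CRUSH map so replicas are split
    # across OSDs rather than across hosts
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    #   in the default rule, change
    #     step chooseleaf firstn 0 type host
    #   to
    #     step chooseleaf firstn 0 type osd
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new

    # keep two copies of every object, and keep serving I/O while only
    # one copy is left (roughly the redundancy of a RAID-10 mirror pair)
    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1

With six OSDs and size 2, losing one disk still leaves a copy of every
object, and Ceph re-replicates onto the remaining five disks, which is the
recovery behaviour being weighed against mdraid above.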
I just got my small Ceph cluster running. I run 6 OSDs on the same server
to basically replace mdraid.

I have tried to simulate a hard drive (OSD) failure: removed the OSD
(out+stop), zapped it, and then prepared and activated it. It worked, but I
ended up with one extra OSD (and the old one still showing in the ceph -w
output). I guess this is not how I am supposed to do it?
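
On the extra OSD: marking it out and stopping the daemon does not delete
the old id from the cluster maps, so a later prepare/activate allocates a
brand-new id while the dead one keeps showing up in ceph -w and ceph osd
tree. A rough sketch of the usual replacement sequence; the id osd.3, the
hostname backup01 and the device sdc are made-up placeholders:

    ceph osd out 3                 # stop placing data on it
    sudo service ceph stop osd.3   # stop the daemon (sysvinit style)
    ceph osd crush remove osd.3    # drop it from the CRUSH map
    ceph auth del osd.3            # drop its cephx key
    ceph osd rm 3                  # remove it from the OSD map

    # now the disk can be wiped and brought back as a fresh OSD
    ceph-deploy disk zap backup01:sdc
    ceph-deploy osd create backup01:sdc   # prepare + activate in one step
    ceph osd tree                         # the old id should be gone

Since OSD ids are reused once freed, the re-created OSD normally comes back
under the same id instead of leaving a stale entry behind.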
On 08/12/2013 06:49 PM, Dmitry Postrigan wrote:

Hello community,
I am currently installing some backup servers with 6x3TB drives in them.
I played with RAID-10 but I was not impressed at all with how it performs
during a recovery.
Anyway, I thought: what if, instead of RAID-10, I use Ceph? All 6 disks will
be local, so I could simply create 6 local OSDs ...
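
For what it's worth, a minimal single-node layout along those lines can be
sketched with ceph-deploy; the hostname backup01 and the sdb..sdg device
names are assumptions, and the chooseleaf setting is what allows replicas
to sit on two OSDs inside the same host:

    ceph-deploy new backup01
    # edit the generated ceph.conf before creating the monitor and add:
    #   osd crush chooseleaf type = 0
    # so the default CRUSH rule separates replicas by OSD, not by host
    ceph-deploy mon create backup01
    ceph-deploy gatherkeys backup01

    # wipe all six data disks and bring them up as OSDs
    ceph-deploy disk zap backup01:sdb backup01:sdc backup01:sdd \
                         backup01:sde backup01:sdf backup01:sdg
    ceph-deploy osd create backup01:sdb backup01:sdc backup01:sdd \
                           backup01:sde backup01:sdf backup01:sdg

    ceph osd tree    # should list all six OSDs under the single host

The pool size/min_size settings from the earlier sketch are still needed on
top of this to get two-copy redundancy.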