The reason it is recommended not to RAID your disks is so you can give
them all to Ceph as individual OSDs.  When a disk fails, Ceph can
generally recover faster than the RAID can rebuild.  The biggest problem
with RAID is that you need to replace the failed disk and rebuild the
array ASAP.  When a disk fails in Ceph, the cluster just moves some data
around, is fully redundant again once backfill finishes, and you can
replace the disk whenever you want/need to later... There is no rush to
do it now before another disk in the RAID fails.
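
For reference, retiring a dead disk is only a handful of commands.  This
is a rough sketch with osd.12 as a placeholder ID; the exact procedure
can vary a bit by release and deployment tooling:

    # Tell the cluster to stop placing data on the failed OSD;
    # backfill starts and redundancy is restored without touching hardware.
    ceph osd out osd.12

    # Later, whenever you actually pull the disk, clean up the old entry.
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm osd.12

The point is that redundancy comes back from the "ceph osd out" step (or
automatically after the down/out interval), not from you racing to swap
the hardware.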

This also depends on which type of RAID you're thinking about using.  There
are arguments that RAID 0 is a viable scenario under Ceph, but there is
no need for redundant RAID levels like 1, 10, 5, or 6.  The only arguments
I've heard for RAID 0 under Ceph are for massively large clusters or
storage nodes with 40+ disks in them, where it reduces the OSD daemon
count on the host, with the understanding that a single disk failing will
then cause more backfilling than a single-disk failure normally would.
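
If that extra backfill traffic is the worry, it can be throttled at
runtime with the standard OSD recovery settings.  A minimal sketch (the
values here are only illustrative starting points, not a recommendation):

    # Limit concurrent backfill/recovery per OSD so client I/O isn't
    # starved during the larger rebalance a RAID 0 member failure triggers.
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

Those are runtime overrides; to make them stick across restarts you would
also set them under [osd] in ceph.conf.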

Most other uses of RAID can be countered by just increasing your replica
size in Ceph.  If only being able to lose 2 disks before risking data
loss is not acceptable, then increase your replica size to 4 instead of
the default 3.  You're running 4x raw space per byte of data, but can
now lose 3 disks across multiple nodes without losing data.  Doing that
with RAID is generally inefficient when Ceph handles it so nicely.
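
Bumping the replica count is a one-liner per pool.  A rough sketch,
assuming a pool named "rbd" (substitute your own pool names):

    # Keep 4 copies of every object; keep serving I/O with 2 copies up.
    ceph osd pool set rbd size 4
    ceph osd pool set rbd min_size 2

min_size is a separate knob, and the value above is just an example; pick
it based on how many simultaneous failures you want to keep accepting
writes through.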

On Fri, Sep 29, 2017 at 11:15 AM Hauke Homburg <hhomb...@w3-creative.de>
wrote:

> Hello,
>
> I think the Ceph users don't recommend running Ceph OSDs on hardware
> RAID, but I haven't found a technical explanation for this.
>
> Can anybody give me such an explanation?
>
> Thanks for your help
>
> Regards
>
> Hauke
>
> --
> www.w3-creative.de
>
> www.westchat.de
>
>