In addition to the points that you made:
I noticed on a RAID0 disk that read I/O errors are not always caught by
Ceph, leading to unexpected behaviour of the affected OSD daemon.
On both RAID0 and non-RAID disks, the I/O error is logged in /var/log/messages:
Oct 2 15:20:37 os-ceph05 kernel: sd 0:
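If you want to spot these cases independently of Ceph, something like the
following Python sketch works as a starting point; the log path and the kernel
error strings are assumptions for a typical syslog setup, so adjust both for
your distribution:

    import re

    # Assumed syslog path and kernel error strings; adjust for your system.
    LOG_PATH = "/var/log/messages"
    ERROR_PATTERNS = [
        re.compile(r"Buffer I/O error on dev"),
        re.compile(r"blk_update_request: I/O error"),
        re.compile(r"Medium Error"),
    ]

    def find_io_errors(path=LOG_PATH):
        """Return log lines that look like kernel-level disk read/write errors."""
        hits = []
        with open(path, errors="replace") as f:
            for line in f:
                if any(p.search(line) for p in ERROR_PATTERNS):
                    hits.append(line.rstrip())
        return hits

    if __name__ == "__main__":
        for hit in find_io_errors():
            print(hit)

Feeding the output into whatever alerting you already run is usually enough to
notice a sick disk before the OSD starts misbehaving.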
In addition to the points that others made so well:
- When using parity RAID, e.g. RAID5, to create OSD devices, one reduces
aggregate write speed, especially with HDDs, due to write amplification
(see the sketch after this list).
- If using parity or replicated RAID, one might semi-reasonably get away with
reducing Ceph’s
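To put a rough number on that write amplification, here is a back-of-the-envelope
Python sketch; the per-disk IOPS figure and disk count are made-up illustrative
values, not measurements:

    # Classic small-write penalty for RAID5: each random write costs
    # read old data + read old parity + write new data + write new parity = 4 disk I/Os.
    RAID5_WRITE_PENALTY = 4

    def effective_write_iops(disks, iops_per_disk, penalty=RAID5_WRITE_PENALTY):
        """Rough aggregate random-write IOPS of a parity RAID set."""
        return disks * iops_per_disk / penalty

    # Assumed figures: 6 HDDs at ~150 random IOPS each.
    print(effective_write_iops(6, 150))   # ~225 IOPS behind RAID5
    print(6 * 150)                        # ~900 IOPS as six individual OSDs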
On 2017-09-29 17:14, Hauke Homburg wrote:
> Hello,
>
> I think the Ceph users recommend against running Ceph OSDs on hardware
> RAID, but I haven't found a technical explanation for it.
>
> Can anybody give me one?
>
> Thanks for your help
>
> Regards
>
> Hauke
You get better performance
The reason it is recommended not to RAID your disks is to give them all to
Ceph individually. When a disk fails, Ceph can generally recover faster than
the RAID can rebuild. The biggest problem with RAID is that you need to replace
the failed disk and rebuild the array as soon as possible. When a disk fails in
Ceph, the cluster just moves the affected data onto the remaining OSDs and
carries on.
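If you want to watch that rebalancing happen, a minimal Python sketch along
these lines can poll the cluster. It assumes the ceph CLI and an admin keyring
are available on the host, and the JSON field names it reads vary between Ceph
releases, so treat them as guesses to verify against your version:

    import json
    import subprocess

    def recovery_snapshot():
        """Pull a few recovery-related fields out of 'ceph status --format json'."""
        raw = subprocess.check_output(["ceph", "status", "--format", "json"])
        status = json.loads(raw)
        health = status.get("health", {})
        pgmap = status.get("pgmap", {})
        return {
            # Field names differ across releases, so everything is looked up defensively.
            "health": health.get("status") or health.get("overall_status"),
            "degraded_objects": pgmap.get("degraded_objects"),
            "misplaced_objects": pgmap.get("misplaced_objects"),
            "recovering_objects_per_sec": pgmap.get("recovering_objects_per_sec"),
        }

    if __name__ == "__main__":
        print(json.dumps(recovery_snapshot(), indent=2))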
Hello,
I think the Ceph users recommend against running Ceph OSDs on hardware
RAID, but I haven't found a technical explanation for it.
Can anybody give me one?
Thanks for your help
Regards
Hauke
--
www.w3-creative.de
www.westchat.de