Re: [ceph-users] Ceph OSD on Hardware RAID

2017-10-02 Thread Vincent Godin
In addition to the points that you made: I noticed with RAID0 disks that read I/O errors are not always trapped by Ceph, leading to unexpected behaviour of the impacted OSD daemon. On both RAID0 and non-RAID disks, an I/O error is logged in /var/log/messages: Oct 2 15:20:37 os-ceph05 kernel: sd 0:
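One way to catch such untrapped errors is to scan the syslog for kernel-level disk errors and correlate them with the device backing each OSD. A minimal sketch, assuming syslog-format input; the error patterns are common kernel strings and the sample lines are illustrative, not taken from the report above:

```shell
# Filter kernel disk I/O errors from syslog-style input so they can be
# matched against the device backing an OSD. Patterns are assumptions
# covering common kernel error messages, not an exhaustive list.
scan_disk_errors() {
  grep -E 'kernel: .*(I/O error|Medium Error|blk_update_request)'
}

# Illustrative sample input (hypothetical lines, not from the original post):
printf '%s\n' \
  'Oct  2 15:20:37 os-ceph05 kernel: blk_update_request: I/O error, dev sdd, sector 52428800' \
  'Oct  2 15:20:38 os-ceph05 sshd[904]: session opened' \
  | scan_disk_errors
```

In practice one would feed `/var/log/messages` (or `journalctl -k`) into the filter instead of the sample lines.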

Re: [ceph-users] Ceph OSD on Hardware RAID

2017-09-29 Thread Anthony D'Atri
In addition to the points that others made so well:
- When using parity RAID (e.g. RAID5) to create OSD devices, one reduces aggregate write speed, especially when using HDDs, due to write amplification.
- If using parity or replicated RAID, one might semi-reasonably get away with reducing Ceph's
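The write amplification stacks with Ceph's own replication. A back-of-envelope sketch with illustrative numbers (the RAID5 small-write penalty and the pool size of 3 are assumptions, not figures from the post):

```shell
# A small random write on RAID5 costs ~4 device I/Os (read old data, read old
# parity, write new data, write new parity), and Ceph replication repeats
# that on every replica host.
raid5_penalty=4      # device I/Os per small logical write on RAID5
ceph_replicas=3      # typical Ceph replicated pool size
device_ios=$(( raid5_penalty * ceph_replicas ))
echo "$device_ios device I/Os per client write"   # -> 12 device I/Os per client write
```

On HDDs, where random IOPS are the bottleneck, this multiplication is exactly why aggregate write speed drops.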

Re: [ceph-users] Ceph OSD on Hardware RAID

2017-09-29 Thread Maged Mokhtar
On 2017-09-29 17:14, Hauke Homburg wrote:
> Hello,
>
> I think Ceph users recommend against running Ceph OSDs on hardware
> RAID, but I haven't found a technical explanation for this.
>
> Can anybody give me one?
>
> Thanks for your help
>
> Regards
>
> Hauke
You get better perform

Re: [ceph-users] Ceph OSD on Hardware RAID

2017-09-29 Thread David Turner
The reason it is recommended not to RAID your disks is to give them all to Ceph individually. When a disk fails, Ceph can generally recover faster than the RAID can rebuild. The biggest problem with RAID is that you need to replace the failed disk and rebuild the array ASAP. When a disk fails in Ceph, the cluster just moves
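The replacement workflow reflects this difference in urgency. An illustrative sketch of the Ceph side of a disk swap (Luminous-era commands; the OSD id `12` and device path `/dev/sdd` are placeholders, not values from the thread):

```shell
# Unlike a RAID rebuild, Ceph starts re-replicating the lost data immediately,
# so the physical swap can wait until it is convenient.
ceph osd out 12                                # let Ceph backfill away from the failed OSD
ceph status                                    # watch recovery until HEALTH_OK
ceph osd purge 12 --yes-i-really-mean-it       # remove the dead OSD from the CRUSH map
ceph-volume lvm create --data /dev/sdd         # provision the replacement disk whenever
```

These commands assume a cluster running Luminous or later; older releases spell the removal as separate `ceph osd crush remove` / `ceph auth del` / `ceph osd rm` steps.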

[ceph-users] Ceph OSD on Hardware RAID

2017-09-29 Thread Hauke Homburg
Hello, I think Ceph users recommend against running Ceph OSDs on hardware RAID, but I haven't found a technical explanation for this. Can anybody give me one? Thanks for your help Regards Hauke -- www.w3-creative.de www.westchat.de