The only RAID I would consider using for a Ceph OSD is RAID 0.  Ceph handles
the redundancy very nicely, and you won't take the write penalty of running
on a parity RAID.  I wouldn't suggest putting all 8 drives in a node into a
single RAID 0, but you could cut your OSD count in half by doing 4x two-drive
RAID 0s.
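
For what it's worth, a two-drive software RAID 0 is a one-liner with mdadm
(a sketch only: /dev/sdb and /dev/sdc are placeholder device names, and a
hardware RAID controller would have its own tooling instead):

    # stripe two raw disks into one block device, which then backs a single OSD
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc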

I'm still going to say there's a reason that most of the clusters running
thousands of OSDs don't use RAID of any kind in their storage nodes.

On Mon, Nov 13, 2017, 9:48 AM Oscar Segarra <oscar.sega...@gmail.com> wrote:

> Thanks Mark, Peter,
>
> For clarification: the RAID5 configuration means several servers (2 or
> more), each with RAID5 and Ceph on top of it. Ceph will replicate data
> between the servers. Of course, each server will have just one OSD daemon
> managing one big disk.
>
> Functionally, it looks like using RAID5 + 1 Ceph daemon is the same as
> using 8 Ceph daemons.
>
> I really appreciate your comments!
>
> Oscar Segarra
>
>
>
> 2017-11-13 15:37 GMT+01:00 Marc Roos <m.r...@f1-outsourcing.eu>:
>
>>
>> Also keep in mind whether you will want failover in the future. We were
>> running a 2nd server and replicating the RAID arrays via DRBD. Expanding
>> that storage is quite a hassle compared to just adding a few OSDs.
>>
>>
>>
>> -----Original Message-----
>> From: Oscar Segarra [mailto:oscar.sega...@gmail.com]
>> Sent: Monday, 13 November 2017 15:26
>> To: Peter Maloney
>> Cc: ceph-users
>> Subject: Re: [ceph-users] HW Raid vs. Multiple OSD
>>
>> Hi Peter,
>>
>> Thanks a lot for your comments regarding storage consumption.
>>
>> The other question is about having one OSD vs. 8 OSDs... will 8 OSDs
>> consume more CPU than 1 OSD (on RAID5)?
>>
>> As I want to run compute and OSDs on the same box, the resources consumed
>> by the OSDs could be a handicap.
>>
>> Thanks a lot.
>>
>> 2017-11-13 12:59 GMT+01:00 Peter Maloney
>> <peter.malo...@brockmann-consult.de>:
>>
>>
>>         Once you've replaced an OSD, you'll see it is quite simple...
>> doing it for a few is not much more work (you've scripted it, right?). I
>> don't see RAID as giving any benefit here at all. It's not tricky; it's a
>> perfectly normal operation. Just get used to Ceph, and it'll be as routine
>> as replacing a RAID disk. As for performance degradation, it could go
>> either way... or be better on Ceph if you don't mind setting the recovery
>> rate to the lowest... and when the QoS functionality is ready, Ceph will
>> probably be much better. RAID will also cost you more in hardware.
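>>
>>         To give a rough idea, the whole replacement usually boils down to
>> a handful of commands. This is only a sketch, not a definitive procedure:
>> exact steps vary between releases, osd.7 and /dev/sdX are placeholder
>> names, and ceph-volume assumes Luminous or newer.
>>
>>             # take the dead OSD out and remove it from the cluster
>>             ceph osd out 7
>>             ceph osd crush remove osd.7
>>             ceph auth del osd.7
>>             ceph osd rm 7
>>             # after swapping the physical disk, create its replacement
>>             ceph-volume lvm create --data /dev/sdX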
>>
>>         And RAID5 is really bad for IOPS. Ceph already replicates, so you
>> would have 2 layers of redundancy... and Ceph does it cluster-wide, not
>> just within one machine. Using Ceph with replication is like having all
>> your free space as hot spares... you could lose 2 disks on every machine
>> and it could still run (assuming it had time to recover in between, and
>> enough space). And you don't want min_size=1, but with 2 layers of
>> redundancy you'll probably be tempted to set it.
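>>
>>         (For illustration only: the relevant pool settings are size and
>> min_size, and, assuming a replicated pool named 'rbd', you can check and
>> set them like this.)
>>
>>             ceph osd pool get rbd size        # replica count, e.g. 3
>>             ceph osd pool get rbd min_size    # don't drop this to 1
>>             ceph osd pool set rbd min_size 2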
>>
>>         But for some workloads, like RBD, Ceph doesn't spread the load
>> very evenly for a single client, only across many clients at once... RAID
>> might help with that, but I don't see it as worth it.
>>
>>         I would just use software RAID1 for the OS, the mons and the MDS,
>> not the OSDs.
>>
>>
>>         On 11/13/17 12:26, Oscar Segarra wrote:
>>
>>
>>                 Hi,
>>
>>                 I'm designing my infrastructure. I want to provide 8TB of
>> data per host (8 disks x 1TB each) just for Microsoft Windows 10 VDI. Each
>> host will run both storage (Ceph OSD) and compute (KVM).
>>
>>                 I'd like to hear your opinion about these two
>> configurations:
>>
>>                 1.- RAID5 with 8 disks (I will get 7TB, but that is enough
>> for me) + 1 OSD daemon
>>                 2.- 8 OSD daemons
>>
>>                 I'm a little bit worried that 8 OSD daemons could hurt
>> performance because of all the jobs running and the scrubbing.
>>
>>                 Another question is the procedure for replacing a failed
>> disk. With a big RAID, replacement is straightforward. With many OSDs, the
>> procedure is a little bit tricky:
>>
>>
>>
>> http://ceph.com/geen-categorie/admin-guide-replacing-a-failed-disk-in-a-ceph-cluster/
>>
>>
>>                 What is your advice?
>>
>>                 Thanks a lot everybody in advance...
>>
>>
>>
>>
>>
>>
>>
>>         --
>>
>>         --------------------------------------------
>>         Peter Maloney
>>         Brockmann Consult
>>         Max-Planck-Str. 2
>>         21502 Geesthacht
>>         Germany
>>         Tel: +49 4152 889 300
>>         Fax: +49 4152 889 333
>>         E-mail: peter.malo...@brockmann-consult.de
>>         Internet: http://www.brockmann-consult.de
>>         --------------------------------------------
>>
>>
>>
>>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
