On Fri, Dec 1, 2017 at 11:58 AM, Wols Lists <antli...@youngman.org.uk> wrote:
> On 27/11/17 22:30, Bill Kenworthy wrote:
>> Hi all,
>>       I need to expand two bcache-fronted 4-disk btrfs raid-10s - this
>> requires purchasing 4 drives (and one system does not have room for two
>> more drives), so I am trying to see if using raid 5 is an option
>>
>> I have been trying to find if btrfs raid 5/6 is stable enough to use but
>> while there is mention of improvements in kernel 4.12, and fixes for the
>> write hole problem, I can't see any reports that it's "working fine now",
>> though there is a phoronix article saying Oracle is using it since the
>> fixes.
>>
>> Is anyone here successfully using btrfs raid 5/6?  What is the status of
>> scrub and self healing?  The btrfs wiki is woefully out of date :(
>>
> Or put btrfs over md-raid?
>
> Thing is, with raid-6 over four drives, you have a 100% certainty of
> surviving a two-disk failure. With raid-10 you have a 33% chance of
> losing your array.
>
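The 33% figure can be checked by enumerating two-disk failures in a 4-drive raid-10. A minimal sketch, assuming the conventional layout of two mirror pairs (the specific pairing (0,1)/(2,3) is an assumption for illustration):

```python
from itertools import combinations

# 4-drive raid-10 as two mirror pairs; losing both halves of one pair
# kills the array.  The pair layout is assumed for illustration.
mirror_pairs = [{0, 1}, {2, 3}]
all_failures = list(combinations(range(4), 2))   # 6 possible two-disk failures
fatal = [f for f in all_failures if set(f) in mirror_pairs]
print(len(fatal), "/", len(all_failures))        # 2 / 6, i.e. a 1-in-3 chance
```

Raid-6, by contrast, tolerates any two failures, so all 6 combinations are survivable.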

I tend to be a fan of parity raid in general for these reasons.  I'm
not sure the performance gains with raid-10 are enough to warrant the
waste of space.

With btrfs, though, I don't really see the point of "raid-10" versus
just a pile of individual disks in raid1 mode.  Btrfs will already do a
so-so job of balancing the I/O across the disks (read balancing hasn't
really been optimized yet).
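The reason a pile of mismatched disks works at all is that btrfs raid1 allocates each chunk to whichever two devices currently have the most unallocated space. A simplified model of that policy (device names and sizes here are made up):

```python
def allocate_chunk(free_space, chunk_size):
    """Place one raid1 chunk on the two devices with the most free space.

    A much-simplified sketch of btrfs's allocator, for illustration only.
    """
    a, b = sorted(free_space, key=free_space.get, reverse=True)[:2]
    free_space[a] -= chunk_size
    free_space[b] -= chunk_size
    return a, b

free = {"sda": 100, "sdb": 80, "sdc": 60}   # GiB unallocated, hypothetical
print(allocate_chunk(free, 10))             # ('sda', 'sdb')
```

This is also why usable capacity on mixed-size disks is better than with a fixed raid-10 layout: the allocator keeps draining the largest devices first.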

I've moved away from btrfs entirely until they sort things out.
However, I would not use btrfs for raid-5/6 under any circumstances.
That has NEVER been stable, and if anything it has gone backwards.  I'm
sure they'll sort it out sometime, but I have no idea when.  Raid-1 on
btrfs is reasonably stable, but I've still run into various issues with
it.  Nothing ever kept me from reading the data off the array, but by
the time I finally moved it to ZFS it was in a state where it would
only run in degraded mode.

You could run btrfs over md-raid, but other than the snapshots I think
this loses much of the benefit of btrfs in the first place.  You are
vulnerable to the write hole, btrfs's ability to repair data after soft
errors is compromised (though it can still detect them), and you
potentially face more read-modify-write cycles when raid stripes are
partially updated.  Both zfs and btrfs were really designed to work
best on raw block devices without any layers below.  They still work,
of course, but you lose some of those optimizations because the
filesystem has no visibility into what is happening at the disk level.
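The read-modify-write cost comes from how raid-5 parity works: a small write cannot just write the new data, it must first read back the old data and old parity, since new_parity = old_parity XOR old_data XOR new_data. A sketch of both the full-stripe and partial-update paths:

```python
def stripe_parity(*data_blocks):
    # Full-stripe parity: XOR of all data blocks in the stripe.
    parity = bytes(len(data_blocks[0]))
    for block in data_blocks:
        parity = bytes(p ^ b for p, b in zip(parity, block))
    return parity

def partial_update_parity(old_parity, old_data, new_data):
    # Partial-stripe write: old data and old parity must be read back
    # first -- these are the extra reads behind the read-modify-write cycle.
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

d0, d1, d2 = b"\x0f\x0f", b"\xf0\xf0", b"\x33\x33"
parity = stripe_parity(d0, d1, d2)
new_d1 = b"\xaa\xaa"
# Updating only d1 via read-modify-write gives the same parity as
# recomputing the whole stripe from scratch.
assert partial_update_parity(parity, d1, new_d1) == stripe_parity(d0, new_d1, d2)
```

A crash between the data write and the parity write leaves the two inconsistent, which is exactly the write-hole window mentioned above.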

-- 
Rich
