On Mon, Jan 4, 2021 at 10:27 PM Chris Murphy <li...@colorremedies.com>
wrote:

> On Sun, Jan 3, 2021 at 3:09 PM Richard Shaw <hobbes1...@gmail.com> wrote:
> > On Sun, Jan 3, 2021 at 3:34 PM Chris Murphy <li...@colorremedies.com>
> wrote:
> >> On Sun, Jan 3, 2021, 6:26 AM Richard Shaw <hobbes1...@gmail.com> wrote:
> >
> > Yeah, the RAID1 seems a lot easier with the caveat that the free space
> reporting is bogus, which may be important for a media drive. :) The RAID5
> caveats don't scare me too much.
>
> The odd number device raid1 free space reporting issue is 'df'
> specific. If you try it out and fallocate a bunch of files 10G at a
> time (no writes for fallocate, it's fast) you can see the goofy thing
> that happens in the bug report. It isn't ever 100% wrong, but it is
> confusing. The btrfs-specific commands always tell the truth: btrfs fi
> df is short and sweet; btrfs fi us is very information dense.
>

Ok, so not so bad. The main reason I'm considering raid5 is that I have one
4TB drive right now; if I add two more and go raid1, I'm only going to gain
2TB of usable space. I know it's kind of a duh, you're mirroring, and right
now I have no redundancy, but this is for home use, $$$/TB is important, and
I can't fit any more drives in this case :)
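
If I want to see the df weirdness (and the real numbers) for myself, I gather
the test is something like this (mount point and file names made up):

  fallocate -l 10G /mnt/media/test1   # allocates space instantly, no writes
  df -h /mnt/media                    # the potentially confusing number
  btrfs filesystem df /mnt/media      # short per-profile summary
  btrfs filesystem usage /mnt/media   # detailed per-device breakdown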


> Toss up on xxhash64: it's as fast or faster than crc32c to compute, with
> better collision resistance, but the csum algo is a mkfs-time-only option -
> only reason why I mention it. I can write more upon request.
>

That's interesting. I hadn't read into the documentation that far yet. Are
those the only two options?
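
From skimming the mkfs.btrfs man page, I believe the checksum has to be picked
when the filesystem is created (device names below are placeholders), and the
man page appears to list sha256 and blake2 as well:

  mkfs.btrfs --csum xxhash -m raid1 -d raid1 /dev/sdb /dev/sdc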



> If these are consumer drives: (a) timeout mismatch (b) disable each
> drive's write cache. This is not btrfs specific advice, applies to
> mdadm and LVM raid as well. Maybe someone has udev rules for this
> somewhere and if not we ought to get them into Fedora somehow. hdparm
> -W is the command, -w is dangerous (!). It is ok to use the write
> cache if it's determined that the drive firmware honors flush/fua,
> which they usually do, but the penalty is so bad if they don't and you
> get a crash that it's maybe not worth taking the risk. Btrfs raid1
> metadata helps here, as will using different drive make/models,
> because if one drive does the wrong thing, btrfs self heals from
> another drive even passively - but for parity raid you really ought to
> scrub following a crash. Or hey, just avoid crashes :)
>

I guess I could test for this? The current drive is ext4 formatted, so my
original plan was to create a 2-drive raid1 and copy the files over, then
format the old drive, add it to the array, and rebalance (a +1 for raid1!).
I could switch over to the 2-drive raid1 array for a while and "wait and
see", or is there a more proactive method?

Obviously if I go raid5 I won't have this option unless I can temporarily
house my data on a separate drive.

Looking at the link it looks like I'm OK?

# smartctl -l scterc /dev/sda1
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.9.16-200.fc33.x86_64] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

SCT Error Recovery Control:
           Read:    100 (10.0 seconds)
          Write:    100 (10.0 seconds)
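
If I understand the timeout mismatch issue, the other half of the comparison
is the kernel's SCSI command timer, which I believe defaults to 30 seconds,
so a 10 second ERC should sit comfortably under it:

  cat /sys/block/sda/device/timeout   # kernel-side timeout, default 30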

The drive is a Seagate Terascale, which is supposedly designed for cloud
computing / datacenters.
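
For the write cache half of the advice, I assume the check and disable would
be something like this (and presumably not persistent across reboots, hence
the udev rule idea):

  hdparm -W /dev/sda     # show whether the drive's write cache is enabled
  hdparm -W 0 /dev/sda   # disable it; -W 1 turns it back on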

So raid1 or raid5...

Thanks,
Richard