On 2020-01-25 17:34, Aidan Gauland wrote:
> I want to set up a file server on my home LAN with just consumer-grade
> hardware, and run Debian stable on it. For hardware, I am probably
> going to get a refurbished mid-range tower with a four to six 3.5" SATA
> drive capacity, and put WD Reds in it.
> What I'm not sure of is which filesystem to use. I could just use ext4
> with RAID5 or RAID6 to get striping with some fault tolerance (i.e. time
> to replace a failed drive without losing everything). ZFS looks easier,
> but only if you're on BSD. btrfs sounds like ZFS for Linux, but it
> appears to still be of beta quality, and I can't tell whether it can yet
> do striping with parity. Any advice?
> Regards,
> Aidan Gauland
I have a SOHO network with various Debian, FreeBSD, iOS, macOS, and
Windows devices. I have run CVS and Samba servers for many years.
When my data was previously on Debian (single desktop drive, LUKS, and
ext4), I saw signs of bit rot.
When my data was previously on Debian (desktop drives, md mirror, LUKS,
and ext4), I thought it was okay. Now, I'm not so sure.
Unfortunately, my backups, archives, and images are on Debian (single
desktop drive, LUKS, and ext4). Fortunately, I have four such drives and
stacks of CD, DVD, and BD discs. But I am worried.
btrfs offers bit rot protection and is well supported on Debian.
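The checksums only pay off if something periodically reads the data back
and verifies it; that is what a scrub does, e.g. (substitute whatever
mount point you care about):

  # btrfs scrub start /
  # btrfs scrub status /

The usual advice is to run that from cron, say monthly, for every btrfs
mount point.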
My Debian system drives are btrfs (including home), created with the
installer. I ran them in bliss for a year or more. Then I started
losing e-mail:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=933378
btrfs requires maintenance:
# btrfs filesystem show
Label: 'tinkywinky_r'  uuid: f68d1e7f-bd68-49c4-a04f-ddf92beccd17
        Total devices 1 FS bytes used 7.39GiB
        devid    1 size 11.17GiB used 11.17GiB path /dev/mapper/sda3_crypt
Note that size and used are both 11.17GiB: every byte of the device was
allocated to chunks, even though only 7.39GiB of data was in them. At
this point, balancing by hand was pointless. I had to write a Perl
script to run the balance command repeatedly for hours to make a dent.
Tens of thousands of relocated chunks later, all but one disk was more
or less fixed. That one refused to balance -- by hand or otherwise. I
backed up, wiped, reinstalled with ext4, and restored. It works, but
that system is now exposed to bit rot. Damned if you do and damned if
you don't...
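For the curious, the general shape of that loop in shell (the real
script was Perl; the mount point and usage steps below are just
examples) is something like:

  #!/bin/sh
  # Rebalance a fully-allocated btrfs filesystem a little at a time,
  # starting with the emptiest chunks and working up. Adjust MNT and
  # the usage steps for your own filesystem.
  MNT=/
  for pct in 5 10 20 40 60 80; do
      echo "Balancing chunks that are <= ${pct}% full..."
      btrfs balance start -dusage="$pct" -musage="$pct" "$MNT"
  done
  btrfs filesystem show "$MNT"

Each pass relocates only the chunks below the given usage threshold, so
it frees space gradually instead of trying to rewrite everything at
once.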
ZFS offers bit rot protection.
I ran zfs-fuse and ZFS on Linux (ZoL) on Debian several years ago, and
had to do my own SysV init integration. I have read that ZoL is now
better supported on Debian, but I haven't tried it. I need to.
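From what I have read, it now ships in Debian contrib as a DKMS module,
so (assuming contrib is enabled and the package names are still the
same) it should be roughly:

  # apt install linux-headers-amd64 zfs-dkms zfsutils-linux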
ZFS is best supported on the various BSDs.
My SOHO file server is built on a used previous-generation Intel server
board with a single quad-core Xeon CPU and 8 GB ECC memory ($125 on
eBay). I installed FreeBSD-12.1-RELEASE-amd64 with ZFS boot and root,
with copies=2, onto a single 2.5" older high-end desktop SSD. Data is
on two new previous-generation 3 TB enterprise drives, configured as a
ZFS mirror. zfs-auto-snapshot covers everything. This is the best
solution I have found, ever.
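For a rough idea of the moving parts on the data side (pool, dataset,
and device names below are placeholders, not my real ones; the
installer builds the boot/root pool itself):

  #!/bin/sh
  # Mirror the two data drives into one pool and create a dataset on it.
  zpool create tank mirror /dev/ada1 /dev/ada2
  zfs set compression=lz4 tank
  zfs create tank/data

  # copies=2 on the single-SSD root pool (zroot is the installer's
  # default name) stores newly written blocks twice, so a scrub can
  # repair bit rot even without a mirror.
  zfs set copies=2 zroot

  # Let periodic(8) scrub the pools; in /etc/periodic.conf:
  #   daily_scrub_zfs_enable="YES"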
My challenge now is finding and/or building automation to crawl through
the mountains of stuff, find the bit rot, and then do something about
it.
These articles are helpful:
https://www.techrepublic.com/blog/it-security/use-mtree-for-filesystem-integrity-auditing/
https://forums.freebsd.org/threads/small-guide-on-using-mtree.61113/
mtree is a key tool:
https://www.freebsd.org/cgi/man.cgi?mtree(8)
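The pattern from those articles is simple: record a specification with
strong checksums now, then diff the live tree against it later. Roughly
(the paths are just examples):

  #!/bin/sh
  # Record a baseline: file metadata plus SHA-256 digests for everything
  # under /tank/data, stored outside the tree being audited.
  mtree -c -K sha256digest -p /tank/data > /var/db/data.mtree

  # Later: compare the live tree against the baseline and report
  # anything that differs.
  mtree -f /var/db/data.mtree -p /tank/data

A file whose digest changed while its size and timestamp did not is a
prime bit rot suspect.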
Unfortunately, fmtree on Debian is behind:
https://manpages.debian.org/testing/freebsd-buildutils/fmtree.8.en.html
I was able to build a local backport, but it's still behind:
https://lists.debian.org/debian-user/2020/01/msg00488.html
David