Quoting Hendrik Boom (hend...@topoi.pooq.com):

> As I understand it, there are a few new file systems somewhat
> available on Linux -- ZFS, XFS, and Btrfs.
>
> But some are still under development, ZFS is apparently under a
> problematic license, and I don't know about XFS.
>
> I once heard about one of the new systems that one shouldn't
> bother using it unless one has at least 8 gigabytes of RAM.
>
> Now, just how mature are these, how easily managed, how reliable?
>
> I'll be populating a new device with a (I hope) high-reliability file
> system soon. It doesn't have a lot of RAM, but the RAM does have
> parity checking.
>
> Long-term data preservation is more important than speed.
>
> Currently on another system I'm using ext4 over LVM over software
> RAID-1. I know RAID isn't a reliable backup system; I make separate
> off-line backups.
Specifically, RAID isn't backup at all. It's redundancy (except for
varieties like RAID0 that aren't even that). See: 'Backup Fallacies /
Pitfalls' on http://linuxmafia.com/kb/Admin/

> What should I be considering for the new system? The same?

You've just asked one of the more inherently debatable questions in all
of Linux system administration. I can only recommend that you study the
strengths and weaknesses, advantages and disadvantages, of the various
options at hand, and then design a system that implements your choices.

For my own home server rebuild, I'm going with ext4, with all
filesystems RAID1-mirrored across a pair of SSDs, and a weekly cron job
applying TRIM (sketched in the example commands at the foot of this
message). No swap (because SSDs).

XFS is mainline kernel code under GPLv2. It is particularly good for
filesystems with many very large files, e.g., audio/video. It isn't
quite as fast, nor as massively QAed, as ext3/ext4 (though the
performance difference is smaller than it used to be). XFS is _not_
new; SGI ported it to Linux in 2000. Like ext3/ext4 and unlike
ZFS/btrfs, XFS lacks checksum protection against silent data
corruption.

ZFS is indeed under a GPLv2-incompatible licence[1] (CDDL). It's the
one that requires larger RAM overhead, but it has a number of very
compelling features[2], especially for extremely large (multi-terabyte)
filesystems. The driver code is (obviously) not part of the mainline
kernel, but is instead runnable either as a large external patchset or
as a FUSE (Filesystem in Userspace) subsystem. The latter carries a
performance penalty. The former... entails running an out-of-tree
kernel.

btrfs is still scarily beta after rather a lot of years of development.
Its prospects have dimmed further now that Red Hat have dropped it from
their roadmap.

[1] Canonical, Ltd. have asserted their recent distribution of
binary-compiled ZFS module code for Ubuntu to be lawful. My
interpretation is that they know this is false, that it is clearly
copyright infringement, but that they have taken a calculated risk that
kernel stakeholders won't sue them, and that the Linux-using public
won't object overly to Canonical lying to them for PR advantage.

[2] Volume manager integrated into the filesystem. Snapshots and
replication built in. All storage kept vetted by checksumming and
corrected as necessary. Automated self-healing. Smarter data-striping
('RAID-Z') than conventional RAID modes. Native data compression /
deduplication (the latter, however, is RAM-hungry). And a lot more:
it's pretty impressive.
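Since I mentioned my ext4-on-RAID1-with-TRIM plan above, here's a
minimal sketch of that setup. The device names, the array name
/dev/md0, and the cron filename are all illustrative, not a tested
recipe for your hardware:

    # Mirror two SSD partitions into one RAID1 array:
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/sda2 /dev/sdb2
    mkfs.ext4 /dev/md0

    # Weekly TRIM, e.g., as an executable /etc/cron.weekly/fstrim:
    #!/bin/sh
    # Discard unused blocks on all mounted filesystems that support it:
    fstrim --all

Worth running 'fstrim --all' by hand once first, to confirm your whole
stack (md included) passes discards through to the SSDs.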
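And to make footnote [2] concrete, a few illustrative ZFS commands; the
pool name 'tank', the device names, the snapshot label, and the remote
host are all assumptions for the example's sake:

    # Pool with RAID-Z striping across three whole disks -- the volume
    # manager is part of the filesystem, so no LVM/mdadm layer needed:
    zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd

    # Dataset with native (lz4) compression; dedup also exists, but
    # that's the RAM-hungry part:
    zfs create -o compression=lz4 tank/data

    # Built-in snapshots, and replication of one to another machine:
    zfs snapshot tank/data@snap1
    zfs send tank/data@snap1 | ssh otherhost zfs receive backup/data

    # Walk every block, verify checksums, and self-heal from the
    # redundant copies:
    zpool scrub tank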