On Tue, Apr 23, 2013 at 11:39 PM, Hilco Wijbenga <hilco.wijbe...@gmail.com> wrote:
> [...] So when I needed to install a
> new machine, I looked around and settled on JFS. This box has been
> running for about half a year now (so that includes several power
> failures) without any problems. I certainly am very pleased with JFS
> so perhaps you might want to consider it.
I've also used (and still use) JFS on a lot of partitions (on LVM, actually): my laptops (both rotating and SSD), desktop, VMs, etc. I moved to it a few years ago after getting tired of all the Ext3 fscks. Although JFS is quite "efficient" and hasn't caused much trouble --- never lost an entire file-system, never corrupted data, etc. --- it does have a few quirks:

* "empty files" after panics --- I don't think this is JFS's fault, but rather badly written software. Things go like this: you edit a file, save it, and within a few seconds get either a panic or a power failure; the result is an empty file. The technical details: some software first truncates the file, then writes to it and closes it, but doesn't sync the data, so you end up with an empty file. As said, I think JFS is correct here, because you don't get a mix of old and new data; however, I've encountered this behavior in quite a few instances...

* no TRIM support --- obviously really useful on SSDs and virtualized disks; (although I remember some work was done in this respect;)

* not enough tooling --- you get only `jfs-utils`, and that's kind of it...

* small community --- if you have a question you can use the mailing list, which is quite responsive, but there aren't many "data-points", so you can't easily find someone in a similar situation who already has a solution...

All in all, I've started gradually migrating my partitions to Ext4. I stay away from Btrfs for now. And to be frank, I don't quite like Btrfs's (and ZFS's, for that matter) approach of throwing together all the layers, from the file-system to the RAID to the block management. I find the layered approach more appealing --- as in, if something goes wrong you can poke around --- with completely separate block device management (LVM), RAID (MD), and file-system.

Ah... and for backup file-systems, I use Ext2. Why?
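For what it's worth, the crash-safe way for software to avoid those "empty files" is to write the new contents to a temporary file, fsync it, and then rename it over the original (rename within one file-system is atomic on POSIX). A minimal Python sketch of that pattern (the file names are just illustrative):

```python
import os

def save_atomically(path, data):
    # Write the new contents to a temporary file in the same directory
    # (rename is only atomic within a single file-system).
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # force the data down to stable storage
    # Atomically replace the old file; after a panic or power failure
    # you see either the complete old file or the complete new one,
    # never a truncated/empty one.
    os.replace(tmp, path)

save_atomically("example.conf", b"new contents\n")
```

Compare that with the truncate-then-write-then-close sequence described above: there, between the truncate and the (eventual) writeback of the data, a crash leaves you with the zero-length file.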
My take on this is:

* I don't need write or read performance, and I don't mind long fscks; (thus any file-system could fit here, however see below;)
* I do really need reliability and, most importantly, recovery in case s**t...

Therefore Ext2 is a perfect match:

* it is so old that I guess by now most bugs have been found and squashed;
* it is so old that virtually any Linux (or Windows, FreeBSD, or most other known OS's) is able to at least read it;
* it is so old that by now I bet there are countless recovery tools;
* it is so simple (compared with others) that someone could just re-implement a reader for it, or recovery tools;

Any feedback about Ext2 for backups? (Hope I'm not wrong on this one...)

Ciprian.
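P.S. To illustrate the "so simple someone could re-implement a reader" point: the ext2 superblock is a flat, fixed-layout structure at byte offset 1024 of the device, so a few lines of Python suffice to sniff a raw image. A sketch (field offsets per the ext2 on-disk format; `image_path` is whatever image or device file you point it at):

```python
import struct

def read_ext2_superblock(image_path):
    # The ext2 superblock always starts at byte offset 1024.
    with open(image_path, "rb") as f:
        f.seek(1024)
        sb = f.read(1024)
    # s_inodes_count and s_blocks_count are the first two u32 fields;
    # s_log_block_size is at offset 24; s_magic (0xEF53) at offset 56.
    inodes, blocks = struct.unpack_from("<II", sb, 0)
    (log_block_size,) = struct.unpack_from("<I", sb, 24)
    (magic,) = struct.unpack_from("<H", sb, 56)
    if magic != 0xEF53:
        raise ValueError("not an ext2 file-system")
    return {"inodes": inodes, "blocks": blocks,
            "block_size": 1024 << log_block_size}
```

Try doing that for Btrfs or ZFS in a dozen lines...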