On 05/06/14 18:18, Marc Joliet wrote:
> Hi all,
>
> I've become increasingly motivated to convert to btrfs. From what I've seen,
> it has become increasingly stable; enough so that it is apparently supposed to
> become the default FS on OpenSuse in 13.2.
>
> I am motivated by various reasons:
....
My btrfs experience: I have been using btrfs seriously (vs. testing) for a while now with mixed results, but the latest kernel/tools seem to be holding up quite well.

~2 yrs on an Apple/gentoo laptop (I handed it back to work a few months back) - never a problem! (mounted with discard/trim)

btrfs on a 128GB intel ssd (linux root drive) - had to secure reset a few times as btrfs said the filesystem was full, but there was 60G+ free. This happened after multiple crashes, and it seemed the btrfs metadata and the ssd disagreed on what was actually in use - reset drive and restore from backups :( Now running ext4 on that drive with no problems - will move back to btrfs at some point. (Some commands for diagnosing the "full but free" situation are sketched at the end of this mail.)

cephfs - rolling disaster, but that's more to do with not giving the system adequate resources and using what from ceph's point of view are bad practices (running ceph on the same machine used for VMs and mounts) - mostly resulted in gradually corrupted and unrecoverable btrfs partitions over time.

3 x raid 0+1 (btrfs raid1 with 3 drives) - working well for about a month. (Setup and scrub commands are sketched below.)

~10+ gentoo VMs, one ubuntu and 3 x Win VMs with kvm/qemu storage on btrfs - regular scrubs show an occasional VM problem after a system crash (VM server), otherwise problem free since moving to pure btrfs from ceph. The gentoo VMs were btrfs in raw qemu images and are now converted to qcow2 (see the conversion sketch below) - no problems since moving from ceph. Fragmentation on VMs is a problem, but "cp --reflink vm1 vm2" for VMs is really, really cool!

I have a clear impression that btrfs has been incrementally improving and that the current kernel and recovery tools are quite good, but it's still possible to end up with an unrecoverable partition (in the sense that you might be able to get to some of the data using recovery tools, but the btrfs mount itself is toast).

Backups using dirvish - was getting an occasional corruption (mainly checksum errors) that seemed to coincide with network problems during a backup sequence - have not seen it for a couple of months now. Only lost the whole partition once :( Dirvish really hammers a file system and ext4 usually dies very quickly, so even now btrfs is far better here.

The comments on ceph only hold for my use case, i.e., don't do it this way! Even after the experience and problems, I would still choose ceph for its proper use case (it's actually way cool!) - though the ceph people do not recommend btrfs for production use.

I am slowly moving my systems from reiserfs to btrfs as my confidence in it and its tools builds. I really dislike ext4 and its ability to lose valuable data (though that has improved dramatically), but it still seems better than btrfs on solid state under hard use - though after getting burnt I am avoiding that scenario, so I need to retest.

BillK
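P.S. A few hedged command sketches for the situations above; device names, mount points and file names are placeholders, not taken from my actual systems.

For the "filesystem full but 60G+ free" case - btrfs allocates data and metadata into separate chunk pools, and "no space left" often means one pool is exhausted even though the other has plenty. Roughly:

    # Show how space is split between data and metadata chunks:
    btrfs filesystem df /

    # Show per-device allocation for the filesystem:
    btrfs filesystem show /dev/sda2

    # Rewrite chunks that are <=5% used so their space returns
    # to the unallocated pool:
    btrfs balance start -dusage=5 /

That won't fix a drive whose internal state disagrees with the filesystem, but it's the first thing to rule out before a secure reset.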
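Setting up a 3-drive btrfs raid1 and scrubbing it looks something like:

    # Mirror both data and metadata across three drives
    # (usable space is about 1.5 drives' worth):
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd

    # Verify all checksums in the background, then check progress:
    btrfs scrub start /mnt/pool
    btrfs scrub status /mnt/pool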
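The raw-to-qcow2 conversion can be done with qemu-img (the image names here are made up):

    # Convert a raw image to qcow2:
    qemu-img convert -f raw -O qcow2 vm1.img vm1.qcow2

    # Sanity-check the result:
    qemu-img info vm1.qcow2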
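And the reflink trick, plus one common mitigation for VM-image fragmentation - disabling copy-on-write for the images directory. Note that NOCOW also disables checksumming for those files and only affects files created after the flag is set:

    # Instant clone - vm2 shares vm1's blocks until they diverge:
    cp --reflink=always vm1.qcow2 vm2.qcow2

    # Mark the images directory NOCOW so new images fragment less:
    chattr +C /var/lib/libvirt/images

/var/lib/libvirt/images is just an example path; mounting with the autodefrag option is another approach.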