On 05/07/14 07:51, Marc Joliet wrote:
> On Wed, 07 May 2014 06:56:12 +0800, William Kenworthy
> <bi...@iinet.net.au> wrote:
>
>> On 05/06/14 18:18, Marc Joliet wrote:
>>> Hi all,
>>>
>>> I've become increasingly motivated to convert to btrfs.  From what
>>> I've seen, it has become increasingly stable; enough so that it is
>>> apparently supposed to become the default FS in openSUSE 13.2.
>>>
>>> I am motivated by various reasons:
>> ....
>>
>> My btrfs experience:
>>
>> I have been using btrfs seriously (vs. testing) for a while now with
>> mixed results, but the latest kernel/tools seem to be holding up quite well.
>>
>> ~2yrs on an Apple/Gentoo laptop (I handed it back to work a few months
>> back) - never a problem! (mounted with discard/trim)
> That's one HDD, right? From what I've read, that's the most tested and stable
> use case for btrfs, so it doesn't surprise me that much that it worked so 
> well.
>
Yes, light duty, using the built-in SSD chips on the motherboard.
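For reference, the discard/trim mounting mentioned above is just a
matter of mount options - a sketch of an fstab line, with the device and
mount point made up:

    # btrfs root on an SSD: "ssd" enables the SSD allocation heuristics,
    # "discard" issues TRIM as extents are freed
    /dev/sda2  /  btrfs  defaults,ssd,discard,noatime  0 0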
>> btrfs on a 128GB Intel SSD (linux root drive) - had to secure-erase it
>> a few times as btrfs said the filesystem was full despite 60G+ being
>> free - this happened after multiple crashes, and it seemed the btrfs
>> metadata and the SSD disagreed on what was actually in use - reset the
>> drive and restored from backups :(  Now running ext4 on that drive with
>> no problems - will move back to btrfs at some point.
> All the more reason to stick with EXT4 on the SSD for now.
I have had very poor luck with ext-anything and would hesitate to
recommend it except for this very specific case where there is little
alternative - reiserfs is far better on platters, for instance.
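For anyone hitting the same "full but 60G+ free" symptom, the usual
first step is to check chunk allocation and rebalance - a sketch, with
the mount point made up, and note it can't fix a drive whose firmware
disagrees with the filesystem:

    # show data vs metadata chunk usage; btrfs can report "full" when
    # all chunks are allocated even though many are half-empty
    btrfs filesystem df /mnt/ssd

    # rewrite data chunks that are less than 50% used so their space
    # becomes unallocated again
    btrfs balance start -dusage=50 /mnt/ssd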
>
> [snip interesting but irrelevant ceph scenario]
It's relevant because ceph keeps revealing bugs in btrfs by stressing
it - one that I reported to ceph was passed upstream by the ceph team
and fixed last year - bugs still exist in btrfs!
>> 3 x raid 0+1 (btrfs raid 1 with 3 drives) - working well for about a month
> That last one is particularly good to know. I expect RAID 0, 1 and 10 to work
> fairly well, since those are the oldest supported RAID levels.
>
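For the record, a three-drive btrfs raid1 like that is created along
these lines (a sketch - the device names are made up):

    # keep two copies of both data (-d) and metadata (-m), spread
    # across the three devices
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd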
>> ~10+ Gentoo VMs, one Ubuntu and 3 x Win VMs with kvm/qemu storage on
>> btrfs - regular scrubs show an occasional VM problem after a system
>> crash (VM server), otherwise problem free since moving to pure btrfs
>> from ceph.  Gentoo VMs were btrfs in raw qemu images and are now
>> converted to qcow2 - no problems since moving from ceph.  Fragmentation
>> of VM images is a problem, but "cp --reflink vm1 vm2" for VMs is really
>> really cool!
> That matches the scenario from the Ars Technica article; the author is a
> huge fan of file cloning in btrfs :).
>
> And yeah, too bad autodefrag is not yet stable.
It's not that it's unstable, but that it can't deal with large files
that change randomly on a continual basis, like VM virtual disks.
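The common workaround (not something from this thread, just the usual
advice) is to disable copy-on-write for the VM image directory; the
raw-to-qcow2 conversion above goes something like this - all paths here
are made up:

    # convert a raw image to qcow2 (as mentioned above)
    qemu-img convert -f raw -O qcow2 vm1.img vm1.qcow2

    # mark the (empty) image directory NOCOW to limit fragmentation;
    # chattr +C only takes effect for files created afterwards, and
    # NOCOW files also lose btrfs checksumming
    mkdir -p /var/lib/vm-images
    chattr +C /var/lib/vm-images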
>
>> I have a clear impression that btrfs has been incrementally improving,
>> and the current kernel and recovery tools are quite good, but it's
>> still possible to end up with an unrecoverable partition (in the sense
>> that you might be able to get to some of the data using recovery tools,
>> but the btrfs mount itself is toast)
>>
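(The recovery tools in question are roughly these - a sketch, with the
device and target directory made up:)

    # read-only consistency check of an unmounted filesystem
    btrfs check /dev/sdb1

    # copy whatever files are still reachable off an unmountable
    # filesystem into a scratch directory
    btrfs restore /dev/sdb1 /mnt/recovery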
>> Backups using dirvish - was getting an occasional corruption (mainly
>> checksum) that seemed to coincide with network problems during a backup
>> sequence - have not seen it for a couple of months now.  Only lost the
>> whole partition once :(  Dirvish really hammers a filesystem, and ext4
>> usually dies very quickly, so even now btrfs is far better here.
> I use rsnapshot here with an external hard drive formatted to EXT4. I'm not
> *that* worried about the FS dying, more that it dies at an inopportune moment
> where I can't immediately restore it.
>
> [again, snip interesting but irrelevant ceph scenario]
As I said above - if it fails under ceph, it's likely going to fail
under similar stresses from other software - I am not talking about ceph
bugs (of which there are many) but actual btrfs corruption.
>> I am slowly moving my systems from reiserfs to btrfs as my confidence
>> in it and its tools builds.  I really dislike ext4 and its ability to
>> lose valuable data (though that has improved dramatically), but it
>> still seems better than btrfs on solid state under hard use - though
>> after getting burnt I am avoiding that scenario, so I need to retest.
> Rising confidence: good to hear :) .
>
> Perhaps this will turn out similarly to when I was using the
> xf86-video-ati release candidates and bleeding-edge
> gentoo-sources/mesa/libdrm/etc. (for 3D support in the r600 driver): I
> start using it shortly before it starts truly stabilising :).
>
More exposure means more bugs will surface and be fixed - it's getting
there.

BillK


