On Sat, Oct 7, 2017 at 6:28 AM, Neil Bothwick <n...@digimed.co.uk> wrote:
> On Sat, 7 Oct 2017 05:18:33 -0400, Tanstaafl wrote:
>
>> Anyone have any experience with comparing performance with either btrfs
>> or ZFS against either ReiserFS or XFS for a maildir based mail server?
>
> I tried btrfs on a mail server and it was unbearably slow. Disabling
> copy-on-write made a big difference, but it still went a lot faster when
> I switched to ext4.
>
> I haven't used XFS in years, maybe it's time to revisit it.
>

I haven't used xfs in a while, but here is my sense of things, for a
basic configuration (filesystem running on one drive or a mirrored
pair):

xfs > ext4 > zfs >>> btrfs

At least, that is in terms of most conventional measures of
performance (reading and writing files on a typical filesystem).  If
you want to measure performance in terms of how long your system is
down after a controller error then both zfs and btrfs will have an
advantage.  I mention it because I think that integrity shouldn't take
a back seat to performance 99% of the time.  It has performance
benefits of its own, but you only see them every couple of years when
something fails.

btrfs isn't horrible, but it basically hasn't been optimized at all.
The developers are mainly focused on getting it to not destroy your
data, with mixed success.  An obvious example of this is that if you
read a file from a pair of mirrors, the filesystem decides which drive
in the pair to use based on whether the PID doing the read is even or
odd.
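
Just to make that concrete, here is a toy sketch of that read policy
(not the actual kernel code; device names are made up):

```python
# Toy model of btrfs raid1 read balancing: the mirror that services a
# read is chosen by the parity of the reading process's PID.
def pick_mirror(pid: int, mirrors=("/dev/sda", "/dev/sdb")) -> str:
    """Return which drive of a mirrored pair handles this read."""
    return mirrors[pid % 2]

# Every even-PID process hammers one drive and every odd-PID process
# the other -- no notion of queue depth or per-drive load.
assert pick_mirror(1000) == "/dev/sda"
assert pick_mirror(1001) == "/dev/sdb"
```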

Fundamentally I haven't seen any arguments as to why btrfs should be
any worse than zfs.  It just hasn't been implemented completely.  But,
if you want a filesystem today and not in 10 years you need to take
that into account.

Now, ZFS has a bunch of tricks available to improve things, like SSD
read caches and write logs.  But other filesystems support separate
journal devices, and there is bcache, so if you want to look at those
features you need to compare apples to apples.  ZFS certainly
integrates it all nicely, but then it has other "features" like not
being able to remove a drive from a storage pool, or revert to a
snapshot without deleting all the subsequent snapshots.
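
That rollback limitation in a toy model (this is just an illustration
of the semantics, not zfs code; the snapshot names are made up):

```python
# Toy model of zfs rollback semantics: reverting to an older snapshot
# destroys every snapshot taken after it, which is what
# `zfs rollback -r` does on a real pool.
def rollback(snapshots: list, target: str) -> list:
    """Keep snapshots up to and including `target`; later ones are lost."""
    i = snapshots.index(target)
    return snapshots[: i + 1]

snaps = ["mon", "tue", "wed", "thu"]
assert rollback(snaps, "tue") == ["mon", "tue"]  # wed and thu are gone
```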

In general though I think zfs will always suffer a bit in performance
because it is copy-on-write.  If you want to change 1 block in the
middle of a file, ext4 and xfs can just write over that 1 block, while
zfs and btrfs are going to write that block someplace else and do a
metadata dance to map it over the original block.  I just don't see
how that will ever be faster.  Of course, if you have a hardware
failure in the middle of an operation, zfs and btrfs basically
guarantee that the writes behave as if they were atomic.  You only get
that benefit with ext4/xfs if you do full data journaling, at a
significant performance cost, and if you're running them on top of
mdadm you lose the guarantee anyway.  Both zfs and btrfs avoid the
raid write hole (though to be fair you don't want to go anywhere near
parity raid on btrfs anytime soon).
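
The overwrite-vs-remap difference in a toy model (a sketch of the
idea only -- real filesystems track this in extent trees, not a dict):

```python
# In-place write (ext4/xfs style): one block is overwritten where it sits.
def write_in_place(blocks: list, idx: int, data: str) -> None:
    blocks[idx] = data            # single write, same location

# Copy-on-write (zfs/btrfs style): write the new data elsewhere, then
# flip a metadata pointer to map it over the original block.
def write_cow(blocks: list, block_map: dict, idx: int, data: str) -> None:
    new_loc = len(blocks)         # allocate a fresh block
    blocks.append(data)           # write the new data there first
    block_map[idx] = new_loc      # the pointer flip is the commit point

blocks = ["a", "b", "c"]
write_in_place(blocks, 1, "B")
assert blocks == ["a", "B", "c"]

blocks = ["a", "b", "c"]
bmap = {0: 0, 1: 1, 2: 2}
write_cow(blocks, bmap, 1, "B")
assert blocks == ["a", "b", "c", "B"]   # old block left untouched
assert bmap[1] == 3                     # until the map flips
```

The extra write plus the pointer flip is where the CoW performance tax
comes from, but it is also why a crash mid-write leaves you with either
the old block or the new one, never a torn mix.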

I'm not saying that there isn't a place for performance-above-all.
For an ephemeral worker node you already have 47 backups running and
if the node fails you restart it, so if it needs to write some data to
disk performance is probably the only concern.  Ditto for any data
that has no long-term value/etc.  However, for most general-purpose
filesystems I think integrity should be the #1 concern, because you
won't notice that 20us access time difference, but you probably will
notice the hour spent restoring from backups, assuming you even have
backups.

-- 
Rich
