> From: edmud...@mail.bounceswoosh.org
> [mailto:edmud...@mail.bounceswoosh.org] On Behalf Of Eric D. Mudama
> 
> On Wed, Oct  6 at 22:04, Edward Ned Harvey wrote:
> > * Because ZFS automatically buffers writes in ram in order to
> > aggregate as previously mentioned, the hardware WB cache is not
> > beneficial.  There is one exception.  If you are doing sync writes
> > to spindle disks, and you don't have a dedicated log device, then
> > the WB cache will benefit you, approx half as much as you would
> > benefit by adding dedicated log device.  The sync write sort-of
> > by-passes the ram buffer, and that's the reason why the WB is able
> > to do some good in the case of sync writes.
> 
> All of your comments made sense except for this one.
> 
> (etc)

Your points about long-term fragmentation and keeping the drives significantly empty are well received.  I never let a pool get over 90% full, for several reasons including this one.  My target is 70%, which seems to be sufficiently empty.

Also, as you indicated, 128K blocks are not large enough for reordering to provide much benefit.  In another thread here I calculated that you need blocks of approximately 40MB in order to reduce random seek time below 1% of total operation time.  So everything I said is only relevant or accurate if, within the 30 sec txg interval (or 5 sec in the future), there exist at least 40MB of aggregatable sequential writes.
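
Here's a back-of-the-envelope sketch of that 40MB calculation, assuming ~8ms
average seek+rotate and ~50MB/s sustained transfer (typical numbers for a
7200 RPM SATA drive of this vintage, not measured on any specific disk):

    # How big must a block be so that random seek time is less than
    # 1% of the total (seek + transfer) operation time?
    # Assumed drive parameters -- typical 7200 RPM SATA, not measured:
    seek_s = 0.008          # average seek + rotational latency, seconds
    throughput_Bps = 50e6   # sustained sequential transfer, bytes/sec

    # seek / (seek + transfer) < 1%  =>  transfer > 99 * seek
    min_transfer_s = 99 * seek_s
    min_block_bytes = min_transfer_s * throughput_Bps

    print("minimum block size: %.1f MB" % (min_block_bytes / 1e6))
    # -> minimum block size: 39.6 MB, i.e. roughly the 40MB figure above

With faster seeks or faster media the exact number moves around, but it stays
orders of magnitude above 128K.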

It's easy to measure and quantify what I was saying: just create a pool and
benchmark it in each configuration.  The results I measured were:
        (stripe of 2 mirrors)
        721  IOPS without WB or slog
        2114 IOPS with WB
        2722 IOPS with WB and slog
        2927 IOPS with slog, and no WB

There's a whole spreadsheet full of results that I can't publish, but the
trend of WB versus slog was clear and consistent.
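
For concreteness, a sync-write IOPS micro-benchmark along these lines is easy
to put together.  The sketch below is not the exact tool behind the numbers
above; the pool mount point and file path are hypothetical, and the pool
layout, slog, and WB cache are configured separately before each run:

    # Minimal sync-write IOPS micro-benchmark (sketch).  Point it at a
    # file on the pool under test; os.O_DSYNC forces each write to
    # stable storage, which is what exercises the WB cache / slog path.
    import os, time

    PATH = "/tank/bench/testfile"   # hypothetical path on the test pool
    COUNT = 8192                    # number of synchronous writes
    BLOCK = b"\0" * 4096            # 4K writes, typical small-sync size

    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o644)
    start = time.time()
    for _ in range(COUNT):
        os.write(fd, BLOCK)         # each write must reach stable storage
    elapsed = time.time() - start
    os.close(fd)

    print("%.0f sync-write IOPS" % (COUNT / elapsed))

Run it once per configuration (no WB or slog, WB only, WB plus slog, slog
only) and compare the IOPS figures.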

I will admit the above tests were performed on relatively new, relatively
empty pools.  It would be interesting to see whether any of that changes when
the test is run on a system that has been in production for a long time, with
real user data on it.
