I previously wrote about my scepticism regarding the claims that ZFS selectively
enables and disables the disk write cache to improve throughput over the usual
Solaris defaults.
I posted my observation that this did not seem to be happening in any
meaningful way on my ZFS setup, on build nv33.
I was told, "oh you just need the more modern drivers".
Well, I'm now running S10u2, with
SUNWzfsr 11.10.0,REV=2006.05.18.01.46
I don't see much of a difference.
By default, iostat shows the disks grinding along at 10MB/sec during the
transfer.
However, if I manually enable write_cache on the drives (SATA drives, FWIW),
the drive throughput zips up to 30MB/sec during the transfer.
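(For reference, by "manually enable write_cache" I mean flipping it per disk
through the expert mode of format(1M), roughly like this for each device in
the pool; the exact menu entries may vary with the driver:

# format -e c5t1d0
format> cache
cache> write_cache
write_cache> enable
write_cache> quit
cache> quit
format> quit
)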
Test case:
# zpool status philpool
  pool: philpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        philpool    ONLINE       0     0     0
          c5t1d0    ONLINE       0     0     0
          c5t4d0    ONLINE       0     0     0
          c5t5d0    ONLINE       0     0     0
# dd if=/dev/zero of=/philpool/testfile bs=256k count=10000
# [run iostat]
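(For the throughput numbers I'm just watching the extended device statistics
while the dd runs; any invocation along these lines that shows kw/s and %b
per disk will do:

# iostat -xnz 5
)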
The wall clock time for the I/O to quiesce is as expected: without write
cache manually enabled, it takes three times as long to finish as with it
enabled (1:30 vs. 30 sec).
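(If you want to reproduce the timing, something like this works: time the dd
itself, then keep watching iostat until the disks go idle; the point where the
writes stop is what I'm calling "quiesce". For example:

# ptime dd if=/dev/zero of=/philpool/testfile bs=256k count=10000
# iostat -xnz 1        <- watch until the writes to c5t1/c5t4/c5t5 stop
)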
[Approximately a 2 GB file is generated. A side note of interest to me: in
both cases, the dd returns to the user relatively quickly, but the write goes
on for quite a long time in the background, without apparently reserving 2 GB
of extra kernel memory according to swap -s.]
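(That last bit is easy to check for yourself: note the allocated/reserved
figures before kicking off the dd, then again while iostat still shows the
disks busy; on my box the numbers barely move.

# swap -s              <- before the dd
# swap -s              <- again while the background write is still going
)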