> I previously wrote about my scepticism on the claims that zfs selectively
> enables and disables write cache, to improve throughput over the usual
> solaris defaults prior to this point.
I have snv_38 here, with a zpool laid out thus:
bash-3.1# zpool status
  pool: zfs0
 state: ONLINE
 scrub: scrub completed with 0 errors on Sun Jun 11 16:17:24 2006
config:

        NAME         STATE     READ WRITE CKSUM
        zfs0         ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t10d0  ONLINE       0     0     0
            c1t10d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t11d0  ONLINE       0     0     0
            c1t11d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t12d0  ONLINE       0     0     0
            c1t12d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t9d0   ONLINE       0     0     0
            c1t9d0   ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t13d0  ONLINE       0     0     0
            c1t13d0  ONLINE       0     0     0

errors: No known data errors
Regardless of what abuse I throw at this, I never see anything that
indicates the write cache is being "toggled" on or off.
Furthermore, these are all Sun 36 GB disks.
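For what it's worth, the per-disk write cache state on Solaris can be
inspected by hand through format(1M) in expert mode, which exposes a cache
submenu for SCSI disks. A sketch of such a session (the disk chosen and the
exact output wording here are illustrative, not copied from my box):

    bash-3.1# format -e
    ... select a pool disk from the menu, e.g. c0t10d0 ...
    format> cache
    cache> write_cache
    write_cache> display
    Write Cache is enabled
    write_cache> quit
    cache> quit
    format> quit

Checking this before and after a heavy write load is one crude way to see
whether anything is flipping the cache setting underneath you.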
> I posted my observations that this did not seem to be happening in any
> meaningful way, for my zfs, on build nv33.
>
> I was told, "oh you just need the more modern drivers".
>
> Well, I'm now running S10u2, with
> SUNWzfsr 11.10.0,REV=2006.05.18.01.46
It's possible that the feature you seek is in snv somewhere and not in that
S10 WOS. But I am guessing; we would need to look at the changelogs to see
where that feature was incorporated into the ZFS bits.
Better yet ... use the source, Luke!
Dennis
_______________________________________________
zfs-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss