On Wed, Oct 6 at 22:04, Edward Ned Harvey wrote:
> * Because ZFS automatically buffers writes in ram in order to aggregate
> as previously mentioned, the hardware WB cache is not beneficial. There
> is one exception. If you are doing sync writes to spindle disks, and you
> don't have a dedicated log device, then the WB cache will benefit you,
> approx half as much as you would benefit by adding dedicated log device.
> The sync write sort-of by-passes the ram buffer, and that's the reason
> why the WB is able to do some good in the case of sync writes.
All of your comments made sense except for this one.

Every N seconds, when the system decides to burst writes from RAM to media, those writes are only sequential in the case where the underlying storage devices are significantly empty. Once your allocations are scattered across the disk due to longer-term fragmentation, I don't see any way that a write cache would hurt performance on the devices, since it would allow the drive to reorder writes to the media within that burst of data.

Even though ZFS issues writes of ~256 sectors (128KB) when it can, that is only a fraction of a revolution on a modern drive, so random 128KB writes still leave significant opportunity for reordering optimization. Granted, with NCQ or TCQ you can get back much of the cache-disabled performance loss; however, in any drive that implements an internal queue depth greater than the protocol-allowed queue depth, there is opportunity for improvement, up to an asymptotic limit driven by servo settle speed.

Obviously this performance improvement comes with the standard WB risks, and YMMV, IANAL, etc.

--eric

--
Eric D. Mudama
edmud...@mail.bounceswoosh.org

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
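P.S. The reordering argument above can be sketched numerically. This is a toy model I made up, not ZFS or drive firmware: it treats seek cost as the absolute distance between positions and ignores rotational latency and settle time, so the exact numbers mean nothing, but it shows why servicing a fragmented burst in sorted (elevator) order beats arrival order.

```python
# Toy model (my own, hypothetical): how much could a drive's WB cache
# help by reordering one burst of scattered writes?
import random

def travel(order):
    """Total head travel to service writes in the given order, starting at 0."""
    pos, total = 0, 0
    for lba in order:
        total += abs(lba - pos)  # simplistic seek cost: |distance|
        pos = lba
    return total

random.seed(1)
# One txg burst of 32 random 128KB writes on a fragmented disk,
# expressed as positions in [0, 100000).
burst = [random.randrange(100_000) for _ in range(32)]

fifo_cost = travel(burst)              # cache disabled: arrival order
elevator_cost = travel(sorted(burst))  # WB cache: drive reorders the burst

print(f"FIFO travel: {fifo_cost}, reordered: {elevator_cost}")
```

On any input, the sorted pass sweeps the head once across the span of the burst, while arrival order criss-crosses it, which is the whole reordering win in miniature.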