> That depends upon exactly what effect turning off the
> ZFS cache-flush mechanism has.

The only difference is that ZFS won't send a SYNCHRONIZE CACHE command at the 
end of a transaction group (or ZIL write). It doesn't change the actual read or 
write commands; the writes are always sent as ordinary (non-FUA) writes. For 
the ZIL, I suspect that setting the FUA bit on the writes themselves, rather 
than flushing the whole cache afterwards, might provide better performance in 
some cases, but I'm not sure, since it probably depends on what other I/O is 
outstanding.
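
To make the FUA-versus-flush distinction concrete, here's a rough sketch of the 
two request shapes at the SCSI CDB level (byte layouts per SBC-2; illustrative 
C only, not actual ZFS or sd driver code):

    /* Sketch only: CDB layouts per SBC-2, not actual ZFS or sd code. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* WRITE(10) with FUA set: this particular write must reach the medium
     * (or non-volatile cache) before the command completes. */
    static void build_write10_fua(uint8_t cdb[10], uint32_t lba, uint16_t nblk)
    {
        memset(cdb, 0, 10);
        cdb[0] = 0x2a;                      /* WRITE(10) opcode */
        cdb[1] = 1 << 3;                    /* FUA bit */
        cdb[2] = lba >> 24; cdb[3] = lba >> 16;
        cdb[4] = lba >> 8;  cdb[5] = lba;   /* logical block address */
        cdb[7] = nblk >> 8; cdb[8] = nblk;  /* transfer length */
    }

    /* SYNCHRONIZE CACHE(10) with LBA and length of zero: flush everything
     * the device has cached.  This is what gets issued at the end of a txg
     * or ZIL write when cache flushing is enabled. */
    static void build_sync_cache10(uint8_t cdb[10])
    {
        memset(cdb, 0, 10);
        cdb[0] = 0x35;                      /* SYNCHRONIZE CACHE(10) opcode */
    }

    int main(void)
    {
        uint8_t w[10], s[10];
        build_write10_fua(w, 1234, 16);
        build_sync_cache10(s);
        for (int i = 0; i < 10; i++) printf("%02x ", w[i]);
        printf("\n");
        for (int i = 0; i < 10; i++) printf("%02x ", s[i]);
        printf("\n");
        return 0;
    }

With FUA, only the tagged write has to be stable before completion; a cache 
flush forces out everything the device is holding, including writes ZFS doesn't 
care about at that moment, which is why I'd guess FUA could win when there's a 
lot of unrelated I/O sitting in the cache.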

> Of course, if that's true then disabling cache-flush
> should have no noticeable effect on performance (the
> controller just answers "Done" as soon as it receives
> a cache-flush request, because there's no applicable
> cache to flush), so you might as well just leave it
> enabled.

The problem with SYNCHRONIZE CACHE is that, until a fairly recent update, its 
semantics weren't defined as precisely as one would want. Some controllers 
interpret it as "push all data to disk" even if they have battery-backed NVRAM. 
In that case you lose quite a lot of performance and gain only a modicum of 
reliability (at least with larger RAID systems, which will generally use their 
battery to flush NVRAM to disk if power is lost).
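
(For what it's worth, on recent OpenSolaris/Nevada builds the global switch for 
this is the zfs_nocacheflush tunable; a minimal sketch, assuming a build that 
has it:

    set zfs:zfs_nocacheflush = 1

in /etc/system, picked up at the next reboot. It applies to every device in 
every pool, so it's only sane when everything behind ZFS really does have 
non-volatile cache.)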

There's a bit defined now (SYNC_NV) that can be used to say "only flush 
volatile caches; it's OK if data remains in non-volatile cache." But not many 
controllers support it yet, and Solaris didn't send it as of last year -- I'm 
not sure whether that's been added since.
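
Concretely, that's the SYNC_NV bit in byte 1 of the SYNCHRONIZE CACHE(10) CDB. 
Extending the sketch above (again per SBC-2, not actual Solaris driver code):

    /* SYNCHRONIZE CACHE(10) with SYNC_NV: only volatile cache has to be
     * flushed; data sitting in non-volatile (battery-backed) cache may
     * stay there. */
    static void build_sync_cache10_nv(uint8_t cdb[10])
    {
        memset(cdb, 0, 10);
        cdb[0] = 0x35;          /* SYNCHRONIZE CACHE(10) opcode */
        cdb[1] = 1 << 2;        /* SYNC_NV bit */
    }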

-- Anton
 
 