Thanks for the clarification.

Anyway, to Dearl Neal's point: you won't see much (if any) difference between cached and not cached on ZFS until the next release, because it is fs_flush that does the deed, and at present it is always called.

Incidentally, I have an action item to come up with a better name for that attribute, as "cached" actually means "don't flush the memory-mapped cache" and "not cached" actually means "DO flush the memory-mapped cache". In both cases file caching happens; the only question is whether the file cache starts out "hot", holding portions of the files that were just created, or "cold".
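
For reference, the attribute appears in a fileset definition along these lines (a hypothetical example; the name, path, and sizes are illustrative, not taken from any shipped workload):

    # Hypothetical fileset; name, path, entries, and sizes are illustrative.
    # cached=false asks filebench to flush the memory-mapped cache after
    # preallocation, so the benchmark starts with a "cold" file cache.
    define fileset name="testfiles",path="/tank/fb",entries=1000,filesize=1m,prealloc=100,cached=false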

Drew

On Apr 28, 2009, at 2:22 PM, johan...@sun.com wrote:

On Tue, Apr 28, 2009 at 12:58:25PM -0700, Andrew Wilson wrote:
I believe that Solaris always converts reads and writes into file-mapped I/O, so if the cached attribute is missing or set to "false", the VM pages involved will be flushed. You will see a small difference in performance, but because the files still live in the ARC, it is much smaller than with UFS, where the files have been completely flushed from memory.
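
A minimal user-level sketch of that flush, assuming msync(3C) and madvise(3C) semantics; this only approximates what the kernel flush path does and is not filebench's actual code:

    /*
     * Illustrative only: approximate "flush the memory-mapped cache" for
     * one file from user level, so the next consumer starts "cold".
     */
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int
    flush_mapped_cache(const char *path)
    {
            struct stat st;
            void *p;
            int fd;

            if ((fd = open(path, O_RDONLY)) < 0)
                    return (-1);
            if (fstat(fd, &st) < 0 || st.st_size == 0) {
                    (void) close(fd);
                    return (-1);
            }
            p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED) {
                    (void) close(fd);
                    return (-1);
            }
            (void) msync(p, (size_t)st.st_size, MS_SYNC);         /* write back */
            (void) madvise(p, (size_t)st.st_size, MADV_DONTNEED); /* drop pages */
            (void) munmap(p, (size_t)st.st_size);
            (void) close(fd);
            return (0);
    }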

UFS reads and writes are converted to file-mapped I/O because the VM is used to back the file cache. In ZFS, the read and write operations only perform file-mapped I/O when the vnode has an existing file mapping in place. If the vnode doesn't, then the only place the data is cached is in the ARC.
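
In source terms, that decision looks roughly like the following; a simplified sketch of the check in zfs_read() (zfs_vnops.c), not the verbatim code:

    /*
     * Sketch of the zfs_read() choice described above: only go through
     * the page cache when a file mapping already exists; otherwise the
     * ARC is the sole cache for the data.
     */
    if (vn_has_cached_data(vp)) {
            /* An mmap() mapping exists: read via the page cache so the */
            /* mapped pages and the ARC stay coherent. */
            error = mappedread(vp, nbytes, uio);
    } else {
            /* No mapping: copy straight out of the ARC via the DMU. */
            error = dmu_read_uio(os, zp->z_id, uio, nbytes);
    }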

-j
