I have a few questions regarding ZFS, and would appreciate it if someone could enlighten me as I work my way through them.
First, write cache. If I look at traditional UFS / VxFS type file systems, they normally cache metadata in RAM before flushing it to disk. This helps increase their perceived write performance (perceived in the sense that if a power outage occurs, data loss can occur). ZFS, on the other hand, performs copy-on-write to ensure that the disk is always consistent, which I see as being roughly equivalent to using a directio option. I understand that the data is written first and the pointers are updated afterwards (the toy model in the P.S. below is the picture in my head), but would the directio analogy be correct? If so, is it true that ZFS really does not use a write cache at all? And if it does, how is it used?

Read cache. Any of us who have started using or benchmarking ZFS have seen its voracious appetite for memory, an appetite it fully shares with VxFS, for example, so I am not singling out ZFS (I'm rather a fan). On reboot of my T2000 test server (32GB RAM) I see that the ARC's maximum cache size is set to 30.88GB, a sizeable piece of memory. Now, given my assumption about the write cache, is all of that space used only for read caching? (The snippet in the P.P.S. shows one way to pull these numbers yourself.)

Tuneable parameters: I know that the philosophy of ZFS is that you should never have to tune your file system, but might I suggest that tuning the FS is not always a bad thing. You can't expect a FS to be all things to all people. If there are variables that can be modified to provide different performance characteristics and profiles, then I would contend that it could strengthen ZFS and lead to wider adoption and acceptance if you could, for example, limit the amount of memory used by things like the cache without messing with c_max / c_min directly in the kernel.

-Tony
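
P.S. To make the write-cache question concrete, here is the toy model in my head of the copy-on-write ordering. This is a minimal sketch in C, not actual ZFS code; the "disk", block size, and allocator are entirely made up, and only the ordering of the three steps is the point:

/* Toy model of copy-on-write ordering - NOT actual ZFS code.
 * The "disk", block size, and allocator are made up for
 * illustration; only the ordering of steps matters here.
 */
#include <stdio.h>
#include <string.h>

#define NBLOCKS 8
#define BLKSZ   16

static char disk[NBLOCKS][BLKSZ]; /* pretend disk: 8 small blocks   */
static int  next_free = 1;        /* trivial bump allocator         */
static int  live_block = 0;       /* the one pointer readers follow */

static void cow_write(const char *data)
{
    int newblk = next_free++;                /* 1. allocate a fresh block */
    strncpy(disk[newblk], data, BLKSZ - 1);  /* 2. write the new data     */
    /* A crash here leaves readers on the old block, which was never
     * touched, so the on-disk state stays consistent.               */
    live_block = newblk;                     /* 3. flip the pointer last  */
}

int main(void)
{
    strncpy(disk[0], "version 1", BLKSZ - 1);
    cow_write("version 2");
    printf("live data: %s\n", disk[live_block]); /* prints "version 2" */
    return 0;
}

If this model is right, the pointer flip in step 3 is the commit point, and there is never a window where the live pointer references half-written data. That is what makes me think of directio.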
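
P.P.S. For anyone who wants to watch the ARC numbers on their own box, here is one way to pull c_max and friends. It is a small libkstat sketch, assuming the kernel exposes the zfs:0:arcstats kstat (the stat names may differ between builds):

/* arcpeek.c - print a few ZFS ARC sizes via libkstat.
 * Sketch only: assumes the kernel exposes the zfs:0:arcstats
 * kstat; stat names can differ between builds.
 * Build: cc -o arcpeek arcpeek.c -lkstat
 */
#include <stdio.h>
#include <kstat.h>

static void print_stat(kstat_t *ksp, char *name)
{
    /* Named kstats are looked up by string; these are 64-bit values. */
    kstat_named_t *kn = kstat_data_lookup(ksp, name);
    if (kn != NULL)
        printf("%-6s %llu bytes\n", name, (unsigned long long)kn->value.ui64);
    else
        printf("%-6s not present in this build\n", name);
}

int main(void)
{
    kstat_ctl_t *kc = kstat_open();
    if (kc == NULL) {
        perror("kstat_open");
        return 1;
    }

    /* The ARC publishes its counters as module "zfs", name "arcstats". */
    kstat_t *ksp = kstat_lookup(kc, "zfs", 0, "arcstats");
    if (ksp == NULL || kstat_read(kc, ksp, NULL) == -1) {
        fprintf(stderr, "zfs:0:arcstats kstat not found\n");
        kstat_close(kc);
        return 1;
    }

    print_stat(ksp, "c_min"); /* minimum target size */
    print_stat(ksp, "c_max"); /* maximum target size */
    print_stat(ksp, "c");     /* current target size */
    print_stat(ksp, "size");  /* bytes actually held */

    kstat_close(kc);
    return 0;
}

If the kstat is there, the same counters should also show up with "kstat -m zfs -n arcstats" from the shell, which is easier for a quick look.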