hi,
i have a system connected to an external DAS (SCSI) array, using ZFS. the array has an nvram write cache, but it honours SCSI cache flush commands by flushing the nvram to disk, and there is no way to disable this behaviour on the array itself. as is well known, ZFS frequently issues cache flush commands to storage to ensure data integrity; while this is important for plain disks, it's pointless for an nvram write cache, and here it effectively disables the cache.

so far i've worked around this by setting zfs_nocacheflush, as described at [1], which works fine. but now i want to upgrade this system to Solaris 10 Update 6 and use a ZFS root pool on its internal SCSI disks (previously the root was UFS). the problem is that zfs_nocacheflush applies to all pools, which will include the root pool.

my understanding of ZFS is that when a pool is built on slices (as a root pool is) rather than whole disks, ZFS won't enable the disks' write cache itself, and i haven't enabled the write cache manually either. so it _should_ be safe to keep zfs_nocacheflush set, because nothing is caching writes on the root pool disks. am i right, or could i run into problems here?

(the system is an NFS server, which means lots of synchronous writes and therefore lots of ZFS cache flushes, so i *really* want the performance benefit of the nvram write cache.)

- river.

[1] http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes
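
p.s. for completeness, this is the tuning from [1] as i currently have it in /etc/system (it takes effect at the next reboot):

    * disable ZFS cache flushes for all pools on this system
    * (from the Evil Tuning Guide [1]; note it is system-wide, not per-pool)
    set zfs:zfs_nocacheflush = 1

if i remember the guide right, the same setting can also be applied to a running system with mdb:

    echo zfs_nocacheflush/W0t1 | mdb -kw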
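
p.p.s. the way i intend to double-check that the internal root-pool disks really have their write cache off is an interactive format(1m) expert-mode session, roughly like this (c1t0d0 is just a placeholder for whichever disk the root pool is on, and the exact wording of the output may differ):

    # format -e
    (select c1t0d0 from the disk menu)
    format> cache
    cache> write_cache
    write_cache> display
    Write Cache is disabled
    write_cache> quit
    cache> quit
    format> quit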