The only obvious explanation would be if the exported ZFS filesystems were initially mounted at a point in time when zil_disable was non-zero.
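To rule that out, something like the following (untested here, and assuming your 10U3 kernel still exposes the zfs module's zil_disable variable) shows the current value on the running kernel:

  # print zil_disable from the live kernel; non-zero means the ZIL is
  # disabled for any filesystem mounted while it is set
  echo zil_disable/D | mdb -k

The persistent form would be a "set zfs:zil_disable = 1" line in /etc/system; either way, the setting only takes effect for filesystems mounted after it changes, which is why the value at mount time is what matters here.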
The relevant stack trace is:

  sd_send_scsi_SYNCHRONIZE_CACHE
  sd`sdioctl+0x1770
  zfs`vdev_disk_io_start+0xa0
  zfs`zil_flush_vdevs+0x108
  zfs`zil_commit_writer+0x2b8
  ...

You might want to try, in turn:

  dtrace -n 'sd_send_scsi_SYNCHRONIZE_CACHE:entry{@[stack(20)]=count()}'
  dtrace -n 'sdioctl:entry{@[stack(20)]=count()}'
  dtrace -n 'zil_flush_vdevs:entry{@[stack(20)]=count()}'
  dtrace -n 'zil_commit_writer:entry{@[stack(20)]=count()}'

and see if you lose your footing along the way.

-r

Marion Hakanson writes:
> [EMAIL PROTECTED] said:
> > How did ZFS striped on 7 slices of an FC-SATA LUN via NFS work 146 times
> > faster than ZFS on 1 slice of the same LUN via NFS???
>
> Well, I do have more info to share on this issue, though how it worked
> faster in that test still remains a mystery.  Folks may recall that I said:
>
> > Not that I'm complaining, mind you.  I appear to have stumbled across a
> > way to get NFS over ZFS to work at a reasonable speed, without making
> > changes to the array (nor resorting to giving ZFS SVM soft partitions
> > instead of "real" devices).  Suboptimal, mind you, but it's workable if
> > our Hitachi folks don't turn up a way to tweak the array.
>
> Unfortunately, I was wrong.  I _don't_ know how to make it go fast.  While
> I _have_ been able to reproduce the result on a couple of different
> LUN/slice configurations, I don't know what triggers the "fast" behavior.
> All I can say for sure is that a little dtrace one-liner that counts
> sync-cache calls turns up no such calls (for both local ZFS and remote NFS
> extracts) when things are going fast on a particular filesystem.
>
> By comparison, a local ZFS tar extraction triggers 12 sync-cache calls,
> and I hit 288 such calls during an NFS extraction before interrupting the
> run after 30 seconds (est. 1/100th of the way through) when things are
> working in the "slow" mode.  Oh yeah, here's the one-liner (type in the
> command, run your test in another session, then hit ^C on this one):
>
>   dtrace -n fbt::ssd_send_scsi_SYNCHRONIZE_CACHE:entry'[EMAIL PROTECTED] = count()}'
>
> This is my first ever use of dtrace, so please be gentle with me (:-).
>
> [EMAIL PROTECTED] said:
> > Guess I should go read the ZFS source code (though my 10U3 surely lags
> > the OpenSolaris stuff).
>
> I did go read the source code, for my own edification.  To reiterate what
> was said earlier:
>
> [EMAIL PROTECTED] said:
> > The point is that the flushes occur whether or not ZFS turned the caches
> > on (caches might be turned on by some other means outside the visibility
> > of ZFS).
>
> My limited reading of the ZFS code (on the opensolaris.org site) so far has
> turned up no obvious way to make ZFS skip the sync-cache call.  However,
> my dtrace test, unless it's flawed, shows that on some filesystems the call
> is made, and on other filesystems it is not.
>
> [EMAIL PROTECTED] said:
> > 2. I have never seen a storage controller with a cache-per-LUN setting.
> > Cache size doesn't depend on the number of LUNs IMHO; it's a fixed size
> > per controller or per FC port, SAN-experts-please-fix-me-if-I'm-wrong.
>
> Robert has already mentioned array cache being reserved on a per-LUN basis
> in Symmetrix boxes.  Our low-end HDS unit also has cache pre-fetch settings
> on a per-LUN basis (defaults according to the number of disks in the
> RAID group).
>
> Regards,
> Marion