Thanks for the tip. In the local case, I could send to the
iSCSI-backed ZFS RAIDZ at even faster rates, with a total elapsed time
of 50 seconds (17 seconds better than UFS). However, I didn't even
bother finishing the NFS client test, since it was still taking a few
seconds between successive 27K files, so it didn't help NFS at all.
I'm wondering if there is something on the NFS end that needs
changing, no? Also, how would one easily script the mdb commands below
to make the change permanent?
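
For the scripting part, something along these lines is what I had in
mind (an untested sketch: it assumes a single pool with one leaf vdev,
matching the ::vdev -r layout quoted below, and looks the addresses up
each run since they change on every boot):

#!/bin/sh
# Untested sketch: re-apply the per-vdev vdev_nowritecache override.
# Assumes one pool with a single leaf vdev, as in the session quoted
# below; the kernel addresses change every boot, so rediscover them.

# The leaf vdev is the last line of the ::vdev -r output.
walk='::walk spa | ::print spa_t spa_root_vdev | ::vdev -r'
vdev_addr=`echo "$walk" | mdb -k | awk 'END { print $1 }'`

# Address of the vdev_nowritecache field inside that vdev_t.
field_addr=`echo "${vdev_addr}::print -a vdev_t vdev_nowritecache" | mdb -k | awk '{ print $1 }'`

# Write 1 (B_TRUE) there so ZFS stops issuing cache-flush requests.
echo "${field_addr}/W1" | mdb -kw

Something like that could be run from an rc script at boot, since a
change written with mdb -kw doesn't survive a reboot.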


On 5/5/06, Eric Schrock <[EMAIL PROTECTED]> wrote:
My gut feeling is that somehow the DKIOCFLUSHWRITECACHE ioctls (which
translate to the SCSI flush write cache requests) are throwing iSCSI for
a loop.  We've exposed a number of bugs in our drivers because ZFS is
the first filesystem to actually care to issue this request.

To turn this off, you can try:

# mdb -kw
> ::walk spa | ::print spa_t spa_root_vdev | ::vdev -r
ADDR             STATE     AUX          DESCRIPTION
ffffffff82dc16c0 HEALTHY   -            root
ffffffff82dc0640 HEALTHY   -              /dev/dsk/c0d0s0
> ffffffff82dc0640::print -a vdev_t vdev_nowritecache
ffffffff82dc0af8 vdev_nowritecache = 0 (B_FALSE)
> ffffffff82dc0af8/W1
0xffffffff82dc0af8:             0               =       0x1
>

See if that makes a difference.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock

