On Dec 30, 2009, at 9:35 PM, Ross Walker wrote:
On Dec 30, 2009, at 11:55 PM, "Steffen Plotner"
<swplot...@amherst.edu> wrote:
Hello,
I was doing performance testing, validating zvol performance in
particular, and found zvol write performance to be slow:
~35-44MB/s with 1MB blocksize writes. I then ran the same test
against the underlying zfs file system and got 121MB/s. Is there any
way to fix this? I would really like comparable performance
between the zfs filesystem and zfs zvols.
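The kind of sequential-write test described above can be sketched with dd. The block below writes to a throwaway temp file purely for illustration; on a real system you would point of= at the raw zvol node (e.g. /dev/zvol/rdsk/&lt;pool&gt;/&lt;vol&gt;) and at a file on the zfs filesystem, and compare the reported throughput:

```shell
# 1MB-blocksize sequential write test, as described above.
# TARGET is a temp file here; substitute the zvol device node or a
# file on the zfs dataset to reproduce the actual comparison.
TARGET=$(mktemp)
dd if=/dev/zero of="$TARGET" bs=1M count=16 2>/dev/null
sync
rm -f "$TARGET"
```

dd reports elapsed time and throughput on stderr when it finishes, which is where numbers like 35-44MB/s vs 121MB/s would come from.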
Been there.
ZVOLs were changed a while ago to make each operation synchronous,
so as to provide data consistency in the event of a system crash or
power outage, particularly when a zvol is used as the backing store
for iscsitgt or comstar.
While I think the change was necessary, they should have provided an
alternative for those willing to take the risk: either make the
cooked 'dsk' device node run with caching enabled, or modify
iscsitgt/comstar to issue a sync after every write when write caching
is enabled on the backing device but the user doesn't want a write
cache, or advertise WCE on the mode page to the initiators and let
them issue the syncs themselves.
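The "issue a sync after every write" alternative is essentially write-then-fsync. A minimal sketch with GNU dd (conv=fsync makes dd call fsync() before exiting, so the data must reach stable storage; the temp-file target is illustrative only):

```shell
# Sketch of sync-after-write: conv=fsync forces an fsync() on the
# output before dd exits, so the write is not acknowledged-and-lost
# in a volatile cache. (GNU dd option; target is a temp file here.)
TARGET=$(mktemp)
dd if=/dev/zero of="$TARGET" bs=1M count=8 conv=fsync 2>/dev/null
rm -f "$TARGET"
```

A target that syncs after every write gives the consistency of a disabled write cache while still letting the backing store cache internally between syncs.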
CR 6794730, "need zvol support for DKIOCSETWCE and friends," was
integrated into build 113. Unfortunately, OpenSolaris 2009.06 is build
111, where zvol performance will stink.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6794730
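Whether a given machine has the fix can be judged from the kernel build it is running; on OpenSolaris the build shows up in uname -v (e.g. snv_111b on the 2009.06 media, versus snv_113 or later for the DKIOCSETWCE support):

```shell
# Print the running kernel build; on OpenSolaris this is the snv_NNN
# string, so anything >= snv_113 carries the CR 6794730 fix.
BUILD=$(uname -v)
echo "kernel build: $BUILD"
```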
This still requires that the client implement WCE (or WCD, since some
developers like double negatives :-(). This is optional for Solaris
iSCSI clients and, IIRC, the default has changed over time. See the
above CR for more info.
-- richard
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss