On Dec 30, 2009, at 11:55 PM, "Steffen Plotner"
<swplot...@amherst.edu> wrote:
Hello,
I was doing performance testing, validating zvol performance in
particular, and found zvol write performance to be slow: ~35-44MB/s
at 1MB blocksize writes. I then ran the same test against the
underlying zfs file system and got 121MB/s. Is there any way to fix
this? I would really like to see comparable performance between the
zfs filesystem and zfs zvols.
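(For reference, a comparison along these lines can be reproduced
with dd; the pool and dataset names below are hypothetical:

    # Hypothetical pool/dataset names; create a zvol and a plain
    # filesystem on the same pool:
    zfs create -V 10g tank/testvol
    zfs create tank/testfs

    # 4GB of 1MB sequential writes to the zvol's raw device node:
    dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=1024k count=4096

    # The same 4GB of 1MB writes to a file on the filesystem:
    dd if=/dev/zero of=/tank/testfs/bigfile bs=1024k count=4096
)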
Been there.
ZVOLs were changed a while ago to make each operation synchronous in
order to provide data consistency in the event of a system crash or
power outage, particularly when used as backing stores for iscsitgt
or comstar.
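One way to see the synchronous behavior for yourself is to watch ZIL
commits while the write test runs. A DTrace sketch along these lines
(assuming the fbt provider is available on your build) counts them
per second:

    # Count zil_commit() calls per second while writing to the zvol;
    # a steady stream of commits means the writes are synchronous:
    dtrace -n 'fbt::zil_commit:entry { @c = count(); } tick-1s { printa(@c); clear(@c); }'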
While I think the change was necessary, they should have provided an
alternative for those willing to take the risk: make the cooked
'dsk' device node run with caching enabled, modify iscsitgt/comstar
to issue a sync after every write when write caching is enabled on
the backing device but the user doesn't want it, or advertise WCE on
the caching mode page to the initiators and let them issue the syncs
themselves.
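For what it's worth, COMSTAR does expose a per-LU write-cache knob,
so something along these lines should get the "willing to take the
risk" behavior on that stack (the zvol path and GUID here are
hypothetical):

    # Create a logical unit backed by the zvol, then enable its write
    # cache; wcd stands for "write cache disabled", so wcd=false
    # enables caching (GUID below is hypothetical):
    sbdadm create-lu /dev/zvol/rdsk/tank/testvol
    stmfadm modify-lu -p wcd=false 600144F0C73ABF0000004B1F000A0001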
I also believe performance can be better. When using zvols with
iscsitgt and comstar I was unable to break 30MB/s with a 4k
sequential read workload against a zvol with a 128k block size
(recommended for sequential IO), which is not very good. Against the
same hardware running Linux and iSCSI Enterprise Target I was able
to drive over 50MB/s with the same workload, and that's for reads,
not writes. I was able to do somewhat better going to the physical
device with iscsitgt and comstar, but still not as well as Linux, so
I kept using Linux for iSCSI and Solaris for NFS, where it performed
better.
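For anyone who wants to reproduce the read comparison, the Solaris
side looked roughly like this; the names are hypothetical and the
initiator-side device path will vary:

    # A zvol's block size is fixed at creation time (128k is the max):
    zfs create -V 10g -o volblocksize=128k tank/iscsivol

    # Export it with the old iscsitgt:
    iscsitadm create target -b /dev/zvol/rdsk/tank/iscsivol testtarget

    # On the initiator, drive 4k sequential reads at the discovered
    # LUN (substitute your real cXtYdZ device):
    dd if=/dev/rdsk/cXtYdZs2 of=/dev/null bs=4k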
-Ross