On Wed, Dec 30, 2009 at 9:35 PM, Ross Walker <rswwal...@gmail.com> wrote:
> On Dec 30, 2009, at 11:55 PM, "Steffen Plotner" <swplot...@amherst.edu>
> wrote:
>
> Hello,
>
> I was doing performance testing, validating zvol performance in
> particular, and found zvol write performance to be slow, ~35-44MB/s at
> 1MB blocksize writes. I then tested the underlying ZFS file system with
> the same test and got 121MB/s. Is there any way to fix this? I would
> really like to have comparable performance between the ZFS filesystem
> and ZFS zvols.
>
> Been there.
> ZVOLs were changed a while ago to make each operation synchronous so as
> to provide data consistency in the event of a system crash or power
> outage, particularly when used as backing stores for iscsitgt or
> COMSTAR.
> While I think the change was necessary, they should have either made
> the cooked 'dsk' device node run with caching enabled, as an
> alternative for those willing to take the risk; modified
> iscsitgt/COMSTAR to issue a sync after every write when write caching
> is enabled on the backing device but the user doesn't want a write
> cache; or advertised WCE on the mode page to the initiators and let
> them issue the syncs themselves.
> I also believe performance can be better. When using zvols with
> iscsitgt and COMSTAR I was unable to break 30MB/s with a 4k sequential
> read workload to a zvol with a 128k recordsize (recommended for
> sequential IO), which is not very good. Against the same hardware
> running Linux and iSCSI Enterprise Target I was able to drive over
> 50MB/s with the same workload. This isn't writes, just reads. I was
> able to do somewhat better going to the physical device with iscsitgt
> and COMSTAR, but still not as well as Linux, so I kept using Linux for
> iSCSI and Solaris for NFS, which performed better.
> -Ross
>
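For what it's worth, the gap Steffen measured is about what you'd expect
once every write has to reach stable storage before it is acknowledged.
A rough, generic sketch of that effect (plain Python against an ordinary
scratch file, nothing zvol- or COMSTAR-specific; the target path and
sizes below are placeholders I made up):

# Rough sketch only: compares buffered 1MB writes with the same writes
# followed by an fsync() after every block, to illustrate the kind of
# penalty a sync-after-every-write policy imposes. TARGET is a
# placeholder; point it at a scratch file on the pool you are testing.
import os
import sys
import time

TARGET = sys.argv[1] if len(sys.argv) > 1 else "/tank/scratchfile"  # placeholder path
BLOCK = b"\0" * (1 << 20)    # 1MB per write, matching the test above
COUNT = 256                  # 256MB total, enough to see the difference

def run(sync_every_write):
    fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    start = time.time()
    try:
        for _ in range(COUNT):
            os.write(fd, BLOCK)
            if sync_every_write:
                os.fsync(fd)     # force this block to stable storage now
        os.fsync(fd)             # flush whatever is still buffered at the end
    finally:
        os.close(fd)
    elapsed = time.time() - start
    return (COUNT * len(BLOCK)) / float(elapsed) / (1 << 20)   # MB/s

print("buffered writes : %.1f MB/s" % run(False))
print("fsync per write : %.1f MB/s" % run(True))

On spinning disks the fsync-per-write run typically comes in at a small
fraction of the buffered number, which is the same shape as the
35-44MB/s vs 121MB/s results above.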

I also noticed that when using zvols instead of files, 20MB/sec of read
I/O generated as many as 900 IOPS to the disks themselves.
When using file-based LUNs with COMSTAR, the same 20MB/sec of read I/O
issues only a couple hundred IOPS.
To get decent performance, it seemed I was required to either throw
away my X4540s and switch to 7000s with expensive SSDs, or switch to
file-based COMSTAR LUNs and disable the ZIL  :(
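Running the numbers: 20MB/sec spread across ~900 IOPS works out to
roughly 22-23KB per disk I/O on average, while 20MB/sec at a couple
hundred IOPS is on the order of 100KB per I/O, so the zvol path appears
to be splitting the same read stream into far smaller physical I/Os.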

Sad when a $50k piece of equipment requires such a sacrifice.

-- 
Brent Jones
br...@servuhome.net
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
