-----Original Message-----
From: Ross Walker [mailto:rswwal...@gmail.com]
Sent: Thu 12/31/2009 12:35 AM
To: Steffen Plotner
Cc: <zfs-discuss@opensolaris.org>
Subject: Re: [zfs-discuss] zvol (slow) vs file (fast) performance snv_130

Been there.
ZVOLs were changed a while ago to make each operation synchronous so as to provide data consistency in the event of a system crash or power outage, particularly when used as backing stores for iscsitgt or COMSTAR. While I think that change is necessary, I think they should have made the cooked 'dsk' device node run with caching enabled to provide an alternative for those willing to take the risk, or modified iscsitgt/COMSTAR to issue a sync after every write when write caching is enabled on the backing device and the user doesn't want write caching, or advertised WCE on the mode page to the initiators and let them issue the syncs themselves.

I also believe performance can be better. When using zvols with iscsitgt and COMSTAR, I was unable to break 30MB/s with a 4k sequential read workload against a zvol with a 128k recordsize (recommended for sequential I/O) -- not very good. Against the same hardware running Linux and iSCSI Enterprise Target, I was able to drive over 50MB/s with the same workload. This isn't writes, just reads.

I was able to do somewhat better going to the physical device with iscsitgt and COMSTAR, but still not as well as Linux, so I kept using Linux for iSCSI and Solaris for NFS, which performed better.

-Ross

Thank you for the information; I guess the grass is not always greener on the other side. I currently run Linux IET+LVM and was looking for improved snapshot capabilities. COMSTAR is extremely well engineered from a SCSI/iSCSI/FC perspective. It is sad to see that ZVOLs have such a performance issue. I have tried changing the WCE setting on the COMSTAR LU and it made barely a difference.

Steffen
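For anyone wanting to reproduce the WCE experiment Steffen describes, COMSTAR exposes write-cache behavior as a per-LU property. A rough sketch of the commands (the LU GUID below is a made-up placeholder; check your stmfadm man page, as these are from memory):

```shell
# List the LU and its current properties, including the write-cache state
# (600144F0... is a hypothetical GUID -- substitute your own LU)
stmfadm list-lu -v 600144F0C73A4BC700004B3C12340001

# "wcd" stands for write-cache disable, so wcd=false turns write-back
# caching ON for the LU. This trades crash consistency for throughput:
# acknowledged writes may be lost on a power failure.
stmfadm modify-lu -p wcd=false 600144F0C73A4BC700004B3C12340001
```

Note that even with wcd=false the underlying zvol may still serialize operations synchronously, which would explain why toggling WCE at the LU level made barely any difference in Steffen's test.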
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss