We are using ZFS-backed fibre channel targets for ESXi 4.1 (and previously 4.0) and have had good performance with no issues. The fibre LUNs were formatted with VMFS by the ESXi boxes.

SQLIO benchmarks from a guest system running on a fibre-attached ESXi host:

File Size MB  Threads  R/W  Duration  Sector Size KB  Pattern  IOs Outstanding  IO/Sec  MB/Sec  Lat. Min.  Lat. Avg.  Lat. Max.
24576         8        R    30        8               random   64               37645   294     0          1          141
24576         8        W    30        8               random   64               17304   135     0          3          303
24576         8        R    30        64              random   64               6250    391     1          9          176
24576         8        W    30        64              random   64               5742    359     1          10         203
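As a quick sanity check on the table, MB/sec should just be IO/sec times the block size in KB, divided by 1024. A minimal script (the row data is copied from the results above) confirms the reported throughput matches to within rounding:

```python
# Sanity-check the SQLIO results: MB/sec = IO/sec * block_size_KB / 1024.
# Rows copied from the benchmark table: (block size KB, IO/sec, reported MB/sec)
rows = [
    (8, 37645, 294),
    (8, 17304, 135),
    (64, 6250, 391),
    (64, 5742, 359),
]

for block_kb, iops, reported_mb in rows:
    computed_mb = iops * block_kb / 1024
    print(f"{iops} IO/s x {block_kb} KB = {computed_mb:.1f} MB/s (reported {reported_mb})")
    # SQLIO reports whole MB/sec, so allow for rounding.
    assert abs(computed_mb - reported_mb) < 1
```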

The array is a raidz2 of 14 x 256 GB Patriot Torqx drives, plus a cache (L2ARC) of 4 x 32 GB Intel G1s.
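For anyone wanting to reproduce that layout, a pool of the same shape could be created along these lines (the ctd device names are placeholders, not our actual ones; substitute the names from your own format/cfgadm output):

```shell
# 14-drive raidz2 vdev of the Patriot Torqx SSDs (hypothetical device names):
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
    c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0

# Add the four Intel G1s as L2ARC cache devices:
zpool add tank cache c2t0d0 c2t1d0 c2t2d0 c2t3d0
```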

When I get around to doing the next series of boxes I'll probably use Crucial C300s in place of the Indilinx-based drives.

iSCSI was disappointing and seemed to be CPU bound, possibly because of the huge number of interrupts coming from the less-than-stellar NIC on the test box.

NFS we have only used as an ISO store, but it has worked OK and without issues.
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
