I've used ZFS since July/August 2006, when Solaris 10 Update 2 came out (the 
first release to integrate ZFS). I've used it extensively on three servers (an 
E25K domain and two E2900s); two of them are in production. I've had over 3TB 
of storage from an EMC SAN under ZFS management for no less than six months. 
Like your configuration, we've deferred data redundancy to the SAN. My 
observations are:

1. ZFS is stable to a very large extent. There are two known issues that I'm 
aware of:
  a. You can end up in an endless 'reboot' cycle when you have a corrupt 
zpool. I came across this when I had data corruption due to an HBA mismatch 
with the EMC SAN. The mismatch injected corruption in transit, the EMC 
faithfully wrote the bad data, and upon reading it back ZFS threw up all over 
the floor for that pool. There is a documented workaround to snap out of the 
'reboot' cycle; I've not checked whether this is fixed in 11/06 (Update 3).
  b. Your server will hang when one of the underlying disks disappears. In our 
case we had a T2000 running 11/06 with a mirrored zpool across two internal 
drives. When we pulled one of the drives abruptly, the server simply hung. I 
believe this is a known bug; is there a workaround?

2. When you have I/O operations that either call fsync() or open files with 
the O_DSYNC flag, coupled with high I/O rates, ZFS will choke. It won't crash, 
but filesystem I/O runs like molasses on a cold morning.

All my feedback is based on Solaris 10 Update 2 (aka 06/06), and I have no 
comments on NFS. I strongly recommend that you use ZFS data redundancy 
(raidz1, raidz2, or mirror) and simply let the Engenio stripe the data for 
performance.
 
 