I run 3510FC and 2540 units in pairs.  In each array I build two 5-disk
RAID5 LUNs, with two disks as global spares.  Each array has dual
controllers and I'm running multipath.

From the server I then see two LUNs from each of the two arrays, and I build
a ZFS RAID-10 set from those four LUNs, making sure each mirror pair combines
a LUN from one array with a LUN from the other, roughly like the sketch below.
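
A minimal sketch of what I mean (pool and device names are made up for
illustration; substitute the real LUN paths from your arrays):

# c4t0d0/c4t1d0 are the two LUNs from array #1,
# c5t0d0/c5t1d0 the two LUNs from array #2.
# Each mirror vdev spans both arrays, so the stripe of
# mirrors survives the loss of either whole chassis.
zpool create mailpool \
    mirror c4t0d0 c5t0d0 \
    mirror c4t1d0 c5t1d0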

Thus I can survive the complete failure of one array, plus multiple other
failures, and keep on trucking.

Performance is quite good since I put this in /etc/system:
set zfs:zfs_nocacheflush = 1
And since the recent ZFS patches for Solaris 10u4 fixed the fsync()
performance issues, my arrays and servers are hardly breaking a sweat.
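
(The usual caveat applies: disabling cache flushes is only safe because the
arrays' write caches are battery-backed.  If you want to confirm the tunable
actually took effect after a reboot, reading the live kernel variable with
mdb should do it:)

# Should print 1 once the setting is active (run as root)
echo zfs_nocacheflush/D | mdb -k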

I very much like that the arrays handle lower-level problems such as sparing
for me, while ZFS ensures correctness on top of that.  A periodic scrub is
what exercises those end-to-end checksums across both arrays, as below.
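
# Pool name from the earlier sketch; scrub re-reads and verifies
# every block's checksum, repairing from the other side of the mirror
zpool scrub mailpool
zpool status -v mailpool   # shows scrub progress and any repaired errors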

In case you're wondering whether all this belt-and-suspenders paranoia is
worthwhile: this is for Cyrus mail stores, so availability and correctness
are paramount.

If and when ZFS acquires a way to ensure that spare #1 in chassis #1 only
gets used to replace failed disks in chassis #1, I'll reconsider my position.
Currently there is no mechanism to ensure this, so if I were running ZFS
against JBOD I could easily see a spare being pulled from the other chassis,
leaving me with an undesirable cross-chassis dependency.
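
For anyone unfamiliar with the limitation: ZFS spares go into a single
pool-wide list, with no way to tie one to a chassis (device names again
hypothetical):

# c4t2d0 sits in chassis #1, c5t2d0 in chassis #2, but ZFS may
# attach either spare to a failed disk in either chassis
zpool add mailpool spare c4t2d0 c5t2d0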
 
 