On Sep 5, 2006, at 06:45, Robert Milkowski wrote:
I remember working up a chart on this list about 2 months ago. Here are 10 options I can think of to summarize combinations of ZFS with hardware redundancy:

 #   ZFS  ARRAY HW     CAPACITY  COMMENTS
 --  ---  --------     --------  --------
 1   R0   R1           N/2       hw mirror - no zfs healing (XXX)
 2   R0   R5           N-1       hw R5 - no zfs healing (XXX)
 3   R1   2 x R0       N/2       flexible, redundant, good perf
 4   R1   2 x R5       (N/2)-1   flexible, more redundant, decent perf
 5   R1   1 x R5       (N-1)/2   parity and mirror on same drives (XXX)
 6   RZ   R0           N-1       standard RAIDZ - no array RAID (XXX)
 7   RZ   R1 (tray)    (N/2)-1   RAIDZ+1
 8   RZ   R1 (drives)  (N/2)-1   RAID1+Z (highest redundancy)
 9   RZ   2 x R5       N-3       triple parity calculations (XXX)
 10  RZ   1 x R5       N-2       double parity calculations (XXX)

If you've invested in a RAID controller on an array, you might as well take advantage of it; otherwise you could probably get an old D1000 chassis somewhere and just run RAIDZ on JBOD.

If you're more concerned about redundancy than space, with the SUN/STK 3000 series dual-controller arrays I would either create at least 2 x RAID5 LUNs balanced across the controllers and zfs mirror them, or create at least 4 x RAID1 LUNs balanced across the controllers and use RAIDZ. RAID0 isn't going to make much sense, since you've got a 128KB txg commit on ZFS, which isn't going to be enough to fill a full stripe in most cases.

.je
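As a quick sanity check on the CAPACITY column above, the arithmetic can be sketched for a concrete tray size. Everything here is an assumption for illustration: N=12 disks is an arbitrary choice, and the `zpool create` device names in the comments are made up, not taken from the post.

```shell
#!/bin/sh
# Sanity check of a few rows of the capacity chart, assuming a
# hypothetical tray of N=12 equal-size disks.  Values are usable
# space in units of whole disks.
N=12

opt3=$(( N / 2 ))       # R1 over 2 x R0: mirroring halves the disks
opt4=$(( N / 2 - 1 ))   # R1 over 2 x R5: mirror halves, minus one disk of R5 parity
opt8=$(( N / 2 - 1 ))   # RZ over R1 pairs: 6 mirrored LUNs, minus one LUN of RAIDZ parity
opt9=$(( N - 3 ))       # RZ over 2 x R5: two disks of R5 parity plus one LUN of RAIDZ parity

echo "opt3=$opt3 opt4=$opt4 opt8=$opt8 opt9=$opt9"

# The corresponding pool layouts might look like (hypothetical device names):
#   zpool create tank mirror c2t0d0 c3t0d0                  # option 4: mirror two R5 LUNs
#   zpool create tank raidz c2t0d0 c2t1d0 c3t0d0 c3t1d0     # option 8: RAIDZ over four R1 LUNs
```

With N=12 this prints 6, 5, 5, and 9 disks of usable space, matching the N/2, (N/2)-1, (N/2)-1, and N-3 entries in the chart.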
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss