comfortable with having 2 parity drives for 12 disks,

the thread-starting config of 4 disks per controller(?):
zpool create tank raidz2 c1t1d0 c1t2d0 c1t3d0 c1t4d0    c2t1d0 c2t2d0
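
for the curious, the per-disk chunk arithmetic on that starting vdev
(a rough sketch, assuming the default 128k recordsize):

# a 6-wide raidz2 has 4 data + 2 parity columns, so a full 128k
# cluster writes 128k/4 = 32k to each data disk
echo $((128 / 4))    # -> 32 (k per data disk)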

then later
zpool add tank raidz2 c2t3d0 c2t4d0     c3t1d0 c3t2d0 c3t3d0 c3t4d0

as described, that doubles one's IOPS and usable space in tank, at the
cost of another two parity disks, splitting each cluster into four data
(plus two parity) writes, one per disk.  or, with an 8-disk controller,
perhaps start with

zpool create tank raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

then do a
zpool add tank raidz c1t6d0 c1t7d0 c1t8d0 c2t1d0 c2t2d0
zpool add tank raidz c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
zpool add tank spare c2t8d0

gives one the same largish cluster-size-div-4 (32k) per raidz data
disk, 3x the IOPS, less parity math per write, and a hot spare, for the
same usable space (12 data disks) at the loss of 4 drives (three parity
plus the spare).
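
back-of-the-envelope accounting for the layouts in play (a sketch,
counting disks only):

# 1x 14-disk raidz2:         12 data + 2 parity,           128k/12 ~ 11k per disk
# 2x  6-disk raidz2:          8 data + 4 parity,           128k/4  = 32k per disk
# 3x  5-disk raidz1 + spare: 12 data + 3 parity + 1 spare, 128k/4  = 32k per disk
echo "usable: $((3 * 4)) data disks, lost to parity/spare: $((3 + 1))"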

splitting the max 128k cluster into 12 data chunks (+2 parity) makes
good MTTR sense but not much performance sense.  if someone wants to do
the MTTR math between all three configs, I'd love to read it.
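
fwiw, a rough cut at that math using the standard MTTDL approximations
(the kind worked through at the links below); the mtbf/mttr numbers
here are made up for illustration:

mtbf=500000   # assumed per-drive MTBF in hours (made-up number)
mttr=24       # assumed rebuild window in hours (made-up number)
# raidz1 group of n disks: MTTDL ~ mtbf^2 / (n*(n-1)*mttr)
# raidz2 group of n disks: MTTDL ~ mtbf^3 / (n*(n-1)*(n-2)*mttr^2)
# pool of k groups: divide the group MTTDL by k
echo "$mtbf^3 / (14*13*12 * $mttr^2)"  | bc   # one 14-disk raidz2
echo "$mtbf^3 / (6*5*4 * $mttr^2) / 2" | bc   # two 6-disk raidz2
echo "$mtbf^2 / (5*4 * $mttr) / 3"     | bc   # three 5-disk raidz1

(a hot spare mostly buys a smaller effective mttr in the raidz1 case,
which these rough numbers don't capture.)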

                        Rob

http://storageadvisors.adaptec.com/2005/11/02/actual-reliability-calculations-for-raid/
http://www.barringer1.com/ar.htm