On 18 May, 2007 - Dale Sears sent me these 1,5K bytes:

> Tomas Ögren wrote:
> >On 14 May, 2007 - Dale Sears sent me these 0,9K bytes:
> >
> >>I was wondering if this was a good setup for a 3320 single-bus,
> >>single-host attached JBOD.  There are 12 146G disks in this array:
> >>
> >>I used:
> >>
> >>zpool create pool1 \
> >>raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t8d0 c2t9d0 
> >>c2t10d0 \
> >>spare c2t11d0 c2t12d0
[..]
> >That raid set will give you the same random IO performance as a single
> >disk. Sequential IO will be better than a single disk.
> >
> >For instance, splitting it into two raidz2 groups without spares can
> >survive any two disk failures within each group (so 2 to 4 disks can
> >fail without data loss).. Random IO performance will be roughly twice
> >that of a single raidz2 / single disk.
> 
> What would that command look like?   Is this what you're saying?:
> 
>  zpool create pool1 \
>  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0  c2t4d0  c2t5d0  \
>  raidz2 c2t6d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0
> 
> Thanks!

Yep. Verify the performance differences between the two layouts under
your own workload..
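
A quick way to compare, assuming you can run something close to your
real workload against the pool for a while, is to watch the per-vdev
numbers while it runs (pool/device names below are just the ones from
your example):

  # show per-vdev read/write ops and bandwidth, updated every 5 seconds
  zpool iostat -v pool1 5

With the two-vdev layout the random IOs should spread across both
raidz2 groups instead of all landing on one.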

Reliability-wise, the two-group setup is a bit more of a gamble than
one big raidz2 with two hot spares.. If you're lucky, 4 disks can blow
up at the same time without problems (vs 2 in your version).. If you're
unlucky, 2 disks from the same set blow up and then a third one fails
before you have had a chance to replace them with cold spare(s), for
instance the first 2 go and then another one during a weekend or so..
A hot spare could have saved you then..
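
(You would typically catch this with something like

  # quick health check; only prints pools that have problems
  zpool status -x

from cron or your monitoring, but with only cold spares the window
between a failure and its replacement is still however long it takes
you to get to the machine.)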

If you have a cold spare lying around and replace a disk as soon as
one breaks, this shouldn't be a problem.. but it can make a
difference; it's up to you to decide (or attach a single additional
hot spare outside the 3320).
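
Both of those are a single command.. The disk names below are just
placeholders (c2t3d0 standing in for a failed disk, c3t0d0 for an
extra disk outside the 3320), so adjust them to what you actually
have:

  # swap a failed disk for a cold spare you've put in the same slot
  zpool replace pool1 c2t3d0

  # or attach an extra disk as a hot spare for the whole pool
  zpool add pool1 spare c3t0d0

With the hot spare attached, ZFS can start resilvering onto it
automatically when a disk in either raidz2 group faults, instead of
waiting for you to show up with the cold spare.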

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se