> I am configuring my first thumper. Our goal is to
> reduce the odds that a single failure will take down
> the file system. Does such a design exist? I cannot
> find it.

It should ship this way.  There should be a zpool created with redundancy.

> Questions:
> 1) Do boot disks (currently controller 5, disk 0 and
> 4) have to be on one controller, or can they be split
> (e.g. controller 0 disk 0 and controller 5 disk 0)?

A BIOS limitation requires that they be on the same controller. But don't
worry about it.
(I blogged about this in more detail a while back, http://blogs.sun.com/relling)

> 2) If, for whatever reason, I lose both mirrors of a
> root disk, where does ZFS information reside? Can I
> get new disks, rebuild the OS, and mount the ZFS
> filesystems, or is that ZFS configuration stored on
> these root disks?

The ZFS configuration is stored in labels on the ZFS vdevs themselves, not
on the root disks.
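So you can rebuild the OS on fresh disks and re-import the pool; ZFS
rediscovers it from the labels on the data disks. A minimal sketch (the
pool name matches your script below):

  zpool import        # scans devices, lists pools available for import
  zpool import adp    # imports the pool and mounts its filesystems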

> Thanks. Details are below.
> Our first design was:
> 
> 5 pools of 9 disks, all raidz + 1 hot spare for all
> pools
> 1 filesystem with all 5 pools, and raidz as the
> underlying protection

Nine disks is beyond the recommended width for a raidz vdev. You might
consider a different config. For some guidance, check out my blog.
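For example, each 9-disk group could instead be split into a 5-disk and a
4-disk raidz vdev in the same pool (device names here are illustrative):

  zpool create tank raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0
  zpool add tank raidz c0t1d0 c1t1d0 c2t1d0 c3t1d0

You give up an extra disk to parity per group, but a double failure within
one vdev becomes less likely and resilvers complete faster.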

> However, a single controller failure will kill the
> filesystem with this configuration.
>
> Next design:
>
> 9 pools of 5-disk raidzs. All pools are protected
> against a single controller failure, but the two UFS
> root disks (mirrored using md raid mirroring) are both
> on controller 5, and if that controller dies, the
> filesystem goes down. This is my next problem with a
> design avoiding single controller failure.

Don't worry about the controllers.  The disks are the weak link in
the chain.  Controllers have something like an order of magnitude
higher reliability.  I've blogged about this, too.
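As a rough back-of-envelope (the MTBF numbers here are purely illustrative,
not measurements): with 46 data disks behind 6 controllers, the disks
dominate the failure rate.

  # illustrative MTBFs: disk ~500,000 hours, controller ~5,000,000 hours
  echo "scale=3; 46 * 8760 / 500000" | bc    # ~0.8 expected disk failures/year
  echo "scale=3; 6 * 8760 / 5000000" | bc    # ~0.01 expected controller failures/year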
 -- richard 

> Are there performance issues with this design?
> 
> #!/bin/bash
> # Create the initial pool from the first disk on five controllers.
> zpool create adp raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0
> # Now grow pool adp five disks at a time, ensuring that no single
> # controller failure will lose the filesystem.
> # The option of 5 9-disk pools is risky, as a single controller
> # failure could result in 2 or more disks in one raidset failing.
> # Sacrifice capacity for risk management.
> zpool add adp raidz c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0
> zpool add adp raidz c0t2d0 c1t2d0 c2t2d0 c3t2d0 c5t1d0
> zpool add adp raidz c0t3d0 c1t3d0 c2t3d0 c4t2d0 c5t2d0
> zpool add adp raidz c0t4d0 c1t4d0 c3t3d0 c4t3d0 c5t3d0
> zpool add adp raidz c0t5d0 c1t5d0 c2t4d0 c3t4d0 c4t4d0
> zpool add adp raidz c0t6d0 c2t5d0 c3t5d0 c4t5d0 c5t5d0
> zpool add adp raidz c0t7d0 c2t6d0 c3t6d0 c4t6d0 c5t6d0
> zpool add adp raidz c1t7d0 c2t7d0 c3t7d0 c4t7d0 c5t7d0
> # Now add the one global hot spare; a spare must be a disk that is
> # not already in any vdev, and c1t6d0 is the only data disk left.
> zpool add adp spare c1t6d0
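You can sanity-check the result with the standard status commands:

  zpool status adp    # should show nine raidz vdevs plus the spare
  zpool list adp      # usable capacity after raidz overhead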