> Hi Dave, Hi Cindy.
> Consider the easiest configuration first and it will probably save
> you time and money in the long run, like this:
>
> 73g x 73g mirror (one large s0 on each disk) - rpool
> 73g x 73g mirror (use whole disks) - data pool
>
> Then, get yourself two replacement disks, a good backup strategy,
> and we all sleep better.

Oh, you're throwing in free replacement disks too?! This is great! :P

> A complex configuration of slices and a combination of raidZ and
> mirrored pools across the same disks will be difficult to administer,
> performance will be unknown, not to mention how much time it might take
> to replace a disk.

Yeah, that's a very good point. But if you guys make ZFS filesystems
span vdevs, then this could work even better!

You're right about the complexity, but OTOH the great thing about ZFS
is not having to worry about how to plan mount point allocations, and
with this scenario (I also have a few servers with 4x36) the planning
issue raises its ugly head again. That's why I kind of like Edward's
suggestion; even though it is complicated (for me), I still think it
may be best given my goals. I like breathing room and not having to
worry about a filesystem filling; it's great not having to know exactly
ahead of time how much I have to allocate for a filesystem, and instead
let the whole drive be used as needed.

> Use the simplicity of ZFS as it was intended is my advice and you
> will save time and money in the long run.

Thanks. I guess the answer is really to use the small drives for root
pools and then get the biggest drives I can afford for the other bays.

Thanks to everybody.
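
P.S. For the archives, here's roughly how I read Cindy's simple layout
in command form. The cNtNdN names below are just placeholders I made
up, not my actual devices, so adjust before copying:

  # mirror the root pool onto slice 0 of the second 73g disk
  # (assuming the installer already put rpool on c0t0d0s0)
  zpool attach rpool c0t0d0s0 c0t1d0s0

  # data pool as a mirror of the other two disks, used whole
  zpool create datapool mirror c0t2d0 c0t3d0

  # filesystems just share the pool space, nothing to size up front
  zfs create datapool/home
  zfs create datapool/backups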