Adam Leventhal <[EMAIL PROTECTED]> writes:
> I'm not sure I even agree with the notion that this is a real
> problem (and if it is, I don't think it is easily solved). Stripe
> widths are a function of the expected failure rate and fault domains
> of the system, which tend to be static in nature. A coarser solution
> would be to create a new pool where you zfs send/zfs recv the
> filesystems of the old pool.
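For the archives, the send/recv migration Adam describes would look roughly like this. A sketch only -- the pool, disk, and dataset names here are placeholders, and you would adapt the vdev layout to your own hardware:

```shell
# Create the replacement pool with the wider stripe you actually want.
# (Disk names are hypothetical; use your own.)
zpool create newpool raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

# Snapshot the filesystem on the old pool, then replicate it over.
zfs snapshot oldpool/data@migrate
zfs send oldpool/data@migrate | zfs recv newpool/data

# After verifying the copy, the old pool can be retired.
zpool destroy oldpool
```

Note this requires enough spare disks to hold both pools at once, which is exactly the resource a small 4- or 5-bay box doesn't have.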
RAIDZ expansion is a big enough deal that I may end up buying an Infrant NAS box and using their X-RAID instead. ZFS should be more secure, and I *really* like the block checksumming -- but the ability to expand my existing pool by just adding a new disk is REALLY REALLY USEFUL in a small office or home configuration. Having disks down for hours at work while they arrange to make them bigger suggests there'd be benefits in that market, too.

I see phrases like "just add another 7-disk RAIDZ", and I laugh; the boxes I'm looking at mostly have *4* or *5* hot-swap bays. If I could, I'd start with a 2-disk RAIDZ, planning to expand it twice before hitting the system config limit. A *single* 7-disk RAIDZ is probably beyond my means; two of them is absurd to even consider.

Possibly this isn't the market ZFS will make money in, but it's the market *I'm* in.
--
David Dyer-Bennet, <mailto:[EMAIL PROTECTED]>, <http://www.dd-b.net/dd-b/>
RKBA: <http://www.dd-b.net/carry/>
Pics: <http://dd-b.lighthunters.net/> <http://www.dd-b.net/dd-b/SnapshotAlbum/>
Dragaera/Steven Brust: <http://dragaera.info/>
_______________________________________________
zfs-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
