Don Enrique wrote:
> Hi,
>
> I am looking for some best-practice advice on a project that I am working on.
>
> We are looking at migrating ~40 TB of backup data to ZFS, with an annual
> data growth of 20-25%.
>
> Now, my initial plan was to create one large pool comprised of X RAIDZ-2
> vdevs (7 + 2), with one hot spare per 10 drives, and just continue to
> expand that pool as needed.
>
> Between calculating the MTTDL and the performance models, I was hit by a
> rather scary thought.
>
> A pool comprised of X vdevs is no more resilient to data loss than its
> weakest vdev, since the loss of any one vdev would render the entire pool
> unusable.
>
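As a back-of-the-envelope check on the layout described above (~40 TB on 7+2 RAIDZ-2 vdevs of 750 GB disks, one hot spare per 10 drives, up to 25% annual growth), the sizing arithmetic can be sketched as follows. The figures are rough illustrations only; real usable capacity will be lower once ZFS metadata and free-space headroom are counted.

```python
import math

DISK_TB = 0.75            # 750 GB disks
DATA_DISKS_PER_VDEV = 7   # 7 data + 2 parity per RAIDZ-2 vdev
DISKS_PER_VDEV = 9
START_TB = 40.0
GROWTH = 0.25             # worst-case 25% annual growth

# Raw data capacity of one 7+2 RAIDZ-2 vdev, before metadata overhead.
usable_per_vdev_tb = DISK_TB * DATA_DISKS_PER_VDEV   # ~5.25 TB

def vdevs_needed(data_tb):
    """Smallest number of 7+2 vdevs whose data disks hold data_tb."""
    return math.ceil(data_tb / usable_per_vdev_tb)

for year in range(4):
    data = START_TB * (1 + GROWTH) ** year
    v = vdevs_needed(data)
    disks = v * DISKS_PER_VDEV
    spares = math.ceil(disks / 10)   # one hot spare per 10 drives
    print(f"year {year}: {data:6.1f} TB -> {v} vdevs, "
          f"{disks} disks + {spares} spares")
```

At 25% growth the pool roughly doubles in width within three years, which is worth bearing in mind when budgeting controller ports and enclosure slots.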
Yes, but a raidz2 vdev using enterprise-class disks is very reliable.

> This means that I could potentially lose 40 TB+ of data if three disks
> within the same RAIDZ-2 vdev should die before the resilvering of at least
> one disk is complete. Since most disks will be nearly full, I do expect
> rather long resilvering times.
>
> We are using 750 GB Seagate (enterprise-grade) SATA disks for this project,
> with as much hardware redundancy as we can get (multiple controllers, dual
> cabling, I/O multipathing, redundant PSUs, etc.).
>

nit: SATA disks are single-ported, so you would need a SAS implementation to
get multipathing to the disks. This will not significantly impact the overall
availability of the data, however. I did an availability analysis of Thumper
to show this:
http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_vs

> I could use multiple pools, but that would make data management harder,
> which in itself is a lengthy process in our shop.
>
> The MTTDL figures seem OK, so how much should I worry? Does anyone have
> experience with this kind of setup?
>

I think your design is reasonable. We'd need to know the exact hardware
details to be able to make more specific recommendations.
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
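The "three disks dying before a resilver completes" scenario discussed in this thread can be quantified with the standard Markov-style MTTDL approximation for a double-parity group, MTTDL(vdev) ≈ MTTF³ / (N·(N−1)·(N−2)·MTTR²), where a pool of X independent vdevs divides that figure by X. The MTTF and MTTR values below are illustrative assumptions, not vendor data for the Seagate disks in question:

```python
HOURS_PER_YEAR = 24 * 365

def mttdl_raidz2_hours(n_disks, mttf_h, mttr_h):
    """Approximate MTTDL (hours) of one N-disk double-parity (RAIDZ-2)
    vdev: data is lost only when a third disk fails while two earlier
    failures are still resilvering."""
    return mttf_h ** 3 / (n_disks * (n_disks - 1) * (n_disks - 2) * mttr_h ** 2)

mttf = 1_000_000    # assumed 1M-hour disk MTTF (enterprise class)
mttr = 48           # assumed 48 h resilver for a mostly full 750 GB disk
n = 9               # 7 + 2 disks per vdev
x = 8               # assumed number of vdevs in the pool (~40 TB)

vdev_years = mttdl_raidz2_hours(n, mttf, mttr) / HOURS_PER_YEAR
pool_years = vdev_years / x   # any one vdev dying loses the whole pool
print(f"MTTDL per vdev:           {vdev_years:.2e} years")
print(f"MTTDL for pool of {x} vdevs: {pool_years:.2e} years")
```

Note how strongly the result depends on MTTR squared: doubling the resilver time cuts MTTDL by 4x, which is why keeping vdevs narrow and resilvers fast matters more than adding more vdevs hurts.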