On Oct 15, 2010, at 9:18 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
> On 14.10.10 17:48, Edward Ned Harvey wrote:
>>
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Toby Thain
>>>
>>>> I don't want to heat up the discussion about ZFS-managed disks vs.
>>>> HW raids, but if RAID5/6 were that bad, no one would use it
>>>> anymore.
>>> It is. And there's no reason not to point it out. The world has
>> Well, neither of the above statements is really fair.
>>
>> The truth is: raid5/6 are generally not that bad. Data integrity failures
>> are not terribly common (maybe one bit per year out of 20 large disks, or
>> something like that).
>>
>> And in order to reach the conclusion "nobody would use it," the people
>> using it would have to first *notice* the failure. Which they don't.
>> That's kind of the point.
>>
>> Since I started using ZFS in production, about a year ago, on three
>> servers totaling approx 1.5 TB used, I have had precisely one checksum
>> error, which ZFS corrected. I have every reason to believe that if that
>> had been on a raid5/6, the error would have gone undetected and nobody
>> would have noticed.
>>
> Point taken!
>
> So, what would you suggest if I wanted to create really big pools, say in
> the 100 TB range? That would be quite a number of single drives,
> especially if you want to go with a mirrored (raid-1) zpool.

A pool consisting of 4-disk raidz vdevs (25% parity overhead) or 6-disk
raidz2 vdevs (33% parity overhead) should deliver the storage and
performance for a pool that size, versus a pool of mirrors (50% overhead).
You need a lot of spindles to reach 100 TB.

-Ross
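
Edward's point about *noticing* failures comes down to scrubbing: a scrub
re-reads every allocated block in the pool and verifies its checksum, and
the per-device CKSUM counters show what was caught. A minimal sketch,
assuming a pool named "tank" (a placeholder name, not from the thread):

    zpool scrub tank        # re-read and verify every allocated block
    zpool status -v tank    # CKSUM column counts checksum errors per device

Conventional raid5/6, by contrast, typically checks parity only during
reconstruction, so the same bad block would be handed back to the
application without complaint.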
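
To put rough numbers on "a lot of spindles", assuming 2 TB drives (my
assumption, not from the thread): a 6-disk raidz2 vdev nets about 8 TB, so
roughly 13 vdevs / 78 drives for 100 TB; 4-disk raidz vdevs net 6 TB each,
so roughly 17 vdevs / 68 drives; a pool of mirrors would take about 100
drives. A minimal sketch of such a build-out, using hypothetical device
names (c0t0d0, etc.):

    zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
    # Grow the pool one 6-disk raidz2 vdev at a time:
    zpool add tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0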