On 14-Oct-10, at 11:48 AM, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Toby Thain

I don't want to heat up the discussion about ZFS-managed disks vs.
HW RAID, but if RAID5/6 were really that bad, no one would use it
anymore.

It is. And there's no reason not to point it out. The world has

Well, neither one of the above statements is really fair.

The truth is: raid5/6 are generally not that bad. Data integrity failures are not terribly common (maybe one bit per year out of 20 large disks, or
something like that).

Such statistics assume that no part of the stack (drive, cable, network, controller, memory, etc) has any fault and is operating normally. This is, indeed, the base presumption of RAID (which also assumes a perfect error reporting chain).
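
For what it's worth, a figure of that order does fall out of a back-of-envelope calculation, but only if every link in that chain behaves. A rough sketch (the drive size, read volume, and per-bit error rate below are assumptions chosen purely for illustration, not measured values):

# Back-of-envelope estimate of silent bit errors per year for a 20-disk pool.
# Every figure here is an assumption for illustration, not a measurement.
DISKS = 20
DISK_BYTES = 2 * 10**12          # assume 2 TB drives
FULL_READS_PER_YEAR = 5          # assume each drive's contents are read ~5 times a year
UNDETECTED_ERROR_RATE = 1e-15    # assumed probability of a silently wrong bit, per bit read

bits_read_per_year = DISKS * DISK_BYTES * 8 * FULL_READS_PER_YEAR
expected_errors = bits_read_per_year * UNDETECTED_ERROR_RATE

print("bits read per year:          %.2e" % bits_read_per_year)   # ~1.6e+15
print("expected silent errors/year: %.1f" % expected_errors)      # ~1.6

With assumptions like these you land at roughly one or two wrong bits a year across the pool, and that estimate silently presumes the rest of the stack never misbehaves.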


And in order to reach the conclusion "nobody would use it," the people using it would have to first *notice* the failure. Which they don't. That's kind
of the point.

Indeed it is. And then we could talk about self healing (also missing from RAID).
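
To make both points concrete, here is a toy Python sketch (purely illustrative, not how any real RAID layer or ZFS is implemented): a naive mirrored read happily returns a corrupted block because nothing ever verifies it, while a read that checks an independently stored checksum detects the bad copy, serves the read from the good copy, and repairs the damage.

import hashlib

def checksum(block):
    # Per-block checksum kept separately from the data (with the block
    # pointer, in the ZFS model); here simply a SHA-256 digest.
    return hashlib.sha256(block).digest()

good = b"important application data"
stored_checksum = checksum(good)

copy_a = bytearray(good)     # two redundant copies, e.g. a mirror
copy_b = bytearray(good)
copy_a[5] ^= 0x20            # one copy is silently corrupted on disk

def naive_mirror_read():
    # Conventional behaviour: return whichever copy the read happens to hit.
    # Nothing verifies the data, so the corruption is passed straight up.
    return bytes(copy_a)

def checksummed_read():
    # Self-healing behaviour: verify against the stored checksum, fall back
    # to the redundant copy on a mismatch, and repair the damaged copy.
    global copy_a
    if checksum(bytes(copy_a)) == stored_checksum:
        return bytes(copy_a)
    if checksum(bytes(copy_b)) == stored_checksum:
        copy_a = bytearray(copy_b)          # heal the bad copy
        return bytes(copy_b)
    raise IOError("both copies fail the checksum")

print(naive_mirror_read() == good)    # False: bad data returned, no error raised
print(checksummed_read() == good)     # True:  detected, served from the good copy
print(bytes(copy_a) == good)          # True:  the damaged copy has been repaired

The point isn't the dozen lines of Python; it's that detection and repair need a checksum stored independently of the data plus knowledge of the redundancy, which the filesystem has and a conventional RAID layer doesn't.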

--Toby


Since I started using ZFS in production, about a year ago, on three servers totaling approx 1.5TB used, I have had precisely one checksum error, which ZFS corrected. I have every reason to believe that, had that been on raid5/6,
the error would have gone undetected and nobody would have noticed.


