Hi all,

Bob Friesenhahn wrote:
> My understanding is that ordinary HW raid does not check data
> correctness. If the hardware reports failure to successfully read a
> block, then a simple algorithm is used to (hopefully) re-create the
> lost data based on data from other disks. The difference here is that
> ZFS does check the data correctness (at the CPU) for each read while
> HW raid depends on the hardware detecting a problem, and even if the
> data is ok when read from disk, it may be corrupted by the time it
> makes it to the CPU.
AFAIK this is not done during normal operation (only when a disk that is
asked for a sector cannot deliver it).

> ZFS's scrub algorithm forces all of the written data to be read, with
> validation against the stored checksum. If a problem is found, then
> an attempt to correct is made from redundant storage using traditional
> RAID methods.

That is exactly what a volume check on standard HW controllers does as
well: read all the data and compare it against the parity.

This is also exactly why RAID6 should always be chosen over RAID5. If a
parity check fails on RAID5, the controller can only say "oops, I found a
problem but cannot correct it", since it does not know whether the parity
or any of the n data blocks is wrong. With RAID6 you have a second,
independent parity, so the controller can work out whether the parity or
one of the data blocks was wrong and correct it (a small sketch in the
P.S. below shows the idea). At least I think that to be true for Areca
controllers :)

Cheers
Carsten
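P.S. To make the RAID6 point concrete, here is a minimal sketch in plain
Python (single bytes stand in for whole disk blocks; generator 2 and the
polynomial 0x11d are the usual RAID6 choices; helper names are made up and
this is only an illustration, not how any particular controller implements
it). The single XOR parity of RAID5 only tells you that a stripe is
inconsistent; with the second GF(2^8) syndrome you can compute which block
was silently corrupted and repair it:

def gf_mul(a, b):
    # multiplication in GF(2^8), reduced by the RAID6 polynomial 0x11d
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return r

# discrete log / antilog tables for generator 2
EXP = [0] * 255
LOG = [0] * 256
v = 1
for i in range(255):
    EXP[i] = v
    LOG[v] = i
    v = gf_mul(v, 2)

def parities(blocks):
    p = q = 0
    for i, d in enumerate(blocks):
        p ^= d                    # RAID5-style XOR parity
        q ^= gf_mul(EXP[i], d)    # RAID6 second syndrome: sum of g^i * D_i
    return p, q

data = [0x11, 0x22, 0x33, 0x44]   # four "data disks"
p, q = parities(data)

bad = list(data)
bad[2] ^= 0x5a                    # silent corruption: no read error reported

p2, q2 = parities(bad)
sp, sq = p ^ p2, q ^ q2           # syndromes: sp = e, sq = g^x * e

# RAID5 only has sp: it knows something is wrong, but not which disk.
# With both syndromes the index of the bad disk is log(sq) - log(sp) mod 255:
x = (LOG[sq] - LOG[sp]) % 255
print("corrupted disk:", x)       # -> 2
bad[x] ^= sp                      # XOR the error value back in to repair it
assert bad == data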