My take is that since RAID-Z creates a stripe for every block (http://blogs.sun.com/bonwick/entry/raid_z), it should be able to rebuild bad sectors on a per-block basis. I'd assume the likelihood of having bad sectors in the same places on all of the disks is pretty low, since we're only reading the sectors that belong to the block being rebuilt. It also seems that fragmentation works in your favor here, since the stripes would be spread across more of the platter(s), hopefully protecting you from a wonky manufacturing defect that causes UREs in the same place on each disk.
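To put rough numbers on the URE worry, here's a back-of-the-envelope sketch (mine, not from the article or the ZFS docs). It assumes the commonly quoted consumer-drive spec of one unrecoverable read error per 1e14 bits and treats errors as independent, which is the same simplification the ZDNet piece leans on:

  # Back-of-the-envelope URE math (assumed spec: 1 URE per 1e14 bits read,
  # errors independent -- both are simplifications, not numbers from this thread).
  URE_PER_BIT = 1e-14

  def p_at_least_one_ure(bytes_read):
      """Chance of hitting at least one URE while reading bytes_read bytes."""
      bits = bytes_read * 8
      return 1 - (1 - URE_PER_BIT) ** bits

  # Rebuilding a 4-disk raidz1 of 750 GB drives means reading ~3 surviving disks:
  print(p_at_least_one_ure(3 * 750e9))   # roughly 0.16
  # versus a ~12 TB read, the kind of rebuild the article worries about:
  print(p_at_least_one_ure(12e12))       # roughly 0.62

The point above is that a RAID-Z rebuild only has to read the sectors belonging to live blocks, and a single URE should cost you one block rather than the whole rebuild, so the effective bytes read (and the blast radius of any one error) is smaller than the whole-disk figure these numbers assume.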
-Aaron

On Thu, Jul 3, 2008 at 12:24 PM, Jim <[EMAIL PROTECTED]> wrote:
> Anyone here read the article "Why RAID 5 stops working in 2009" at
> http://blogs.zdnet.com/storage/?p=162
>
> Does RAIDZ have the same chance of an unrecoverable read error as RAID 5 on
> Linux if the array has to be rebuilt because of a faulty disk? I imagine so,
> because of the physical constraints that plague our HDs. Granted, the chance
> of failure in my case shouldn't be nearly as high, as I will most likely
> recruit three or four 750 GB drives - not on the order of 10 TB.
>
> With my OpenSolaris NAS, I will be scrubbing every week (consumer-grade
> drives; every month for enterprise-grade) as recommended in the ZFS best
> practices guide. If I run "zpool status" and see that scrubs are fixing an
> increasing number of errors, would that mean the disk is in fact headed
> towards failure, or perhaps that the natural growth of disk usage is to
> blame?
>
> This message posted from opensolaris.org

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss