After a scrub of a pool with 3 raidz2 vdevs (each with 5 disks), I see the status output below. Notice that the first raidz2 vdev shows 2 checksum errors, but only one disk inside that vdev shows a checksum error, and only a single one. How is this possible? I thought you would need 3 errors in the same 'stripe' within a raidz2 vdev before an error becomes unrecoverable.
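For what it's worth, my mental model of raidz2 comes from throwaway experiments roughly like the one below (the file names, sizes, and dd offsets are made up for illustration, and the offsets are only a guess at landing on allocated blocks):

  # Build a disposable raidz2 pool out of file-backed vdevs.
  $ mkfile 256m /var/tmp/d0 /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4
  $ zpool create testpool raidz2 /var/tmp/d0 /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4

  # Put some data in the pool so a scrub has something to verify.
  $ dd if=/dev/urandom of=/testpool/junk bs=1024k count=512

  # Damage one backing "disk", staying well clear of the vdev labels at the
  # front of the file.
  $ dd if=/dev/urandom of=/var/tmp/d2 bs=1024k count=64 seek=64 conv=notrunc

  # A scrub should be able to repair everything from parity: expect checksum
  # errors in the status output, but no persistent data errors.
  $ zpool scrub testpool
  $ zpool status -v testpool
  $ zpool destroy testpool

In those tests a single damaged device was always repaired cleanly by the scrub, which is why the counts in the output below surprise me.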
And I have not reset any errors with 'zpool clear ...'. Comments will be appreciated. Thanks.

$ zpool status -v
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub completed with 1 errors on Mon Jul 23 19:59:07 2007
config:

        NAME         STATE     READ WRITE CKSUM
        tank         ONLINE       0     0     2
          raidz2     ONLINE       0     0     2
            c2t0d0   ONLINE       0     0     1
            c2t1d0   ONLINE       0     0     0
            c2t2d0   ONLINE       0     0     0
            c2t3d0   ONLINE       0     0     0
            c2t4d0   ONLINE       0     0     0
          raidz2     ONLINE       0     0     0
            c2t5d0   ONLINE       0     0     0
            c2t6d0   ONLINE       0     0     0
            c2t7d0   ONLINE       0     0     0
            c2t8d0   ONLINE       0     0     0
            c2t9d0   ONLINE       0     0     0
          raidz2     ONLINE       0     0     0
            c2t10d0  ONLINE       0     0     0
            c2t11d0  ONLINE       0     0     0
            c2t12d0  ONLINE       0     0     1
            c2t13d0  ONLINE       0     0     0
            c2t14d0  ONLINE       0     0     0
        spares
          c2t15d0    AVAIL

errors: The following persistent errors have been detected:

          DATASET  OBJECT   RANGE
          5        5fe9784  lvl=0 blkid=40299
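In case it matters, this is roughly how I have been trying to map that DATASET/OBJECT pair back to a file. The dataset name below is made up, and I am not sure whether zdb wants the object number in hex or decimal here, so treat it as a sketch rather than a recipe:

  # List the datasets in the pool with their IDs, to find which one is ID 5.
  $ zdb -d tank

  # Dump that object and look for its 'path' line.  The error report appears
  # to print the object number in hex; if the 0x form is rejected, the
  # decimal equivalent of 5fe9784 is 100571012.
  $ zdb -dddd tank/somefs 0x5fe9784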