Jakov Sosic wrote:
> Hi guys!
>
> I'm doing a series of tests on ZFS before putting it into production on
> several machines, and I've come to a dead end. I have two disks in a
> mirror (rpool). Intentionally, I corrupt data on the second disk:
>
> # dd if=/dev/urandom of=/dev/rdsk/c0d1s0 bs=512 count=20480 seek=10240
>
> So, I've written 10 MB of random data starting 5 MB into the disk. After
> a sync and reboot, ZFS noticed the corruption, and I ran zpool scrub
> rpool. After that, I got this state:
>
> unknown# zpool status
>   pool: rpool
>  state: DEGRADED
> status: One or more devices has experienced an unrecoverable error.  An
>         attempt was made to correct the error.  Applications are unaffected.
> action: Determine if the device needs to be replaced, and clear the errors
>         using 'zpool clear' or replace the device with 'zpool replace'.
>    see: http://www.sun.com/msg/ZFS-8000-9P
>  scrub: scrub in progress for 0h0m, 5.64% done, 0h5m to go
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         rpool       DEGRADED     0     0     0
>           mirror    DEGRADED     0     0     0
>             c0d1s0  DEGRADED     0     0    26  too many errors
>             c0d0s0  ONLINE       0     0     0
>
> errors: No known data errors
>
>
> So I wonder now, how do I fix this up? Why doesn't scrub overwrite the
> bad data with good data from the first disk?
ZFS doesn't know why the errors occurred; the most likely scenario would be
a bad disk -- in which case you'd need to replace it.

> If I run zpool clear, it will only clear the error reports, and it won't
> fix them - I presume that's because I don't understand the man page for
> that section clearly.

The admin guide is great to follow for these tests:
http://docs.sun.com/app/docs/doc/819-5461

> So, how can I fix this disk, without the detach/attach procedure?

You shouldn't need to attach/detach anything. I think you're looking for
'zpool replace':

  zpool replace rpool c0d1s0

-Bryant
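P.S. A minimal sketch of the full recovery sequence, assuming the disk
itself is healthy and only the on-disk data was clobbered (device names
taken from your pool above; untested, so adjust to taste):

  # let the running scrub finish, then check what it repaired
  zpool status -v rpool

  # if the scrub repaired everything, just reset the error counters
  zpool clear rpool c0d1s0

  # re-scrub to confirm; if CKSUM stays at 0, the disk is fine to keep
  zpool scrub rpool

  # if errors keep coming back, replace the device in place;
  # ZFS resilvers it from the healthy side of the mirror
  zpool replace rpool c0d1s0

Note that 'zpool replace rpool c0d1s0' with no new-device argument tells
ZFS the disk at that location has been physically swapped; for a disk that
is merely suspect, clear-and-rescrub first is the cheaper test.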