I've been referred here from the zfs-fuse newsgroup. I have a (non-redundant) pool which is reporting errors that I don't quite understand:
# zpool status -v
  pool: green
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub in progress for 1h12m, 2.96% done, 39h44m to go
config:

        NAME                        STATE     READ WRITE CKSUM
        green                       ONLINE       0     0     2
          disk/by-id/dm-name-green  ONLINE       0     0     4

errors: Permanent errors have been detected in the following files:

        <metadata>:<0x0>
        green:<0x0>

I read the explanation at http://dlc.sun.com/osol/docs/content/ZFSADMIN/gbbwl.html#gbcuz that 0x0 is shown when a file path is not available, but I'm still unsure how to proceed. (Of course, I'd also like to know why these errors occurred in the first place, after just a couple of days of using zfs-fuse, but that's another story.)

It has been suggested to me that I copy all the data out of the pool and/or recreate it from backup, but do I really have to do that (it would mean hours of recovery), or is there a faster way to correct the problem? Apart from these alarming messages, the pool seems to be in working order, e.g. all the files I tried could be read. I guess I'd just like to know *what* the corrupted data is and what the implications are.
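For what it's worth, my current guess (and please correct me if this is unsafe) is that once the running scrub finishes I could clear the error counters and scrub again to see whether the errors come back, roughly along these lines:

# zpool clear green        (reset the error counters once the scrub completes)
# zpool scrub green        (re-scrub to see whether the permanent errors reappear)
# zpool status -v green

But I don't know whether clearing the errors actually repairs anything for a <metadata>:<0x0> case, or whether it just hides the problem until the next scrub finds it again.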