Hi Richard,

Yes, sure. We can add that scenario.
What's been on my todo list is a ZFS troubleshooting wiki. I've been
collecting issues.

Let's talk soon.

Cindy

Richard Elling wrote:
> Tom Bird wrote:
>> Richard Elling wrote:
>>
>>> I see no evidence that the data is or is not correct. What we know is
>>> that ZFS is attempting to read something and the device driver is
>>> returning EIO. Unfortunately, EIO is a catch-all error code, so more
>>> digging is needed to find the root cause.
>>
>> I'm currently checking the whole LUN, although as a 42 TB unit this
>> will take a few hours, so we'll see how that is tomorrow.
>>
>>> However, I will bet a steak dinner that if this device were mirrored
>>> to another, the pool would import just fine, with the affected device
>>> in a faulted or degraded state.
>>
>> On any other file system, though, I could probably kick off a fsck and
>> get back most of the data. I see the argument a lot that ZFS "doesn't
>> need" a fsck utility; however, I would be inclined to disagree: if not
>> a full-on fsck, then something that can patch the pool up to the point
>> where I can mount it and then get some data off or run a scrub.
>
> Probably not. fsck only repairs metadata; it does not restore or
> correct data. If the data is gone or damaged, then there isn't much ZFS
> could do, since ZFS was not in control of the data redundancy (by
> default, ZFS metadata is redundant).
>
> BTW, another good sanity test is to try to read the ZFS labels:
>
>     zdb -l /dev/rdsk/...
>
> Cindy, I note that we don't explicitly address the case where the pool
> cannot be imported in the Troubleshooting and Data Recovery chapter of
> the ZFS Administration Guide. Can we put this on the todo list?
>  -- richard
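For the wiki entry, a rough sketch of the sanity checks discussed above
might look like the following; the device path (/dev/rdsk/c1t0d0s0) and
pool name (tank) are only placeholders, so substitute the real ones:

    # Read the ZFS labels (four copies per device) on the suspect
    # device; if none of them decode, the label area itself is damaged
    # or the wrong device is being probed.
    zdb -l /dev/rdsk/c1t0d0s0

    # With no arguments, list pools that are visible for import and the
    # state each would come up in.
    zpool import

    # If the pool is importable, import it, check per-device status, and
    # start a scrub to verify checksums against whatever redundancy is
    # available.
    zpool import tank
    zpool status -v tank
    zpool scrub tank

None of this repairs anything by itself, but it helps narrow down
whether the labels, the device, or the pool-wide metadata is the
problem.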