Tom Bird wrote:
> Richard Elling wrote:
>
>> I see no evidence that the data is or is not correct.  What we know is that
>> ZFS is attempting to read something and the device driver is returning EIO.
>> Unfortunately, EIO is a catch-all error code, so more digging to find the
>> root cause is needed.
>
> I'm currently checking the whole LUN, although as a 42 TB unit this
> will take a few hours, so we'll see how it looks tomorrow.
>
>> However, I will bet a steak dinner that if this device was mirrored to 
>> another,
>> the pool will import just fine, with the affected device in a faulted or 
>> degraded
>> state.
>
> On any other file system, though, I could probably kick off an fsck
> and get back most of the data.  I often see the argument that ZFS
> "doesn't need" an fsck utility, but I'm inclined to disagree: if not
> a full-on fsck, then at least something that can patch the pool up to
> the point where I can mount it and then get some data off or run a
> scrub.
>

From the ZFS Administration Guide, Chapter 11, Data Repair section:
    Given that the fsck utility is designed to repair known pathologies
    specific to individual file systems, writing such a utility for a file
    system with no known pathologies is impossible. Future
    experience might prove that certain data corruption problems are
    common enough and simple enough such that a repair utility can
    be developed, but these problems can always be avoided by
    using redundant pools.

    If your pool is not redundant, the chance that data corruption can
    render some or all of your data inaccessible is always present.

If you go through the archives, you should find similar conversations.
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
