I had a 4-disk RAIDZ1 array.  I did not monitor its status as closely
as I should have.  My first sign of trouble was that programs doing
writes all locked up.  When I looked, two drives in the array were
showing problems via "zpool status".

After purchasing more hard drives to move the data to (which is fun
given the shortages at the moment; I ended up getting past Amazon's
one-per-order limit by having my wife, brother, and sister all order
for me), I'm looking at recovering the data.

The first bad drive cannot be seen by the system at all.  It spins up
but then clicks.  Let's assume it's a ~$3000 trip to the data recovery
people away from ever being read again.

The second bad drive can be seen by the system.  smartctl reports
that the disk is failing, but I was able to use ddrescue

http://www.gnu.org/s/ddrescue/ddrescue.html

to make a full copy of the device.  ddrescue did find some errors, but
it seems to have worked around them.
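(For reference, the invocation was roughly like this; the source
device name and file names here are made up, since I don't have the
exact command handy:

$ ddrescue -d /dev/sdf sdf-rescued.img sdf-rescued.log

-d reads the source with direct disc access, bypassing the kernel
cache, and the log/map file lets ddrescue resume and retry the bad
areas.)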

The other two disks seem to be fine.

I attached the image of the second bad drive to loop0 and made
symlinks of loop0 and the other two drive device files into $PWD.
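(Roughly like this, run as root; the image filename is made up:

$ losetup /dev/loop0 sdf-rescued.img
$ ln -s /dev/loop0 /dev/sdg /dev/sdh .

ln -s with multiple targets puts a link to each one in the current
directory.)  Then I tried to import: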

$ zpool import -f -d . bank0
cannot import 'bank0': I/O error
        Destroy and re-create the pool from
        a backup source.

$ zpool import -fFX -d . bank0
# runs for 6 hours and then prints out something like "one or more
devices is currently unavailable"

Looking at the output of "zdb -ve bank0", I think what's happening is
that the disk image is marked as "faulted: 1" and "aux_state:
'err_exceeded'".  Perhaps if that could be cleared, then the import
would work?  I think if this were a pool that was already imported, you
could clear this error with "zpool clear bank0 $DEV_NAME".
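(In case it's useful: the per-device vdev labels can be dumped
directly, which is where I'd guess that state is recorded, though I
haven't confirmed it:

$ zdb -l /dev/loop0

This prints the label copies stored on the device.)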

Output of "zdb -ve bank0":

Configuration for import:
        vdev_children: 1
        version: 26
        pool_guid: 3936305481264476979
        name: 'bank0'
        state: 0
        hostid: 661351
        hostname: 'gir'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 3936305481264476979
            children[0]:
                type: 'raidz'
                id: 0
                guid: 10967243523656644777
                nparity: 1
                metaslab_array: 23
                metaslab_shift: 35
                ashift: 9
                asize: 6001161928704
                is_log: 0
                create_txg: 4
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 13554115250875315903
                    phys_path: '/pci@0,0/pci1002,4391@11/disk@3,0:q'
                    whole_disk: 0
                    DTL: 57
                    create_txg: 4
                    path: '/dev/sdh'
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 17894226827518944093
                    phys_path: '/pci@0,0/pci1002,4391@11/disk@0,0:q'
                    whole_disk: 0
                    DTL: 62
                    create_txg: 4
                    path: '/dev/sdg'
                children[2]:
                    type: 'disk'
                    id: 2
                    guid: 9087312107742869669
                    phys_path: '/pci@0,0/pci1002,4391@11/disk@1,0:q'
                    whole_disk: 0
                    DTL: 61
                    create_txg: 4
                    faulted: 1
                    aux_state: 'err_exceeded'
                    path: '/dev/loop0'
                children[3]:
                    type: 'disk'
                    id: 3
                    guid: 13297176051223822304
                    path: '/dev/dsk/c10t2d0p0'
                    devid: 'id1,sd@SATA_____ST31500341AS________________9VS32K25/q'
                    phys_path: '/pci@0,0/pci1002,4391@11/disk@2,0:q'
                    whole_disk: 0
                    DTL: 60
                    create_txg: 4

Any ideas?