Last week my FreeNAS server began to beep constantly, so I rebooted it through 
the web GUI. When the machine finished booting I logged back in and saw that 
my zpool (Raidz) was faulted. Most of the data on this pool is replaceable, 
but it also held some pictures that were not backed up and that I would really 
like to recover. When the beeping started I was verifying a torrent and 
streaming a movie.

Here is a little more info about my setup:
--------------------------------------------
FreeNAS 0.7.1 Shere (revision 4997)
Intel Pentium 4
Tyan S5161
2 GB RAM
Adaptec AAR-21610SA controller
1x 250 GB Maxtor boot/storage drive
6x 1 TB WD drives in RAIDZ (pool name: Raidz)
ZFS filesystem version 6
ZFS storage pool version 6
--------------------------------------------

Commands I tried and their output:

freenas:~# zpool import
no pools available to import

freenas:~# zpool status -v
  pool: Raidz
 state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-CS
 scrub: none requested
config:

        NAME        STATE    READ WRITE CKSUM
        Raidz       FAULTED     0     0     6  corrupted data
          raidz1    FAULTED     0     0     6  corrupted data
            aacdu0  ONLINE      0     0     0
            aacdu1  ONLINE      0     0     0
            aacdu2  ONLINE      0     0     0
            aacdu3  ONLINE      0     0     0
            aacdu4  ONLINE      0     0     0
            aacdu5  ONLINE      0     0     0

freenas:~# zpool import -f
no pools available to import

freenas:~# zpool import -f Raidz
cannot import 'Raidz': no such pool available
====================================================
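One thing I did not try on FreeNAS was pointing import at the device nodes 
explicitly. As I understand it, -d tells "zpool import" which directory to 
scan for devices, so this might behave differently from the bare import 
(untested on my box so far):

freenas:~# zpool import -d /dev

I can redo this on the FreeNAS install if the output would help.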

I transferred the drives to another motherboard (Asus, Core 2 Duo, 4 GB RAM) 
and booted FreeBSD 8.1-RC1 with ZFS v14. There "zpool import" gave the 
following output:

===============================================

  pool: Raidz
    id: 14119036174566039103
 state: ONLINE
status: The pool is formatted using an older on-disk version.
action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
config:

        Raidz       ONLINE
          raidz1    ONLINE
            ada3    ONLINE
            ada5    ONLINE
            ada4    ONLINE
            ada1    ONLINE
            ada2    ONLINE
            ada0    ONLINE
=================================================

But when I ran "zpool import Raidz" it told me to use the -f flag, and 
"zpool import -f Raidz" crashed the machine with a "fatal trap 12" kernel 
panic.

The command "zdb -l /dev/ada0"
--------------------------------------------
LABEL 0
--------------------------------------------
    version=6
    name='Raidz'
    state=0
    txg=11730350
    pool_guid=14119036174566039103
    hostid=0
    hostname='freenas.local'
    top_guid=16879648846521942561
    guid=6282477769796963197
    vdev_tree
        type='raidz'
        id=0
        guid=16879648846521942561
        nparity=1
        metaslab_array=14
        metaslab_shift=32
        ashift=9
        asize=6000992059392
        children[0]
            type='disk'
            id=0
            guid=6543046729241888600
            path='/dev/aacdu0'
            whole_disk=0
        children[1]
            type='disk'
            id=1
            guid=14313209149820231630
            path='/dev/aacdu1'
            whole_disk=0
        children[2]
            type='disk'
            id=2
            guid=5383435113781649515
            path='/dev/aacdu2'
            whole_disk=0
        children[3]
            type='disk'
            id=3
            guid=9586044621389086913
            path='/dev/aacdu3'
            whole_disk=0
            DTL=1372
        children[4]
            type='disk'
            id=4
            guid=10401318729908601665
            path='/dev/aacdu4'
            whole_disk=0
        children[5]
            type='disk'
            id=5
            guid=6282477769796963197
            path='/dev/aacdu5'
            whole_disk=0
--------------------------------------------
LABEL 1
--------------------------------------------
    version=6
    name='Raidz'
    state=0
    txg=11730350
    pool_guid=14119036174566039103
    hostid=0
    hostname='freenas.local'
    top_guid=16879648846521942561
    guid=6282477769796963197
    vdev_tree
        type='raidz'
        id=0
        guid=16879648846521942561
        nparity=1
        metaslab_array=14
        metaslab_shift=32
        ashift=9
        asize=6000992059392
        children[0]
            type='disk'
            id=0
            guid=6543046729241888600
            path='/dev/aacdu0'
            whole_disk=0
        children[1]
            type='disk'
            id=1
            guid=14313209149820231630
            path='/dev/aacdu1'
            whole_disk=0
        children[2]
            type='disk'
            id=2
            guid=5383435113781649515
            path='/dev/aacdu2'
            whole_disk=0
        children[3]
            type='disk'
            id=3
            guid=9586044621389086913
            path='/dev/aacdu3'
            whole_disk=0
            DTL=1372
        children[4]
            type='disk'
            id=4
            guid=10401318729908601665
            path='/dev/aacdu4'
            whole_disk=0
        children[5]
            type='disk'
            id=5
            guid=6282477769796963197
            path='/dev/aacdu5'
            whole_disk=0
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3
====================================================


After this I reinstalled the Adaptec controller and tried different port and 
cabling setups until the GUIDs and device paths matched, and at that point I 
was able to unpack all four labels on all drives. (I gather labels 2 and 3 
are stored at the end of the disk, so perhaps the disks present a different 
size when attached outside the Adaptec controller, but I am guessing.) I am 
not sure what any of this means, and unfortunately I didn't record the 
outputs. I can redo this if necessary.
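
If it helps, I can re-collect all of that next time; a loop like this 
(assuming the drives come up as aacdu0 through aacdu5 again) would save every 
label to a file per disk for posting:

# dump all four labels of each raidz member to /tmp:
for d in 0 1 2 3 4 5; do
    zdb -l /dev/aacdu${d} > /tmp/label.aacdu${d}.txt
done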




I then booted OpenSolaris 2009.06, and 4 of the 6 disks showed corrupted data 
when I ran "zpool import". The import itself failed, but I can't remember the 
output; I had never used OpenSolaris before and could not SSH in to record 
it. With "zpool import -f Raidz", OpenSolaris did not crash, but it wouldn't 
import the pool either.
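
Next time I can record the whole console session locally with script(1) 
instead of relying on SSH, e.g.:

script /tmp/zfs-session.txt    # start recording everything typed and printed
zpool import
zpool import -f Raidz
exit                           # stop recording; transcript is in /tmp/zfs-session.txt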



Lastly, I booted OpenSolaris dev build 134 last night. Here are the commands 
I ran and their outputs:

xx...@opensolaris:~# zpool import
  pool: Raidz
    id: 14119036174566039103
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-72
config:

        Raidz         FAULTED  corrupted data
          raidz1-0    FAULTED  corrupted data
            c6t0d0p0  ONLINE
            c6t4d0p0  ONLINE
            c6t3d0s2  ONLINE
            c6t5d0p0  ONLINE
            c6t2d0p0  ONLINE
            c6t1d0p0  ONLINE
xx...@opensolaris:~# zpool import Raidz
cannot import 'Raidz': pool may be in use from other system
use '-f' to import anyway
xx...@opensolaris:~# zpool import 14119036174566039103
cannot import 'Raidz': pool may be in use from other system
use '-f' to import anyway
xx...@opensolaris:~# zpool import -f 14119036174566039103
cannot import 'Raidz': one or more devices is currently unavailable
        Destroy and re-create the pool from
        a backup source.
xx...@opensolaris:~# zdb -l /dev/dsk/c6t3d0s2
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 6
    name: 'Raidz'
    state: 0
    txg: 11730350
    pool_guid: 14119036174566039103
    hostid: 0
    hostname: 'freenas.local'
    top_guid: 16879648846521942561
    guid: 5383435113781649515
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16879648846521942561
        nparity: 1
        metaslab_array: 14
        metaslab_shift: 32
        ashift: 9
        asize: 6000992059392
        children[0]:
            type: 'disk'
            id: 0
            guid: 6543046729241888600
            path: '/dev/aacdu0'
            whole_disk: 0
        children[1]:
            type: 'disk'
            id: 1
            guid: 14313209149820231630
            path: '/dev/aacdu1'
            whole_disk: 0
        children[2]:
            type: 'disk'
            id: 2
            guid: 5383435113781649515
            path: '/dev/aacdu2'
            whole_disk: 0
        children[3]:
            type: 'disk'
            id: 3
            guid: 9586044621389086913
            path: '/dev/aacdu3'
            whole_disk: 0
            DTL: 1372
        children[4]:
            type: 'disk'
            id: 4
            guid: 10401318729908601665
            path: '/dev/aacdu4'
            whole_disk: 0
        children[5]:
            type: 'disk'
            id: 5
            guid: 6282477769796963197
            path: '/dev/aacdu5'
            whole_disk: 0
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 6
    name: 'Raidz'
    state: 0
    txg: 11730350
    pool_guid: 14119036174566039103
    hostid: 0
    hostname: 'freenas.local'
    top_guid: 16879648846521942561
    guid: 5383435113781649515
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16879648846521942561
        nparity: 1
        metaslab_array: 14
        metaslab_shift: 32
        ashift: 9
        asize: 6000992059392
        children[0]:
            type: 'disk'
            id: 0
            guid: 6543046729241888600
            path: '/dev/aacdu0'
            whole_disk: 0
        children[1]:
            type: 'disk'
            id: 1
            guid: 14313209149820231630
            path: '/dev/aacdu1'
            whole_disk: 0
        children[2]:
            type: 'disk'
            id: 2
            guid: 5383435113781649515
            path: '/dev/aacdu2'
            whole_disk: 0
        children[3]:
            type: 'disk'
            id: 3
            guid: 9586044621389086913
            path: '/dev/aacdu3'
            whole_disk: 0
            DTL: 1372
        children[4]:
            type: 'disk'
            id: 4
            guid: 10401318729908601665
            path: '/dev/aacdu4'
            whole_disk: 0
        children[5]:
            type: 'disk'
            id: 5
            guid: 6282477769796963197
            path: '/dev/aacdu5'
            whole_disk: 0
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3

All six disks in Raidz give similar output with "zdb -l".
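
The next thing I was considering is the import rewind option that I believe 
was added in recent builds (so it should be in build 134, though I have not 
confirmed): as I understand it, -F discards the last few transactions to get 
back to a consistent state, and adding -n first reports whether that would 
succeed without modifying the pool:

xx...@opensolaris:~# zpool import -fFn 14119036174566039103   # dry run: would rewind succeed?
xx...@opensolaris:~# zpool import -fF 14119036174566039103    # actually attempt the rewind import

Is that safe to try given the failed labels, or would it make things worse?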

I would like to at least recover the family photos/movies if possible.  Any 
help would be really appreciated.