Have you tried importing the pool with that drive completely unplugged?  Which 
HBA are you using?  Are these disks all on the same HBA, or spread across 
separate ones?
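
If you haven't, here is a minimal sketch of what I would try (device names
taken from your listing below; the read-only import assumes your ZFS
version supports that property at import time):

  # pull da2 physically, then search /dev for the pool and bring it in
  # read-only so nothing on the surviving disks gets rewritten
  zpool import -d /dev -o readonly=on d

If that comes up, the labels on the remaining four disks are consistent
enough to copy the data off before attempting any repair.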

Gregg Wonderly


On Jan 8, 2013, at 12:05 PM, John Giannandrea <j...@meer.net> wrote:

> 
> I seem to have managed to end up with a pool that is confused about its 
> child disks.  The pool is faulted with corrupt metadata:
> 
>  pool: d
> state: FAULTED
> status: The pool metadata is corrupted and the pool cannot be opened.
> action: Destroy and re-create the pool from
>       a backup source.
>   see: http://illumos.org/msg/ZFS-8000-72
>  scan: none requested
> config:
> 
>       NAME                     STATE     READ WRITE CKSUM
>       d                        FAULTED      0     0     1
>         raidz1-0               FAULTED      0     0     6
>           da1                  ONLINE       0     0     0
>           3419704811362497180  OFFLINE      0     0     0  was /dev/da2
>           da3                  ONLINE       0     0     0
>           da4                  ONLINE       0     0     0
>           da5                  ONLINE       0     0     0
> 
> But if I look at the labels on all the online disks I see this:
> 
> # zdb -ul /dev/da1 | egrep '(children|path)'
>        children[0]:
>            path: '/dev/da1'
>        children[1]:
>            path: '/dev/da2'
>        children[2]:
>            path: '/dev/da2'
>        children[3]:
>            path: '/dev/da3'
>        children[4]:
>            path: '/dev/da4'
>        ...
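> 
> I checked each online disk the same way, e.g. with a small loop (a sketch,
> plain sh, device names as above):
> 
>   for d in da1 da3 da4 da5; do
>     echo "== $d =="
>     zdb -ul /dev/$d | egrep '(children|path)'
>   done
> 
> and they all show the doubled /dev/da2 entry.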
> 
> But the offline disk (da2) shows the older correct label:
> 
>        children[0]:
>            path: '/dev/da1'
>        children[1]:
>            path: '/dev/da2'
>        children[2]:
>            path: '/dev/da3'
>        children[3]:
>            path: '/dev/da4'
>        children[4]:
>            path: '/dev/da5'
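> 
> Comparing the label txgs per disk, e.g.
> 
>   zdb -l /dev/da2 | grep txg
>   zdb -l /dev/da1 | grep txg
> 
> confirms that da2 carries the older label generation.  (The exact grep is
> a sketch; field names may differ slightly between zdb versions.)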
> 
> zpool import -F doesn't help, because none of the unfaulted disks seem to 
> have the right label.  And unless I can import the pool, I can't replace 
> the bad drive.
> 
> Also, zpool really does not want to import a raidz1 pool with one faulted 
> drive, even though such a pool should still be readable.  I have read about 
> the undocumented -V option but don't know whether it would help.
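> 
> For completeness, the rewind variants I have run across look roughly like
> this (a sketch; -X is undocumented like -V and may not exist in every
> build):
> 
>   zpool import -f -F -n d               # dry run: would a rewind work?
>   zpool import -f -F d                  # rewind to an earlier txg
>   zpool import -f -F -o readonly=on d   # rewind, but write nothing back
>   zpool import -f -F -X d               # extreme rewind, undocumented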
> 
> I got into this state when I noticed the pool was DEGRADED and was trying 
> to replace the bad disk.  I am debugging it under FreeBSD 9.1.
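> 
> For anyone reconstructing this, the usual replacement sequence would be
> roughly (assuming the new drive comes back at the same da2 node):
> 
>   zpool offline d da2
>   # swap the physical drive, then:
>   zpool replace d da2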
> 
> Suggestions of things to try are welcome; I'm more interested in learning 
> what went wrong than in restoring the pool.  I don't think I should have 
> been able to go from one offline drive to an unrecoverable pool this easily.
> 
> -jg
> 
