more below…

On Sep 19, 2011, at 9:51 AM, Fred Liu wrote:

>> 
>> No, but your pool is not imported.
>> 
> 
> YES. I see.
>> and look to see which disk is missing"?
>> 
>> The label, as displayed by "zdb -l", contains the hierarchy of the
>> expected pool config. Its contents are used to build the output you
>> see in the "zpool import" or "zpool status" commands. zpool is
>> complaining that it cannot find one of these disks, so look at the
>> labels on the disks to determine what is or is not missing. The next
>> steps depend on this knowledge.
> 
> zdb -l /dev/rdsk/c22t2d0s0
> cannot open '/dev/rdsk/c22t2d0s0': I/O error

Is this disk supposed to be available?
You might need to check the partition table, if one exists, to determine if
s0 has a non-zero size.
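
On Solaris, prtvtoc prints the slice table for a device. As a minimal sketch of that check, the snippet below parses a made-up prtvtoc line for slice 0; the device path in the comment and all the sample numbers are assumptions for illustration, not values from this system:

```shell
# In practice you would capture the real slice-0 line, e.g.:
#   line=$(prtvtoc /dev/rdsk/c22t2d0s2 | awk '$1 == "0"')
# Hypothetical output line for slice 0 (columns: slice, tag,
# flags, first sector, sector count, last sector):
line='       0      2    00        256   3702315   3702570'
count=$(echo "$line" | awk '{print $5}')
if [ "${count:-0}" -gt 0 ]; then
    echo "s0 size: $count sectors"
else
    echo "s0 is zero-sized or missing"
fi
```

A zero or missing sector count for s0 would explain why zdb cannot open the slice.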

> root@cn03:~# zdb -l /dev/rdsk/c22t3d0s0
> --------------------------------------------
> LABEL 0
> --------------------------------------------
>    version: 22
>    name: 'cn03'
>    state: 0
>    txg: 18269872
>    pool_guid: 1907858070511204110
>    hostid: 13564652
>    hostname: 'cn03'
>    top_guid: 11074483144412112931
>    guid: 11074483144412112931
>    vdev_children: 6
>    vdev_tree:
>        type: 'disk'
>        id: 1
>        guid: 11074483144412112931
>        path: '/dev/dsk/c22t3d0s0'
>        devid: 
> 'id1,sd@s4154412020202020414e53393031305f324e4e4e324e4e4e2020202020202020353632383637390000005f31/a'
>        phys_path: '/pci@0,0/pci15d9,400@1f,2/disk@3,0:a'
>        whole_disk: 1
>        metaslab_array: 37414
>        metaslab_shift: 24
>        ashift: 9
>        asize: 1895563264
>        is_log: 0
>        create_txg: 18269863
> --------------------------------------------
> LABEL 1
> --------------------------------------------
>    version: 22
>    name: 'cn03'
>    state: 0
>    txg: 18269872
>    pool_guid: 1907858070511204110
>    hostid: 13564652
>    hostname: 'cn03'
>    top_guid: 11074483144412112931
>    guid: 11074483144412112931
>    vdev_children: 6
>    vdev_tree:
>        type: 'disk'
>        id: 1
>        guid: 11074483144412112931
>        path: '/dev/dsk/c22t3d0s0'
>        devid: 
> 'id1,sd@s4154412020202020414e53393031305f324e4e4e324e4e4e2020202020202020353632383637390000005f31/a'
>        phys_path: '/pci@0,0/pci15d9,400@1f,2/disk@3,0:a'
>        whole_disk: 1
>        metaslab_array: 37414
>        metaslab_shift: 24
>        ashift: 9
>        asize: 1895563264
>        is_log: 0
>        create_txg: 18269863
> --------------------------------------------
> LABEL 2
> --------------------------------------------
> failed to unpack label 2
> --------------------------------------------
> LABEL 3
> --------------------------------------------
> failed to unpack label 3

This is a bad sign, but it can be recoverable, depending on how you got here.
zdb is saying that it could not find labels at the end of the disk. Labels 2
and 3 are 256 KB each, located at the end of the disk, aligned to a 256 KB
boundary. zpool import is smarter than zdb in these cases, and can often
recover from it -- up to the loss of all 4 labels -- but you need to make sure
that the partition tables look reasonable and haven't changed.
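
As a rough sketch of where those labels are expected to sit, the arithmetic below computes the four standard vdev label offsets (two at the front, two at the aligned end) from the asize shown in the labels above. The layout is the general vdev label scheme, not something read from this particular disk:

```shell
LABEL=$((256 * 1024))             # each vdev label is 256 KB
asize=1895563264                  # asize reported by the labels above
end=$(( asize / LABEL * LABEL ))  # align down to a 256 KB boundary
echo "label 0 at offset 0"
echo "label 1 at offset $LABEL"
echo "label 2 at offset $(( end - 2 * LABEL ))"
echo "label 3 at offset $(( end - LABEL ))"
```

If the partition table shifted so that the slice now ends somewhere else, labels 2 and 3 land at the wrong offsets and zdb reports exactly the "failed to unpack" errors seen above.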

> c22t2d0 and c22t3d0 are the devices I physically removed and connected back 
> to the server.
> How can I fix them?

Unless I'm mistaken, these are ACARD SSDs that have an optional CF backup.
Let's hope that the CF backup worked.
 -- richard

-- 

ZFS and performance consulting
http://www.RichardElling.com
VMworld Copenhagen, October 17-20
OpenStorage Summit, San Jose, CA, October 24-27
LISA '11, Boston, MA, December 4-9 

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
