Hello all,

I have managed to get my hands on an OSOL 2009.06 root disk that has three failed blocks on it; these three blocks make it impossible to boot from the disk or to import the pool on another machine. I have checked the disk, and the three bad blocks are inaccessible and quite close to each other. Shouldn't this have a good chance of being saved by ZFS's replicated metadata? The data on the disk is usable: I did a block copy of the whole disk to a new one, and a scrub of the copy completes flawlessly. I guess this could be a timeout issue, but the disk is at least a WD RE2, with error recovery limited to 7 seconds. The failing system's release was build 111a, and I have tried to import the pool on build 122.
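
For reference, a block copy that survives unreadable sectors can be done with something like this (the target device name is just an example; conv=noerror,sync tells dd to continue past read errors and zero-pad the bad blocks, and bs=512 keeps the zero-padding down to single sectors at the cost of speed):

dd if=/dev/rdsk/c1t4d0p0 of=/dev/rdsk/c2t0d0p0 bs=512 conv=noerror,sync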

The disk was used by a friend of mine whom I have converted to Solaris and ZFS for his company's storage needs, and he is a bit skeptical when three bad blocks make the whole pool unusable. The good part is that he now uses mirrors for his rpool, even on this non-critical system ;)

Anyway, can someone help explain this? Are there any timeouts that can be tuned so the pool imports, or is this working as designed? All the data that is needed is evidently intact on the disk, since the block copy of the pool works fine.
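
One thing I may try, though I have not verified that it helps the import at all: lowering the sd driver's per-command timeout, sd_io_time, which defaults to 60 seconds, so that reads of the bad blocks fail faster. Either in /etc/system (takes effect after a reboot):

set sd:sd_io_time=10

or on a live system with mdb (writes decimal 10 into the running kernel):

echo "sd_io_time/W 0t10" | mdb -kw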

Also, don't we need a force flag for zdb's -e option, so that we can use it on pools that have not been exported cleanly from a failing machine?

The import times out after 41 seconds:

r...@arne:/usr/sbin# zpool import -f 2934589927925685355 dpool
cannot import 'rpool' as 'dpool': one or more devices is currently unavailable

r...@arne:/usr/sbin# zpool import
  pool: rpool
    id: 2934589927925685355
 state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        rpool       ONLINE
          c1t4d0s0  ONLINE

Damaged blocks as reported by format:
Medium error during read: block 8646022 (0x83ed86) (538/48/28)
ASC: 0x11   ASCQ: 0x0
Medium error during read: block 8650804 (0x840034) (538/124/22)
ASC: 0x11   ASCQ: 0x0
Medium error during read: block 8651987 (0x8404d3) (538/143/8)
ASC: 0x11   ASCQ: 0x0
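
For the curious, those LBAs could be turned into vdev offsets and inspected with zdb -R on the copied, importable disk. A sketch for the first block, assuming the LBAs are relative to the start of slice 0 and that ZFS counts vdev offsets from after the 4 MB label/boot area:

8646022 * 512 = 4426763264 = 0x107db0c00; 0x107db0c00 - 0x400000 = 0x1079b0c00

zdb -R rpool 0:1079b0c00:200

(vdev 0, hex byte offset, hex size 0x200 = one 512-byte sector)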

What I managed to get out of zdb:

r...@arne:/usr/sbin# zdb -e 2934589927925685355
WARNING: pool '2934589927925685355' could not be loaded as it was last accessed by another system (host: keeper hostid: 0xc34967). See: http://www.sun.com/msg/ZFS-8000-EY
zdb: can't open 2934589927925685355: No such file or directory

r...@arne:/usr/sbin# zdb -l /dev/dsk/c1t4d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version=14
    name='rpool'
    state=0
    txg=269696
    pool_guid=2934589927925685355
    hostid=12798311
    hostname='keeper'
    top_guid=9161928630964440615
    guid=9161928630964440615
    vdev_tree
        type='disk'
        id=0
        guid=9161928630964440615
        path='/dev/dsk/c7t1d0s0'
        devid='id1,s...@sata_____wdc_wd5000ys-01m_____wd-wcanu2080316/a'
        phys_path='/p...@0,0/pci8086,2...@1c,4/pci1043,8...@0/d...@1,0:a'
        whole_disk=0
        metaslab_array=23
        metaslab_shift=32
        ashift=9
        asize=500067467264
        is_log=0
--------------------------------------------
LABEL 1
--------------------------------------------
    version=14
    name='rpool'
    state=0
    txg=269696
    pool_guid=2934589927925685355
    hostid=12798311
    hostname='keeper'
    top_guid=9161928630964440615
    guid=9161928630964440615
    vdev_tree
        type='disk'
        id=0
        guid=9161928630964440615
        path='/dev/dsk/c7t1d0s0'
        devid='id1,s...@sata_____wdc_wd5000ys-01m_____wd-wcanu2080316/a'
        phys_path='/p...@0,0/pci8086,2...@1c,4/pci1043,8...@0/d...@1,0:a'
        whole_disk=0
        metaslab_array=23
        metaslab_shift=32
        ashift=9
        asize=500067467264
        is_log=0
--------------------------------------------
LABEL 2
--------------------------------------------
    version=14
    name='rpool'
    state=0
    txg=269696
    pool_guid=2934589927925685355
    hostid=12798311
    hostname='keeper'
    top_guid=9161928630964440615
    guid=9161928630964440615
    vdev_tree
        type='disk'
        id=0
        guid=9161928630964440615
        path='/dev/dsk/c7t1d0s0'
        devid='id1,s...@sata_____wdc_wd5000ys-01m_____wd-wcanu2080316/a'
        phys_path='/p...@0,0/pci8086,2...@1c,4/pci1043,8...@0/d...@1,0:a'
        whole_disk=0
        metaslab_array=23
        metaslab_shift=32
        ashift=9
        asize=500067467264
        is_log=0
--------------------------------------------
LABEL 3
--------------------------------------------
    version=14
    name='rpool'
    state=0
    txg=269696
    pool_guid=2934589927925685355
    hostid=12798311
    hostname='keeper'
    top_guid=9161928630964440615
    guid=9161928630964440615
    vdev_tree
        type='disk'
        id=0
        guid=9161928630964440615
        path='/dev/dsk/c7t1d0s0'
        devid='id1,s...@sata_____wdc_wd5000ys-01m_____wd-wcanu2080316/a'
        phys_path='/p...@0,0/pci8086,2...@1c,4/pci1043,8...@0/d...@1,0:a'
        whole_disk=0
        metaslab_array=23
        metaslab_shift=32
        ashift=9
        asize=500067467264
        is_log=0

Regards

Henrik
http://sparcv9.blogspot.com
