I have a system whose rpool has gone defunct. The rpool is made of a
single "disk": a RAID 5EE volume built from all eight 146 GB drives in
the box, behind an Adaptec RAID card. It was running nv_107, but it's
currently net booted to nv_121. I have already checked in the RAID card
BIOS, and it says the volume is "optimal". We had a power outage in
BRM07 on Tuesday, and the system appeared to boot back up, but then went
wonky. I power cycled it, and it came back to a grub> prompt because it
couldn't read the filesystem.
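
In case it matters: I haven't yet double-checked how the netbooted
environment itself sees the volume, only the RAID BIOS. My guess at the
right OS-level sanity checks, before going further (untested as of this
mail):

# format
# prtvtoc /dev/rdsk/c0t0d0s0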
# uname -a
SunOS 5.11 snv_121 i86pc i386 i86pc
# zpool import
  pool: rpool
    id: 7197437773913332097
 state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        rpool       ONLINE
          c0t0d0s0  ONLINE
# zpool import -f 7197437773913332097
cannot import 'rpool': one or more devices is currently unavailable
#
# zpool import -a -f -R /a
cannot import 'rpool': one or more devices is currently unavailable
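
One variant I haven't tried yet is pointing the import at the device
directory explicitly, in case the netbooted environment enumerates the
controller differently than the installed OS did. Just a guess on my
part:

# zpool import -d /dev/dsk -f -R /a rpool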
# zdb -l /dev/dsk/c0t0d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version=14
    name='rpool'
    state=0
    txg=742622
    pool_guid=7197437773913332097
    hostid=4930069
    hostname=''
    top_guid=5620634672424557591
    guid=5620634672424557591
    vdev_tree
        type='disk'
        id=0
        guid=5620634672424557591
        path='/dev/dsk/c0t0d0s0'
        devid='id1,s...@tsun_____stk_raid_int____efd1dfe0/a'
        phys_path='/p...@0,0/pci8086,3...@4/pci108e,2...@0/d...@0,0:a'
        whole_disk=0
        metaslab_array=24
        metaslab_shift=33
        ashift=9
        asize=880083730432
        is_log=0
--------------------------------------------
LABEL 1
--------------------------------------------
    version=14
    name='rpool'
    state=0
    txg=742622
    pool_guid=7197437773913332097
    hostid=4930069
    hostname=''
    top_guid=5620634672424557591
    guid=5620634672424557591
    vdev_tree
        type='disk'
        id=0
        guid=5620634672424557591
        path='/dev/dsk/c0t0d0s0'
        devid='id1,s...@tsun_____stk_raid_int____efd1dfe0/a'
        phys_path='/p...@0,0/pci8086,3...@4/pci108e,2...@0/d...@0,0:a'
        whole_disk=0
        metaslab_array=24
        metaslab_shift=33
        ashift=9
        asize=880083730432
        is_log=0
--------------------------------------------
LABEL 2
--------------------------------------------
    version=14
    name='rpool'
    state=0
    txg=742622
    pool_guid=7197437773913332097
    hostid=4930069
    hostname=''
    top_guid=5620634672424557591
    guid=5620634672424557591
    vdev_tree
        type='disk'
        id=0
        guid=5620634672424557591
        path='/dev/dsk/c0t0d0s0'
        devid='id1,s...@tsun_____stk_raid_int____efd1dfe0/a'
        phys_path='/p...@0,0/pci8086,3...@4/pci108e,2...@0/d...@0,0:a'
        whole_disk=0
        metaslab_array=24
        metaslab_shift=33
        ashift=9
        asize=880083730432
        is_log=0
--------------------------------------------
LABEL 3
--------------------------------------------
    version=14
    name='rpool'
    state=0
    txg=742622
    pool_guid=7197437773913332097
    hostid=4930069
    hostname=''
    top_guid=5620634672424557591
    guid=5620634672424557591
    vdev_tree
        type='disk'
        id=0
        guid=5620634672424557591
        path='/dev/dsk/c0t0d0s0'
        devid='id1,s...@tsun_____stk_raid_int____efd1dfe0/a'
        phys_path='/p...@0,0/pci8086,3...@4/pci108e,2...@0/d...@0,0:a'
        whole_disk=0
        metaslab_array=24
        metaslab_shift=33
        ashift=9
        asize=880083730432
        is_log=0
# zdb -cu -e -d /dev/dsk/c0t0d0s0
zdb: can't open /dev/dsk/c0t0d0s0: No such file or directory
# zdb -e rpool -cu
zdb: can't open rpool: No such device or address
# zdb -e 7197437773913332097
zdb: can't open 7197437773913332097: No such device or address
#
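
Rereading the zdb usage, I suspect my mistakes above were: with -e the
trailing argument has to be a pool name or GUID (not a device path), and
the options have to come before it. There also seems to be a -p option
to give zdb a device search path alongside -e, though I'm not sure this
build has it. So my next attempt would be something like:

# zdb -e -cu -p /dev/dsk rpool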
All four labels agree on the txg and GUIDs, so the label metadata looks
intact to me, but I obviously have no clue how to wield zdb.
Any help you can offer would be appreciated.
Thanks,
Tommy