IHAC who is having a problem importing a zpool. The import fails with
the following error:
# zpool import -f Backup1
cannot import 'Backup1': invalid vdev configuration
The FMA log reports a bad label on vdev_guid 0xd51633a1766882ad.
FMA errors:
Jan 23 2010 09:18:27.374175886 ereport.fs.zfs.vdev.bad_label
nvlist version: 0
        class = ereport.fs.zfs.vdev.bad_label
        ena = 0xa359146fa900c01
        detector = (embedded nvlist)
        nvlist version: 0
                version = 0x0
                scheme = zfs
                pool = 0x858ffdc3116bac
                vdev = 0xd51633a1766882ad
        (end detector)

        pool = Backup1
        pool_guid = 0x858ffdc3116bac
        pool_context = 2
        pool_failmode = wait
        vdev_guid = 0xd51633a1766882ad
        vdev_type = raidz
        parent_guid = 0x858ffdc3116bac
        parent_type = root
        prev_state = 0x7
        __ttl = 0x1
        __tod = 0x4b5b0533 0x164d788e
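
For reference, the ereport above comes out of the FMA error log;
something along these lines should reproduce it (a sketch, assuming the
usual fmdump options, where -e selects the error log and -V prints the
full nvlist):

# fmdump -eV -c ereport.fs.zfs.vdev.bad_label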
zpool import reports "insufficient replicas" because the first raidz2
group is not available. Comparing this against the output from the last
successful import, I see that a few drives have changed controller
numbers and that there are duplicate entries for c7t0d0.
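
Since /dev/dsk entries are just symlinks into /devices, it seems worth
checking where the duplicates actually resolve before removing anything;
a dangling link left over from the controller renumbering would show up
directly:

# ls -l /dev/dsk/c7t0d0s0 /dev/dsk/c4t1d0s0

(If one of them is dangling, I believe devfsadm -C is the cleaner way to
clear stale links than deleting them by hand.)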
I want to know how to map the FMA vdev_guid to a device. Running zdb -l
on all the devices in the first raidz2 group, I was not able to find
this vdev_guid. Also, the labels on the drives in question still point
to c8t0d0 and c8t1d0. What steps should be taken to import the pool in
degraded mode? raidz2 can sustain two drive failures. Should I try
removing the /dev/dsk and /dev/rdsk entries for c7t0d0 and c4t1d0 to
see if that works? Is there a better way to approach this issue?
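
One possibility I should rule out first: the ereport prints guids in
hex, while zdb -l prints them in decimal, so grepping the labels for
0xd51633a1766882ad would never match. (The pool_guid 0x858ffdc3116bac
from the ereport is the 37594491964713900 that zpool import shows as
the pool id below.) Converting the vdev_guid and rescanning every disk,
not just the first raidz2 group, would look something like this (a
sketch, assuming whole-disk vdevs so the labels sit on slice 0):

# echo '0xd51633a1766882ad=E' | mdb    (=E is unsigned 64-bit decimal;
                                        should print 15354516748135596717)
# for d in /dev/rdsk/c*t*d*s0
> do
>     echo "== $d"
>     zdb -l $d | grep 15354516748135596717
> done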
[r...@sanback1]/># zpool import
  pool: Backup1
    id: 37594491964713900
 state: UNAVAIL
status: The pool is formatted using an older on-disk version.
action: The pool cannot be imported due to damaged devices or data.
config:

        Backup1       UNAVAIL  insufficient replicas
          raidz2      UNAVAIL  corrupted data
            c0t0d0    ONLINE
            c1t0d0    ONLINE
            c5t1d0    ONLINE
            c7t0d0    ONLINE   << duplicate
            c7t0d0    ONLINE   << duplicate
            c4t1d0    ONLINE   << it was c8t1d0
            c0t1d0    ONLINE
            c4t2d0    ONLINE   << it was c8t0d0
            c5t2d0    ONLINE
            c6t1d0    ONLINE
            c7t1d0    ONLINE
          raidz2      ONLINE
          .....
For comparison, the zpool status output from the last successful import
shows:
r...@sanback1:/# zpool status
  pool: Backup1
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        Backup1       ONLINE       0     0     0
          raidz2      ONLINE       0     0     0
            c0t0d0    ONLINE       0     0     0
            c1t0d0    ONLINE       0     0     0
            c6t1d0    ONLINE       0     0     0
            c7t0d0    ONLINE       0     0     0
            c8t0d0    ONLINE       0     0     0   <<
            c5t1d0    ONLINE       0     0     0
            c0t1d0    ONLINE       0     0     0
            c5t2d0    ONLINE       0     0     0
            c6t2d0    ONLINE       0     0     0
            c7t1d0    ONLINE       0     0     0
            c8t1d0    ONLINE       0     0     0   <<
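
And rather than removing the /dev/dsk and /dev/rdsk entries by hand, an
alternative I am considering: build a scratch directory of links to only
the devices that should make up the pool and point the import at it,
since zpool import -d searches the given directory instead of /dev/dsk.
A sketch (the device list here is illustrative, not the complete set):

# mkdir /tmp/backup1.d
# cd /tmp/backup1.d
# for d in c0t0d0s0 c1t0d0s0 c5t1d0s0 ...
> do
>     ln -s /dev/dsk/$d $d
> done
# zpool import -d /tmp/backup1.d Backup1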
--
Amer Ather
Senior Staff Engineer
Solaris Kernel
Global Services Delivery
amer.at...@sun.com
408-276-9780 (x19780)
" If you fail to prepare, prepare to fail"