On 02/15/2012 02:49 PM, Olaf Seibert wrote:
> This is the current status:
>
>   $ zpool status
>     pool: tank
>    state: FAULTED
>   status: One or more devices could not be opened. There are insufficient
>           replicas for the pool to continue functioning.
>   action: Attach the missing device and online it using 'zpool online'.
>      see: http://www.sun.com/msg/ZFS-8000-3C
>     scan: scrub repaired 0 in 49h3m with 2 errors on Fri Jan 20 15:10:35 2012
>   config:
>
>           NAME                     STATE     READ WRITE CKSUM
>           tank                     FAULTED      0     0     2
>             raidz2-0               DEGRADED     0     0     8
>               da0                  ONLINE       0     0     0
>               da1                  ONLINE       0     0     0
>               da2                  ONLINE       0     0     0
>               da3                  ONLINE       0     0     0
>               3758301462980058947  UNAVAIL      0     0     0  was /dev/da4
>               da5                  ONLINE       0     0     0
> The strange thing is that the pool is FAULTED while its only vdev is
> merely DEGRADED.
>
> da4 failed recently and was replaced with a new disk, but no resilvering
> is taking place.
The correct sequence to replace a failed drive in a ZFS pool is:

  zpool offline tank da4
  (shut down and swap in the new drive)
  zpool replace tank da4
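
When 'zpool replace' is given only one device name, it assumes the new
disk sits at the same path as the old one. Once the replace is accepted,
the resilver starts automatically and you can follow its progress with:

  zpool status tank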
You can see a history of modifications you've made to your pool with:

  zpool history
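
For example, to limit the output to this pool:

  zpool history tank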
Most likely that sequence wasn't followed exactly, so ZFS is still looking
for the old disk by its GUID (the long number shown in place of da4) and
therefore considers the device unavailable.
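
Since the old disk is already physically gone, you can point 'zpool replace'
at that GUID to tell ZFS which device takes its place (assuming the new
disk really did come up as /dev/da4; adjust the name if it didn't):

  zpool replace tank 3758301462980058947 da4

If ZFS complains that da4 appears to be in use, the -f flag can force the
replace, but double-check that you picked the right device before using it.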
Hope that helps,
Tiemen
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss