Need help removing a faulted spare. We tried the following without
success; there is no resilver active, as shown in the status output below:
# zpool clear sybdump_pool c7t0d0 <<<< spare device
cannot clear errors for c7t0d0: device is reserved as a hot spare
# zpool remove sybdump_pool c7t0d0
# zpool status -xv sybdump_pool
pool 'sybdump_pool' is healthy
# zpool status sybdump_pool
  pool: sybdump_pool
 state: ONLINE
 scrub: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        sybdump_pool      ONLINE       0     0     0
          raidz1          ONLINE       0     0     0
            c0t2d0        ONLINE       0     0     0
            c1t2d0        ONLINE       0     0     0
            c4t2d0        ONLINE       0     0     0
            c5t2d0        ONLINE       0     0     0
        spares
          c7t0d0          FAULTED   corrupted data   <<<
          c7t4d0          FAULTED   corrupted data   <<<
          c7t0d0          FAULTED   corrupted data   <<<
          c7t4d0          AVAIL
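Note that the same spare device names show up more than once under
"spares", so "zpool remove sybdump_pool c7t0d0" by name may be ambiguous
about which entry it acts on. One avenue that might be worth checking (a
sketch only: <spare-guid> below is a placeholder, and whether this zpool
build accepts a vdev GUID in place of a device name for a spare is an
assumption on my part) is to pull the spare GUIDs out of the cached
config with zdb and try removing the stale entry by GUID:
# zdb -C sybdump_pool                       <<<< cached config lists each spare vdev with its guid
# zpool remove sybdump_pool <spare-guid>    <<<< assumes a guid is accepted where a device name is expected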
Trussing the command shows that the ZFS_IOC_VDEV_REMOVE ioctl was issued
and returned 0. DTracing the ioctl likewise shows success, with no errno
returned from zfs_ioc_vdev_remove() (a minimal one-liner to reproduce the
check follows the stack):
zfs`spa_vdev_remove+0x6b
zfs`zfs_ioc_vdev_remove+0x48
zfs`zfsdev_ioctl+0x14c
genunix`cdev_ioctl+0x1d
specfs`spec_ioctl+0x50
genunix`fop_ioctl+0x25
genunix`ioctl+0xac
genunix`dtrace_systrace_syscall32+0xc7
unix`sys_syscall32+0x101
3 <- spa_vdev_remove
spa_vdev_remove returns errno:0 <<<<
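For anyone who wants to reproduce the errno check above, a probe along
these lines should do it (a minimal sketch only; it just prints the fbt
return probe's return value, arg1):
# dtrace -n 'fbt::spa_vdev_remove:return { printf("spa_vdev_remove returns errno:%d", arg1); }'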
One thing the customer did was run "zpool upgrade -a" while the spares
were faulted. Not sure whether the problem existed before the upgrade or
started after it.