I have a 24-disk SATA array running OpenSolaris Nevada, b78. We had a drive fail, and I've physically replaced the device, but I can't get the system to recognize that the drive was replaced.
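In case the hot-swap procedure matters, this is the rough sequence I used for the physical swap (the attachment point sata1/4 below is illustrative; the actual ap_id comes from cfgadm -al and depends on the controller):

    # list attachment points and find the one for the failed disk
    cfgadm -al
    # unconfigure the failed drive before pulling it (ap_id illustrative)
    cfgadm -c unconfigure sata1/4
    # ... physically swap the drive ...
    # configure the new drive so Solaris sees it
    cfgadm -c configure sata1/4
    # refresh the /dev device links
    devfsadm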
zpool status -v shows the failed drive:

[EMAIL PROTECTED] ~]$ zpool status -v
  pool: LogData
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
 scrub: resilver completed with 0 errors on Wed Feb 27 11:51:45 2008
config:

        NAME         STATE     READ WRITE CKSUM
        LogData      DEGRADED     0     0     0
          raidz2     DEGRADED     0     0     0
            c0t12d0  ONLINE       0     0     0
            c0t5d0   ONLINE       0     0     0
            c0t0d0   ONLINE       0     0     0
            c0t4d0   ONLINE       0     0     0
            c0t8d0   ONLINE       0     0     0
            c0t16d0  ONLINE       0     0     0
            c0t20d0  ONLINE       0     0     0
            c0t1d0   ONLINE       0     0     0
            c0t9d0   ONLINE       0     0     0
            c0t13d0  ONLINE       0     0     0
            c0t17d0  ONLINE       0     0     0
            c0t20d0  FAULTED      0     0     0  too many errors
            c0t2d0   ONLINE       0     0     0
            c0t6d0   ONLINE       0     0     0
            c0t10d0  ONLINE       0     0     0
            c0t14d0  ONLINE       0     0     0
            c0t18d0  ONLINE       0     0     0
            c0t22d0  ONLINE       0     0     0
            c0t3d0   ONLINE       0     0     0
            c0t7d0   ONLINE       0     0     0
            c0t11d0  ONLINE       0     0     0
            c0t15d0  ONLINE       0     0     0
            c0t19d0  ONLINE       0     0     0
            c0t23d0  ONLINE       0     0     0

errors: No known data errors

I tried doing a zpool clear, with no luck:

[EMAIL PROTECTED] ~]# zpool clear LogData c0t20d0
[EMAIL PROTECTED] ~]# zpool status -v
  pool: LogData
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
 scrub: resilver completed with 0 errors on Wed Feb 27 11:51:45 2008
config:

        NAME         STATE     READ WRITE CKSUM
        LogData      DEGRADED     0     0     0
          raidz2     DEGRADED     0     0     0
            c0t12d0  ONLINE       0     0     0
            c0t5d0   ONLINE       0     0     0
            c0t0d0   ONLINE       0     0     0
            c0t4d0   ONLINE       0     0     0
            c0t8d0   ONLINE       0     0     0
            c0t16d0  ONLINE       0     0     0
            c0t20d0  ONLINE       0     0     0
            c0t1d0   ONLINE       0     0     0
            c0t9d0   ONLINE       0     0     0
            c0t13d0  ONLINE       0     0     0
            c0t17d0  ONLINE       0     0     0
            c0t20d0  FAULTED      0     0     0  too many errors
            c0t2d0   ONLINE       0     0     0
            c0t6d0   ONLINE       0     0     0
            c0t10d0  ONLINE       0     0     0
            c0t14d0  ONLINE       0     0     0
            c0t18d0  ONLINE       0     0     0
            c0t22d0  ONLINE       0     0     0
            c0t3d0   ONLINE       0     0     0
            c0t7d0   ONLINE       0     0     0
            c0t11d0  ONLINE       0     0     0
            c0t15d0  ONLINE       0     0     0
            c0t19d0  ONLINE       0     0     0
            c0t23d0  ONLINE       0     0     0

errors: No known data errors

And I've tried zpool replace:

[EMAIL PROTECTED] ~]# zpool replace -f LogData c0t20d0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c0t20d0s0 is part of active ZFS pool LogData. Please see zpool(1M).

So... what am I missing here, folks? (One oddity I notice: the config lists c0t20d0 twice, once ONLINE and once FAULTED, and there is no c0t21d0, which may be related.) Any help would be appreciated.

-Mike
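The only other thing I can think of is to clean up the stale device links and offline the disk before retrying the replace, along these lines (this is just a guess on my part, not something I've run against this pool yet):

    # remove stale /dev links left over from the old drive
    devfsadm -C
    # explicitly offline the faulted disk, then let ZFS replace it in place
    zpool offline LogData c0t20d0
    zpool replace LogData c0t20d0

Does that sound right, or is there a step I've skipped?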