Hmmm, this looks like a bug to me.  The single argument form of 'zpool
replace' should do the trick.  What has happened is that there is enough
information on the disk to identify it as belonging to 'tank', yet not
enough good data for it to be opened.  Incidentally, could you send me
the contents of /var/fm/fmd/errlog and /var/fm/fmd/fltlog, as well as
/var/adm/messages?  I'm always trying to collect details of this failure
mode.  The 'zpool replace' code should probably allow you to replace a
disk with itself provided the original isn't still online.

As a workaround, you should be able to dd(1) over the first and last
megabyte of the disk.  This will prevent zpool(1M) from recognizing it
as the same disk in the pool, and should allow you to replace it.
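Something along these lines should do it (a sketch only -- the device
name c1t5d0 is taken from your pool output, and the disk's size in
megabytes is something you'd need to look up, e.g. with prtvtoc; here I
use a scratch file as a stand-in for the raw device so the commands can
be tried safely):

```shell
# Stand-in for the raw device; on the real system this would be
# something like /dev/rdsk/c1t5d0s0 (no stand-in creation needed there).
DISK=disk.img
SIZE_MB=16    # assumed capacity in MB; query the real disk's geometry
dd if=/dev/zero of=$DISK bs=1024k count=$SIZE_MB 2>/dev/null

# Zero the first megabyte of the disk...
dd if=/dev/zero of=$DISK bs=1024k count=1 conv=notrunc 2>/dev/null

# ...and the last megabyte (seek past all but the final 1MB).
# conv=notrunc matters only for the file stand-in, not a raw device.
dd if=/dev/zero of=$DISK bs=1024k seek=$((SIZE_MB - 1)) count=1 \
    conv=notrunc 2>/dev/null
```

With the ZFS labels at both ends of the disk wiped, 'zpool replace tank
c1t5d0' should no longer complain that the disk is part of the active
pool.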

- Eric

On Fri, May 05, 2006 at 03:28:34PM -0700, Richard Broberg wrote:
> I have a raidz pool which looks like this after a disk failure:
> 
> # zpool status
>   pool: tank
>  state: DEGRADED
> status: One or more devices could not be used because the label is missing or
>         invalid.  Sufficient replicas exist for the pool to continue
>         functioning in a degraded state.
> action: Replace the device using 'zpool replace'.
>    see: http://www.sun.com/msg/ZFS-8000-4J
>  scrub: resilver completed with 0 errors on Fri May  5 18:14:29 2006
> config:
> 
>         NAME         STATE     READ WRITE CKSUM
>         tank         DEGRADED     0     0     0
>           raidz      DEGRADED     0     0     0
>             c1t0d0   ONLINE       0     0     0
>             c1t1d0   ONLINE       0     0     0
>             c1t2d0   ONLINE       0     0     0
>             c1t3d0   ONLINE       0     0     0
>             c1t4d0   ONLINE       0     0     0
>             c1t5d0   UNAVAIL      0     0     0  corrupted data
>             c2t8d0   ONLINE       0     0     0
>             c2t9d0   ONLINE       0     0     0
>             c2t10d0  ONLINE       0     0     0
>             c2t11d0  ONLINE       0     0     0
>             c2t12d0  ONLINE       0     0     0
>             c2t13d0  ONLINE       0     0     0
> 
> errors: No known data errors
> # 
> 
> -----
> 
> I have physically replaced the failed disk with a new one, but I'm having
> problems using 'zpool replace':
> 
> # zpool replace tank c1t5d0
> invalid vdev specification
> use '-f' to override the following errors:
> /dev/dsk/c1t5d0s0 is part of active ZFS pool tank. Please see zpool(1M).
> /dev/dsk/c1t5d0s2 is part of active ZFS pool tank. Please see zpool(1M).
> # 
> 
> so I follow the advice, and use '-f':
> 
> # zpool replace -f tank c1t5d0
> invalid vdev specification
> the following errors must be manually repaired:
> /dev/dsk/c1t5d0s0 is part of active ZFS pool tank. Please see zpool(1M).
> /dev/dsk/c1t5d0s2 is part of active ZFS pool tank. Please see zpool(1M).
> #
> 
> ---
> 
> What now?
>  
>  
> This message posted from opensolaris.org
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock