Hi--

I don't know why the spare isn't kicking in automatically; it should.

A documented workaround is to replace the failed disk outright with one
of the spares, like this:

# zpool replace fwgpool0 c4t5000C5001128FE4Dd0 c4t5000C50014D70072d0
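
Once the replace is issued, the spare should begin resilvering. Something
like the following (using your pool name) will show the resilver progress
and overall pool health:

# zpool status fwgpool0
# zpool status -x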

The autoreplace pool property has nothing to do with automatic spare
replacement. It applies to a new disk inserted into the same physical
location as the failed one: when this property is enabled, that
replacement disk is automatically labeled and brought into the pool,
so there is no need to run the zpool replace command manually.
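
For reference, here is a sketch of checking and enabling the property on
your pool; again, this only affects a new disk inserted into the same
physical slot as the failed one, not the hot spares:

# zpool get autoreplace fwgpool0
# zpool set autoreplace=on fwgpool0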

Then, you can find the original failed c4t5000C5001128FE4Dd0 disk
and physically replace it when you have time. You could then add this
disk back into the pool as the new spare, like this:

# zpool add fwgpool0 spare c4t5000C5001128FE4Dd0
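
One detail worth noting: the spare (c4t5000C50014D70072d0) will likely
show as INUSE until the failed disk is detached from the pool. Detaching
it should promote the spare to a permanent member of the raidz2 vdev and
free up the old device name before you add the new spare, for example:

# zpool detach fwgpool0 c4t5000C5001128FE4Dd0

Afterwards, zpool status should list the newly added disk under spares
as AVAIL.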


Thanks,

Cindy
On 04/25/11 17:56, Lamp Zy wrote:
Hi,

One of my drives failed in Raidz2 with two hot spares:

# zpool status
  pool: fwgpool0
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
scrub: resilver completed after 0h0m with 0 errors on Mon Apr 25 14:45:44 2011
config:

        NAME                       STATE     READ WRITE CKSUM
        fwgpool0                   DEGRADED     0     0     0
          raidz2                   DEGRADED     0     0     0
            c4t5000C500108B406Ad0  ONLINE       0     0     0
            c4t5000C50010F436E2d0  ONLINE       0     0     0
            c4t5000C50011215B6Ed0  ONLINE       0     0     0
            c4t5000C50011234715d0  ONLINE       0     0     0
            c4t5000C50011252B4Ad0  ONLINE       0     0     0
            c4t5000C500112749EDd0  ONLINE       0     0     0
            c4t5000C5001128FE4Dd0  UNAVAIL      0     0     0  cannot open
            c4t5000C500112C4959d0  ONLINE       0     0     0
            c4t5000C50011318199d0  ONLINE       0     0     0
            c4t5000C500113C0E9Dd0  ONLINE       0     0     0
            c4t5000C500113D0229d0  ONLINE       0     0     0
            c4t5000C500113E97B8d0  ONLINE       0     0     0
            c4t5000C50014D065A9d0  ONLINE       0     0     0
            c4t5000C50014D0B3B9d0  ONLINE       0     0     0
            c4t5000C50014D55DEFd0  ONLINE       0     0     0
            c4t5000C50014D642B7d0  ONLINE       0     0     0
            c4t5000C50014D64521d0  ONLINE       0     0     0
            c4t5000C50014D69C14d0  ONLINE       0     0     0
            c4t5000C50014D6B2CFd0  ONLINE       0     0     0
            c4t5000C50014D6C6D7d0  ONLINE       0     0     0
            c4t5000C50014D6D486d0  ONLINE       0     0     0
            c4t5000C50014D6D77Fd0  ONLINE       0     0     0
        spares
          c4t5000C50014D70072d0    AVAIL
          c4t5000C50014D7058Dd0    AVAIL

errors: No known data errors


I'd expect the spare drives to auto-replace the failed one, but this is not happening.

What am I missing?

I'd really like to get the pool back to a healthy state using the spare drives before I try to identify which drive in the storage array failed and physically replace it. How do I do this?

Thanks for any hints.

--
Peter
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss