Thanks Brandon,

On 04/25/2011 05:47 PM, Brandon High wrote:
> On Mon, Apr 25, 2011 at 4:56 PM, Lamp Zy <lam...@gmail.com> wrote:
>> I'd expect the spare drives to auto-replace the failed one but this is not
>> happening.
>>
>> What am I missing?
>
> Is the autoreplace property set to 'on'?
> # zpool get autoreplace fwgpool0
> # zpool set autoreplace=on fwgpool0

Yes, autoreplace is on. I should have mentioned it in my original post:

# zpool get autoreplace fwgpool0
NAME      PROPERTY     VALUE     SOURCE
fwgpool0  autoreplace  on        local
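
(For reference, these are just the standard status commands I've been
using to watch whether a spare ever kicks in on its own; output omitted
here:)

# zpool status -x
# zpool status -v fwgpool0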


>> I really would like to get the pool back in a healthy state using the spare
>> drives before trying to identify which one is the failed drive in the
>> storage array and trying to replace it. How do I do this?
>
> Turning on autoreplace might start the replace. If not, the following
> will replace the failed drive with the first spare. (I'd suggest
> verifying the device names before running it.)
> # zpool replace fwgpool0 c4t5000C5001128FE4Dd0 c4t5000C50014D70072d0

I thought about doing that. My understanding, though, is that this
command is meant for replacing a failed drive with a brand-new one,
i.e. a drive that is not already known to the raidz configuration.

Should I somehow un-configure one of the spare drives first, so that it
is just a loose drive rather than a raidz spare, before running the
replace command (and if so, how do I do that)? Or is it safe to simply
run the replace command and let zfs take care of the details, i.e.
notice that one of its spares has been manually re-purposed to replace
a failed drive?
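
In case it makes the question clearer, this is roughly the sequence I
have in mind, using the same device names as in your example (the
"remove" step is only my guess at how one would un-configure a spare
first, and it may well be unnecessary):

# zpool status fwgpool0
    (confirm which device is faulted and which spares are unused)
# zpool remove fwgpool0 c4t5000C50014D70072d0
    (release the drive from spare duty first, if that's even needed?)
# zpool replace fwgpool0 c4t5000C5001128FE4Dd0 c4t5000C50014D70072d0
# zpool status fwgpool0
    (watch the resilver)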

Thank you
Peter
