Hi Cindy,
> Good job for using a mirrored configuration. :-)
Thanks!
> Your various approaches would work.
> My only comment about #2 is that it might take some time for the spare
> to kick in for the faulted disk.
> Both 1 and 2 would take a bit more time than just replacing the faulted
> disk with a spare disk, like this:
> # zpool replace tank c1t6d0 c1t15d0
You mean I can execute
zpool replace tank c1t6d0 c1t15d0
without having made c1t15d0 a spare disk first with
zpool add tank spare c1t15d0
? And after doing that, c1t6d0 is offline and ready to be physically
replaced?
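If so, I assume the minimal sequence on my side would look roughly like
this (my device names, just my guess, please correct me if I am off):

# zpool replace tank c1t6d0 c1t15d0   (resilver the data onto c1t15d0)
# zpool status tank                   (wait until the resilver has completed)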
> Then you could physically replace c1t6d0 and add it back to the pool as
> a spare, like this:
> # zpool add tank spare c1t6d0
> For a production system, the steps above might be the most efficient.
> Get the faulted disk replaced with a known good disk so the pool is
> no longer degraded, then physically replace the bad disk when you have
> the time and add it back to the pool as a spare.
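Just so I have the whole picture, I am guessing the full workflow for my
pool would then be something like (my own summary, not tested yet):

# zpool replace tank c1t6d0 c1t15d0   (pool leaves DEGRADED once the resilver is done)
  ... later, physically swap out the bad c1t6d0 disk ...
# zpool add tank spare c1t6d0         (the new disk in that slot becomes the hot spare)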
> It is also good practice to run a zpool scrub to ensure the
> replacement is operational
That would be
zpool scrub tank
in my case!?
> and use zpool clear to clear the previous
> errors on the pool.
I assume the complete command for my case is
zpool clear tank
Why do we have to do that? Couldn't ZFS realize that everything is fine
again after executing "zpool replace tank c1t6d0 c1t15d0"?
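Putting your two suggestions together, I guess the tail end of the
procedure would be roughly (again just my guess):

# zpool scrub tank        (read and verify all data, exercising the replacement disk)
# zpool status -v tank    (check the scrub result once it has finished)
# zpool clear tank        (reset the error counters left over from the faulted disk)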
> If the system is used heavily, then you might want to run the zpool
> scrub when system use is reduced.
That would be now! :-)
> If you were going to physically replace c1t6d0 while it was still
> attached to the pool, then you might offline it first.
Ok, this sounds like approach 3)
zpool offline tank c1t6d0
<physically replace c1t6d0 with a new one>
zpool online tank c1t6d0
Would that be it?
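Or would I additionally have to tell ZFS about the new disk in that
slot, something like

# zpool replace tank c1t6d0

after bringing it online? I am not sure whether the online alone is
enough for a brand-new disk.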
Thanks a lot!
Regards,
Andreas