On 04/02/2010 13:45, Karl Pielorz wrote:
> --On 04 February 2010 11:31 +0000 Karl Pielorz
> <kpielorz_...@tdx.co.uk> wrote:
>> What would happen when I tried to 'online' ad2 again?
> A reply to my own post... I tried this out: when you bring 'ad2' online
> again, ZFS immediately logs a 'vdev corrupt' failure and marks 'ad2'
> (which at this point is a byte-for-byte copy of 'ad1' as it was while
> being written to in the background) as 'FAULTED' with 'corrupted data'.
> You can't "replace" it with itself at that point, but detaching 'ad2'
> and then attaching it back to 'ad1' triggers a resilver, and recovery.
> So to answer my own question - from my tests it looks like you can do
> this, and "get away with it". It's probably not ideal, but it does work.
It is actually fine - ZFS is designed to detect and repair exactly the
kind of corruption you induced.
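For reference, the experiment described above would look roughly like this at the command line; 'tank' is an assumed pool name (not stated in the thread), with ad1/ad2 as the mirror halves:

```shell
# Bring the previously-offlined disk back; ZFS notices the stale
# byte-for-byte copy and faults it rather than trusting it.
zpool online tank ad2

# The pool status should now show ad2 as FAULTED with 'corrupted data'.
zpool status -v tank
```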
> A safer bet would be to detach the drive from the pool, and then
> re-attach it (at which point ZFS treats it as a new drive and ignores
> the stale 'mirror image' data that's on it).
Yes, it should, and if you want to force a resynchronization that's
probably the best way to do it.
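A sketch of that detach/re-attach sequence, again assuming a pool named 'tank' mirroring ad1 and ad2:

```shell
# Drop ad2 from the mirror, then attach it back to ad1. ZFS treats the
# re-attached disk as new and resilvers it from scratch.
zpool detach tank ad2
zpool attach tank ad1 ad2

# Watch the resilver progress until it completes.
zpool status tank
```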
The other thing is that if you suspect some of your data is corrupted
on one half of the mirror, you might try running 'zpool scrub': it will
fix only the corrupted blocks instead of resynchronizing the entire
mirror, which might be a faster and safer approach.
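The scrub approach is a one-liner ('tank' is again an assumed pool name):

```shell
# Verify every block's checksum and repair only those that fail,
# using the good copy on the other half of the mirror.
zpool scrub tank

# Shows scrub progress and a count of any repaired errors.
zpool status tank
```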
--
Robert Milkowski
http://milek.blogspot.com
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss