Okay... I "fixed" it by powering the server off, removing the new drive, letting
the pool come up degraded, and then running zpool replace.
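
For anyone who hits the same thing, the rough sequence was (pool and device
names below are placeholders, not my actual ones):

   # boot with the new drive detached; the pool should import degraded
   zpool status tank

   # reattach the new drive, then resilver onto it
   zpool replace tank c0t3d0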

I'm assuming what happened was that ZFS saw the disk was online, tried to use
it, noticed the checksums didn't match (of course), and marked the pool as
corrupted. The question is: why didn't ZFS check the labels on the drive, see
that it wasn't part of the pool, and kick it out itself?
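
(Incidentally, you can inspect the on-disk labels yourself with zdb; the
device path here is just an example:

   zdb -l /dev/dsk/c0t3d0s0

It dumps the four ZFS labels, including the pool name and GUID, so a drive
that was never part of the pool won't show a matching label.)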