Eric Schrock wrote:
> There's really no way to recover from this, since we don't have device
> removal.  However, I'm surprised that no warning was given.  There are at
> least two things that should have happened:
>
> 1. zpool(1M) should have warned you that the redundancy level you were
>    attempting did not match that of your existing pool.  This doesn't
>    apply if you already have a mixed level of redundancy.
>
> 2. zpool(1M) should have warned you that the device was in use as an
>    active spare and not let you continue.
>
> What bits were you running?
>   

snv_78, however the pool was created on snv_43 and hasn't yet been 
upgraded.  Programmatically, though, I can't see why the pool version 
would make a difference in the way 'zpool' handles the check.
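For reference, the check in question is the one where a plain 'zpool add' 
of a vdev whose redundancy doesn't match the pool is supposed to be 
refused unless forced.  A sketch of the expected interaction (pool and 
device names here are hypothetical):

```
# 'tank' is a raidz pool; c1t5d0 is a bare disk being added as a
# top-level vdev.  Without -f, zpool should refuse the mismatch:
$ zpool add tank c1t5d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk
```

In my case the add went through without that refusal, which is what I 
can't explain.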

The big question is: if I'm stuck like this permanently, what's the 
potential risk?

Could I potentially just fail that drive and leave it in a failed state?

benr.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
