Hello Ben, Wednesday, January 16, 2008, 5:29:57 AM, you wrote:
BR> Eric Schrock wrote:
>> There's really no way to recover from this, since we don't have device
>> removal. However, I'm surprised that no warning was given. There are at
>> least two things that should have happened:
>>
>> 1. zpool(1M) should have warned you that the redundancy level you were
>> attempting did not match that of your existing pool. This doesn't
>> apply if you already have a mixed level of redundancy.
>>
>> 2. zpool(1M) should have warned you that the device was in use as an
>> active spare and not let you continue.
>>
>> What bits were you running?

BR> snv_78, however the pool was created on snv_43 and hasn't yet been
BR> upgraded. Though, programmatically, I can't see why there would be a
BR> difference in the way 'zpool' would handle the check.

BR> The big question is, if I'm stuck like this permanently, what's the
BR> potential risk?

BR> Could I potentially just fail that drive and leave it in a failed state?

If any data has been written since you did it, there's a "chance" it was
striped across your raid-z vdevs and this drive - so if you fail that drive
you will lose access to some data. Metadata should be fine, but after a
reboot or export you won't be able to import the pool.

If you can't re-create the pool (plus backup & restore your data), I would
recommend waiting for device removal in ZFS, and in the meantime I would
attach another drive to it so you've got a mirrored configuration; remove
both drives once device removal is there. Since you're already running
Nevada you could probably adopt the new bits quickly. The only question is
when device removal is going to be integrated - the last time someone
mentioned it here, it was supposed to be in by the end of last year...

--
Best regards,
 Robert Milkowski                           mailto:[EMAIL PROTECTED]
                                            http://milek.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
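[For readers hitting the same problem, a minimal sketch of the stopgap
Robert describes - mirroring the stray device until device removal is
available. The pool name 'tank' and the device names are hypothetical,
not taken from Ben's actual pool.]

    # (Hypothetical names.)  The accident was presumably something like
    # "zpool add tank c1t5d0", which turned the hot spare into a lone
    # single-disk top-level vdev; zpool(1M) is supposed to refuse that
    # unless -f is given.
    #
    # Stopgap: attach a second disk to the stray device so it becomes a
    # two-way mirror; the existing raid-z vdevs are untouched.
    zpool status tank                  # shows the raid-z vdevs plus the lone disk
    zpool attach tank c1t5d0 c1t6d0    # mirror the stray vdev with a new disk
    zpool status tank                  # watch the resilver complete

Once device removal is integrated, the whole mirror vdev could then be
evacuated and removed from the pool.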