On Mon, Jul 19, 2010 at 10:56 AM, Garrett Moore <garrettmo...@gmail.com> wrote:
> So you think it's because when I switch from the old disk to the new disk,
> ZFS doesn't realize the disk has changed, and thinks the data is just
> corrupt now? Even if that happens, shouldn't the pool still be available,
> since it's RAIDZ1 and only one disk has gone away?
>
> I don't have / on ZFS; I'm only using it as a 'data' partition, so I
> should be able to try your suggestion. My only concern: is there any risk
> of trashing my pool if I try your instructions? Everything I've done so
> far, even when told "insufficient replicas / corrupt data", has not cost
> me any data as long as I switch back to the original (dying) drive. If I
> mix in export/import statements which might 'touch' the pool, is there a
> chance it will choke and trash my data?

I'm not sure what's going on in your case, but I have cron'd a zpool scrub
of my pool on a weekly basis to avoid this sort of thing. I run / on a ZFS
mirror, and one day I could no longer boot and saw the dreaded 'insufficient
replicas' message. I eventually got the pool back when the disk briefly
started working again, did a snapshot/send offsite, redid the system with a
new install and a new disk, and then restored the data.

The export/import shouldn't hurt; I used it when booting off an MFSBSD CD
and imported the zpool to send from there. With that many disks, you might
also want to consider RAIDZ2.

--
Adam Vande More
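P.S. In case it helps, the weekly scrub is nothing more than a crontab
entry; a rough sketch, assuming a system crontab and a pool named 'tank'
(substitute your own pool name and schedule):

    # /etc/crontab -- scrub the pool every Sunday at 03:00
    0   3   *   *   0   root   /sbin/zpool scrub tank

and the export/import dance is along these lines:

    zpool export tank     # cleanly detach the pool
    # ...swap or reattach the disk here...
    zpool import tank     # re-import it; plain 'zpool import' lists importable pools
    zpool status tank     # confirm the vdevs came back ONLINE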