On Sat, May 1, 2010 at 1:48 PM, Victor Latushkin <victor.latush...@sun.com>wrote:
> On Apr 29, 2010, at 2:20 AM, Freddie Cash wrote:
>
>> On Wed, Apr 28, 2010 at 2:48 PM, Victor Latushkin <victor.latush...@sun.com> wrote:
>>
>>> 2. Run 'zdb -ddd storage' and provide the section titled Dirty Time Logs
>>
>> See attached.
>
> So you really do have enough redundancy to be able to handle this scenario,
> so this is a software bug. On a recent OpenSolaris build you should be able
> to detach one of the devices and replace the second one. Version 14
> corresponds to build 103, and spa_vdev_detach() was changed significantly in
> build 105 (along with other related changes), so those changes are probably
> not yet available in FreeBSD.

After much gnashing of teeth, pulling of hair, reading of web pages, and offering of dead chickens, the pool is back to an ONLINE state.

I pulled the 1.5 TB drive from the system, inserted the original 500 GB drive, and ZFS detected it and marked it as online, with the 1.5 TB drive as faulted. At that point, I was able to "zpool detach" the 1.5 TB drive, putting the pool back into its original configuration, and an online state.

After that, I ran "zpool offline" on the 500 GB drive, replaced it with a new 1.5 TB drive, and ran "zpool replace". And it's happily resilvering the drive. In 35h, it should be complete.

Thankfully, the original drive was not faulty and was only being replaced to increase the size. If the original drive had been dead, I'd probably be hooped.

--
Freddie Cash
fjwc...@gmail.com
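For anyone hitting the same situation, the recovery sequence described above can be sketched roughly as follows. This is a sketch, not a verified transcript: the device names (da1, da2) are hypothetical placeholders, and only the pool name "storage" comes from the thread; substitute your actual devices.

```shell
# Reinsert the original 500 GB disk; ZFS should detect it and bring it
# online, marking the stuck 1.5 TB replacement as faulted.

# Detach the faulted 1.5 TB drive to return the pool to its original
# configuration (da2 is a hypothetical device name):
zpool detach storage da2

# Take the original 500 GB drive offline (da1 is hypothetical):
zpool offline storage da1

# ...physically swap in the new 1.5 TB drive...

# Start the replacement; this kicks off a resilver:
zpool replace storage da1

# Monitor resilver progress:
zpool status storage
```

Note that this workaround depends on the original disk still being readable; if it had actually failed, detaching the stuck replacement this way would not have been possible.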
_______________________________________________ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss