Have you tried to "blank" out c0t3d0s2 using dd and zeros? Btw, "zpool attach -f zpol01 ..." won't work ;) (zpol01 = zpool01?)
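Something like the sketch below might do it -- rough and from memory, and assuming c0t3d0s2 really is the whole-disk slice of the disk you just pulled out of the hardware mirror (the device names are lifted from your output, so double-check them before running anything; dd here is destructive). ZFS keeps two labels at the front of the device and two in the last ~512 KB, so both ends need wiping:

  # wipe the two leading ZFS labels (zeroing the first ~1 MB is plenty)
  dd if=/dev/zero of=/dev/rdsk/c0t3d0s2 bs=1024k count=10

  # wipe the two trailing labels; SIZE is a placeholder for the slice size in
  # 512-byte blocks -- you could take it from prtvtoc on the good half of the
  # old mirror (c0t2d0s2), since the two disks should be identical
  dd if=/dev/zero of=/dev/rdsk/c0t3d0s2 bs=512 oseek=`expr SIZE - 2048` count=2048

Once the stale labels are gone, the attach (with the pool name spelled zpool01) should hopefully stop complaining.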
On 8/21/07, Alderman, Sean <[EMAIL PROTECTED]> wrote:
>
> I'm looking for ideas to resolve the problem below…
>
> # zpool attach -f zpol01 c0t2d0 c0t3d0
> invalid vdev specification
> the following errors must be manually repaired:
> /dev/dsk/c0t3d0s0 is part of an active ZFS pool on zpool01. Please see
> zpool(1M)
>
> # zpool status
>   pool: zpool01
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         zpool01     ONLINE       0     0     0
>           c0t2d0    ONLINE       0     0     0
>
> errors: No known data errors
>
> What's happened here is that we've broken a hardware mirror with the intent
> to create a ZFS mirror on the two disks. c0t2d0 seems fine, before and after
> breaking the mirror, but no matter what I do, c0t3d0 seems to be messed up.
> I can't prtvtoc c0t3d0s2 (it returns an I/O error), format/fdisk say
> c0t3d0s0 is part of an active ZFS pool, etc. I've done this before more
> than once, applying config changes to bring the system into what we now
> know is a more stable config.
>
> Any ideas?
> --
> Sean

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss