Sort of... I couldn't even do a prtvtoc /dev/rdsk/c0t3d0s2. I found that if I exported the pool I could view the partition table with format or prtvtoc, etc., but no luck doing a newfs on the disk, as it complained about a ZFS filesystem being on it, just like the attach command did.
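
Roughly, the sequence looked like this (again hand-typed from memory, so take the exact slices and error wording as approximate):

  # prtvtoc /dev/rdsk/c0t3d0s2     <- I/O error while the pool was imported
  # zpool export zpool01
  # prtvtoc /dev/rdsk/c0t3d0s2     <- partition table now readable
  # newfs /dev/rdsk/c0t3d0s0       <- still refused, complained about a ZFS filesystem on the disk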
I ended up resolving the problem by removing the disk from the server and putting it into another one (all my Solaris boxes are Sun Fire X4200's). From there I was able to create a UFS filesystem, repartition the disk, mount it, write to it, etc. Then I returned the disk to the original server and was able to attach it to the pool without issue.

Still, the error is curious. Why would a -f on the attach not override the error... isn't that what -f is for? In addition, the message about manually fixing the problem is strange, though I guess what I did to resolve it was quite manual.

P.S. Yeah, I hand-typed the output of those commands due to being in Single User Mode via the ILOM console application, which doesn't allow for anything but a screen print. I didn't figure anyone would enjoy an attachment, so I hand-typed it all.

--
Sean

-----Original Message-----
From: Louwtjie Burger [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 21, 2007 2:01 PM
To: Alderman, Sean
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Problem attaching a disk to a mirror...

Have you tried to "blank" out c0t3d0s2 using dd and zeros?

Btw, "zpool attach -f zpol01 ..." won't work ;) (zpol01 = zpool01?)

On 8/21/07, Alderman, Sean <[EMAIL PROTECTED]> wrote:
>
> I'm looking for ideas to resolve the problem below...
>
> # zpool attach -f zpol01 c0t2d0 c0t3d0
> invalid vdev specification
> the following errors must be manually repaired:
> /dev/dsk/c0t3d0s0 is part of an active ZFS pool on zpool01. Please see
> zpool(1M)
> # zpool status
>   pool: zpool01
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         zpool01     ONLINE       0     0     0
>           c0t2d0    ONLINE       0     0     0
>
> errors: No known data errors
>
> What's happened here is that we've broken a hardware mirror with the
> intent to create a ZFS mirror on the two disks. c0t2d0 seems fine as it
> is, before and after breaking the mirror, but no matter what I do,
> c0t3d0 seems to be messed up. I can't prtvtoc c0t3d0s2 (an I/O error
> returns), format/fdisk say c0t3d0s0 is part of an active ZFS pool, etc.
> I've done this before more than once, applying config changes to bring
> the system into what we now know is a more stable config.
>
> Any ideas?
> --
> Sean

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss