On 05/03/2010 02:52, Jason wrote:
So I tried to do a SAN copy of a couple of zpools/ZFS volumes today, and I
failed.
Shut down the box, zoned it to the new storage, finalized the last data sync
from array x to array y, and turned the box on. And the volumes didn't show
up. I issued a reconfigure reboot (reboot -- -r) and still no dice. I was
in a time crunch (maintenance window closed), so I rezoned the server back to
the old storage, re-issued a reconfigure reboot, and I'm back where I was.
So, I've since poked and prodded, and learned a few things, and now I'm trying
to let ZFS do the data syncing. So I've re-presented the disks to the server
(Solaris 10, on a T1000), and I can see the disks I need. Cool! But when I
try to 'zpool attach' them, I get this error:
r...@prtwxfacts01 # zpool attach -f intranetcf c1t50060E80045A1040d1 c1t50060E8010053B92d1
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c1t50060E8010053B92d1s6 is part of active ZFS pool intranetcf. Please see zpool(1M).
See, the old disk holds a pool named 'intranetcf'. The new disk ALSO (since it
was previously synced at the block level) thinks it's part of the same active pool.
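A quick way to confirm that is to dump the on-disk labels directly (using the
device path from the error above):

# zdb -l /dev/dsk/c1t50060E8010053B92d1s6

That prints the four vdev labels; on the new LUN the 'name' field reads
intranetcf and the pool_guid matches the old disk's, which is exactly why
zpool attach balks.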
So, I'm trying to clean out whatever state is in the new disk I've presented.
Ideally, I'd love a command like:
# zpool obliterate -f c1t50060E8010053B92d1
Do you really know what you're doing? [y/N] y
Ok. It's dead.
#
But I'm willing to go through more hackery if needed.
(If I need to destroy and re-create these LUNs on the storage array, I can do
that too, but I'm hoping for something more host-based.)
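For reference, once the new LUN is clean, the migration I'm after is the usual
attach/resilver/detach dance (a sketch, using the device names above):

# zpool attach -f intranetcf c1t50060E80045A1040d1 c1t50060E8010053B92d1
# zpool status intranetcf        (wait until the resilver completes)
# zpool detach intranetcf c1t50060E80045A1040d1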
--Jason
You need to destroy the ZFS labels. ZFS keeps four copies of the vdev label,
two at the beginning of the device and two at the end, so overwrite both the
beginning and the end of the LUN with zeros using dd. Look in this mailing
list's archives for more details.
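Something along these lines should do it (a sketch, not tested on your box; it
assumes the s6 slice from your error message, 512-byte sectors, and the fact
that each label is 256 KB, so wiping 512 KB at each end covers both copies):

# DEV=/dev/rdsk/c1t50060E8010053B92d1s6
# dd if=/dev/zero of=$DEV bs=512k count=1
# SECTORS=`prtvtoc $DEV | nawk '$1 == "6" { print $5 }'`
# dd if=/dev/zero of=$DEV bs=512 oseek=`expr $SECTORS - 1024` count=1024

The first dd clears the two front labels; the prtvtoc/nawk bit pulls the
slice's sector count out of the VTOC, and the second dd zeroes the last 1024
sectors (512 KB) to kill the two back labels. After that, zdb -l on the device
should come back with no valid labels and zpool attach should stop complaining.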
--
Robert Milkowski
http://milek.blogspot.com