And to add more fuel to the fire, an fmdump -eV shows the following:

Jan 05 2007 11:30:38.030057310 ereport.fs.zfs.vdev.open_failed
nvlist version: 0
        class = ereport.fs.zfs.vdev.open_failed
        ena = 0x88c01b571200801
        detector = (embedded nvlist)
        nvlist version: 0
                version = 0x0
                scheme = zfs
                pool = 0x66dd422b2d14d75b
                vdev = 0x1750a5751459ad65
        (end detector)

        pool = pool
        pool_guid = 0x66dd422b2d14d75b
        pool_context = 0
        vdev_guid = 0x1750a5751459ad65
        vdev_type = disk
        vdev_path = /dev/dsk/c5t6d0s0
        vdev_devid = id1,[EMAIL PROTECTED]/a
        parent_guid = 0x33b0223eb6c89eac
        parent_type = raidz
        prev_state = 0x1
        __ttl = 0x1
        __tod = 0x459e8b3e 0x1caa35e

Based on this, the drive already has a pool signature on it, and I did test the 
replacement disk in another machine... crap...
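
In hindsight, a quick way to confirm a stale label like this would have been to 
dump the vdev labels directly with zdb.  This is just a sketch, not something I 
actually ran at the time, and it assumes the device path from the ereport is 
the right one:

# dump the four vdev labels; a leftover pool name/guid here means the disk
# still carries an old ZFS signature
zdb -l /dev/dsk/c5t6d0s0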

Based on the drive's block count (0x3a3037ff) and block size (0x200), I 
calculated that the end of the drive, less a couple of megabytes, is at offset 
959854591.  Then I ran the following command:

vault:/#dd if=/dev/zero of=/dev/dsk/c5t6d0 bs=512 count=32000 oseek=959854591
32000+0 records in
32000+0 records out

That wiped the last couple of megabytes on the disk (where the two trailing 
vdev labels are stored).  I also ran the same command without the oseek to 
clear the first couple of megabytes, where the other two labels live.
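
For reference, the arithmetic is just "total blocks minus blocks to wipe".  A 
rough sketch of the whole front-and-back wipe, assuming ksh/bash arithmetic, a 
512-byte block size, and a total block count taken from format or prtvtoc (not 
exactly the commands above, just the general shape):

# total 512-byte blocks on the device (0x3a3037ff from above)
TOTAL_BLOCKS=976238591
# ~16 MB worth of blocks, more than enough to cover the two labels at each end
WIPE_BLOCKS=32000
# clear the front labels (L0/L1 sit in the first 512 KB)
dd if=/dev/zero of=/dev/dsk/c5t6d0 bs=512 count=$WIPE_BLOCKS
# clear the trailing labels (L2/L3 sit in the last 512 KB)
dd if=/dev/zero of=/dev/dsk/c5t6d0 bs=512 count=$WIPE_BLOCKS \
    oseek=$(( TOTAL_BLOCKS - WIPE_BLOCKS ))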

Then I did a "zpool status" to re-list the pool.  It still showed the drive as 
unavailable.  I ran "zpool replace pool c5t6d0" and this time it said "cannot 
replace c5t6d0 with c5t6d0: c5t6d0 is busy".
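
The obvious next checks, sketched below but not yet confirmed to get past the 
busy error, would be to re-dump the labels to make sure the old signature is 
really gone and then try forcing the replace:

# verify the old pool signature is actually gone from all four labels
zdb -l /dev/dsk/c5t6d0s0
# then retry, forcing the replace
zpool replace -f pool c5t6d0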

Grumble.
 
 