On January 30, 2009 4:51:36 PM -0800 Frank Cusack <fcus...@fcusack.com> 
wrote:
> later on when i am done with the new pool (it's temporary space) i will
> destroy it and try to recreate it and see if i get the same error.

yup.  this time i couldn't attach.

# zpool status | grep c.t.d.
          c2t0d0    ONLINE       0     0     0
          c3t0d1    ONLINE       0     0     0
            c1t2d0s0  ONLINE       0     0     0
            c1t3d0s0  ONLINE       0     0     0
# rmformat | grep c.t.d.
     1. Logical Node: /dev/rdsk/c3t0d0p0
     2. Logical Node: /dev/rdsk/c2t0d0p0
     3. Logical Node: /dev/rdsk/c3t0d1p0
     4. Logical Node: /dev/rdsk/c0t0d0p0
# zpool attach data c2t0d0 c3t0d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c2t0d0s0 is part of active ZFS pool data. Please see zpool(1M).
# zpool attach -f data c2t0d0 c3t0d0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c2t0d0s0 is part of active ZFS pool data. Please see zpool(1M).
#

again, i fail to see why an error regarding c2t0d0s* has anything to
do with attaching c3t0d0.  also, this time, i couldn't remove c2t0d0
from the system because that's the disk i am trying to attach to.

i suspected that the c2t0d0s0 error was coming from stale data that
still lived on the new disk: maybe the disk had previously been attached
under that device name, and zpool was reporting the old name in the
error instead of the name the device is currently known by.
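
one way to check that theory would have been to dump the zfs labels on
the new disk with zdb (the slice name here is a guess at where an old
label might live):

# zdb -l /dev/dsk/c3t0d0s0

a stale label would show the old pool name/guid and a path still
pointing at /dev/dsk/c2t0d0s0.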

so i zeroed (with 'dd') the first 8k blocks.  didn't work.
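
the dd was roughly this (block size and count here are guesses):

# dd if=/dev/zero of=/dev/rdsk/c3t0d0p0 bs=512 count=8192

if i understand the on-disk format right, zfs also keeps two copies of
its label at the end of the device, so wiping only the front of the
disk wouldn't necessarily clear an old label anyway.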

i then found a hint online that maybe the two disks didn't have the
same partitioning.  i don't know why that would matter, since zfs
should write a new partition map when i ask it to use the whole disk,
but i tried 'fdisk' anyway (couldn't use 'format' because it doesn't
work with USB drives ... wtf) and could never get an identical
partition map written -- the new disk (c3t0d0) always came out one
cylinder short.  'prtvtoc | fmthard' wouldn't work because it
complained that a partition was not aligned on a cylinder boundary.
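
for reference, the label-copy attempt was the usual prtvtoc-into-fmthard
idiom, along these lines (the s2 slice arguments are a guess at the
exact devices):

# prtvtoc /dev/rdsk/c2t0d0s2 | fmthard -s - /dev/rdsk/c3t0d0s2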

so i then removed c2t0d0 (not sure if it was physically or just an
export) and created a new pool on c3t0d0, at which point the partition
map / label written by zfs was consistent with what was on c2t0d0.
[interesting[1] that fdisk won't write the same 'full disk' partition
that 'zpool' does.]  after destroying the new test pool and
re-importing the pool on c2t0d0, zpool was still complaining about
c2t0d0s0 being active.
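
in zpool terms the sequence was roughly this ('test' is just a
placeholder name, and as above the first step may have been a physical
removal rather than an export):

# zpool export data
# zpool create test c3t0d0
# zpool destroy test
# zpool import data
# zpool attach -f data c2t0d0 c3t0d0

that last attach still failed with the same c2t0d0s0 complaint.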

what finally worked was physically swapping the two disks.  on reboot
my existing pool was now on c3t0d0, and i was then able to attach the
new disk (now c2t0d0) without complaint.  UGH.
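
i.e., after the swap the attach that went through was simply:

# zpool attach data c3t0d0 c2t0d0

same pool, same two physical disks, only the controller names swapped.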

hopefully there's a hint somewhere in this narrative that will help
someone figure out what the problem actually was.

-frank
[1] annoying