Karen,

This looks like you were using the internal RAID on a T2000, is that right?
If so, is it possible that you did not relabel the drives after you deleted the volume? After deleting a RAID volume created with the onboard controller you must relabel the affected drives. The LSI 1064 controller reserves a 64MB on-disk metadata region when you create a volume, which alters the disk geometry in use, so relabeling is required whenever you create or delete a volume.
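For reference, a relabel after deleting the volume would look roughly like this. This is a sketch, not a verbatim transcript: the device names come from Karen's output below, and the exact format(1M) prompts vary by disk and firmware.

```shell
# Delete the RAID volume on the onboard controller
# (Karen has already done this step, per the raidctl output below):
raidctl -d c1t0d0

# Relabel the disk so Solaris re-reads the native geometry:
format
#   - select c1t0d0 from the disk list
#   - if prompted for the disk type, choose "0. Auto configure"
#   - run the "label" command and confirm the write
```

The same relabel would be needed on c1t1d0, the other half of the former mirror.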

I'm not sure whether this could create the situation you describe, but I figured it was worth checking in case the relabel was not done.
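If the missing relabel does turn out to be the issue, the recovery path is the one the FMA message below already points at. A rough sketch, assuming slice 3 can be recreated with its original start/size after relabeling (if the geometry shifted under the ZFS labels, the pool data may not be recoverable):

```shell
# After relabeling c1t0d0 and restoring the original slice 3 layout,
# bring the device back per the 'action' line in the zpool output:
zpool online canary c1t0d0s3

# Then check whether the pool has recovered:
zpool status -x canary
```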
Regards,
-Mark D.


Karen Chau wrote:
We deleted the mirror in the HW RAID, and now ZFS thinks the device is not
available.  We're using the same device name, c1t0d0.  How do we recover??

RAID INFO before:
# raidctl
RAID    Volume  RAID            RAID            Disk
Volume  Type    Status          Disk            Status
------------------------------------------------------
c1t0d0  IM      OK              c1t0d0          OK
                                c1t1d0          OK

RAID INFO after:
# raidctl
No RAID volumes found

itsm-mpk-2# zpool status -x
  pool: canary
 state: FAULTED
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        canary      UNAVAIL      0     0     0  insufficient replicas
          c1t0d0s3  UNAVAIL      0     0     0  cannot open


Thanks,
karen


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss