After unsuccessfully attempting to replace a failed drive in a 10-drive raidz2 
array, and after reading as many forum entries as I could find, I followed a 
suggestion to export and import the pool.
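
For reference, the export/import sequence amounts to something like the 
following (a sketch, not a pasted session):

    # zpool export storage
    # zpool import storage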

In a further attempt to get the pool back I reinstalled the OS, but so far I 
have been unable to import it.

Here is the output from the format and zpool commands:

ke...@opensolaris:~# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
      c8d0s0    ONLINE       0     0     0

errors: No known data errors
ke...@opensolaris:~# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c4d0 <ST350083-         9QG0LW8-0001-465.76GB>
          /p...@0,0/pci8086,2...@1e/pci-...@1/i...@0/c...@0,0
       1. c4d1 <ST350063-         9QG1E50-0001-465.76GB>
          /p...@0,0/pci8086,2...@1e/pci-...@1/i...@0/c...@1,0
       2. c5d0 <ST350063-         9QG3AM7-0001-465.76GB>
          /p...@0,0/pci8086,2...@1e/pci-...@1/i...@1/c...@0,0
       3. c5d1 <ST350063-         9QG19MY-0001-465.76GB>
          /p...@0,0/pci8086,2...@1e/pci-...@1/i...@1/c...@1,0
       4. c6d0 <ST350063-         9QG19VY-0001-465.76GB>
          /p...@0,0/pci8086,2...@1e/pci-...@2/i...@0/c...@0,0
       5. c6d1 <ST350063-         5QG019W-0001-465.76GB>
          /p...@0,0/pci8086,2...@1e/pci-...@2/i...@0/c...@1,0
       6. c7d0 <ST350063-         9QG1DKF-0001-465.76GB>
          /p...@0,0/pci8086,2...@1e/pci-...@2/i...@1/c...@0,0
       7. c7d1 <ST350063-         5QG0B2Y-0001-465.76GB>
          /p...@0,0/pci8086,2...@1e/pci-...@2/i...@1/c...@1,0
       8. c8d0 <DEFAULT cyl 9961 alt 2 hd 255 sec 63>
          /p...@0,0/pci-...@1f,1/i...@0/c...@0,0
       9. c10d0 <ST350083-         9QG0LR5-0001-465.76GB>
          /p...@0,0/pci-...@1f,2/i...@0/c...@0,0
      10. c11d0 <ST350083-         9QG0LW6-0001-465.76GB>
          /p...@0,0/pci-...@1f,2/i...@1/c...@0,0
Specify disk (enter its number): ^C
ke...@opensolaris:~# zpool import
  pool: storage
    id: 18058787158441119951
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

    storage          UNAVAIL  insufficient replicas
      raidz2-0       DEGRADED
        c4d0         ONLINE
        c4d1         ONLINE
        c5d0         ONLINE
        replacing-3  DEGRADED
          c5d1       ONLINE
          c5d1       FAULTED  corrupted data
        c6d0         ONLINE
        c6d1         ONLINE
        c7d0         ONLINE
        c7d1         ONLINE
        c10d0        ONLINE
        c11d0        ONLINE
ke...@opensolaris:~# zpool import -f
  pool: storage
    id: 18058787158441119951
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

    storage          UNAVAIL  insufficient replicas
      raidz2-0       DEGRADED
        c4d0         ONLINE
        c4d1         ONLINE
        c5d0         ONLINE
        replacing-3  DEGRADED
          c5d1       ONLINE
          c5d1       FAULTED  corrupted data
        c6d0         ONLINE
        c6d1         ONLINE
        c7d0         ONLINE
        c7d1         ONLINE
        c10d0        ONLINE
        c11d0        ONLINE
ke...@opensolaris:~# zpool import -f storage
cannot import 'storage': one or more devices is currently unavailable
    Destroy and re-create the pool from
    a backup source.


Prior to exporting the pool, I was able to offline the failed drive.
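
That step was along the lines of the following (a sketch; I am assuming the 
failed device is the one now listed twice under replacing-3 as c5d1):

    # zpool offline storage c5d1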

Finally, about a month ago I upgraded the zpool version to enable dedupe.
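
As I understand it, that amounted to just:

    # zpool upgrade storage

(dedup needs pool version 21 or later, so the upgrade takes the pool to the 
newest on-disk version the installed release supports).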

The suggestions I have read include "playing with" the metadata, and that is 
something I would need help with, as I am just an "informed" user.
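
If "playing with" the metadata starts with inspecting the on-disk labels, I 
gather the first step would be something like the following (a sketch; I have 
not run this and would want guidance before going any further):

    # zdb -l /dev/dsk/c5d1s0

which should dump the four ZFS labels on that device; the same could be done 
for each of the other raidz2 members.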

I am hoping that, since only one drive failed and this is a dual-parity raid, 
there is some way to recover the pool.

Thanks in advance,
Kevin