On Nov 27, 2009, at 12:55 PM, Carsten Aulbert <carsten.aulb...@aei.mpg.de> wrote:

On Friday 27 November 2009 18:45:36 Carsten Aulbert wrote:
I spoke too soon; now it looks completely different:

scrub: resilver completed after 4h3m with 0 errors on Fri Nov 27 18:46:33 2009
[...]
s13:~# zpool status
 pool: atlashome
state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
  see: http://www.sun.com/msg/ZFS-8000-9P
scrub: resilver completed after 4h3m with 0 errors on Fri Nov 27 18:46:33 2009
config:

       NAME        STATE     READ WRITE CKSUM
       atlashome   DEGRADED     0     0     0
         raidz1    ONLINE       0     0     0
           c0t0d0  ONLINE       0     0     0
           c1t0d0  ONLINE       0     0     0
           c5t0d0  ONLINE       0     0     0
           c7t0d0  ONLINE       0     0     0
           c8t0d0  ONLINE       0     0     0
         raidz1    ONLINE       0     0     0
           c0t1d0  ONLINE       0     0     0
           c1t1d0  ONLINE       0     0     1
           c5t1d0  ONLINE       0     0     2
           c6t1d0  ONLINE       0     0     6
           c7t1d0  ONLINE       0     0     0
         raidz1    ONLINE       0     0     0
           c8t1d0  ONLINE       0     0     0
           c0t2d0  ONLINE       0     0     0
           c1t2d0  ONLINE       0     0     0
           c5t2d0  ONLINE       0     0     3
           c6t2d0  ONLINE       0     0     1
         raidz1    ONLINE       0     0     0
           c7t2d0  ONLINE       0     0     1
           c8t2d0  ONLINE       0     0     1
           c0t3d0  ONLINE       0     0     1
           c1t3d0  ONLINE       0     0     0
           c5t3d0  ONLINE       0     0     0
         raidz1    ONLINE       0     0     0
           c6t3d0  ONLINE       0     0     0
           c7t3d0  ONLINE       0     0     1
           c8t3d0  ONLINE       0     0     0
           c0t4d0  ONLINE       0     0     1
           c1t4d0  ONLINE       0     0     0
         raidz1    ONLINE       0     0     0
           c5t4d0  ONLINE       0     0     0
           c7t4d0  ONLINE       0     0     0
           c8t4d0  ONLINE       0     0     1
           c0t5d0  ONLINE       0     0     1
           c1t5d0  ONLINE       0     0     0
         raidz1    ONLINE       0     0     0
           c5t5d0  ONLINE       0     0     0
           c6t5d0  ONLINE       0     0     0
           c7t5d0  ONLINE       0     0     0
           c8t5d0  ONLINE       0     0     1
           c0t6d0  ONLINE       0     0     0
         raidz1    DEGRADED     0     0     1
           c1t6d0  ONLINE       0     0     0  124G resilvered
           c5t6d0  ONLINE       0     0     0
           c6t6d0  DEGRADED     0     0    41  too many errors
           c7t6d0  DEGRADED     1     0    14  too many errors
           c8t6d0  ONLINE       0     0     1
         raidz1    ONLINE       0     0     0
           c0t7d0  ONLINE       0     0     0
           c1t7d0  ONLINE       0     0     1
           c5t7d0  ONLINE       0     0     0
           c6t7d0  ONLINE       0     0     0
           c7t7d0  ONLINE       0     0     0
       logs
         c6t4d0    ONLINE       0     0     0
       spares
         c8t7d0    AVAIL


Now the big question:

(1) run 'zpool clear', or
(2) bring the spare back in (or swap out two more disks)?
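
For concreteness, the commands involved would presumably look something
like this (device names taken from the status output above):

       s13:~# zpool clear atlashome                  # option 1: reset all error counters
       s13:~# zpool replace atlashome c6t6d0 c8t7d0  # option 2: swap one degraded disk for the hot spare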

Opinions?

I would plan downtime to physically inspect the cabling.
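
Before and after reseating the cables, it may also be worth checking
whether transport errors are accumulating at the driver level; a rough
sketch (exact output varies by build):

       s13:~# iostat -En | grep -i transport   # per-device soft/hard/transport error counts
       s13:~# fmdump -eV                       # verbose FMA error telemetry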

-Ross
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
