Hello everybody,
I have a problem with my pool. I've been seeing slowdowns lately on the NFS share backed by my ZFS pool. A weekly scrub began and is still running, but it worries me; it currently reports:

  pool: nas
 state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://illumos.org/msg/ZFS-8000-HC
  scan: scrub in progress since Sun Oct 20 19:29:23 2013
    15.2T scanned out of 22.2T at 84.0M/s, 24h5m to go
    1.29G repaired, 68.67% done
config:

        NAME                         STATE     READ WRITE CKSUM
        nas                          UNAVAIL     63     2     0  insufficient replicas
          raidz1-0                   DEGRADED     0     0     0
            c8t50024E9004993E6Ed0p0  ONLINE       0     0     0
            c8t50024E92062E7524d0    ONLINE       0     0     0
            c8t50024E900495BE84d0p0  ONLINE       0     0     0
            c8t50014EE25A5EEC23d0p0  ONLINE       0     0     0
            c8t50024E9003F03980d0p0  ONLINE       0     0     1  (repairing)
            c8t50014EE2B0D3EFC8d0    ONLINE       0     0     0
            c8t50014EE6561DDB4Cd0p0  DEGRADED     0     0   211  too many errors  (repairing)
            c8t50024E9003F03A09d0p0  ONLINE       0     0    18  (repairing)
          raidz1-1                   UNAVAIL    131     9     0  insufficient replicas
            c50t8d0                  REMOVED      0     0     0  (repairing)
            c2d0                     ONLINE       0     0     0  (repairing)
            c1d0                     ONLINE       0     0     0  (repairing)
            c50t11d0                 ONLINE       0     0     0  (repairing)
            c50t10d0                 REMOVED      0     0     0

errors: 10972861 data errors, use '-v' for a list


Really weird: I haven't disconnected any disk. For several hours, even though it said the pool was unavailable, I could still browse it over NFS. Now I can't anymore.


What do you think I should do?
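Based on the 'action' line in the status output, here is roughly what I was planning once the REMOVED disks show up again (just a sketch; the pool name 'nas' comes from the output above, and I haven't run the clear yet):

```shell
# Check whether the kernel still sees the raidz1-1 disks and whether
# they are logging transport/media errors
iostat -En

# List the files affected by the checksum errors
# (with ~10 million data errors this output will be very long)
zpool status -v nas

# Only once the REMOVED devices are physically attached and visible
# again, clear the error counters and let ZFS retry:
# zpool clear nas
```

I'm hesitant to run 'zpool clear' while raidz1-1 is still missing two disks, since that vdev has no redundancy left.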



_______________________________________________
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss
