Yup, just hit exactly the same thing myself.  I have a feeling the
faulted disk is affecting performance, so I tried to remove or offline
it:

$ zpool iostat -v 30

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rc-pool     1.27T  1015G    682     71  84.0M  1.88M
  mirror     199G   265G      0      5      0  21.1K
    c4t1d0      -      -      0      2      0  21.1K
    c4t2d0      -      -      0      0      0      0
    c5t1d0      -      -      0      2      0  21.1K
  mirror     277G   187G    170      7  21.1M   322K
    c4t3d0      -      -     58      4  7.31M   322K
    c5t2d0      -      -     54      4  6.83M   322K
    c5t0d0      -      -     56      4  6.99M   322K
  mirror     276G   188G    171      6  21.1M   336K
    c5t3d0      -      -     56      4  7.03M   336K
    c4t5d0      -      -     56      3  7.03M   336K
    c4t4d0      -      -     56      3  7.04M   336K
  mirror     276G   188G    169      6  20.9M   353K
    c5t4d0      -      -     57      3  7.17M   353K
    c5t5d0      -      -     54      4  6.79M   353K
    c4t6d0      -      -     55      3  6.99M   353K
  mirror     277G   187G    171     10  20.9M   271K
    c4t7d0      -      -     56      4  7.11M   271K
    c5t6d0      -      -     55      5  6.93M   271K
    c5t7d0      -      -     55      5  6.88M   271K
  c6d1p0      32K   504M      0     34      0   620K
----------  -----  -----  -----  -----  -----  -----

zpool iostat reports per-second averages, so each healthy 3-disk mirror
is reading at ~21MB/s, about 7MB/s per disk, while the degraded mirror
is serving no reads at all.  Not healthy.
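To see whether c4t2d0 itself is dragging things down, the per-device
service times from the standard Solaris iostat are worth watching
alongside the pool stats (generic invocation, nothing pool-specific):

$ iostat -xn 30

If the faulted disk is the problem, its asvc_t and %b columns should
stand out against its mirror peers.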

$ zpool status
  pool: rc-pool
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
   see: http://www.sun.com/msg/ZFS-8000-K4
 scrub: scrub completed after 2h55m with 0 errors on Tue Jun 23 11:11:42 2009
config:

        NAME        STATE     READ WRITE CKSUM
        rc-pool     DEGRADED     0     0     0
          mirror    DEGRADED     0     0     0
            c4t1d0  ONLINE       0     0     0
            c4t2d0  FAULTED  1.71M 23.3M     0  too many errors
            c5t1d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c4t3d0  ONLINE       0     0     0
            c5t2d0  ONLINE       0     0     0
            c5t0d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
            c4t5d0  ONLINE       0     0     0
            c4t4d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5t4d0  ONLINE       0     0     0
            c5t5d0  ONLINE       0     0     0
            c4t6d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c4t7d0  ONLINE       0     0     0
            c5t6d0  ONLINE       0     0     0
            c5t7d0  ONLINE       0     0     0
        logs        DEGRADED     0     0     0
          c6d1p0    ONLINE       0     0     0

errors: No known data errors
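For reference, the action line's suggestion in its minimal form, if you
just want to retry the same disk (this only resets the error counters;
a genuinely dying disk will fault again):

# zpool clear rc-pool c4t2d0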


# zpool offline rc-pool c4t2d0
cannot offline c4t2d0: no valid replicas

# zpool remove rc-pool c4t2d0
cannot remove c4t2d0: only inactive hot spares or cache devices can be removed
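Neither error is too surprising in hindsight: 'zpool remove' really
does only handle inactive hot spares and cache devices, and 'offline'
refusing with "no valid replicas" seems odd given the two healthy sides
of the mirror.  On a 3-way mirror, detaching the bad disk outright, or
replacing it in place, should work instead (standard zpool subcommands,
same device names as above):

# zpool detach rc-pool c4t2d0
    (drops it from the mirror entirely, leaving a healthy 2-way mirror)
# zpool replace rc-pool c4t2d0
    (after physically swapping the disk in the same slot, resilvers
    onto the new one)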