I am curious why zpool status reports a pool as being in the DEGRADED state
after a drive in a raidz2 vdev has been successfully replaced. In this
particular case, drive c0t6d0 was failing, so I ran:

zpool offline home c0t6d0
zpool replace home c0t6d0 c8t1d0

and after the resilvering finished the pool still reports a DEGRADED state.
Hopefully this is incorrect. At this point, does the vdev in question
have full raidz2 protection even though it is listed as "DEGRADED"?
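
If I understand the hot-spare behavior correctly, the spare c8t1d0 stays
INUSE (and the vdev stays DEGRADED) until the original device is detached
from the pool, so I am guessing something like the following (untested
here) would promote the spare permanently and clear the state:

zpool detach home c0t6d0
zpool status home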

P.S. This is on a pool created on S10U3 and upgraded to ZFS version 4
after upgrading the host to S10U4.

Thanks.


# zpool status
  pool: home
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: resilver completed with 0 errors on Fri Sep  7 18:39:03 2007
config:

        NAME          STATE     READ WRITE CKSUM
        home          DEGRADED     0     0     0
          raidz2      ONLINE       0     0     0
            c0t0d0    ONLINE       0     0     0
            c1t0d0    ONLINE       0     0     0
            c5t0d0    ONLINE       0     0     0
            c7t0d0    ONLINE       0     0     0
            c8t0d0    ONLINE       0     0     0
            c0t1d0    ONLINE       0     0     0
            c1t1d0    ONLINE       0     0     0
            c5t1d0    ONLINE       0     0     0
            c6t1d0    ONLINE       0     0     0
            c7t1d0    ONLINE       0     0     0
            c0t2d0    ONLINE       0     0     0
          raidz2      ONLINE       0     0     0
            c1t2d0    ONLINE       0     0     0
            c5t2d0    ONLINE       0     0     0
            c6t2d0    ONLINE       0     0     0
            c7t2d0    ONLINE       0     0     0
            c8t2d0    ONLINE       0     0     0
            c0t3d0    ONLINE       0     0     0
            c1t3d0    ONLINE       0     0     0
            c5t3d0    ONLINE       0     0     0
            c6t3d0    ONLINE       0     0     0
            c7t3d0    ONLINE       0     0     0
            c8t3d0    ONLINE       0     0     0
          raidz2      ONLINE       0     0     0
            c0t4d0    ONLINE       0     0     0
            c1t4d0    ONLINE       0     0     0
            c5t4d0    ONLINE       0     0     0
            c7t4d0    ONLINE       0     0     0
            c8t4d0    ONLINE       0     0     0
            c0t5d0    ONLINE       0     0     0
            c1t5d0    ONLINE       0     0     0
            c5t5d0    ONLINE       0     0     0
            c6t5d0    ONLINE       0     0     0
            c7t5d0    ONLINE       0     0     0
            c8t5d0    ONLINE       0     0     0
          raidz2      DEGRADED     0     0     0
            spare     DEGRADED     0     0     0
              c0t6d0  OFFLINE      0     0     0
              c8t1d0  ONLINE       0     0     0
            c1t6d0    ONLINE       0     0     0
            c5t6d0    ONLINE       0     0     0
            c6t6d0    ONLINE       0     0     0
            c7t6d0    ONLINE       0     0     0
            c8t6d0    ONLINE       0     0     0
            c0t7d0    ONLINE       0     0     0
            c1t7d0    ONLINE       0     0     0
            c5t7d0    ONLINE       0     0     0
            c6t7d0    ONLINE       0     0     0
            c7t7d0    ONLINE       0     0     0
            c8t7d0    ONLINE       0     0     0
        spares
          c8t1d0      INUSE     currently in use

errors: No known data errors


-- 
Stuart Anderson  [EMAIL PROTECTED]
http://www.ligo.caltech.edu/~anderson