We run a cron job that does a 'zpool status -x' to check for degraded
pools.  This morning we happened to run 'zpool status' by hand and were
surprised to find a pool degraded, since we never got a notice from the
cron job.
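
For reference, the cron job boils down to something like this (a sketch,
not our exact script; the mail recipient and subject line are
illustrative):

#!/bin/sh
# 'zpool status -x' prints "all pools are healthy" when it thinks
# everything is fine; mail the output if it says anything else.
STATUS=`zpool status -x`
if [ "$STATUS" != "all pools are healthy" ]; then
        echo "$STATUS" | mailx -s "zpool problem on `hostname`" root
fi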

# uname -srvp
SunOS 5.11 snv_78 i386

# zpool status -x
all pools are healthy

# zpool status pool1
  pool: pool1
 state: DEGRADED
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        pool1        DEGRADED     0     0     0
          raidz1     DEGRADED     0     0     0
            c1t8d0   REMOVED      0     0     0
            c1t9d0   ONLINE       0     0     0
            c1t10d0  ONLINE       0     0     0
            c1t11d0  ONLINE       0     0     0

errors: No known data errors

I'm going to look into why the disk is listed as REMOVED now.

Does this look like a bug with 'zpool status -x'?
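
Until this is sorted out, I'm tempted to have the cron job scan the full
'zpool status' output for bad vdev states instead of trusting -x;
roughly this (a sketch, and the state list may not be exhaustive):

#!/bin/sh
# Workaround sketch: look for any non-ONLINE vdev state in the full
# output rather than relying on 'zpool status -x'.
BAD=`zpool status | egrep 'DEGRADED|FAULTED|REMOVED|UNAVAIL|OFFLINE'`
if [ -n "$BAD" ]; then
        zpool status | mailx -s "zpool not healthy on `hostname`" root
fi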

Ben
 
 