Paul B. Henson wrote:
We just had our first x4500 disk failure (which of course had to happen
late Friday night <sigh>), I've opened a ticket on it but don't expect a
response until Monday so was hoping to verify the hot spare took over
correctly and we still have redundancy pending device replacement.

This is an S10U6 box:

Here's the zpool status output:

  pool: export
 state: DEGRADED
[...]
 scrub: scrub completed after 0h6m with 0 errors on Fri Jan  8 23:21:31 2010


        NAME          STATE     READ WRITE CKSUM
        export        DEGRADED     0     0     0

          mirror      DEGRADED     0     0     0
            c0t2d0    ONLINE       0     0     0
            spare     DEGRADED 18.9K     0     0
              c1t2d0  REMOVED      0     0     0
              c5t0d0  ONLINE       0     0 18.9K

        spares
          c5t0d0      INUSE     currently in use

Is the pool/mirror/spare still supposed to show up as degraded after the
hot spare is deployed?

Yes, the spare will show as DEGRADED until you replace the failed disk. I had a pool on a 4500 that lost one drive, then swapped out three more due to spurious failures from that naff Marvell driver. It was a bit of a concern for a while seeing two degraded devices in one raidz vdev!
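For reference, this is a sketch of the usual recovery sequence once the replacement disk arrives. The device names (c1t2d0, c5t0d0) come from the status output quoted above; adjust them for your own pool, and check zpool(1M) on your release for exact behaviour.

```shell
# 1. After physically swapping the disk, tell ZFS to rebuild onto the
#    new device in the failed disk's slot:
zpool replace export c1t2d0

# 2. Watch the resilver progress:
zpool status export

# 3. Once the resilver completes, the hot spare normally detaches
#    itself and returns to AVAIL; if it lingers, detach it manually:
zpool detach export c5t0d0
```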

The scrub started at 11pm last night, the disk got booted at 11:15pm,
presumably the scrub came across the failures the os had been reporting.
The last scrub status shows that scrub completing successfully. What
happened to the resilver status? How can I tell if the resilver was
successful? Did the resilver start and complete while the scrub was still
running and its status output was lost? Is there any way to see the status
of past scrubs/resilvers, or is only the most recent one available?

You only see the most recent one, but a resilver is essentially a scrub - both walk the pool's data and verify checksums - so the "scrub completed ... with 0 errors" line also covers the resilver onto the spare.
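If you want more history than the single scrub line in `zpool status`, Solaris FMA and the pool's command log keep persistent records you can mine after the fact. These are standard Solaris tools, shown here as a sketch; output formats vary by release.

```shell
# Faults FMA diagnosed against the pool or its disks:
fmdump

# The underlying error reports (e.g. the I/O errors that got
# the disk kicked out at 11:15pm):
fmdump -eV | less

# Commands run against the pool - the spare attachment and any
# replace/detach operations show up here:
zpool history export
```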

Mostly I'd like to verify my hot spare is working correctly. Given the
spare status is "degraded", the read errors on the spare device, and the
lack of successful resilver status output, it seems like the spare might
not have been added successfully.

It has - the "scrub completed after 0h6m with 0 errors" line confirms the data on the spare is complete and consistent. The read errors are counted against the spare vdev because its original half (c1t2d0) is REMOVED; the spare disk itself (c5t0d0) is ONLINE and carrying the data.
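To make the reading of that status table explicit, here is a small sketch that parses the config section quoted above and checks the per-device states. The column layout and state names are taken from the output in this thread; real `zpool status` output can differ between Solaris releases, so treat this as illustrative only.

```python
# Parse the zpool status config table quoted in the thread and
# confirm the hot spare is the device actually holding the data.

STATUS = """\
        NAME          STATE     READ WRITE CKSUM
        export        DEGRADED     0     0     0
          mirror      DEGRADED     0     0     0
            c0t2d0    ONLINE       0     0     0
            spare     DEGRADED 18.9K     0     0
              c1t2d0  REMOVED      0     0     0
              c5t0d0  ONLINE       0     0 18.9K
"""

def device_states(text):
    """Return {device_name: state} for each row of the config table."""
    states = {}
    for line in text.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 2:
            states[fields[0]] = fields[1]
    return states

states = device_states(STATUS)

# The spare vdev shows DEGRADED because its original half is REMOVED,
# but the spare disk itself is ONLINE, i.e. redundancy is intact.
assert states["spare"] == "DEGRADED"
assert states["c1t2d0"] == "REMOVED"
assert states["c5t0d0"] == "ONLINE"
```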

--
Ian.

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
