On Sat, 9 Jan 2010, Eric Schrock wrote:

> > If ZFS removed the drive from the pool, why does the system keep
> > complaining about it?
>
> It's not failing in the sense that it's returning I/O errors, but it's
> flaky, so it's attaching and detaching.  Most likely it decided to attach
> again and then you got transport errors.

Ok, how do I make it stop logging messages about the drive until it is
replaced? It's still filling up the logs with the same errors about the
drive being offline.

Looks like hdadm isn't it:

r...@cartman ~ # hdadm offline disk c1t2d0
/usr/bin/hdadm[1762]: /dev/rdsk/c1t2d0d0p0: cannot open
/dev/rdsk/c1t2d0d0p0 is not available
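
If I'm reading the man page right, the ZFS-level way to quiet things down
would be zpool offline, which tells ZFS to stop issuing I/O to the device
entirely ("tank" here is a placeholder; substitute the real pool name):

    zpool offline tank c1t2d0

Though if the spare already took over, that may be moot and not stop the
transport errors at the SATA layer anyway.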

Hmm, I was able to unconfigure it with cfgadm:

r...@cartman ~ # cfgadm -c unconfigure sata1/2::dsk/c1t2d0

It went from:

sata1/2::dsk/c1t2d0            disk         connected    configured   failed

to:

sata1/2                        disk         connected    unconfigured failed

Hopefully that will stop the errors until it's replaced and not break
anything else :).
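
For the archives, in case anyone hits this later: once the replacement
drive is physically in the slot, I believe the reverse incantation brings
it back (untested on my end):

    cfgadm -c configure sata1/2
    cfgadm -al sata1/2

The second command just lists the attachment point so you can confirm it
went back to "configured".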

> No, it's fine.  DEGRADED just means the pool is not operating at the
> ideal state.  By definition a hot spare is always DEGRADED.  As long as
> the spare itself is ONLINE it's fine.

The spare shows as "INUSE", but I'm guessing that's fine too.
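
Assuming the docs are right, the Monday plan looks like this (again with
"tank" as a stand-in for the real pool name). After swapping in the new
disk:

    zpool replace tank c1t2d0

That should resilver onto the new disk and return the spare to AVAIL once
it finishes. Alternatively,

    zpool detach tank c1t2d0

would detach the failed disk and promote the spare to a permanent member
of the pool.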

> Hope that helps

That was perfect, thank you very much for the review. Now I don't have to
worry about it until Monday :).

-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768