Hi Ryan,
Which Solaris release is this?
Thanks,
Cindy
On 07/09/10 10:38, Ryan Schwartz wrote:
Hi Cindy,
Not sure exactly when the drives went into this state, but it is likely that it
happened when I added a second pool, added the same spares to the second pool,
then later destroyed the second pool. There have been no controller or any
other hardware changes to this system - it is all original parts. The device
names are valid; the issue is that each spare is listed twice - once in the
AVAIL state and a second time in the FAULTED state.
I've tried zpool remove, zpool offline, zpool clear, and zpool export/import;
I've also unconfigured the drives via cfgadm and retried the remove. Nothing
works to remove the FAULTED spares.
I was just able to remove the AVAIL spares, but only because they were listed
first in the spares list:
[IDGSUN02:/dev/dsk] root# zpool remove idgsun02 c0t6d0
[IDGSUN02:/dev/dsk] root# zpool remove idgsun02 c5t5d0
[IDGSUN02:/dev/dsk] root# zpool status
  pool: idgsun02
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        idgsun02    ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
            c6t5d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c7t5d0  ONLINE       0     0     0
            c4t1d0  ONLINE       0     0     0
            c4t5d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c0t0d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
            c6t4d0  ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0
            c4t0d0  ONLINE       0     0     0
            c4t4d0  ONLINE       0     0     0
        spares
          c0t6d0    FAULTED   corrupted data
          c5t5d0    FAULTED   corrupted data

errors: No known data errors
What's interesting is that running the same zpool remove commands a second time
has no effect (presumably because zpool resolves the path to a vdev GUID
internally, and that GUID is already gone).
I may have, at one point, tried to re-add the drive again after seeing the
state FAULTED and not being able to remove it, which is probably where the
second set of entries came from. (Pretty much exactly what's described here:
http://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSFaultedSpares).
What I really need is to be able to remove the two bogus faulted spares, and I
think the only way I'll be able to do that is via the GUIDs, since the (valid)
vdev path is shown as the same for each. My guess is that zpool resolves the
path to the first matching entry, which is why only the AVAIL copies could be
removed. I've got a support case open, but no traction on that as of yet.
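For the archives, here is the sort of thing I'm planning to try: pull the spare
GUIDs out of zdb and hand those to zpool remove in place of the path. This is a
sketch only - I haven't yet confirmed that remove-by-GUID works on these
faulted entries, and the GUID shown below is made up for illustration:

[IDGSUN02:/dev/dsk] root# zdb -C idgsun02
...
        spares[0]
                type: 'disk'
                guid: 1234567890123456789
                path: '/dev/dsk/c0t6d0s0'
...
[IDGSUN02:/dev/dsk] root# zpool remove idgsun02 1234567890123456789

If zpool accepts the GUID where it normally takes a device name, that should
disambiguate the two entries that share the same path.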
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss