On 10/11/10 05:40 AM, Günther wrote:
On my raidz3 pool one drive failed. During resilvering the hot spare seems to have failed as well, which ended in an "insufficient replicas" error with the hot spare drive in the state "too many errors". I could bring the hot spare drive back by exporting/importing the pool (the drive is definitely OK), but I could not bring the pool back to an online state. Scrubbing does not help. Any ideas?
Sorry I can't offer a solution, but a couple of things look odd:
  pool: backup1
 state: DEGRADED
  scan: scrub repaired 0 in 15h5m with 0 errors on Thu Oct 7 05:44:48 2010
config:

        NAME                        STATE     READ WRITE CKSUM
        backup1                     DEGRADED     0     0     0
          raidz3-0                  DEGRADED     0     0     0
            c0t0d0                  ONLINE       0     0     0
            replacing-1             UNAVAIL      0     0     0  insufficient replicas
              17426557978578343265  FAULTED      0     0     0  was /dev/dsk/c0t1d0s0/old
              14735104446814064651  FAULTED      0     0     0  was /dev/dsk/c0t1d0s0
That looks odd: why c0t1d0s0 and not c0t1d0?
            c0t2d0                  ONLINE       0     0     0
            c0t3d0                  ONLINE       0     0     0
            c0t4d0                  ONLINE       0     0     0
            c0t5d0                  ONLINE       0     0     0
            c0t7d0                  ONLINE       0     0     0
            c1t0d0                  ONLINE       0     0     0
            c1t1d0                  ONLINE       0     0     0
        logs
          mirror-1                  ONLINE       0     0     0
            c2d1                    ONLINE       0     0     0
            c3d1                    ONLINE       0     0     0
        cache
          c3d0                      ONLINE       0     0     0
        spares
          c0t1d0                    AVAIL
Even odder, the c0t1d0 spare is not in use.
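
For what it's worth, and purely as an untested sketch rather than a known fix: when a replacing vdev gets stuck like this, people sometimes detach the failed replacement by its GUID and then clear the pool. Something along these lines (the GUID is taken from your output above; whether detach is even accepted while the vdev is UNAVAIL, and which of the two GUIDs is the right one to drop, are assumptions, so check the zpool(1M) man page before trying it):

# detach the failed replacement member of the stuck replacing-1 vdev
# by its GUID (copied from the status output above)
zpool detach backup1 14735104446814064651

# then clear the error counters and re-check the pool
zpool clear backup1
zpool status -v backup1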
--
Ian.