Hi Tim,
I'm not sure I understand this output completely, but have you
tried detaching the spare?
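If the resilver really is finished (your status output below says it
completed with no errors), detaching the spare from the pool should do it.
A rough sketch, using the pool and device names from your output, so
double-check them against your system first:

    # zpool detach fserv c7t6d0
    # zpool status fserv

After the detach, c7t6d0 should drop out of the spare-11 group and show up
again as AVAIL under spares.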
Cindy
On 11/10/09 09:21, Tim Cook wrote:
So, I currently have a pool with 12 disks in raid-z2 (10 data + 2 parity). As
you may have seen in the other thread, I've been having on-and-off issues with
b126 randomly dropping drives. Well, I think after changing several cables,
and doing about 20 reboots plugging one drive in at a time (I only booted to
the Marvell BIOS, not the whole way into the OS), I've gotten the Marvell
cards to settle down. The problem is, I'm now seeing this in the zpool status
output:
  pool: fserv
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Tue Nov 10 09:15:12 2009
config:

        NAME            STATE     READ WRITE CKSUM
        fserv           ONLINE       0     0     0
          raidz2-0      ONLINE       0     0     0
            c8t0d0      ONLINE       0     0     0
            c8t1d0      ONLINE       0     0     0
            c8t2d0      ONLINE       0     0     0
            c8t3d0      ONLINE       0     0     0
            c8t4d0      ONLINE       0     0     0
            c8t5d0      ONLINE       0     0     0
            c7t0d0      ONLINE       0     0     0
            c7t1d0      ONLINE       0     0     0
            c7t2d0      ONLINE       0     0     0
            c7t3d0      ONLINE       0     0     0
            c7t4d0      ONLINE       0     0     0
            spare-11    ONLINE       0     0     5
              c7t5d0    ONLINE       0     0     0  30K resilvered
              c7t6d0    ONLINE       0     0     0
        spares
          c7t6d0        INUSE     currently in use
Anyone have any thoughts? I'm trying to figure out how to get c7t6d0 back to
being a hot spare, since c7t5d0 is installed there and happy. It's almost as
if it's using both disks for "spare-11" right now.
--Tim
------------------------------------------------------------------------
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss