On Sat, 12 Sep 2009, Jeremy Kister wrote:
scrub: resilver in progress, 0.12% done, 108h42m to go
[...]
          raidz1    DEGRADED     0     0     0
            c3t8d0  ONLINE       0     0     0
            c5t8d0  ONLINE       0     0     0
            c3t9d0  ONLINE       0     0     0
The device is listed with s0; did you try using c5t9d0s0 as the name?
On 12 Sep, 2009, at 17.44, Jeremy Kister wrote:
[sorry for the cross post to solarisx86]
One of the disks I had in a raidz configuration on a Sun
V40z with Solaris 10u5 died. I took the bad disk out, replaced the disk, [...]
On 9/12/2009 10:33 PM, Mark J. Musante wrote:
That could be a bug with the status output. Could you try "zdb -l" on
one of the good drives and see if the label for c5t9d0 has "/old"
appended to its path?
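As a sketch of the check Mark is asking for -- assuming c3t8d0s0, one of
the members shown ONLINE above, carries an intact label -- the command
would look something like:

# zdb -l /dev/dsk/c3t8d0s0 | grep path

A replace that is still pending shows the old vdev with "/old" appended
to its path string in the label.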
Oops, I just realized I took this thread off list. I hope you don't mind me
putting it back on -- mea culpa.
On 9/12/2009 9:41 PM, Mark J Musante wrote:
The device is listed with s0; did you try using c5t9d0s0 as the name?
I didn't -- I never used s0 in the config when setting up the zpool -- it
changed to s0 after the reboot. But in either case, it's a good thought:
# zpool replace nfspool c5t9d0s0 c5t9d0
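The output of that attempt isn't shown above; the usual follow-up would
be to re-check the pool, e.g.:

# zpool status -v nfspool

If the s0 name was the problem, c5t9d0 should now appear under the
raidz1 vdev with a resilver in progress.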
[sorry for the cross post to solarisx86]
One of the disks I had in a raidz configuration on a Sun V40z with
Solaris 10u5 died. I took the bad disk out, replaced the disk, and issued
'zpool replace pool c5t9d0'. The resilver process started, and before it
was done I rebooted the system.
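For reference, the sequence described here boils down to something like
the following ('pool' is the placeholder name used above); the step that
avoids this situation is letting the resilver finish before any reboot:

# zpool replace pool c5t9d0
# zpool status pool     (repeat until the resilver reports completion)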