On 09/20/10 10:45 AM, Giovanni Tirloni wrote:
On Thu, Sep 16, 2010 at 9:36 AM, Ben Miller <bmil...@mail.eecis.udel.edu> wrote:
I have an X4540 running b134 where I'm replacing 500GB disks with 2TB
disks (Seagate Constellation) and the pool seems sick now. The pool
has four raidz2 vdevs (8+2) where the first set of 10 disks were
replaced a few months ago. I replaced two disks in the second set
(c2t0d0, c3t0d0) a couple of weeks ago, but have been unable to get the
third disk to finish replacing (c4t0d0).
I have tried the resilver for c4t0d0 four times now, and each time the pool
comes up with checksum errors and a permanent error (<metadata>:<0x0>).
The first resilver was from 'zpool replace', which came up with checksum
errors. Clearing the errors triggered the second resilver (same result).
I then ran a 'zpool scrub', which started the third resilver and also
identified three permanent errors (the two additional ones were in files in
snapshots, which I then destroyed). I then did a 'zpool clear' and another
scrub, which started the fourth resilver attempt. This last attempt
identified another file with errors in a snapshot, which I have now
destroyed.
Any ideas on how to finish replacing this disk without rebuilding the pool
and restoring from backup? The pool is working, but it reports as degraded
and with checksum errors.
[...]
Try running a `zpool clear pool2` and see if it clears the errors. If not,
you may have to detach `c4t0d0s0/o`.
I believe it's a bug that was fixed in recent builds.
I had tried a clear a few times with no luck. I just did a detach, which
removed the old disk and has triggered another resilver that will hopefully
complete. I had tried a remove rather than a detach before, but that
doesn't work on raidz2...
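For anyone hitting this later, the sequence that got things moving for me
looks roughly like this. The pool name (pool2) and device names are the
ones from this thread; check the output of 'zpool status' for your own
pool before running any of it, since the detach is not reversible:

```shell
# Sketch of the recovery sequence discussed above; device names are
# specific to this pool and must be adapted.

# Confirm the stuck "replacing" vdev and note the old device's name.
# The old half of an in-progress replacement shows up with a /o suffix
# (here c4t0d0s0/o).
zpool status -v pool2

# Clear the checksum errors first; on its own this may just trigger
# another resilver with the same result.
zpool clear pool2

# If the replacement still won't finish, detach the old device.
# This removes it from the replacing vdev and kicks off a resilver
# onto the new disk.
zpool detach pool2 c4t0d0s0/o

# Watch the resilver until the pool reports healthy again.
zpool status -x
```

Note that detach only works here because c4t0d0s0/o is half of a
replacing vdev; 'zpool remove' can't pull a disk out of a raidz2.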
thanks,
Ben
--
Giovanni Tirloni
gtirl...@sysdroid.com
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss