On Fri, 27 Nov 2009, Carsten Aulbert wrote:

>> At the very least, I would consider physically replacing c1t6d0.

> That's an option; I'll see if I can let the system repair more of the
> errors. Regarding the error with a named disk, only one disk has been
> named in the output so far.

Definitely replace c1t6d0 once the resilver is complete.

> Richard, I'll try zpool clear as well, but wanted to wait for some
> feedback as this is the first time we have hit a large number of errors.

It does not seem wise to do a 'clear' until the resilver is complete and everything is stable.

From what others have posted here, the reported results sometimes
change after any ongoing scrubs or resilvers have completed.
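That "wait until the resilver is done" advice can be checked mechanically from `zpool status` output; here is a minimal sketch (the helper name `resilver_active` and the pool name `tank` are my own placeholders, not anything from the thread):

```shell
# resilver_active: succeed (exit 0) if `zpool status` text on stdin
# reports a resilver still in progress.  Helper name is hypothetical.
resilver_active() {
    grep -q 'resilver in progress'
}

# Example usage (commented out -- needs a live pool; "tank" is a placeholder):
# if zpool status tank | resilver_active; then
#     echo "resilver still running; hold off on 'zpool clear tank'"
# else
#     zpool clear tank
# fi
```

The check keys on the "resilver in progress" line that `zpool status` prints in its scrub/resilver section while a resilver is running.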

> What I find strange is why a single vdev is producing so many errors. I
> don't think it can be a controller fault, since these vdevs span
> controllers, and I've seen no memory errors (yet) and no faulty-CPU
> messages...

It is interesting that, in addition to being in the same vdev, the disks encountering serious problems are all target 6. Besides something at the ZFS level, there could be some issue at the device driver or underlying hardware level. Or maybe just bad luck.
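The "all target 6" pattern can be confirmed by tallying the tN component of the Solaris cNtNdN device names in `zpool status` output; a sketch (the helper name is mine, and the sample input below is illustrative, not the actual pool):

```shell
# targets_from_status: read `zpool status` text on stdin and print a
# count per SCSI target number found in cNtNdN device names.
# (Helper name is hypothetical, not a real ZFS command.)
targets_from_status() {
    grep -Eo 'c[0-9]+t[0-9]+d[0-9]+' \
        | sed -E 's/^c[0-9]+t([0-9]+)d[0-9]+$/\1/' \
        | sort | uniq -c
}
```

If every failing device in the output shares one target number, that points away from a single controller and toward something common to that target slot (cabling, backplane position, driver path).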

As I recall, Albert Chin-A-Young posted about a pool failure where many devices in the same raidz2 vdev spontaneously failed somehow (in his case the whole pool was lost). He is using different hardware but this looks somewhat similar.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
