Hi,

On Sun, Dec 18, 2011 at 22:38, Matt Breitbach <matth...@flash.shanje.com> wrote:
> I'd look at iostat -En.  It will give you a good breakdown of disks that
> have seen errors.  I've also spotted failing disks just by watching an
> iostat -nxz and looking for the one that's spending more time %busy than the
> rest of them, or exhibiting longer-than-normal service times.

Thanks for that - I've been looking at iostat output for a while
without being able to make proper sense of it; nothing in it really
looks that unusual. However, on a side note: would you happen to know
whether it is possible to reset the error counters for a particular
device?
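
For the record, the invocations I've been watching are along these
lines (the 5-second interval is just what I happen to use):

  # one-shot per-device error summary (soft/hard/transport counters)
  iostat -En

  # extended per-device stats every 5 seconds, skipping idle devices;
  # a failing disk tends to stand out in %b and asvc_t
  iostat -nxz 5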

In the output I get this:

c1t29d0          Soft Errors: 0 Hard Errors: 5395 Transport Errors: 5394

c1t29d0 is actually a striped pair of disks, one of which failed
recently. (c1t29d0 is mirrored at the zpool level - the reason for
this weird config was running out of vdevs on the controller.) The
device should be just fine now - the counter has stopped incrementing
- but having that number sitting there is confusing when debugging.
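
If it matters, my understanding (which may well be wrong) is that
iostat -En just reads the per-device error kstats of class
device_error, so the raw counters can at least be listed with
something like:

  # list all device-error kstats; each instance carries the
  # Soft/Hard/Transport Errors counters that iostat -En reports
  kstat -p -c device_error

Whether those counters can actually be zeroed from userland is
exactly what I can't work out.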

Best regards
Jan
