I'd look at iostat -En. It gives you a good per-device breakdown of disks that have seen errors. I've also spotted failing disks just by watching iostat -nxz output and looking for the disk that's busier (%b) than the rest, or exhibiting longer-than-normal service times.
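For example (illustrative only; the device names and numbers below are made up, and the exact fields vary a bit by Solaris release):

  # iostat -En
  c1t2d0  Soft Errors: 0 Hard Errors: 27 Transport Errors: 4
  ...
  Media Error: 19 Device Not Ready: 0 No Device: 8 Recoverable: 0
  Illegal Request: 0 Predictive Failure Analysis: 0

  # iostat -nxz 5
      r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
      1.2    3.4   56.1  102.3  0.0  0.1    0.2    4.1   0   2 c1t0d0
      1.1    3.3   55.8  101.9  0.0  2.8    0.3  310.5   5  98 c1t2d0

A drive pinned near 100 %b with asvc_t an order of magnitude above its siblings, while pushing no more I/O than they are, is usually the one dragging the pool down.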
-Matt

-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jan-Aage Frydenbø-Bruvoll
Sent: Sunday, December 18, 2011 4:24 PM
To: Nathan Kroenert
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Very poor pool performance - no zfs/controller errors?!

Hi,

On Sun, Dec 18, 2011 at 22:14, Nathan Kroenert <nat...@tuneunix.com> wrote:
> I know some others may already have pointed this out - but I can't see it
> and not say something...
>
> Do you realise that losing a single disk in that pool could pretty much
> render the whole thing busted?
>
> At least for me - the rate at which _I_ seem to lose disks, it would be
> worth considering something different ;)

Yeah, I have thought that thought myself. I am pretty sure I have a broken
disk; however, I cannot for the life of me find out which one. zpool status
gives me nothing to work on, MegaCli reports that all virtual and physical
drives are fine, and iostat gives me nothing either.

What other tools are out there that could help me pinpoint what's going on?

Best regards
Jan
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss