On Mon, Aug 25, 2008 at 10:06 AM, Bert Miemietz <[EMAIL PROTECTED]> wrote:
> Thanks for prompt reply!
>
>  > CPU's don't wait for io, processes do.

> On Solaris 10 with a large database encountering very poor I/O performance
> I might see 100% idle and have no idea that the CPU(s) are idle because
> of hundreds of pending I/O requests. Where can I get a hint about the
> I/O issue more easily than from %wio?

iostat!



> If there are arguments for a CPU-normalized approach one could
> perhaps compare the number of threads waiting in biowait() against
> the number of CPUs in the system. So we would take into account the
> number of CPUs:
> o 8 threads in biowait() on an 8CPU system and no other runnable
>   processes -> 100% wio  - 0% idle
> o 1 thread in biowait() on a 8CPU system and no other runnable
>   processes -> 12.5% wio - 87.5% idle
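The arithmetic proposed above can be sketched in a few lines of Python. This is just an illustration of the normalization idea (the function name and the cap at ncpus are my own assumptions, not kernel code):

```python
def normalized_wio(threads_in_biowait, ncpus):
    """CPU-normalized wio: each thread blocked in biowait() accounts for
    at most one CPU's worth of wait, so %wio is capped at 100."""
    waiting = min(threads_in_biowait, ncpus)
    wio = 100.0 * waiting / ncpus
    # Assumes no other runnable processes, so the rest of the CPUs are idle.
    return wio, 100.0 - wio

# 8 threads in biowait() on an 8-CPU system -> 100.0% wio, 0.0% idle
print(normalized_wio(8, 8))
# 1 thread in biowait() on an 8-CPU system -> 12.5% wio, 87.5% idle
print(normalized_wio(1, 8))
```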

This is what Solaris used to do; I think from Solaris 2.6 through
Solaris 9 it was normalized. Before that, the un-normalized wio would go
to 100% on all CPUs whenever a single tape backup was running. Even
normalized, it still generated too many false alarms and confusion.
There have been products and tools that add wio to sys when they plot
CPU.

You should be watching iostat.

Note that network i/o has never caused wio to be set, but you can see
NFS delays in iostat.

I did once file an RFE for microstates to be upgraded so that per-process
iowait would be measured; that was about 10 years ago, and we ended up
getting DTrace instead.

Adrian
_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org
