I wanted to avoid any filesystem-specific behavior in the output.

That will be very hard. You'd need a DTrace script that breaks out
the time spent in the different layers down the IO stack, and subtracts
the time spent in the filesystem layers from the total.

The problem remains the same: profiling application run time, i.e. sys, usr, and the various waits. From your response, it has become clear that I have been using the wrong term: what I was calling "waits" is actually sleep time. So what I need is a breakdown of sleep time by reason.


There's a script called "whatfor.d" in /usr/demo/dtrace that may be a
starting point for you, but it does not track time, just the reason a
thread was put to sleep.

The problem you'll run into is that most sleeps are on condition
variables, so if your threads are sleeping on IOs, be they disk
IOs or network IOs, you'll see the threads sleeping on a CV.
Further decomposing that into sleeps on disk IO versus
network IO, and then further decomposing disk IO into
time spent in the FS layers versus time spent in the driver
layers, is hard.
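As a rough illustration of the approach, here is a sketch of a DTrace script that sums off-CPU (sleep) time per kernel stack, so disk versus network waits can at least be distinguished by the call path the thread slept in. This is an assumption-laden sketch, not a tested tool: it assumes a Solaris system with the sched provider, must be run as root, and the tick interval and truncation count are arbitrary choices.

#!/usr/sbin/dtrace -s

/*
 * Sketch: record a timestamp when a thread goes off-CPU in the
 * sleep state, and on wakeup aggregate the elapsed time keyed by
 * the kernel stack at sleep time. The stacks let you attribute
 * sleep time to disk IO, network IO, etc. by inspection.
 */
sched:::off-cpu
/curlwpsinfo->pr_state == SSLEEP/
{
        self->ts = timestamp;
}

sched:::on-cpu
/self->ts/
{
        @[stack()] = sum(timestamp - self->ts);
        self->ts = 0;
}

tick-10s
{
        /* Print the ten stacks with the most accumulated sleep time. */
        trunc(@, 10);
        printa(@);
        exit(0);
}

Note this attributes sleep time to whole kernel stacks rather than to a clean "reason" taxonomy; splitting a given stack's time between FS layers and driver layers would still require the harder decomposition described above.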

Offhand, I can't think of a good starting point to help with your
first requirement. I'd like to spend some time looking at it, because
it's an interesting question to answer. Unfortunately, I cannot commit
to spending any time on it in the next couple of weeks, due to the
current work queue backlog....

Thanks,
/jim

_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org
