Matt V. wrote:
OK, I've tried lockstat; here's the lockstat.out:
Adaptive mutex spin: 75935 events in 10.334 seconds (7348 events/sec)
Count indv cuml rcnt     nsec Lock             Caller
-------------------------------------------------------------------------------
 2338   3%   3% 0.00    16898 0x1999a78        sfmmu_mlspl_enter+0xa4
 2122   3%   6% 0.00    11547 0x3001ac65c88    PowerDispatch+0x84
 1794   2%   8% 0.00   118177 vx_worklist_lk   vx_worklist_process+0x98
 1376   2%  10% 0.00     7743 pr_pidlock       pr_p_lock+0xc
 1022   1%  11% 0.00 36757396 kstat_chain_lock kstat_hold+0x10
Ouch! This says you're spinning on the kstat lock for over 37 seconds of
kernel time out of 10 seconds of wall time. Depending on how many CPUs you
have, this is probably a big part of your system time. And there's another
entry a few lines down for the same lock that accounts for another 8 seconds.
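(To spell out the arithmetic, since lockstat's nsec column is the average
duration per event: total spin time is roughly Count x nsec, i.e.

    1022 x 36,757,396 ns  ~=  37.6 seconds

which is where the "over 37 seconds" above comes from.)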
Any chance this machine is running many instances of Oracle? I've seen
very high kstat usage in that scenario.
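If you're not sure how many instances are up, a quick and admittedly rough
check is to count the pmon processes; on a standard Oracle install there is
one ora_pmon_<SID> process per instance:

    ps -ef | grep ora_pmon | grep -v grep | wc -l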
The next step would be to add '-s 20' to the lockstat run to get a kernel
stack trace for this lock.
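Something along these lines should do it (check lockstat(1M) for the exact
syntax on your release; -C watches contention events, -s 20 records
20-deep stacks, and -l restricts the run to the one lock of interest):

    lockstat -C -s 20 -l kstat_chain_lock sleep 10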
Hmm, I also notice a bunch of DTrace lock contention. I wonder if you
could be hitting this bug:
CR 6815915 libCrun.so.1 needs to use demand loading for its DTrace probes
That is fixed in patch 119963-14 (SunOS 5.10: Shared library patch for C++).
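To see whether that patch (or a later revision) is already on the box,
something like this should tell you:

    showrev -p | grep 119963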
Jim