On Apr 10, 2008, at 11:41 AM, Zeljko Vrba wrote:
> On Thu, Apr 10, 2008 at 10:48:06AM -0400, Jonathan Adams wrote:
>>
>>
>> Do you have administrator access to this system?  With dtrace(1M), you
>> can drill down on this, and get more data on what's going on.  See:
>>
> Yes, I do; the backtraces are rather long, so I won't post them unless you
> explicitly want them.  In summary, for 8192 threads:
>
> - 433 hits in unix`idle
> - 179 hits in libc.so`cond_signal [invoked from the source thread]
> - 52 hits in libc.so`__lwp_park [cond_wait invoked from my code]
> - 48 and 41 hits in AES encryption [useful work]
> - the rest (119 hits) in CV wait/signal and mutex wait/wakeup
>
> gethrtime() reports a total elapsed time of 21.4 seconds.  However, the
> total amount of real time spent in the top 10 hits is only 872 / 97 = ~9
> seconds.  (The rest is probably other application processing.)

Hrm;  try dropping the ', ustack()' from the script;  that way, you're  
only considering kernel stacks, which should raise the amount of signal.
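
Roughly, that leaves you with something like the following (the 97Hz rate,
the pid predicate, and ./your_program are illustrative placeholders here,
not necessarily what the script you're running uses):

    # sample on-CPU kernel stacks of the target process at 97Hz
    dtrace -n 'profile-97 /pid == $target/ { @[stack()] = count(); }' \
        -c ./your_program

i.e. aggregate on stack() alone rather than on stack(), ustack().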


> Why is idle() called so often when there's always at least one thread
> runnable?  [If that weren't the case, the program would never finish.]  In
> both cases, the idle time equals ca. 20% of the total real running time
> (iostat reports 9% idle -- I guess that's divided across the CPUs).

I'm not sure;  let's get the kernel stack traces as described above, and
see where that leads.

> I also tried to trace the program with plockstat on 8192 threads (both as
> an ordinary user and as root), and got only the following message:
>
> plockstat: processing aborted: Abort due to drop

You probably need to increase the buffer sizes for this; add -xaggsize=4m
to start.  If that doesn't help, add -xbufsize=1m, and if that's still not
working, raise it to -xaggsize=16m.
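
For example (the -A flag and ./your_program are just illustrative; use
whatever invocation you were already running):

    # start by growing the aggregation buffer
    plockstat -A -x aggsize=4m ./your_program

    # if drops persist, grow the principal buffer and the aggregation further
    plockstat -A -x bufsize=1m -x aggsize=16m ./your_program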

Cheers,
- jonathan


> Thanks for your help.
>
> Best regards,
>  Zeljko.

--------------------------------------------------------------------------
Jonathan Adams, Sun Microsystems, ZFS Team    http://blogs.sun.com/jwadams
