I think Mike is on the right track with some kind of cache effect here.
The increased sleep time may simply reflect longer lock hold times,
since your critical regions take longer to complete, and it may also
reflect longer lock acquire and release times, since the lock
structures themselves suffer from the same cache thrashing.

Take a look at your cache miss rates as you cross the 2^11 boundary.
My guess is that you will see something start to go through the roof.
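
If you want to watch it, something along the lines of

  cputrack -T 1 -c <dc-miss-event> -p <pid>

will print the counter once a second.  I don't remember the exact
Opteron event name off hand (cpustat -h lists what your processor
supports); one of the data cache miss events is the one to watch.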

In addition to exhausting some cache resource, you might also be hit
by false sharing as you increase your thread count.  That happens when
two threads' private data winds up in the same cache line because of
how it is laid out in memory.  It will typically show up as an increase
in CPU cross calls (the mpstat xcal column).  Make sure that the memory
block you split up among threads is aligned on a cache line boundary,
and that the structures handed to each thread are also cache line
aligned.  It is often worth adding a little padding to each structure
to keep the alignment you need.  Otherwise you can end up with a cache
line ping-pong game between cores.
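
Roughly what I have in mind, just as a sketch -- the struct and field
names are made up, and 64 is the Opteron line size, so substitute
whatever your CPU uses:

#include <stdlib.h>    /* memalign(3C) lives here on Solaris */

/* Opteron's L1/L2 line size; substitute what your CPU reports. */
#define CACHE_LINE 64

/*
 * Hypothetical per-thread work area, padded out to a full cache line
 * so that no two threads' fields ever share a line.
 */
typedef struct {
    long hits;
    long misses;
    char pad[CACHE_LINE - 2 * sizeof (long)];
} thread_stats_t;

/*
 * One entry per thread, with the base aligned on a line boundary.
 * Because sizeof (thread_stats_t) is a whole line, every element in
 * the array stays line aligned as well.
 */
thread_stats_t *
alloc_stats(int nthreads)
{
    return (memalign(CACHE_LINE, nthreads * sizeof (thread_stats_t)));
}

The same idea applies to the block you carve up among the threads:
give each thread a slice that is a multiple of the line size, starting
at an aligned address, so neighboring slices never touch the same line.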

David Lutz

----- Original Message -----
From: Zeljko Vrba <[EMAIL PROTECTED]>
Date: Thursday, April 10, 2008 11:00 am
Subject: Re: [perf-discuss] Thread scheduler: exponentially slow in extreme conditions
To: Mike Gerdts <[EMAIL PROTECTED]>
Cc: perf-discuss@opensolaris.org


> On Thu, Apr 10, 2008 at 11:57:27AM -0500, Mike Gerdts wrote:
> > 
> > In the "Opteron's Data Cache & Load/Store Units" notice, for instance,
> > that "Dual L1 Tags" has 2x1024 (presumably = 2048) entries.  That
> > seems suspiciously close to the knee in your observed performance.
> > 
> Hm, might be, but it still doesn't explain why there is a lot of idle 
> time.
> 
> >
> > Does x86 have the detailed hardware counters that are available on
> > sparc?  If so, cputrack(1) may be able to help out.
> > 
> Yes, it does.  I'll check it out, thanks for the suggestion.
> 