On Thu, Jan 31, 2013 at 11:45:41AM +0100, Ingo Molnar wrote:
> 
> * Yuanhan Liu <yuanhan....@linux.intel.com> wrote:
> 
> > > > output with this patch:
> > > > -----------------------
> > > > cpu 00:   0   0   ...   1   1   2   1   1   1   2   1   1   1 .... 1   3
> > > > cpu 01:   0   0   ...   1   1   1   1   1   1   2   1   1   1 .... 1   3
> > > > cpu 02:   0   0   ...   2   2   3   2   0   2   1   2   1   1 .... 1   1
> > > > cpu 03:   0   0   ...   2   2   3   2   1   2   1   2   1   1 .... 1   1
> > > > cpu 04:   0   1   ...   2   0   0   1   0   1   3   1   1   1 .... 1   1
> > > > cpu 05:   0   1   ...   2   0   1   1   0   1   2   1   1   1 .... 1   1
> > > > cpu 06:   0   0   ...   2   1   1   2   0   1   2   1   1   1 .... 2   1
> > > > cpu 07:   0   0   ...   2   1   1   2   0   1   2   1   1   1 .... 2   1
> > > > cpu 08:   0   0   ...   1   1   1   1   1   1   1   1   1   1 .... 0   0
> > > > cpu 09:   0   0   ...   1   1   1   1   1   1   1   1   1   1 .... 0   0
> > > > cpu 10:   0   0   ...   1   1   1   0   0   1   1   1   1   1 .... 0   0
> > > > cpu 11:   0   0   ...   1   1   1   0   0   1   1   1   1   2 .... 1   0
> > > > cpu 12:   0   0   ...   1   1   1   0   1   1   0   0   0   1 .... 2   1
> > > > cpu 13:   0   0   ...   1   1   1   0   1   1   1   0   1   2 .... 2   0
> > > > cpu 14:   0   0   ...   2   0   0   0   0   1   1   1   1   1 .... 2   2
> > > > cpu 15:   0   0   ...   2   0   0   1   0   1   1   1   1   1 .... 2   2
> > > > ------------------------------------------------------------------------
> > > > Where you can see that CPU is much busier with this patch.
> > > 
> > > That looks really good - quite similar to how it behaved 
> > > with mutexes, right?
> > 
> > Yes :)
> > 
> > And the result is almost same with mutex lock when MUTEX_SPIN_ON_OWNER
> > is disabled, and that's the reason you will see massive processes(about
> > 100) queued on each CPU in my last report:
> >     https://lkml.org/lkml/2013/1/29/84
> 
> Just curious: how does MUTEX_SPIN_ON_OWNER versus 
> !MUTEX_SPIN_ON_OWNER compare, for this particular, 
> massively-contended anon-vma locks benchmark?

In the above test case, MUTEX_SPIN_ON_OWNER does a slightly better job
(about 3% ~ 4%) than !MUTEX_SPIN_ON_OWNER.
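
For reference, MUTEX_SPIN_ON_OWNER lets a contending task keep spinning as
long as the lock owner is still running on another CPU, instead of going to
sleep right away. Below is only a rough sketch of that idea, not the actual
kernel/mutex.c code; owner_on_cpu() is just a stand-in for the real
owner-running check:

static bool mutex_optimistic_spin(struct mutex *lock)
{
        for (;;) {
                struct task_struct *owner = ACCESS_ONCE(lock->owner);

                /* Stop spinning once the owner sleeps or releases the lock. */
                if (owner && !owner_on_cpu(owner))
                        break;

                /* Try to grab the lock while we are spinning. */
                if (mutex_trylock(lock))
                        return true;

                if (need_resched())
                        break;

                cpu_relax();
        }

        return false;   /* fall back to the sleeping slow path */
}

With it disabled, every contended acquisition goes straight to sleep, which
is where the extra 3% ~ 4% above comes from.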

> 
> > > Does this recover most of the performance regression?
> > 
> > Yes, there is only a 10% gap here then. I guess that's because 

Sorry, to be accurate, it's about a 14% gap when MUTEX_SPIN_ON_OWNER is
enabled.

> > I used the general rwsem lock 
> > implementation(lib/rwsem-spinlock.c), but not the XADD 
> > one(lib/rwsem.c). I guess the gap may be a little smaller if 
> > we do the same thing to lib/rwsem.c.
> 
> Is part of the gap due to MUTEX_SPIN_ON_OWNER perhaps?

Not really: !MUTEX_SPIN_ON_OWNER only introduces a small performance drop,
as stated above.

So, to make it clear, here is the list:

lock case                            performance drop vs. mutex lock
                                     (MUTEX_SPIN_ON_OWNER enabled)
----------------------------------------------------------------------------
mutex lock w/o MUTEX_SPIN_ON_OWNER   3.x%
rwsem-spinlock with write stealing   14.x%
rwsem-spinlock                       >100%
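
To spell out what "write stealing" means in the rwsem-spinlock case: a
writer may grab the lock whenever it is not actively held, even if other
tasks are already queued on the wait list, instead of always queueing at
the tail. Here is a simplified sketch of the fast path, assuming the
lib/rwsem-spinlock.c style counter where sem->activity is 0 when free, -1
when write-locked and >0 when read-locked; it only illustrates the idea and
is not the actual patch, and __down_write_queue_and_sleep() is a stand-in
for the existing queue-and-sleep code:

void __down_write(struct rw_semaphore *sem)
{
        unsigned long flags;

        raw_spin_lock_irqsave(&sem->wait_lock, flags);

        if (sem->activity == 0) {
                /* Lock is free: steal it even if waiters are queued. */
                sem->activity = -1;
                raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
                return;
        }

        /* Otherwise queue up and sleep as before. */
        __down_write_queue_and_sleep(sem, flags);
}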


> 
> I'm surprised that rwsem-spinlock versus rwsem.c would show a 
> 10% performance difference -

Right, it may not. There is only about a 0.9% performance difference in the
above test between rwsem-spinlock and the XADD rwsem. The difference might
grow once both have write lock stealing enabled, but we will only know
after doing the same thing to lib/rwsem.c.
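
For context, the XADD rwsem packs readers, writers and waiters into a
single atomic count, so its write fast path looks roughly like the sketch
below; the function name is made up for the sketch, and the constants and
failed-path helper only follow the usual lib/rwsem.c layout rather than
being copied from any particular arch header. Write stealing for this
variant would have to live in the contended slow path, which is the part
that still needs doing:

static inline void down_write_fastpath(struct rw_semaphore *sem)
{
        long tmp = atomic_long_add_return(RWSEM_ACTIVE_WRITE_BIAS,
                                          (atomic_long_t *)&sem->count);

        /* Old count was not "unlocked": readers, a writer or waiters
         * are present, so fall into the contended slow path. */
        if (unlikely(tmp != RWSEM_ACTIVE_WRITE_BIAS))
                rwsem_down_write_failed(sem);
}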

Thanks.

        --yliu

> assuming you have lock 
> debugging/tracing disabled in the .config.
> 
> ( Once the performance regression is fixed, another thing to 
>   check would be to reduce anon-vma lock contention. )
> 
> Thanks,
> 
>       Ingo