On 10/01/2013 03:33 AM, Ingo Molnar wrote:
> * Waiman Long wrote:
>
> > > I think Waiman's patches (even the later ones) made the queued rwlocks
> > > be a side-by-side implementation with the old rwlocks, and I think
> > > that was just being unnecessarily careful. It might be useful for
> > > testing to have a config option to switch between the two [...]
* Peter Zijlstra wrote:
> On Tue, Oct 01, 2013 at 09:28:15AM +0200, Ingo Molnar wrote:
>
> > That I mostly agree with, except that without a serious usecase do we
> > have a guarantee that bugs in fancy queueing in rwsems get ironed
> > out?
>
> Methinks mmap_sem is still a big enough lock to work out a locking
> primitive :-)
On Tue, Oct 01, 2013 at 09:48:02AM +0200, Ingo Molnar wrote:
>
> * Peter Zijlstra wrote:
>
> > On Sat, Sep 28, 2013 at 11:55:26AM -0700, Linus Torvalds wrote:
> > > So if the primary reason for this is really just that f*cking anon_vma
> > > lock, then I would seriously suggest:
> >
> > I would still like to see the rwsem patches merged; even if we end up
> > going back to a spin style anon_vma lock.
On Tue, Oct 01, 2013 at 09:28:15AM +0200, Ingo Molnar wrote:
> That I mostly agree with, except that without a serious usecase do we have
> a guarantee that bugs in fancy queueing in rwsems get ironed out?
Methinks mmap_sem is still a big enough lock to work out a locking
primitive :-)
In fact [...]
* Peter Zijlstra wrote:
> On Sat, Sep 28, 2013 at 11:55:26AM -0700, Linus Torvalds wrote:
> > So if the primary reason for this is really just that f*cking anon_vma
> > lock, then I would seriously suggest:
>
> I would still like to see the rwsem patches merged; even if we end up
> going back to a spin style anon_vma lock.
* Waiman Long wrote:
> > I think Waiman's patches (even the later ones) made the queued rwlocks
> > be a side-by-side implementation with the old rwlocks, and I think
> > that was just being unnecessarily careful. It might be useful for
> > testing to have a config option to switch between the two [...]
* Peter Zijlstra wrote:
> On Mon, Sep 30, 2013 at 09:13:52AM -0700, Linus Torvalds wrote:
>
> > So unlike a lot of other "let's try to make our locking fancy" that I
> > dislike because it tends to hide the fundamental problem of
> > contention, the rwlock patches make me go "those actually _fix_ a
> > fundamental problem".
On Mon, Sep 30, 2013 at 09:13:52AM -0700, Linus Torvalds wrote:
> So unlike a lot of other "let's try to make our locking fancy" that I
> dislike because it tends to hide the fundamental problem of
> contention, the rwlock patches make me go "those actually _fix_ a
> fundamental problem".
So here [...]
On Mon, Sep 30, 2013 at 3:44 AM, Peter Zijlstra wrote:
> On Sat, Sep 28, 2013 at 12:33:36PM -0700, Linus Torvalds wrote:
>> The old rwlocks really have been a disappointment - they are slower
>> than spinlocks, and seldom/never end up scaling any better. Their
>> main advantage was literally the irq behavior - allowing readers to
>> happen without the expense of [...]
On 09/30/2013 03:05 AM, Ingo Molnar wrote:
> * Michel Lespinasse wrote:
>
> > That said, I am very scared of using rwlock_t here, and I would much
> > prefer we choose a fair lock (either spinlock or a new rwlock
> > implementation which guarantees not to starve any locker thread)
>
> Given how few users rwlock_t has today we could attempt to make [...]
On 09/28/2013 03:33 PM, Linus Torvalds wrote:
> On Sat, Sep 28, 2013 at 12:21 PM, Ingo Molnar wrote:
> >
> > If we do that then I suspect the next step will be queued rwlocks :-/ The
> > current rwlock_t implementation is rather primitive by modern standards.
> > (We'd probably have killed rwlock_t long ago if not for the
> > tasklist_lock.)
On Sat, Sep 28, 2013 at 11:55:26AM -0700, Linus Torvalds wrote:
> So if the primary reason for this is really just that f*cking anon_vma
> lock, then I would seriously suggest:
I would still like to see the rwsem patches merged; even if we end up
going back to a spin style anon_vma lock.
There's [...]
On Sat, Sep 28, 2013 at 12:33:36PM -0700, Linus Torvalds wrote:
> The old rwlocks really have been a disappointment - they are slower
> than spinlocks, and seldom/never end up scaling any better. Their
> main advantage was literally the irq behavior - allowing readers to
> happen without the expense of [...]
* Michel Lespinasse wrote:
> That said, I am very scared of using rwlock_t here, and I would much
> prefer we choose a fair lock (either spinlock or a new rwlock
> implementation which guarantees not to starve any locker thread)
Given how few users rwlock_t has today we could attempt to make [...]
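A lock with the guarantee Michel asks for can be had by pushing every arrival, reader or writer, through a single entry gate; a blocked writer then waits at most for the readers already inside. A minimal userspace sketch (hypothetical code; pthread mutexes are not strictly FIFO, but the gate still prevents unbounded bypass):

#include <pthread.h>
#include <stdatomic.h>

/* Starvation-resistant reader/writer lock: everyone takes `gate`
 * briefly on entry, so readers arriving after a blocked writer queue
 * behind it instead of streaming past. */
typedef struct {
    pthread_mutex_t gate;       /* serializes arrivals */
    atomic_int      readers;    /* readers currently inside */
} fair_rwlock;

#define FAIR_RWLOCK_INIT { PTHREAD_MUTEX_INITIALIZER, 0 }

static void frw_read_lock(fair_rwlock *l)
{
    pthread_mutex_lock(&l->gate);       /* wait behind any writer */
    atomic_fetch_add(&l->readers, 1);
    pthread_mutex_unlock(&l->gate);     /* readers run concurrently */
}

static void frw_read_unlock(fair_rwlock *l)
{
    atomic_fetch_sub(&l->readers, 1);
}

static void frw_write_lock(fair_rwlock *l)
{
    pthread_mutex_lock(&l->gate);       /* blocks all later arrivals */
    while (atomic_load(&l->readers))
        ;                               /* drain readers already in */
}

static void frw_write_unlock(fair_rwlock *l)
{
    pthread_mutex_unlock(&l->gate);
}

Note that the writer holds the gate for the whole write section, so no new reader or writer can slip in ahead of it; this trades the unfair lock's irq-reentrancy for the no-starvation guarantee.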
* Linus Torvalds wrote:
> [...]
>
> And your numbers for Ingo's patch:
>
> > After testing Ingo's anon-vma rwlock_t conversion (v2) on a 8 socket,
> > 80 core system with aim7, I am quite surprised about the numbers -
> > considering the lack of queuing in rwlocks. A lot of the tests didn't [...]
On Sat, Sep 28, 2013 at 11:55 AM, Linus Torvalds wrote:
> Btw, I really hate that thing. I think we should turn it back into a
> spinlock. None of what it protects needs a mutex or an rwsem.
>
> Because you guys talk about the regression of turning it into a rwsem,
> but nobody talks about the *original* [...]
On Sun, Sep 29, 2013 at 5:40 PM, Davidlohr Bueso wrote:
>
> Hmm, I'm getting the following at bootup:
>
> May be due to missing lock nesting notation
Yes it is. And that reminds me of a problem I think we had with this
code: we had a possible case of the preemption counter nesting too
deeply. I [...]
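For context, the lockdep warning quoted above ("may be due to missing lock nesting notation") fires whenever two locks of the same lock class are held at once, even if the ordering is deadlock-free by construction (here, two anon_vma locks during an rmap walk). The kernel's annotation for this is the *_nested() family. A hedged kernel-style fragment, not a standalone program: the struct and functions are invented, while spin_lock_nested() and SINGLE_DEPTH_NESTING are the real annotations:

#include <linux/spinlock.h>

/* Hypothetical example: a parent/child pair whose locks share one
 * lock class.  Taking both without annotation makes lockdep report
 * "possible recursive locking detected ... may be due to missing
 * lock nesting notation". */
struct node {
    spinlock_t   lock;
    struct node *parent;
};

static void lock_node_and_parent(struct node *n)
{
    spin_lock(&n->lock);
    /* Tell lockdep the second same-class acquisition is intentional
     * and one nesting level deeper, so it is not flagged as recursion. */
    spin_lock_nested(&n->parent->lock, SINGLE_DEPTH_NESTING);
}

static void unlock_node_and_parent(struct node *n)
{
    spin_unlock(&n->parent->lock);
    spin_unlock(&n->lock);
}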
On Sun, 2013-09-29 at 16:26 -0700, Linus Torvalds wrote:
> On Sun, Sep 29, 2013 at 4:06 PM, Davidlohr Bueso wrote:
> >>
> >> Btw, I really hate that thing. I think we should turn it back into a
> >> spinlock. None of what it protects needs a mutex or an rwsem.
> >
> > The same should apply to i_mmap_mutex, having a similar responsibility
> > to the anon-vma lock with [...]
On Sun, Sep 29, 2013 at 4:06 PM, Davidlohr Bueso wrote:
>>
>> Btw, I really hate that thing. I think we should turn it back into a
>> spinlock. None of what it protects needs a mutex or an rwsem.
>
> The same should apply to i_mmap_mutex, having a similar responsibility
> to the anon-vma lock with [...]
On Sat, 2013-09-28 at 11:55 -0700, Linus Torvalds wrote:
> On Sat, Sep 28, 2013 at 12:41 AM, Ingo Molnar wrote:
> >
> >
> > Yeah, I fully agree. The reason I'm still very sympathetic to Tim's
> > efforts is that they address a regression caused by a mechanic
> > mutex->rwsem conversion:
> >
> > 5a505085f043 mm/rmap: Convert the struct anon_vma::mutex to an rwsem [...]
* Linus Torvalds wrote:
> On Sat, Sep 28, 2013 at 12:21 PM, Ingo Molnar wrote:
> >
> > If we do that then I suspect the next step will be queued rwlocks :-/ The
> > current rwlock_t implementation is rather primitive by modern standards.
> > (We'd probably have killed rwlock_t long ago if not for the
> > tasklist_lock.)
On Sat, Sep 28, 2013 at 12:21 PM, Ingo Molnar wrote:
>
> If we do that then I suspect the next step will be queued rwlocks :-/ The
> current rwlock_t implementation is rather primitive by modern standards.
> (We'd probably have killed rwlock_t long ago if not for the
> tasklist_lock.)
Yeah, I'm not [...]
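The queued rwlock Ingo anticipates pairs a fast-path atomic word with a queue that slow-path arrivals line up on, which is what restores fairness. A much-simplified userspace sketch of that shape (hypothetical; the real proposal adds a writer-waiting bit and other refinements):

#include <pthread.h>
#include <stdatomic.h>

#define QW_LOCKED 0x0ffu    /* low byte: writer holds the lock */
#define QR_BIAS   0x100u    /* one reader in the upper bits    */

typedef struct {
    atomic_uint     cnts;       /* writer byte + reader count */
    pthread_mutex_t wait_lock;  /* the queue: slow path lines up here */
} qrwlock;

static void q_read_lock(qrwlock *l)
{
    unsigned c = atomic_fetch_add(&l->cnts, QR_BIAS) + QR_BIAS;
    if (!(c & QW_LOCKED))
        return;                             /* fast path: no writer  */
    atomic_fetch_sub(&l->cnts, QR_BIAS);    /* back off and queue up */
    pthread_mutex_lock(&l->wait_lock);
    atomic_fetch_add(&l->cnts, QR_BIAS);
    while (atomic_load(&l->cnts) & QW_LOCKED)
        ;                                   /* wait out the writer   */
    pthread_mutex_unlock(&l->wait_lock);
}

static void q_read_unlock(qrwlock *l)
{
    atomic_fetch_sub(&l->cnts, QR_BIAS);
}

static void q_write_lock(qrwlock *l)
{
    pthread_mutex_lock(&l->wait_lock);      /* queue in arrival order */
    unsigned expect = 0;
    while (!atomic_compare_exchange_weak(&l->cnts, &expect, QW_LOCKED))
        expect = 0;                         /* drain existing readers */
    pthread_mutex_unlock(&l->wait_lock);
}

static void q_write_unlock(qrwlock *l)
{
    atomic_fetch_sub(&l->cnts, QW_LOCKED);
}

Uncontended readers and writers touch only the atomic word; only contended arrivals fall back to the wait_lock queue, so fairness costs nothing on the fast path.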
On Sat, Sep 28, 2013 at 12:13 PM, Andi Kleen wrote:
>
> And afaik anon_vma is usually held short.
Yes.
But the problem with anon_vma is that the "usually" may be the 99.9%
case, but then there are some insane loads that do tons of forking
without execve, and they really make some of the rmap code [...]
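The load shape Linus describes is easy to reproduce: generations of fork() with no execve(), so all children share the parent's anon_vmas and every copy-on-write fault walks an ever longer rmap chain. A hypothetical microbenchmark along those lines (names and sizes invented):

#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork-heavy, exec-free workload: each generation multiplies the
 * number of processes sharing the same anonymous memory, which is
 * what stresses the anon_vma locking in the kernel's rmap code. */
#define GENERATIONS 3
#define CHILDREN    3
#define MAP_SIZE    (4 << 20)
#define PAGE        4096

static void touch(char *mem)
{
    for (size_t i = 0; i < MAP_SIZE; i += PAGE)
        mem[i]++;               /* COW faults exercise rmap */
}

int main(void)
{
    char *mem = malloc(MAP_SIZE);
    if (!mem)
        return 1;
    memset(mem, 1, MAP_SIZE);

    for (int g = 0; g < GENERATIONS; g++) {
        for (int c = 0; c < CHILDREN; c++) {
            if (fork() == 0)
                break;          /* children keep forking deeper */
        }
        touch(mem);             /* everyone dirties shared pages */
    }
    while (wait(NULL) > 0)      /* reap this process's children */
        ;
    return 0;
}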
* Linus Torvalds wrote:
> On Sat, Sep 28, 2013 at 12:41 AM, Ingo Molnar wrote:
> >
> >
> > Yeah, I fully agree. The reason I'm still very sympathetic to Tim's
> > efforts is that they address a regression caused by a mechanic
> > mutex->rwsem conversion:
> >
> > 5a505085f043 mm/rmap: Convert the struct anon_vma::mutex to an rwsem
> Of course, since then, we may well have screwed things up and now we
> sleep under it, but I still really think it was a mistake to do it in
> the first place.
>
> So if the primary reason for this is really just that f*cking anon_vma
> lock, then I would seriously suggest:
>
> - turn it back into a spinlock [...]
On Sat, Sep 28, 2013 at 12:41 AM, Ingo Molnar wrote:
>
>
> Yeah, I fully agree. The reason I'm still very sympathetic to Tim's
> efforts is that they address a regression caused by a mechanic
> mutex->rwsem conversion:
>
> 5a505085f043 mm/rmap: Convert the struct anon_vma::mutex to an rwsem
>
>
* Linus Torvalds wrote:
> On Fri, Sep 27, 2013 at 12:00 PM, Waiman Long wrote:
> >
> > On a large NUMA machine, it is entirely possible that a fairly large
> > number of threads are queuing up in the ticket spinlock queue to do
> > the wakeup operation. In fact, only one will be needed. This patch
> > tries to reduce spinlock contention [...]
* Tim Chen wrote:
> On Fri, 2013-09-27 at 12:39 -0700, Davidlohr Bueso wrote:
> > On Fri, 2013-09-27 at 12:28 -0700, Linus Torvalds wrote:
> > > On Fri, Sep 27, 2013 at 12:00 PM, Waiman Long wrote:
> > > >
> > > > On a large NUMA machine, it is entirely possible that a fairly large
> > > > number of threads are queuing up in the ticket spinlock queue to do
> > > > the wakeup operation. [...]
On 09/27/2013 03:32 PM, Peter Hurley wrote:
> On 09/27/2013 03:00 PM, Waiman Long wrote:
> > With the 3.12-rc2 kernel, there is sizable spinlock contention on
> > the rwsem wakeup code path when running AIM7's high_systime workload
> > on an 8-socket 80-core DL980 (HT off) as reported by perf:
> >
> > 7.64% reaim [kernel.kallsyms] [k] _raw_spin_lock_irq [...]
On Fri, 2013-09-27 at 12:39 -0700, Davidlohr Bueso wrote:
> On Fri, 2013-09-27 at 12:28 -0700, Linus Torvalds wrote:
> > On Fri, Sep 27, 2013 at 12:00 PM, Waiman Long wrote:
> > >
> > > On a large NUMA machine, it is entirely possible that a fairly large
> > > number of threads are queuing up in the ticket spinlock queue to do
> > > the wakeup operation. [...]
On Fri, 2013-09-27 at 12:28 -0700, Linus Torvalds wrote:
> On Fri, Sep 27, 2013 at 12:00 PM, Waiman Long wrote:
> >
> > On a large NUMA machine, it is entirely possible that a fairly large
> > number of threads are queuing up in the ticket spinlock queue to do
> > the wakeup operation. In fact, only one will be needed. [...]
On 09/27/2013 03:00 PM, Waiman Long wrote:
> With the 3.12-rc2 kernel, there is sizable spinlock contention on
> the rwsem wakeup code path when running AIM7's high_systime workload
> on an 8-socket 80-core DL980 (HT off) as reported by perf:
>
> 7.64% reaim [kernel.kallsyms] [k] _raw_spin_lock_irq
On Fri, Sep 27, 2013 at 12:00 PM, Waiman Long wrote:
>
> On a large NUMA machine, it is entirely possible that a fairly large
> number of threads are queuing up in the ticket spinlock queue to do
> the wakeup operation. In fact, only one will be needed. This patch
> tries to reduce spinlock contention [...]
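The shape of the fix is to make the wakeup a "one waker is enough" operation: releasers that lose a trylock on the wait-queue lock simply leave, trusting the current holder to perform the single wakeup pass. A hypothetical userspace sketch of that pattern (not Waiman's actual patch):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    pthread_mutex_t wait_lock;      /* protects the wait queue */
    atomic_bool     wakeup_pending; /* a wakeup is still owed  */
} waitq;

/* Instead of every releaser queueing on wait_lock just to discover
 * the sleepers are already being woken, losers of the trylock leave
 * immediately; whoever holds the lock must re-check wakeup_pending
 * before releasing it, so the wakeup cannot be lost. */
static void wake_up_waiters(waitq *q)
{
    atomic_store(&q->wakeup_pending, true);

    while (atomic_load(&q->wakeup_pending)) {
        if (pthread_mutex_trylock(&q->wait_lock) != 0)
            return;             /* current holder will do it for us */
        if (atomic_exchange(&q->wakeup_pending, false)) {
            /* ... walk the wait queue and wake one waiter ... */
        }
        pthread_mutex_unlock(&q->wait_lock);
    }
}

The contended case then costs one atomic store instead of joining the ticket-spinlock queue on the wait-queue lock, which is the pile-up the perf profile above shows.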