On Fri, Nov 02, 2007 at 04:33:32PM +0100, Ingo Molnar wrote:
>
> * Nick Piggin <[EMAIL PROTECTED]> wrote:
>
> > Anyway, if this can make its way to the x86 tree, I think it will get
> > pulled into -mm (?) and get some exposure...
>
> ok, we can certainly try it there.
Anything particular I ha
On Fri, Nov 02, 2007 at 08:56:46PM -0400, Chuck Ebbert wrote:
> On 11/02/2007 07:01 PM, Nick Piggin wrote:
> >
> > In the contended multi-threaded tight loop, the xchg lock is slower than inc
> > lock but still beats the fair xadd lock, but that's only because it is
> > just as unfair if not more so on this hardware (runtime difference of up to
> > about 10%)
On 11/02/2007 07:01 PM, Nick Piggin wrote:
>
> In the contended multi-threaded tight loop, the xchg lock is slower than inc
> lock but still beats the fair xadd lock, but that's only because it is
> just as unfair if not more so on this hardware (runtime difference of up to
> about 10%)
>
I mean
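For illustration, a minimal sketch of the kind of contended multi-threaded
tight loop being measured above; the thread count, iteration count, and the
pthread spinlock stand-in are assumptions, not the actual test harness:

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 8
#define ITERS    1000000

static pthread_spinlock_t lock;
static volatile unsigned long shared;

/* Each thread hammers the same lock with a tiny critical section;
 * total runtime is compared across lock implementations. */
static void *hammer(void *arg)
{
        (void)arg;
        for (int i = 0; i < ITERS; i++) {
                pthread_spin_lock(&lock);    /* lock under test */
                shared++;
                pthread_spin_unlock(&lock);
        }
        return NULL;
}

int main(void)
{
        pthread_t t[NTHREADS];

        pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
        for (int i = 0; i < NTHREADS; i++)
                pthread_create(&t[i], NULL, hammer, NULL);
        for (int i = 0; i < NTHREADS; i++)
                pthread_join(t[i], NULL);
        printf("shared = %lu\n", shared);
        return 0;
}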
On Fri, Nov 02, 2007 at 09:51:27AM -0700, Linus Torvalds wrote:
>
>
> On Fri, 2 Nov 2007, Chuck Ebbert wrote:
> >
> > There's also a very easy way to get better fairness with our current
> > spinlocks:
> > use xchg to release the lock instead of mov.
>
> That does nothing at all.
>
> Yes, it slows the unlock down, which in turn on some machines will make it
> easier for another core to get the lock.
On Fri, Nov 02, 2007 at 10:05:37AM -0400, Rik van Riel wrote:
> On Fri, 2 Nov 2007 07:42:20 +0100
> Nick Piggin <[EMAIL PROTECTED]> wrote:
>
> > On Thu, Nov 01, 2007 at 06:19:41PM -0700, Linus Torvalds wrote:
> > >
> > >
> > > On Thu, 1 Nov 2007, Rik van Riel wrote:
> > > >
> > > > Larry Woodman managed to wedge the VM into a state where, on his
> > > > 4x dual core system, only 2 cores (on the same CPU) could get the
> > > > zone->lru_lock overnight.
On Fri, 2 Nov 2007, Chuck Ebbert wrote:
>
> There's also a very easy way to get better fairness with our current
> spinlocks:
> use xchg to release the lock instead of mov.
That does nothing at all.
Yes, it slows the unlock down, which in turn on some machines will make it
easier for another core to get the lock.
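For concreteness, a hedged sketch (not the kernel's actual code) of the two
unlock sequences being compared, using a simplified spinlock word:

/* Illustrative stand-in for the pre-ticket x86 spinlock word
 * (positive = free, zero/negative = held); not the kernel code. */
static inline void unlock_with_mov(volatile int *slock)
{
        /* Plain store: stores on x86 already have release semantics,
         * so this is sufficient, and cheap. */
        asm volatile("movl $1, %0" : "=m" (*slock) : : "memory");
}

static inline void unlock_with_xchg(volatile int *slock)
{
        int val = 1;
        /* xchg with a memory operand is implicitly locked: a full
         * barrier, much slower, and per the point above it adds no
         * fairness -- it only changes when waiters see the line. */
        asm volatile("xchgl %0, %1" : "+r" (val), "+m" (*slock) : : "memory");
}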
On 11/01/2007 10:03 AM, Nick Piggin wrote:
> Introduce ticket lock spinlocks for x86 which are FIFO. The implementation
> is described in the comments. The straight-line lock/unlock instruction
> sequence is slightly slower than the dec based locks on modern x86 CPUs,
> however the difference is quite small on Core2 and Opteron when working
> out of cache.
* Nick Piggin <[EMAIL PROTECTED]> wrote:
> Anyway, if this can make its way to the x86 tree, I think it will get
> pulled into -mm (?) and get some exposure...
ok, we can certainly try it there. Your code is really nifty.
Ingo
Linus Torvalds wrote:
>
> On Thu, 1 Nov 2007, Gregory Haskins wrote:
>> I had observed this phenomenon on some 8-ways here as well, but I didn't
>> have the bandwidth to code something up. Thumbs up!
>
> Can you test under interesting loads?
Sure thing. I'll try this next week.
>
> We're interested in:
On Fri, 2 Nov 2007 07:42:20 +0100
Nick Piggin <[EMAIL PROTECTED]> wrote:
> On Thu, Nov 01, 2007 at 06:19:41PM -0700, Linus Torvalds wrote:
> >
> >
> > On Thu, 1 Nov 2007, Rik van Riel wrote:
> > >
> > > Larry Woodman managed to wedge the VM into a state where, on his
> > > 4x dual core system, only 2 cores (on the same CPU) could get the
> > > zone->lru_lock overnight.
On Thu, Nov 01, 2007 at 06:19:41PM -0700, Linus Torvalds wrote:
>
>
> On Thu, 1 Nov 2007, Rik van Riel wrote:
> >
> > Larry Woodman managed to wedge the VM into a state where, on his
> > 4x dual core system, only 2 cores (on the same CPU) could get the
> > zone->lru_lock overnight. The other 6 cores on the system were
> > just spinning, without being able to get the lock.
On Thu, 1 Nov 2007 18:19:41 -0700 (PDT)
Linus Torvalds <[EMAIL PROTECTED]> wrote:
> On Thu, 1 Nov 2007, Rik van Riel wrote:
> >
> > Larry Woodman managed to wedge the VM into a state where, on his
> > 4x dual core system, only 2 cores (on the same CPU) could get the
> > zone->lru_lock overnight.
On Thu, 1 Nov 2007, Rik van Riel wrote:
>
> Larry Woodman managed to wedge the VM into a state where, on his
> 4x dual core system, only 2 cores (on the same CPU) could get the
> zone->lru_lock overnight. The other 6 cores on the system were
> just spinning, without being able to get the lock.
On Thu, 1 Nov 2007 09:38:22 -0700 (PDT)
Linus Torvalds <[EMAIL PROTECTED]> wrote:
> So "unfair" is obviously always bad. Except when it isn't.
Larry Woodman managed to wedge the VM into a state where, on his
4x dual core system, only 2 cores (on the same CPU) could get the
zone->lru_lock overnight.
On Thu, Nov 01, 2007 at 04:01:45PM -0400, Chuck Ebbert wrote:
> On 11/01/2007 10:03 AM, Nick Piggin wrote:
>
> [edited to show the resulting code]
>
> > + __asm__ __volatile__ (
> > + LOCK_PREFIX "xaddw %w0, %1\n"
> > + "1:\t"
> > + "cmpb %h0, %b0\n\t"
> > + "je 2f\n\t"
On 11/01/2007 10:03 AM, Nick Piggin wrote:
[edited to show the resulting code]
> + __asm__ __volatile__ (
> + LOCK_PREFIX "xaddw %w0, %1\n"
> + "1:\t"
> + "cmpb %h0, %b0\n\t"
> + "je 2f\n\t"
> + "rep ; nop\n\t"
> + "movb
On Thu, 1 Nov 2007, Gregory Haskins wrote:
>
> I had observed this phenomenon on some 8-ways here as well, but I didn't
> have the bandwidth to code something up. Thumbs up!
Can you test under interesting loads?
We're interested in:
- is the unfairness fix really noticeable (or does it just
Nick Piggin wrote:
> Introduce ticket lock spinlocks for x86 which are FIFO. The implementation
> is described in the comments. The straight-line lock/unlock instruction
> sequence is slightly slower than the dec based locks on modern x86 CPUs,
> however the difference is quite small on Core2 and Opteron when working
> out of cache.
Introduce ticket lock spinlocks for x86 which are FIFO. The implementation
is described in the comments. The straight-line lock/unlock instruction
sequence is slightly slower than the dec based locks on modern x86 CPUs,
however the difference is quite small on Core2 and Opteron when working out of
cache.
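For readers following along, a minimal C11 sketch of the ticket-lock idea the
patch implements in inline asm; the two-field layout and the names below are
illustrative (the real patch packs both tickets into a single 16-bit word),
not the kernel's code:

#include <stdatomic.h>

struct ticket_lock {
        atomic_ushort next;   /* next ticket to hand out */
        atomic_ushort owner;  /* ticket currently being served */
};

static void ticket_lock_acquire(struct ticket_lock *l)
{
        /* Atomically take a ticket: this is what the locked xaddw does. */
        unsigned short me = atomic_fetch_add(&l->next, 1);

        /* Spin until our number comes up: strict FIFO, so no waiter
         * can starve.  The real code runs "rep ; nop" (pause) here. */
        while (atomic_load(&l->owner) != me)
                ;
}

static void ticket_lock_release(struct ticket_lock *l)
{
        /* Hand the lock to the next ticket: the holder is the only
         * writer of ->owner, so an increment-and-store suffices. */
        atomic_store(&l->owner, (unsigned short)(atomic_load(&l->owner) + 1));
}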