This ladder locking is also known as lock coupling in the database world.
It's a good fit when you have lots of tasks and need high concurrency,
and it is well established in the literature. You might notice more
overhead if you only have a small number of tasks (e.g. 2-3), but it
scales better.
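A minimal sketch of this hand-over-hand (lock-coupling) descent, assuming
a per-node pthread mutex on a hypothetical linked structure -- not the
kernel's radix tree or its actual locking primitives:

```c
/* Sketch of hand-over-hand (ladder / lock-coupling) locking.
 * The node layout is a made-up singly linked list for illustration. */
#include <pthread.h>
#include <stddef.h>

struct node {
    pthread_mutex_t lock;
    int key;
    struct node *next;
};

/* Find the node with the given key (root must be non-NULL).  At every
 * step we hold the current node's lock while taking the child's lock,
 * then release the parent -- so a second walker can enter the structure
 * as soon as we step off the root. */
struct node *find_coupled(struct node *root, int key)
{
    struct node *cur = root;
    pthread_mutex_lock(&cur->lock);          /* lock A0 */
    while (cur && cur->key != key) {
        struct node *next = cur->next;
        if (next)
            pthread_mutex_lock(&next->lock); /* lock B1 while holding A0 */
        pthread_mutex_unlock(&cur->lock);    /* unlock A0: next op can start */
        cur = next;
    }
    return cur;  /* returned still locked, or NULL if not found */
}
```

The caller gets the target back still locked and must unlock it; the
point is that only two locks are ever held at once, and the root lock is
released after a single step.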
On Mon, 29 Jan 2007, Peter Zijlstra wrote:
> Ladder locking would end up:
>
> lock A0
> lock B1
> unlock A0 -> a new operation can start
> lock C2
> unlock B1
> lock D5
> unlock C2
> ** we do stuff to D5
> unlock D5
>
Instead of taking one lock we would need to take 4? Won't doing so cause
significant overhead?
For path locking, this would end up being something like this:
(say we can determine the walk will never cross back up above C2)
lock A0
lock B1
lock C2
unlock A0 -> a new operation can start
unlock B1
lock D5
** we do stuff to D5 and walk back up to C2
unlock C2
unlock D5
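Under that stated assumption (the walk never goes back above C2), the
path variant can be sketched like this; the node names follow the
example, but everything else (the pnode struct, the pivot index, the
use of pthread mutexes) is made up for illustration:

```c
/* Sketch of path locking: lock down the path, and once the highest node
 * the operation may still revisit (C2, the "pivot") is locked, drop the
 * locks above it.  Hypothetical layout, not the kernel radix tree. */
#include <pthread.h>
#include <stddef.h>

struct pnode {
    pthread_mutex_t lock;
};

/* Descend along path[0..depth-1] (root first).  'pivot' is the index of
 * the highest node the operation may still walk back up to; ancestors
 * above it are released as soon as the pivot is locked. */
void path_lock(struct pnode **path, int depth, int pivot)
{
    for (int i = 0; i < depth; i++) {
        pthread_mutex_lock(&path[i]->lock);  /* lock A0, B1, C2, D5 ... */
        if (i == pivot)                      /* ... C2 is now held ... */
            for (int j = 0; j < pivot; j++)  /* ... so drop A0 and B1 */
                pthread_mutex_unlock(&path[j]->lock);
    }
}

/* After doing the work at the leaf and walking back up to the pivot,
 * release the remaining locks (pivot through leaf). */
void path_unlock(struct pnode **path, int depth, int pivot)
{
    for (int i = pivot; i < depth; i++)
        pthread_mutex_unlock(&path[i]->lock);
}
```

Compared with lock coupling, a new operation can still start once A0 is
dropped, but C2 stays held for the whole operation so the walk back up
is safe.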
> Aside from breaking MTD this version of the concurrent page cache seems
> rock solid on my dual core x86_64 box.

The write side still looks scary to me and introduces new ways of
locking. Ladder locking?
> Aside from breaking MTD this version of the concurrent page cache seems
> rock solid on my dual core x86_64 box.
What exactly is the MTD doing and how does it break?
With Nick leading the way to getting rid of the read side of the tree_lock,
this work continues by breaking the write side of said lock.
Aside from breaking MTD this version of the concurrent page cache seems
rock solid on my dual core x86_64 box.