On Fri, 25 Nov 2016, Peter Zijlstra wrote:
> On Fri, Nov 25, 2016 at 10:23:26AM +0100, Peter Zijlstra wrote:
> > On Thu, Nov 24, 2016 at 07:58:07PM +0100, Peter Zijlstra wrote:
> >
> > > OK, so clearly I'm confused. So let me try again.
> > >
> > > LOCK_PI, does in one function: lookup_pi_state, and fixup_owner. If
> > > fixup_owner fails with -EAGAIN, we can redo the pi_state lookup.
On Thu, Oct 27, 2016 at 10:36:00PM +0200, Thomas Gleixner wrote:
> So that waiter which is now spinning on pi_mutex->lock has already set the
> waiters bit, which you undo. So you created the following scenario:
>
> CPU0			CPU1			CPU2
>
> TID 0x1001
On Fri, Nov 25, 2016 at 10:23:26AM +0100, Peter Zijlstra wrote:
> On Thu, Nov 24, 2016 at 07:58:07PM +0100, Peter Zijlstra wrote:
>
> > OK, so clearly I'm confused. So let me try again.
> >
> > LOCK_PI, does in one function: lookup_pi_state, and fixup_owner. If
> > fixup_owner fails with -EAGAIN, we can redo the pi_state lookup.
On Thu, Nov 24, 2016 at 07:58:07PM +0100, Peter Zijlstra wrote:
> OK, so clearly I'm confused. So let me try again.
>
> LOCK_PI, does in one function: lookup_pi_state, and fixup_owner. If
> fixup_owner fails with -EAGAIN, we can redo the pi_state lookup.
>
> The requeue stuff, otoh, has one each
On Thu, Nov 24, 2016 at 06:56:53PM +0100, Thomas Gleixner wrote:
> > I'm stumped on REQUEUE_PI.. this relies on attach_to_pi_owner() and
>
> You mean LOCK_PI, right?
>
> > fixup_owner() being in the same function. But this is not the case for
> > requeue. WAIT_REQUEUE has the fixup, as its retur
On Thu, 24 Nov 2016, Peter Zijlstra wrote:
> On Wed, Nov 23, 2016 at 08:20:05PM +0100, Peter Zijlstra wrote:
> > > +	if (oldowner == &init_task && uval != 0) {
> > > +		raw_spin_lock(&pi_state->owner->pi_lock);
> > > +		list_del_init(&pi_state->list);
> > > +		raw_spin_unlock(&pi_state->owner->pi_lock);
On Wed, Nov 23, 2016 at 08:20:05PM +0100, Peter Zijlstra wrote:
> > +	if (oldowner == &init_task && uval != 0) {
> > +		raw_spin_lock(&pi_state->owner->pi_lock);
> > +		list_del_init(&pi_state->list);
> > +		raw_spin_unlock(&pi_state->owner->pi_lock);
> > +		pi
On Thu, Oct 27, 2016 at 10:36:00PM +0200, Thomas Gleixner wrote:
> > +	new_owner = rt_mutex_next_owner(&pi_state->pi_mutex);
> > +	if (!new_owner) {
> > +		/*
> > +		 * This is the case where futex_lock_pi() has not yet or failed
> > +		 * to acquire the lock but stil
On Fri, 21 Oct 2016, Peter Zijlstra wrote:
> On Mon, Oct 10, 2016 at 12:17:48PM +0200, Thomas Gleixner wrote:
> > I fear, we need to rethink this whole locking/protection scheme from
> > scratch.
>
> Here goes... as discussed at ELCE this serializes the {uval,
> pi_state} state using pi_mutex->wait_lock.
On Mon, Oct 10, 2016 at 12:17:48PM +0200, Thomas Gleixner wrote:
> I fear, we need to rethink this whole locking/protection scheme from
> scratch.
Here goes... as discussed at ELCE this serializes the {uval,
pi_state} state using pi_mutex->wait_lock.
I did my best to reason about requeue_pi, but
On Sun, Oct 09, 2016 at 01:17:50PM +0200, Thomas Gleixner wrote:
> On Fri, 7 Oct 2016, Peter Zijlstra wrote:
> > 	top_waiter = futex_top_waiter(hb, &key);
> > 	if (top_waiter) {
> > -		ret = wake_futex_pi(uaddr, uval, top_waiter, hb);
> > +		struct futex_pi_state *pi_state = top_waiter->pi_state;
On Mon, Oct 10, 2016 at 12:17:48PM +0200, Thomas Gleixner wrote:
> There is another problem with all that racing against fixup_owner()
> resp. fixup_pi_state_owner().
>
> I fear, we need to rethink this whole locking/protection scheme from
> scratch.
So for pi_state (ie, the attach_to_pi_state()
On Sat, 8 Oct 2016, Peter Zijlstra wrote:
> On Sat, Oct 08, 2016 at 05:53:49PM +0200, Thomas Gleixner wrote:
> > On Fri, 7 Oct 2016, Peter Zijlstra wrote:
> > > Solve all that by:
> > >
> > > - using futex specific rt_mutex calls that lack the fastpath, futexes
> > >   have their own fastpath anyway. This makes that
On Fri, 7 Oct 2016, Peter Zijlstra wrote:
> 	top_waiter = futex_top_waiter(hb, &key);
> 	if (top_waiter) {
> -		ret = wake_futex_pi(uaddr, uval, top_waiter, hb);
> +		struct futex_pi_state *pi_state = top_waiter->pi_state;
> +
> +		ret = -EINVAL;
> +
On Fri, 7 Oct 2016, Peter Zijlstra wrote:
> +	/*
> +	 * Grab a reference on the pi_state and drop hb->lock.
> +	 *
> +	 * The reference ensures pi_state lives, dropping the hb->lock
> +	 * is tricky.. wake_futex_pi() will take rt_mutex::wait_lock
On Sat, 8 Oct 2016, Peter Zijlstra wrote:
> > So in this case we tell the caller on CPU 1 that the futex is in an
> > inconsistent state, because pi_state->owner still points to the unlocking
> > task while the user space value already shows the new owner. So this sanity
> > check triggers and we simply
On Sat, Oct 08, 2016 at 05:53:49PM +0200, Thomas Gleixner wrote:
> On Fri, 7 Oct 2016, Peter Zijlstra wrote:
> > Solve all that by:
> >
> > - using futex specific rt_mutex calls that lack the fastpath, futexes
> >   have their own fastpath anyway. This makes that
> >   rt_mutex_futex_unlock() doesn't need to drop rt_mutex::wait_lock
On Fri, 7 Oct 2016, Peter Zijlstra wrote:
> Solve all that by:
>
> - using futex specific rt_mutex calls that lack the fastpath, futexes
>   have their own fastpath anyway. This makes that
>   rt_mutex_futex_unlock() doesn't need to drop rt_mutex::wait_lock
>   and the unlock is guaranteed if
New version..
This one seems to pass all the (pi) futex tests and survives many hours
of my modified pi_stress (I added MADV_UNMAP to punch holes in the
page-tables to trigger (minor) faults).
---
Subject: futex: Rewrite FUTEX_UNLOCK_PI
From: Peter Zijlstra
Date: Sun Oct 2 18:42:33 CEST 2016
On Mon, Oct 03, 2016 at 11:12:38AM +0200, Peter Zijlstra wrote:
> There's a number of 'interesting' problems with FUTEX_UNLOCK_PI, all
> caused by holding hb->lock while doing the rt_mutex_unlock()
> > equivalent.
>
> This patch doesn't attempt to fix any of the actual problems, but
> instead rewor
On Wed, Oct 05, 2016 at 10:21:02AM +0200, Sebastian Andrzej Siewior wrote:
> On 2016-10-05 10:09:12 [+0200], Peter Zijlstra wrote:
> > On Wed, Oct 05, 2016 at 09:41:47AM +0200, Sebastian Andrzej Siewior wrote:
> > > are those problems DL related?
> >
> > One of them, the other is that PI thing you did that ugly nodeboost
> > thing for, right?
On 2016-10-05 10:09:12 [+0200], Peter Zijlstra wrote:
> On Wed, Oct 05, 2016 at 09:41:47AM +0200, Sebastian Andrzej Siewior wrote:
> > are those problems DL related?
>
> One of them, the other is that PI thing you did that ugly nodeboost
> thing for, right?
this no-de-boost, yes. This is probably
On Wed, Oct 05, 2016 at 09:41:47AM +0200, Sebastian Andrzej Siewior wrote:
> On 2016-10-03 11:12:38 [+0200], Peter Zijlstra wrote:
> > There's a number of 'interesting' problems with FUTEX_UNLOCK_PI, all
> > caused by holding hb->lock while doing the rt_mutex_unlock()
> > equivalent.
>
> are those problems DL related?
On 2016-10-03 11:12:38 [+0200], Peter Zijlstra wrote:
> There's a number of 'interesting' problems with FUTEX_UNLOCK_PI, all
> caused by holding hb->lock while doing the rt_mutex_unlock()
> equivalent.
are those problems DL related?
Sebastian
On Mon, 3 Oct 2016 17:45:14 +0200
Peter Zijlstra wrote:
> On Mon, Oct 03, 2016 at 11:36:24AM -0400, Steven Rostedt wrote:
>
> > > + WARN_ON_ONCE(!atomic_inc_not_zero(&pi_state->refcount));
> >
> > Don't we have a rule where WARN_ON() and BUG_ON() should never have
> > "side effects"? That is,
On Mon, Oct 03, 2016 at 11:36:24AM -0400, Steven Rostedt wrote:
> > + WARN_ON_ONCE(!atomic_inc_not_zero(&pi_state->refcount));
>
> Don't we have a rule where WARN_ON() and BUG_ON() should never have
> "side effects"? That is, they should only check values, but their
> contents should not update
On Mon, Oct 03, 2016 at 11:36:24AM -0400, Steven Rostedt wrote:
> > 	/*
> > -	 * If current does not own the pi_state then the futex is
> > -	 * inconsistent and user space fiddled with the futex value.
> > +	 * Now that we hold wait_lock, no new waiters can happen on the
> > +	 * rt_mutex
On Mon, 03 Oct 2016 11:12:38 +0200
Peter Zijlstra wrote:
> There's a number of 'interesting' problems with FUTEX_UNLOCK_PI, all
> caused by holding hb->lock while doing the rt_mutex_unlock()
> equivalent.
>
> This patch doesn't attempt to fix any of the actual problems, but
> instead reworks th
There's a number of 'interesting' problems with FUTEX_UNLOCK_PI, all
caused by holding hb->lock while doing the rt_mutex_unlock()
equivalent.
This patch doesn't attempt to fix any of the actual problems, but
instead reworks the code to not hold hb->lock across the unlock,
paving the way to actual