On Mon, Oct 03, 2016 at 11:12:38AM +0200, Peter Zijlstra wrote:
> There's a number of 'interesting' problems with FUTEX_UNLOCK_PI, all
> caused by holding hb->lock while doing the rt_mutex_unlock()
> equivalent.
>
> This patch doesn't attempt to fix any of the actual problems, but
> instead reworks the code to not hold hb->lock across the unlock,
> paving the way to actually fix the problems later.
>
> The current reason we hold hb->lock over unlock is that it serializes
> against FUTEX_LOCK_PI and prevents new waiters from coming in. This
> ensures the rt_mutex_next_owner() value is stable and can be written
> into the user-space futex value before doing the unlock, such that
> the unlock will indeed end up at new_owner.
>
> This patch recognises that holding rt_mutex::wait_lock provides the
> very same guarantee: no new waiters can come in while we hold that
> lock -- after all, waiters would need this lock to queue themselves.
>
> It therefore restructures the code to keep rt_mutex::wait_lock held.
>
> This (of course) is not entirely straightforward either; see the
> comment in rt_mutex_slowunlock(): doing the unlock itself might drop
> wait_lock, letting new waiters in. To cure this,
> rt_mutex_futex_unlock() becomes a variant of rt_mutex_slowunlock()
> that returns -EAGAIN instead. This ensures the FUTEX_UNLOCK_PI code
> aborts and restarts the entire operation.
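For reference, the reworked unlock path then has roughly this shape
(a heavily simplified sketch of my reading of the above, not the
actual diff; fault handling, the curval recheck and the pi_state
bookkeeping are elided, and the rt_mutex_futex_unlock() signature and
locking contract are assumed):

	static int unlock_pi_sketch(u32 __user *uaddr, u32 uval,
				    struct futex_pi_state *pi_state)
	{
		struct task_struct *new_owner;
		u32 curval;
		int ret;
retry:
		raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);

		/*
		 * New waiters need wait_lock to queue themselves, so
		 * new_owner is stable here without holding hb->lock.
		 */
		new_owner = rt_mutex_next_owner(&pi_state->pi_mutex);

		/* Write the new owner's TID into the futex word. */
		ret = cmpxchg_futex_value_locked(&curval, uaddr, uval,
						 task_pid_vnr(new_owner));
		if (!ret)
			ret = rt_mutex_futex_unlock(&pi_state->pi_mutex);

		/* assumed to return with wait_lock still held */
		raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);

		/*
		 * The unlock may have had to drop wait_lock, letting
		 * new waiters in; it then returns -EAGAIN and we must
		 * redo the whole operation.
		 */
		if (ret == -EAGAIN)
			goto retry;

		return ret;
	}
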
Urgh, I missed a bunch :/

So there's the !new_owner case in wake_futex_pi(), which can happen if
futex_lock_pi()'s rt_mutex_timed_futex_lock() failed but we still see
that task on the futex_q list (it hasn't yet done unqueue_me()).

I wondered if we could sort this case by making fixup_owner() more
interesting, which got me looking at that. And it turns out
fixup_owner() relies on futex_unlock_pi() holding hb->lock as well: it
does rt_mutex_owner() while holding wait_lock, but then drops wait_lock
to call fixup_pi_state_owner(), assuming the owner it read remains
valid.
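To spell out that window, the relevant bit of fixup_owner() goes
roughly like this (paraphrased from memory, abbreviated):

	raw_spin_lock_irq(&q->pi_state->pi_mutex.wait_lock);
	owner = rt_mutex_owner(&q->pi_state->pi_mutex);
	if (!owner)
		owner = rt_mutex_next_owner(&q->pi_state->pi_mutex);
	raw_spin_unlock_irq(&q->pi_state->pi_mutex.wait_lock);

	/*
	 * Only futex_unlock_pi() serializing on hb->lock keeps 'owner'
	 * from changing here; take hb->lock out of the unlock path and
	 * 'owner' can be stale by the time it is used below.
	 */
	ret = fixup_pi_state_owner(uaddr, q, owner);

ARGGH, what a mess. Lemme stare at this more..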