On Fri, 9 Jun 2017 05:45:54 -0700 "Paul E. McKenney" <paul...@linux.vnet.ibm.com> wrote:
> On Fri, Jun 09, 2017 at 09:19:57AM +0200, Peter Zijlstra wrote:
> > On Thu, Jun 08, 2017 at 08:25:46PM -0700, Krister Johansen wrote:
> > > The behavior of swake_up() differs from that of wake_up(), and from the
> > > swake_up() that came from RT linux. A memory barrier, or some other
> > > synchronization, is needed prior to a swake_up so that the waiter sees
> > > the condition set by the waker, and so that the waker does not see an
> > > empty wait list.
> >
> > Urgh.. let me stare at that. But it sounds like the wrong solution since
> > we wanted to keep the wait and swait APIs as close as possible.
>
> But don't they both need some sort of ordering, be it memory barriers or
> locking, to handle the case where the wait/swait doesn't actually sleep?

Looking at an RCU example, and assuming that ordering can move around
within a spin lock region, and that changes can leak into a spin lock
region from both before and after, could we have the following?
(Looking at __call_rcu_core() and rcu_gp_kthread().)

	CPU0				CPU1
	----				----
 __call_rcu_core() {

  spin_lock(rnp_root)
  need_wake = __rcu_start_gp() {
    rcu_start_gp_advanced() {
      gp_flags = FLAG_INIT
    }
  }

				rcu_gp_kthread() {
				  swait_event_interruptible(wq,
					gp_flags & FLAG_INIT) {
				    spin_lock(q->lock)

  *fetch wq->task_list here! *

				    list_add(wq->task_list, q->task_list)
				    spin_unlock(q->lock);

				    *fetch old value of gp_flags here *

  spin_unlock(rnp_root)

  rcu_gp_kthread_wake() {
    swake_up(wq) {
      swait_active(wq) {
	list_empty(wq->task_list)

      } * return false *

				    if (condition) * false *
				      schedule();

Looks like a memory barrier is missing. Perhaps we should slap one into
swait_active(), something like the untested sketch below? I don't think
it is wise to let users add their own, as I think we currently have bugs
now.

-- Steve
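
Roughly what that sketch could look like (untested; swait_active() and
struct swait_queue_head are as they currently stand in
include/linux/swait.h, the smp_mb() and its comment are the new part):

static inline int swait_active(struct swait_queue_head *q)
{
	/*
	 * Order the waker's condition stores (e.g. the gp_flags write
	 * in the example above) before the task_list read below, so
	 * that we cannot report "no waiters" from a stale task_list
	 * while the waiter goes on to read a stale condition value.
	 */
	smp_mb();
	return !list_empty(&q->task_list);
}

That would keep the ordering in one place, rather than requiring every
swake_up() caller to remember to supply its own barrier.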