The cmpxchg() will fail when the task is already in the process
of waking up, which is an extremely rare occurrence. Micro-optimize
the call by wrapping the check in unlikely().
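
For reference, a minimal userspace sketch of the pattern (the helper
and names below are made up for illustration, not taken from the
kernel). In the kernel, unlikely(x) boils down to
__builtin_expect(!!(x), 0), which tells the compiler to keep the
rarely-taken path out of the hot, straight-line code:

    #include <stdio.h>

    /* Simplified form of the kernel's annotation. */
    #define unlikely(x) __builtin_expect(!!(x), 0)

    /* Hypothetical stand-in for the cmpxchg(): returns the old value;
     * nonzero means the slot was already claimed. */
    static int try_claim(int *slot)
    {
            int old = *slot;
            *slot = 1;
            return old;
    }

    int main(void)
    {
            int slot = 0;

            if (unlikely(try_claim(&slot)))
                    return 0;       /* already queued, rare path */

            printf("claimed, continue with the enqueue\n");
            return 0;
    }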

Unsurprisingly, profiling with CONFIG_PROFILE_ANNOTATED_BRANCHES
under a number of workloads showed an incorrect rate of a mere 1-2%.
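
For context, the annotated-branch profiler keeps per-site counters for
every likely()/unlikely() in the kernel; the snippet below is only a
conceptual sketch of that bookkeeping, not the actual implementation:

    /* Conceptual sketch, not the kernel's real branch profiler: each
     * annotated site keeps two counters, and the reported "incorrect"
     * rate is incorrect / (correct + incorrect). */
    struct branch_stat {
            unsigned long correct;          /* prediction matched */
            unsigned long incorrect;        /* prediction missed  */
    };

    static inline int profile_unlikely(int cond, struct branch_stat *st)
    {
            if (cond)
                    st->incorrect++;        /* unlikely() expected false */
            else
                    st->correct++;
            return cond;
    }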

Signed-off-by: Davidlohr Bueso <dbu...@suse.de>
---
kernel/sched/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 091e089063be..f7747cf6e427 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -408,7 +408,7 @@ void wake_q_add(struct wake_q_head *head, struct task_struct *task)
         * This cmpxchg() executes a full barrier, which pairs with the full
         * barrier executed by the wakeup in wake_up_q().
         */
-       if (cmpxchg(&node->next, NULL, WAKE_Q_TAIL))
+       if (unlikely(cmpxchg(&node->next, NULL, WAKE_Q_TAIL)))
                return;

        get_task_struct(task);
--
2.16.4
