[ Upstream commit 3f9fbe9bd86c534eba2faf5d840fd44c6049f50e ]

Similar to how decrementing rb->nest too early can cause data_head to
(temporarily) be observed to go backward, so too can this happen when
we increment too late.

This barrier() ensures the rb->head load happens after the increment,
both the one in the 'goto again' path and the one from
perf_output_get_handle() -- albeit very unlikely to matter for the
latter.
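
To make the race easier to see, below is a minimal user-space sketch of
the pattern (the struct, the function names and the main() driver are
simplified stand-ins invented for illustration; the real code uses
local_t operations and pairs the data_head store with an smp_wmb()):

        /*
         * Minimal user-space sketch of the pattern being fixed;
         * illustrative only.  The real code lives in
         * kernel/events/ring_buffer.c.
         */
        #include <stdio.h>

        /* The kernel's barrier(): a pure compiler barrier. */
        #define barrier() __asm__ __volatile__("" ::: "memory")

        struct rb {
                unsigned long head;      /* producer position */
                unsigned long nest;      /* writer recursion count */
                unsigned long data_head; /* position published to userspace */
        };

        static void put_handle(struct rb *rb)
        {
                unsigned long head;

        again:
                /*
                 * Force the rb->head load to happen after the preceding
                 * rb->nest increment (from get_handle() or from the
                 * 'goto again' path below).  Without this, the compiler
                 * may hoist the load above the increment, and an IRQ
                 * landing in between could publish a newer head that we
                 * then overwrite with a stale one.
                 */
                barrier();
                head = rb->head;

                if (--rb->nest)         /* still nested; outer writer publishes */
                        goto out;

                rb->data_head = head;   /* publish to userspace */

                if (head != rb->head) { /* an IRQ moved head meanwhile */
                        rb->nest++;
                        goto again;
                }
        out:
                return;
        }

        int main(void)
        {
                struct rb rb = { .head = 64, .nest = 1, .data_head = 0 };

                put_handle(&rb);
                printf("data_head = %lu\n", rb.data_head);
                return 0;
        }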

Suggested-by: Yabin Cui <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Fixes: ef60777c9abd ("perf: Optimize the perf_output() path by removing IRQ-disables")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
 kernel/events/ring_buffer.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index fde853270c09..aef2af80a927 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -49,6 +49,15 @@ static void perf_output_put_handle(struct perf_output_handle *handle)
        unsigned long head;
 
 again:
+       /*
+        * In order to avoid publishing a head value that goes backwards,
+        * we must ensure the load of @rb->head happens after we've
+        * incremented @rb->nest.
+        *
+        * Otherwise we can observe a @rb->head value before one published
+        * by an IRQ/NMI happening between the load and the increment.
+        */
+       barrier();
        head = local_read(&rb->head);
 
        /*
-- 
2.20.1
