From: Joerg Roedel <jroe...@suse.de>

The ring buffer is accessed from the NMI handler, so we had
better avoid faulting on it. Sync the vmalloc range with all
page-tables in the system to make sure everyone has it mapped.

This fixes a WARN_ON_ONCE() that can be triggered with PTI
enabled on x86-32:

        WARNING: CPU: 4 PID: 0 at arch/x86/mm/fault.c:320 vmalloc_fault+0x220/0x230

This triggers because with PTI enabled on a PAE kernel the
PMDs are no longer shared between the page-tables, so the
vmalloc changes do not propagate automatically.
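
The pattern the patch applies can be sketched as follows. This is an
illustrative kernel-style sketch, not a compilable unit: the helper
name alloc_nmi_safe_buffer() is hypothetical, while vmalloc(),
vfree() and vmalloc_sync_all() are the real kernel APIs involved.

```c
/*
 * Illustrative sketch (hypothetical helper): any vmalloc'ed region
 * that an NMI handler may touch must be synced into every page-table
 * before the first NMI can dereference it, because the NMI path must
 * not take a vmalloc fault.
 */
void *alloc_nmi_safe_buffer(unsigned long size)
{
	/* Creates the mapping in init_mm's page-tables only. */
	void *buf = vmalloc(size);

	if (!buf)
		return NULL;

	/*
	 * Propagate the new PMD entries to all page-tables now. With
	 * PTI on an x86-32 PAE kernel the kernel PMDs are no longer
	 * shared, so the lazy vmalloc-fault path would otherwise run
	 * in NMI context and trip the WARN_ON_ONCE() in vmalloc_fault().
	 */
	vmalloc_sync_all();
	return buf;
}
```

The same reasoning applies on teardown: sync again after vfree() so no
page-table retains a stale mapping of the freed range.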

Signed-off-by: Joerg Roedel <jroe...@suse.de>
---
 kernel/events/ring_buffer.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index 5d3cf40..7b0e9aa 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -814,6 +814,9 @@ static void rb_free_work(struct work_struct *work)
 
        vfree(base);
        kfree(rb);
+
+       /* Make sure buffer is unmapped in all page-tables */
+       vmalloc_sync_all();
 }
 
 void rb_free(struct ring_buffer *rb)
@@ -840,6 +843,13 @@ struct ring_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags)
        if (!all_buf)
                goto fail_all_buf;
 
+       /*
+        * The buffer is accessed in NMI handlers, make sure it is
+        * mapped in all page-tables in the system so that we don't
+        * fault on the range in an NMI handler.
+        */
+       vmalloc_sync_all();
+
        rb->user_page = all_buf;
        rb->data_pages[0] = all_buf + PAGE_SIZE;
        if (nr_pages) {
-- 
2.7.4
