Commit-ID:  77754cfa09a6c528c38cbca9ee4cc4f7cf6ad6f2
Gitweb:     https://git.kernel.org/tip/77754cfa09a6c528c38cbca9ee4cc4f7cf6ad6f2
Author:     Joerg Roedel <jroe...@suse.de>
AuthorDate: Fri, 20 Jul 2018 18:22:22 +0200
Committer:  Thomas Gleixner <t...@linutronix.de>
CommitDate: Fri, 20 Jul 2018 22:33:41 +0200

perf/core: Make sure the ring-buffer is mapped in all page-tables

The ring-buffer is accessed in the NMI handler, so it's better to avoid
faulting on it. Sync the vmalloc range with all page-tables in the system
to make sure everyone has it mapped.

This fixes a WARN_ON_ONCE() that can be triggered with PTI enabled on
x86-32:

  WARNING: CPU: 4 PID: 0 at arch/x86/mm/fault.c:320 vmalloc_fault+0x220/0x230

This triggers because with PTI enabled on a PAE kernel the PMDs are no
longer shared between the page-tables, so vmalloc changes do not
propagate automatically.

Note: Andy rightly said that we should try to fix the vmalloc code for
that case, but that is not a hot fix for the issue at hand.
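
For illustration only, a minimal sketch of the pattern the patch below
applies (the helper name is hypothetical, not the actual perf code):
after vmalloc()ing memory that an NMI handler will touch, sync the
vmalloc area into every page-table so the NMI path never takes a
vmalloc fault.

  #include <linux/vmalloc.h>
  #include <linux/kernel.h>

  /*
   * Hypothetical helper, shown only to illustrate the idea: allocate a
   * buffer that will be touched from NMI context and make sure its
   * vmalloc mapping is present in all page-tables before first use.
   */
  static void *alloc_nmi_safe_buffer(unsigned long size)
  {
          void *buf = vmalloc(size);

          if (!buf)
                  return NULL;

          /*
           * With PTI on a PAE kernel the kernel PMDs are not shared,
           * so a new vmalloc mapping must be propagated explicitly.
           */
          if (IS_ENABLED(CONFIG_X86_PAE))
                  vmalloc_sync_all();

          return buf;
  }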

Fixes: 7757d607c6b3 ("x86/pti: Allow CONFIG_PAGE_TABLE_ISOLATION for x86_32")
Signed-off-by: Joerg Roedel <jroe...@suse.de>
Signed-off-by: Thomas Gleixner <t...@linutronix.de>
Cc: "H . Peter Anvin" <h...@zytor.com>
Cc: linux...@kvack.org
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Andy Lutomirski <l...@kernel.org>
Cc: Dave Hansen <dave.han...@intel.com>
Cc: Josh Poimboeuf <jpoim...@redhat.com>
Cc: Juergen Gross <jgr...@suse.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Borislav Petkov <b...@alien8.de>
Cc: Jiri Kosina <jkos...@suse.cz>
Cc: Boris Ostrovsky <boris.ostrov...@oracle.com>
Cc: Brian Gerst <brge...@gmail.com>
Cc: David Laight <david.lai...@aculab.com>
Cc: Denys Vlasenko <dvlas...@redhat.com>
Cc: Eduardo Valentin <edu...@amazon.com>
Cc: Greg KH <gre...@linuxfoundation.org>
Cc: Will Deacon <will.dea...@arm.com>
Cc: aligu...@amazon.com
Cc: daniel.gr...@iaik.tugraz.at
Cc: hu...@google.com
Cc: keesc...@google.com
Cc: Andrea Arcangeli <aarca...@redhat.com>
Cc: Waiman Long <ll...@redhat.com>
Cc: Pavel Machek <pa...@ucw.cz>
Cc: "David H . Gutteridge" <dhgutteri...@sympatico.ca>
Cc: Arnaldo Carvalho de Melo <a...@kernel.org>
Cc: Alexander Shishkin <alexander.shish...@linux.intel.com>
Cc: Jiri Olsa <jo...@redhat.com>
Cc: Namhyung Kim <namhy...@kernel.org>
Cc: j...@8bytes.org
Link: https://lkml.kernel.org/r/1532103744-31902-2-git-send-email-j...@8bytes.org

---
 kernel/events/ring_buffer.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index 5d3cf407e374..df2d8cf0072c 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -814,6 +814,13 @@ static void rb_free_work(struct work_struct *work)
 
        vfree(base);
        kfree(rb);
+
+       /*
+        * FIXME: PAE workaround for vmalloc_fault(): Make sure buffer is
+        * unmapped in all page-tables.
+        */
+       if (IS_ENABLED(CONFIG_X86_PAE))
+               vmalloc_sync_all();
 }
 
 void rb_free(struct ring_buffer *rb)
@@ -840,6 +847,15 @@ struct ring_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags)
        if (!all_buf)
                goto fail_all_buf;
 
+       /*
+        * FIXME: PAE workaround for vmalloc_fault(): The buffer is
+        * accessed in NMI handlers, make sure it is mapped in all
+        * page-tables in the system so that we don't fault on the range in
+        * an NMI handler.
+        */
+       if (IS_ENABLED(CONFIG_X86_PAE))
+               vmalloc_sync_all();
+
        rb->user_page = all_buf;
        rb->data_pages[0] = all_buf + PAGE_SIZE;
        if (nr_pages) {
