KVM support may be compiled as a dynamic module, which triggers the
following splat on modprobe (under CONFIG_DEBUG_PREEMPT):

 KVM: vmx: using Hyper-V Enlightened VMCS
 BUG: using smp_processor_id() in preemptible [00000000] code: modprobe/466
 caller is debug_smp_processor_id+0x17/0x19
 CPU: 0 PID: 466 Comm: modprobe Kdump: loaded Not tainted 4.19.43 #1
 Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS 090007  06/02/2017
 Call Trace:
  dump_stack+0x61/0x7e
  check_preemption_disabled+0xd4/0xe6
  debug_smp_processor_id+0x17/0x19
  set_hv_tscchange_cb+0x1b/0x89
  kvm_arch_init+0x14a/0x163 [kvm]
  kvm_init+0x30/0x259 [kvm]
  vmx_init+0xed/0x3db [kvm_intel]
  do_one_initcall+0x89/0x1bc
  do_init_module+0x5f/0x207
  load_module+0x1b34/0x209b
  __ia32_sys_init_module+0x17/0x19
  do_fast_syscall_32+0x121/0x1fa
  entry_SYSENTER_compat+0x7f/0x91

Hold cpus_read_lock() so that the MSR is written for an online CPU even
if set_hv_tscchange_cb() gets preempted.
While at it, clean up the smp_processor_id() usage in hv_cpu_init() and
add a lockdep assert to hv_cpu_die().
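
For illustration only (not part of the patch): a minimal standalone
sketch of the pattern the fix relies on. The function name and the
printed message below are made up; only cpus_read_lock(),
raw_smp_processor_id() and the related helpers are real kernel APIs.

 #include <linux/cpu.h>
 #include <linux/smp.h>
 #include <linux/printk.h>

 /* Illustrative sketch, not from this patch. */
 static void example_use_online_cpu(void)
 {
	int cpu;

	/*
	 * cpus_read_lock() blocks CPU hotplug, so whichever CPU
	 * raw_smp_processor_id() returns stays online until
	 * cpus_read_unlock(). Preemption remains enabled, hence no
	 * smp_processor_id() splat under CONFIG_DEBUG_PREEMPT.
	 */
	cpus_read_lock();
	cpu = raw_smp_processor_id();
	pr_info("using per-CPU data of online CPU %d\n", cpu);
	cpus_read_unlock();
 }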

Fixes: 93286261de1b4 ("x86/hyperv: Reenlightenment notifications support")

Cc: Andy Lutomirski <l...@kernel.org>
Cc: Borislav Petkov <b...@alien8.de>
Cc: Cathy Avery <cav...@redhat.com>
Cc: Haiyang Zhang <haiya...@microsoft.com>
Cc: "H. Peter Anvin" <h...@zytor.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: "K. Y. Srinivasan" <k...@microsoft.com>
Cc: "Michael Kelley (EOSG)" <michael.h.kel...@microsoft.com>
Cc: Mohammed Gamal <mmo...@redhat.com>
Cc: Paolo Bonzini <pbonz...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Radim Krčmář <rkrc...@redhat.com>
Cc: Roman Kagan <rka...@virtuozzo.com>
Cc: Sasha Levin <sas...@kernel.org>
Cc: Stephen Hemminger <sthem...@microsoft.com>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Vitaly Kuznetsov <vkuzn...@redhat.com>

Cc: de...@linuxdriverproject.org
Cc: k...@vger.kernel.org
Cc: linux-hyp...@vger.kernel.org
Cc: x...@kernel.org
Reported-by: Prasanna Panchamukhi <panchamu...@arista.com>
Signed-off-by: Dmitry Safonov <d...@arista.com>
---
v1 link: lkml.kernel.org/r/20190611212003.26382-1-d...@arista.com

NOTE that I haven't had a chance to test v2 on a Hyper-V machine so far,
it is ONLY BUILD TESTED. (In the hope that the patch still makes sense
and the Kbuild bot will report any issues.)

 arch/x86/hyperv/hv_init.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
index 1608050e9df9..ec7fd7d6c125 100644
--- a/arch/x86/hyperv/hv_init.c
+++ b/arch/x86/hyperv/hv_init.c
@@ -20,6 +20,7 @@
 #include <linux/clockchips.h>
 #include <linux/hyperv.h>
 #include <linux/slab.h>
+#include <linux/cpu.h>
 #include <linux/cpuhotplug.h>
 
 #ifdef CONFIG_HYPERV_TSCPAGE
@@ -91,7 +92,7 @@ EXPORT_SYMBOL_GPL(hv_max_vp_index);
 static int hv_cpu_init(unsigned int cpu)
 {
        u64 msr_vp_index;
-       struct hv_vp_assist_page **hvp = &hv_vp_assist_page[smp_processor_id()];
+       struct hv_vp_assist_page **hvp = &hv_vp_assist_page[cpu];
        void **input_arg;
        struct page *pg;
 
@@ -103,7 +104,7 @@ static int hv_cpu_init(unsigned int cpu)
 
        hv_get_vp_index(msr_vp_index);
 
-       hv_vp_index[smp_processor_id()] = msr_vp_index;
+       hv_vp_index[cpu] = msr_vp_index;
 
        if (msr_vp_index > hv_max_vp_index)
                hv_max_vp_index = msr_vp_index;
@@ -182,7 +183,6 @@ void set_hv_tscchange_cb(void (*cb)(void))
        struct hv_reenlightenment_control re_ctrl = {
                .vector = HYPERV_REENLIGHTENMENT_VECTOR,
                .enabled = 1,
-               .target_vp = hv_vp_index[smp_processor_id()]
        };
        struct hv_tsc_emulation_control emu_ctrl = {.enabled = 1};
 
@@ -196,7 +196,16 @@ void set_hv_tscchange_cb(void (*cb)(void))
        /* Make sure callback is registered before we write to MSRs */
        wmb();
 
+       /*
+        * The reenlightenment notification vector is global, so it does not
+        * matter which online CPU's VP index is written to the MSR. The
+        * hv_cpu_die() callback guarantees that on CPU teardown the MSR is
+        * re-pointed at another online CPU.
+        */
+       cpus_read_lock();
+       re_ctrl.target_vp = hv_vp_index[raw_smp_processor_id()];
        wrmsrl(HV_X64_MSR_REENLIGHTENMENT_CONTROL, *((u64 *)&re_ctrl));
+       cpus_read_unlock();
        wrmsrl(HV_X64_MSR_TSC_EMULATION_CONTROL, *((u64 *)&emu_ctrl));
 }
 EXPORT_SYMBOL_GPL(set_hv_tscchange_cb);
@@ -239,6 +248,7 @@ static int hv_cpu_die(unsigned int cpu)
 
        rdmsrl(HV_X64_MSR_REENLIGHTENMENT_CONTROL, *((u64 *)&re_ctrl));
        if (re_ctrl.target_vp == hv_vp_index[cpu]) {
+               lockdep_assert_cpus_held();
                /* Reassign to some other online CPU */
                new_cpu = cpumask_any_but(cpu_online_mask, cpu);
 
-- 
2.22.0
