On 18/01/2017 03:19, Xishi Qiu wrote:
> On 2017/1/17 23:18, Paolo Bonzini wrote:
>>
>>
>> On 14/01/2017 02:42, Xishi Qiu wrote:
>>> From: Tiantian Feng <fengtiant...@huawei.com>
>>>
>>> We need to disable VMX on all CPUs before stopping them when the OS
>>> panics; otherwise we risk hanging up the machine, because a CPU ignores
>>> INIT signals while VMX is enabled. This issue exists in the mainline
>>> kernel.
>>>
>>> Signed-off-by: Tiantian Feng <fengtiant...@huawei.com>
>>
>> Xishi,
>>
>> it's still missing your Signed-off-by.
>>
>
> Hi Paolo,
>
> This patch is from fengtiantian, and I just sent it for him,
> so should I still add my SOB?
Yes, both of them should be there. The "Signed-off-by" is a sequence of
all people that managed the patch---so that would be Tiantian first,
then you, then an x86 maintainer.

Paolo

> Thanks,
> Xishi Qiu
>
>> Paolo
>>
>>> ---
>>>  arch/x86/kernel/smp.c | 3 +++
>>>  1 file changed, 3 insertions(+)
>>>
>>> diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
>>> index 68f8cc2..b574d55 100644
>>> --- a/arch/x86/kernel/smp.c
>>> +++ b/arch/x86/kernel/smp.c
>>> @@ -33,6 +33,7 @@
>>>  #include <asm/mce.h>
>>>  #include <asm/trace/irq_vectors.h>
>>>  #include <asm/kexec.h>
>>> +#include <asm/virtext.h>
>>>
>>>  /*
>>>   * Some notes on x86 processor bugs affecting SMP operation:
>>> @@ -162,6 +163,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
>>>  	if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
>>>  		return NMI_HANDLED;
>>>
>>> +	cpu_emergency_vmxoff();
>>>  	stop_this_cpu(NULL);
>>>
>>>  	return NMI_HANDLED;
>>> @@ -174,6 +176,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
>>>  asmlinkage __visible void smp_reboot_interrupt(void)
>>>  {
>>>  	ipi_entering_ack_irq();
>>> +	cpu_emergency_vmxoff();
>>>  	stop_this_cpu(NULL);
>>>  	irq_exit();
>>>  }
>>>
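[Editor's note: for context on why `cpu_emergency_vmxoff()` is safe to call unconditionally on the stop/panic path, the helpers in `<asm/virtext.h>` of that era looked roughly like the sketch below. This is an approximate paraphrase from memory, not the verbatim kernel source; it is kernel-internal code and not runnable standalone. The helper only issues VMXOFF when CPUID reports VMX support and CR4.VMXE shows VMX is actually on, so calling it on a CPU that never ran VMXON is a no-op.]

```c
/* Approximate sketch of <asm/virtext.h> helpers (not verbatim). */

/* Has VMXON been executed on this CPU? CR4.VMXE is set while VMX is on. */
static inline int cpu_vmx_enabled(void)
{
	return __read_cr4() & X86_CR4_VMXE;
}

/* Execute VMXOFF only if VMX operation is currently enabled. */
static inline void __cpu_emergency_vmxoff(void)
{
	if (cpu_vmx_enabled())
		cpu_vmxoff();	/* VMXOFF instruction + clear CR4.VMXE */
}

/* Safe to call anywhere: no-op unless the CPU supports VMX at all. */
static inline void cpu_emergency_vmxoff(void)
{
	if (cpu_has_vmx())	/* CPUID.1:ECX.VMX[bit 5] */
		__cpu_emergency_vmxoff();
}
```

This double check is what makes the two call sites in the patch safe even on hardware without VT-x, or on CPUs where no hypervisor ever enabled VMX.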