On 01/10/2019 20:15, Igor Druzhinin wrote:
> There is a small window in which the shootdown NMI might arrive at a
> CPU (e.g. in the serial interrupt handler) while the console lock is
> held. To avoid subsequent console prints waiting indefinitely for the
> shot-down CPUs to release the lock, force-unlock the console.
>
> The race has been frequently observed while crashing nested Xen in
> an HVM domain.
>
> Signed-off-by: Igor Druzhinin <igor.druzhi...@citrix.com>
> ---
>  xen/arch/x86/crash.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/xen/arch/x86/crash.c b/xen/arch/x86/crash.c
> index 6e1d3d3..a20ec8a 100644
> --- a/xen/arch/x86/crash.c
> +++ b/xen/arch/x86/crash.c
> @@ -29,6 +29,7 @@
>  #include <asm/io_apic.h>
>  #include <xen/iommu.h>
>  #include <asm/hpet.h>
> +#include <xen/console.h>
>  
>  static cpumask_t waiting_to_crash;
>  static unsigned int crashing_cpu;
> @@ -155,6 +156,7 @@ static void nmi_shootdown_cpus(void)
>      }
>  
>      /* Leave a hint of how well we did trying to shoot down the other cpus */
> +    console_force_unlock();
>      if ( cpumask_empty(&waiting_to_crash) )
>          printk("Shot down all CPUs\n");
>      else

The overall change is fine, R-by me, but I'd prefer a comment along the
lines of:

diff --git a/xen/arch/x86/crash.c b/xen/arch/x86/crash.c
index 6e1d3d3a84..41687465ac 100644
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -154,6 +154,12 @@ static void nmi_shootdown_cpus(void)
         msecs--;
     }
 
+    /*
+     * We may have NMI'd another CPU while it was holding the console lock.
+     * It won't be in a position to release the lock...
+     */
+    console_force_unlock();
+
     /* Leave a hint of how well we did trying to shoot down the other cpus */
     if ( cpumask_empty(&waiting_to_crash) )
         printk("Shot down all CPUs\n");


If you're happy, I can fold this in on commit.

~Andrew
