Suppose that stop_machine(fn) hangs because fn() hangs. In this case the NMI
hard-lockup detector can trigger on another CPU which is doing nothing wrong,
and the trace from nmi_panic() won't help to investigate the actual problem.
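
For context, every CPU taking part in stop_machine() spins in the state
machine loop of multi_cpu_stop(). Below is a simplified paraphrase of that
loop (setup and error handling elided; the real code is in
kernel/stop_machine.c, see the hunk further down). Once the state reaches
MULTI_STOP_DISABLE_IRQ the CPUs spin with interrupts disabled, so if fn()
never lets the state advance to MULTI_STOP_EXIT, the hard-lockup detector
eventually fires on CPUs that are merely waiting:

	/* Simplified sketch of the multi_cpu_stop() state machine. */
	do {
		cpu_relax();	/* wait for the state to advance */
		if (msdata->state != curstate) {
			curstate = msdata->state;
			switch (curstate) {
			case MULTI_STOP_DISABLE_IRQ:
				local_irq_disable();	/* IRQs stay off from here on */
				hard_irq_disable();
				break;
			case MULTI_STOP_RUN:
				if (is_active)		/* only the active CPUs run fn() */
					err = msdata->fn(msdata->data);
				break;
			default:
				break;
			}
			ack_state(msdata);
		}
	} while (curstate != MULTI_STOP_EXIT);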

And this change "fixes" the problem we (seem to) hit in practice:

- stop_two_cpus(0, 1) races with show_state_filter() running on CPU_0.

- CPU_1 already spins in the MULTI_STOP_PREPARE state; it detects the soft
  lockup and tries to report the problem.

- show_state_filter() enables preemption; CPU_0 calls multi_cpu_stop(),
  which advances to the MULTI_STOP_DISABLE_IRQ state and disables interrupts.

- CPU_1 spends more than 10 seconds trying to flush the log buffer to
  the slow serial console.

- The NMI watchdog fires on CPU_0 (which now waits for CPU_1 with interrupts
  disabled) and calls nmi_panic(); see the sketch of its check below.
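
Why touching the NMI watchdog helps here: the hard-lockup detector's per-CPU
NMI callback skips its check when the CPU has recently called
touch_nmi_watchdog(). A rough paraphrase of that check follows; the
watchdog_nmi_touch flag, is_hardlockup() and hardlockup_panic come from
kernel/watchdog.c, while the wrapper function below is simplified for
illustration and is not the exact callback (watchdog_overflow_callback):

	/* Rough paraphrase of the hard-lockup check in kernel/watchdog.c. */
	static void hardlockup_check(struct pt_regs *regs)
	{
		/* touch_nmi_watchdog() sets this per-CPU flag. */
		if (__this_cpu_read(watchdog_nmi_touch)) {
			__this_cpu_write(watchdog_nmi_touch, false);
			return;		/* the CPU said it is alive, skip the check */
		}

		/*
		 * is_hardlockup() returns true when the hrtimer interrupt
		 * count has not advanced since the previous NMI, i.e. the
		 * CPU has had interrupts disabled for the whole period.
		 */
		if (is_hardlockup()) {
			if (hardlockup_panic)
				nmi_panic(regs, "Watchdog detected hard LOCKUP");
			else
				WARN(1, "Watchdog detected hard LOCKUP on cpu %d",
				     smp_processor_id());
		}
	}

With the hunk below, a CPU that is only waiting in multi_cpu_stop() keeps
setting that flag, so the lockup is detected and reported on the CPU that is
actually stuck rather than via nmi_panic() on an innocent waiter.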

Reported-by: Wang Shu <shuw...@redhat.com>
Signed-off-by: Oleg Nesterov <o...@redhat.com>
---
 kernel/stop_machine.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index a467e6c..4a1ca5f 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -21,6 +21,7 @@
 #include <linux/smpboot.h>
 #include <linux/atomic.h>
 #include <linux/lglock.h>
+#include <linux/nmi.h>
 
 /*
  * Structure to determine completion condition and record errors.  May
@@ -209,6 +210,13 @@ static int multi_cpu_stop(void *data)
                                break;
                        }
                        ack_state(msdata);
+               } else if (curstate > MULTI_STOP_PREPARE) {
+                       /*
+                        * At this stage all other CPUs we depend on must spin
+                        * in the same loop. Any reason for hard-lockup should
+                        * be detected and reported on their side.
+                        */
+                       touch_nmi_watchdog();
                }
        } while (curstate != MULTI_STOP_EXIT);
 
-- 
2.5.0

