On Thu, Jul 18, 2019 at 11:58:47AM +0200, Thomas Gleixner wrote:
> Subject: smp: Warn on function calls from softirq context
> From: Thomas Gleixner <t...@linutronix.de>
> Date: Thu, 18 Jul 2019 11:20:09 +0200
> 
> It's clearly documented that smp function calls cannot be invoked from
> softirq handling context. Unfortunately nothing enforces that or emits a
> warning.
> 
> A single function call can be invoked from softirq context only via
> smp_call_function_single_async().
> 
> Reported-by: luferry <lufe...@163.com>
> Signed-off-by: Thomas Gleixner <t...@linutronix.de>
> ---
>  kernel/smp.c |   16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
> 
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -291,6 +291,15 @@ int smp_call_function_single(int cpu, sm
>       WARN_ON_ONCE(cpu_online(this_cpu) && irqs_disabled()
>                    && !oops_in_progress);
>  
> +     /*
> +      * Can deadlock when the softirq is executed on return from
> +      * interrupt and the interrupt hits between llist_add() and
> +      * arch_send_call_function_single_ipi() because then this
> +      * invocation sees the list non-empty, skips the IPI send
> +      * and waits forever.
> +      */
> +     WARN_ON_ONCE(in_serving_softirq() && wait);
> +
>       csd = &csd_stack;
>       if (!wait) {
>               csd = this_cpu_ptr(&csd_data);
> @@ -416,6 +425,13 @@ void smp_call_function_many(const struct
>       WARN_ON_ONCE(cpu_online(this_cpu) && irqs_disabled()
>                    && !oops_in_progress && !early_boot_irqs_disabled);
>  
> +     /*
> +      * Bottom half handlers are not allowed to call this as they might
> +      * corrupt cfd_data when the interrupt which triggered the softirq
> +      * processing hit inside this function.
> +      */
> +     WARN_ON_ONCE(in_serving_softirq());
> +
>       /* Try to fastpath.  So, what's a CPU they want? Ignoring this one. */
>       cpu = cpumask_first_and(mask, cpu_online_mask);
>       if (cpu == this_cpu)

As we discussed on IRC, it is worse than that: these functions can only be
used from task/process context. We need something like the below.

I've built a kernel with this applied and nothing went *splat*.

diff --git a/kernel/smp.c b/kernel/smp.c
index 616d4d114847..7dbcb402c2fc 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -291,6 +291,14 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
        WARN_ON_ONCE(cpu_online(this_cpu) && irqs_disabled()
                     && !oops_in_progress);
 
+       /*
+        * When @wait we can deadlock when we interrupt between llist_add() and
+        * arch_send_call_function_ipi*(); when !@wait we can deadlock due to
+        * csd_lock() because the interrupt context uses the same csd
+        * storage.
+        */
+       WARN_ON_ONCE(!in_task());
+
        csd = &csd_stack;
        if (!wait) {
                csd = this_cpu_ptr(&csd_data);
@@ -416,6 +424,14 @@ void smp_call_function_many(const struct cpumask *mask,
        WARN_ON_ONCE(cpu_online(this_cpu) && irqs_disabled()
                     && !oops_in_progress && !early_boot_irqs_disabled);
 
+       /*
+        * When @wait we can deadlock when we interrupt between llist_add() and
+        * arch_send_call_function_ipi*(); when !@wait we can deadlock due to
+        * csd_lock() because the interrupt context uses the same csd
+        * storage.
+        */
+       WARN_ON_ONCE(!in_task());
+
        /* Try to fastpath.  So, what's a CPU they want? Ignoring this one. */
        cpu = cpumask_first_and(mask, cpu_online_mask);
        if (cpu == this_cpu)
