From: Alan Brady <alan.br...@intel.com>
Date: Wed,  7 Feb 2024 16:42:43 -0800

> From: Emil Tantilov <emil.s.tanti...@intel.com>
> 
> Fix softirqs not being handled during the napi_schedule() call when
> receiving marker packets for queue disable, by disabling the local
> bottom half.

BTW, how exactly does this help?

__napi_schedule() already disables interrupts (local_irq_save()).
napi_schedule_prep() only uses READ_ONCE() and other atomic read/write
helpers.

It's always been safe to call napi_schedule() with BHs enabled, so I
don't really understand how this works.
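
For reference, the path I'm looking at, heavily simplified from
net/core/dev.c (paraphrased, not verbatim; details vary between kernel
versions):

/* Simplified sketch, paraphrased from net/core/dev.c -- not verbatim. */
void __napi_schedule(struct napi_struct *n)
{
	unsigned long flags;

	local_irq_save(flags);		/* IRQs off, as noted above */
	____napi_schedule(this_cpu_ptr(&softnet_data), n);
	local_irq_restore(flags);
}

/* ____napi_schedule() adds the NAPI instance to the per-CPU poll list
 * and sets the NET_RX_SOFTIRQ pending bit via __raise_softirq_irqoff();
 * it does not itself run softirqs or wake ksoftirqd.
 */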

> 
> The issue can be seen on ifdown:
> NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
> 
> Using ftrace to catch the failing scenario:
> ifconfig   [003] d.... 22739.830624: softirq_raise: vec=3 [action=NET_RX]
> <idle>-0   [003] ..s.. 22739.831357: softirq_entry: vec=3 [action=NET_RX]
> 
> No interrupt occurs and the CPU is idle.
> 
> After the patch, with BHs disabled:
> ifconfig   [003] d.... 22993.928336: softirq_raise: vec=3 [action=NET_RX]
> ifconfig   [003] ..s1. 22993.928337: softirq_entry: vec=3 [action=NET_RX]
> 
> Fixes: c2d548cad150 ("idpf: add TX splitq napi poll support")
> Reviewed-by: Jesse Brandeburg <jesse.brandeb...@intel.com>
> Reviewed-by: Przemek Kitszel <przemyslaw.kits...@intel.com>
> Signed-off-by: Emil Tantilov <emil.s.tanti...@intel.com>
> Signed-off-by: Alan Brady <alan.br...@intel.com>
> ---
>  drivers/net/ethernet/intel/idpf/idpf_virtchnl.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
> index d0cdd63b3d5b..390977a76de2 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
> +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
> @@ -2087,8 +2087,10 @@ int idpf_send_disable_queues_msg(struct idpf_vport *vport)
>               set_bit(__IDPF_Q_POLL_MODE, vport->txqs[i]->flags);
>  
>       /* schedule the napi to receive all the marker packets */
> +     local_bh_disable();
>       for (i = 0; i < vport->num_q_vectors; i++)
>               napi_schedule(&vport->q_vectors[i].napi);
> +     local_bh_enable();
>  
>       return idpf_wait_for_marker_event(vport);
>  }
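
If the intent is that local_bh_enable() then runs the pending softirq
right there in process context, a simplified sketch of that path
(paraphrased from kernel/softirq.c, not verbatim) would be:

/* Simplified sketch of __local_bh_enable_ip() from kernel/softirq.c,
 * paraphrased -- not verbatim.
 */
void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
{
	/* ... drop SOFTIRQ_DISABLE_OFFSET from the preempt count ... */
	if (!in_interrupt() && local_softirq_pending())
		do_softirq();	/* pending NET_RX would run here */
	/* ... */
}

If that's the mechanism being relied on, it would be good to spell it
out in the commit message.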

Thanks,
Olek
