On Tue, 2017-04-18 at 15:03 +0200, Florian Westphal wrote:
> mirred is prone to deadlocks as it invokes dev_queue_xmit while
> holding one or more qdisc locks.
> 
> Avoid lock recursions by moving tx context to a tasklet.
> 
> Signed-off-by: Florian Westphal <f...@strlen.de>
> ---
>  This a stab at removing the lock recursions discussed during netconf.
> 
>  Taking the cost of the tasklet appears to be the only solution;
>  I tried to use a percpu 'history' instead but it's not clear to
>  me that this avoids all corner cases.
> 
>  While this patch doesn't avoid loops, we don't hang the kernel
>  anymore, and removing the 'looping' filter makes things calm
>  down again (there are also other ways to create such loops anyway,
>  including use of a cable...)
> 
> diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
> index 1b5549ababd4..3dd61eba4741 100644
> --- a/net/sched/act_mirred.c
> +++ b/net/sched/act_mirred.c
> @@ -17,6 +17,7 @@
>  #include <linux/string.h>
>  #include <linux/errno.h>
>  #include <linux/skbuff.h>
> +#include <linux/skb_array.h>
>  #include <linux/rtnetlink.h>
>  #include <linux/module.h>
>  #include <linux/init.h>
> @@ -25,10 +26,19 @@
>  #include <net/net_namespace.h>
>  #include <net/netlink.h>
>  #include <net/pkt_sched.h>
> +#include <net/dst.h>
>  #include <linux/tc_act/tc_mirred.h>
>  #include <net/tc_act/tc_mirred.h>
>  
> +#define MIRRED_TXLEN   512

Using an skb array looks like overkill to me, especially for a per-cpu
queue. A standard skb list (sk_buff_head) should be good enough?
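
Something along these lines, perhaps (completely untested sketch on top
of the patch; the struct and field names follow the patch,
mirred_tasklet_func() and mirred_queue_xmit() are made-up names for the
tasklet handler and the enqueue path, and I'm assuming the enqueue path
takes a dev_hold() on skb->dev since the xmit is deferred):

struct mirred_tx_data {
	struct sk_buff_head	skb_list;	/* has its own spinlock, no skb_array needed */
	struct tasklet_struct	mirred_tasklet;
};

static DEFINE_PER_CPU(struct mirred_tx_data, mirred_tx_data);

/* tasklet: drain the per-cpu list and transmit */
static void mirred_tasklet_func(unsigned long arg)
{
	struct mirred_tx_data *data = (struct mirred_tx_data *)arg;
	struct sk_buff *skb;

	while ((skb = skb_dequeue(&data->skb_list)) != NULL) {
		struct net_device *dev = skb->dev;

		dev_queue_xmit(skb);
		dev_put(dev);		/* reference taken at enqueue time */
	}
}

/* hypothetical enqueue path called from the mirred action */
static int mirred_queue_xmit(struct sk_buff *skb)
{
	struct mirred_tx_data *data = this_cpu_ptr(&mirred_tx_data);

	if (skb_queue_len(&data->skb_list) >= MIRRED_TXLEN) {
		kfree_skb(skb);
		return -ENOBUFS;
	}

	dev_hold(skb->dev);
	skb_queue_tail(&data->skb_list, skb);
	tasklet_schedule(&data->mirred_tasklet);
	return 0;
}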


> +static void mirred_cleanup_pcpu(void)
> +{
> +     int cpu;
> +
> +     for_each_possible_cpu(cpu) {
> +             struct mirred_tx_data *data;
> +
> +             data = per_cpu_ptr(&mirred_tx_data, cpu);
> +
> +             skb_array_cleanup(&data->skb_array);

This won't do the dev_put() on skb->dev.
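
So the queued skbs have to be purged explicitly before the array goes
away, something like this (sketch only; mirred_purge_queue() is a
made-up helper name, and I'm assuming a dev_hold() was taken at
enqueue time):

static void mirred_purge_queue(struct mirred_tx_data *data)
{
	struct sk_buff *skb;

	/* drop whatever is still queued and release the dev references */
	while ((skb = skb_array_consume(&data->skb_array)) != NULL) {
		dev_put(skb->dev);
		kfree_skb(skb);
	}
}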

> +             tasklet_kill(&data->mirred_tasklet);

You might need to kill the tasklet _before_ doing the cleanup?
Otherwise it could still be running and touching the queue while it is
being torn down.
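
i.e. (reusing the purge helper sketched above):

		tasklet_kill(&data->mirred_tasklet);	/* tasklet can no longer touch the queue */
		mirred_purge_queue(data);		/* dev_put() + kfree_skb() the leftovers */
		skb_array_cleanup(&data->skb_array);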

> +     }
> +}
> +



