On Sun, 24 Mar 2019 07:58:34 +0100 Felix Fietkau <n...@nbd.name> wrote:
> Since we're freeing multiple skbs, we might as well use bulk free to save a
> few cycles. Use the same conditions for bulk free as in napi_consume_skb.
>
> Signed-off-by: Felix Fietkau <n...@nbd.name>

Thanks for working on this, it's been on my todo list for a very long
time. I just discussed this with Florian at NetDevconf.

> ---
>  net/core/skbuff.c | 35 +++++++++++++++++++++++++++++++----
>  1 file changed, 31 insertions(+), 4 deletions(-)
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 2415d9cb9b89..ec030ab7f1e7 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -666,12 +666,39 @@ EXPORT_SYMBOL(kfree_skb);
>
>  void kfree_skb_list(struct sk_buff *segs)
>  {
> -	while (segs) {
> -		struct sk_buff *next = segs->next;
> +	struct sk_buff *next = segs;
> +	void *skbs[16];
> +	int n_skbs = 0;
>
> -		kfree_skb(segs);
> -		segs = next;
> +	while ((segs = next) != NULL) {
> +		next = segs->next;
> +
> +		if (!skb_unref(segs))
> +			continue;
> +
> +		if (segs->fclone != SKB_FCLONE_UNAVAILABLE ||
> +		    n_skbs >= ARRAY_SIZE(skbs)) {

You could call kmem_cache_free_bulk() here and reset n_skbs=0.

> +			kfree_skb(segs);
> +			continue;
> +		}
> +
> +		trace_kfree_skb(segs, __builtin_return_address(0));
> +
> +		/* drop skb->head and call any destructors for packet */
> +		skb_release_all(segs);
> +
> +#ifdef CONFIG_SLUB
> +		/* SLUB writes into objects when freeing */
> +		prefetchw(segs);
> +#endif
> +
> +		skbs[n_skbs++] = segs;
> 	}
> +
> +	if (!n_skbs)
> +		return;
> +
> +	kmem_cache_free_bulk(skbuff_head_cache, n_skbs, skbs);
> }
> EXPORT_SYMBOL(kfree_skb_list);

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer