On Mon, 22 Mar 2021 18:03:01 +0100 Matteo Croce <mcr...@linux.microsoft.com> wrote:
> From: Matteo Croce <mcr...@microsoft.com>
>
> Use the new recycling API for page_pool.
> In a drop rate test, the packet rate increased by 10%,
> from 269 Kpps to 296 Kpps.
>
> perf top on a stock system shows:
>
>   Overhead  Shared Object  Symbol
>     21.78%  [kernel]       [k] __pi___inval_dcache_area
>     21.66%  [mvneta]       [k] mvneta_rx_swbm
>      7.00%  [kernel]       [k] kmem_cache_alloc
>      6.05%  [kernel]       [k] eth_type_trans
>      4.44%  [kernel]       [k] kmem_cache_free.part.0
>      3.80%  [kernel]       [k] __netif_receive_skb_core
>      3.68%  [kernel]       [k] dev_gro_receive
>      3.65%  [kernel]       [k] get_page_from_freelist
>      3.43%  [kernel]       [k] page_pool_release_page
>      3.35%  [kernel]       [k] free_unref_page
>
> And this is the same output with recycling enabled:
>
>   Overhead  Shared Object  Symbol
>     24.10%  [kernel]       [k] __pi___inval_dcache_area
>     23.02%  [mvneta]       [k] mvneta_rx_swbm
>      7.19%  [kernel]       [k] kmem_cache_alloc
>      6.50%  [kernel]       [k] eth_type_trans
>      4.93%  [kernel]       [k] __netif_receive_skb_core
>      4.77%  [kernel]       [k] kmem_cache_free.part.0
>      3.93%  [kernel]       [k] dev_gro_receive
>      3.03%  [kernel]       [k] build_skb
>      2.91%  [kernel]       [k] page_pool_put_page
>      2.85%  [kernel]       [k] __xdp_return
>
> The test was done with mausezahn on the TX side with 64 byte raw
> ethernet frames.
>
> Signed-off-by: Matteo Croce <mcr...@microsoft.com>
> ---
>  drivers/net/ethernet/marvell/mvneta.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index a635cf84608a..8b3250394703 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -2332,7 +2332,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
>  	if (!skb)
>  		return ERR_PTR(-ENOMEM);
>  
> -	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
> +	skb_mark_for_recycle(skb, virt_to_page(xdp->data), &xdp->rxq->mem);
>  
>  	skb_reserve(skb, xdp->data - xdp->data_hard_start);
>  	skb_put(skb, xdp->data_end - xdp->data);
> @@ -2344,7 +2344,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
>  		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
>  				skb_frag_page(frag), skb_frag_off(frag),
>  				skb_frag_size(frag), PAGE_SIZE);
> -		page_pool_release_page(rxq->page_pool, skb_frag_page(frag));
> +		skb_mark_for_recycle(skb, skb_frag_page(frag), &xdp->rxq->mem);
>  	}
>  
>  	return skb;

This causes skb_mark_for_recycle() to set 'skb->pp_recycle=1' multiple
times for the same SKB.  (Copy-pasted function below signature to help
reviewers.)

This makes me question whether we need an API for setting this per page
fragment?  Or whether the API skb_mark_for_recycle() needs to walk the
page fragments in the SKB and set the info stored in the page for each?

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
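
Roughly, reconstructed from the call sites in the diff above and the
behaviour described (the exact body in the series may differ):

static inline void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
					struct xdp_mem_info *mem)
{
	/* Mark the SKB so the page_pool recycling path is taken on free */
	skb->pp_recycle = 1;
	/* Stash the xdp_mem_info in the page, so the page can be routed
	 * back to the right page_pool when the SKB is freed.
	 */
	page_pool_store_mem_info(page, mem);
}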
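
For the "walk the fragments" variant I have in mind something along these
lines; purely a hypothetical sketch, the name and body are mine and not
from the series:

/* Hypothetical: mark the SKB once and store the mem info for the head
 * page plus every page fragment in a single call.
 */
static inline void skb_mark_for_recycle_all(struct sk_buff *skb,
					    struct xdp_mem_info *mem)
{
	int i;

	skb->pp_recycle = 1;
	page_pool_store_mem_info(virt_to_page(skb->data), mem);

	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

		page_pool_store_mem_info(skb_frag_page(frag), mem);
	}
}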