> Hi Lorenzo, 
> 
> On Thu, Oct 10, 2019 at 01:18:33AM +0200, Lorenzo Bianconi wrote:
> > Refactor the mvneta_rx_swbm code, introducing the mvneta_swbm_rx_frame
> > and mvneta_swbm_add_rx_fragment routines. Rely on build_skb in order
> > to allocate the skb, since the previous patch introduced buffer
> > recycling using the page_pool API.
> > This patch also fixes an issue in the original driver where DMA
> > buffers are accessed before the DMA sync.
> > 
> > Signed-off-by: Ilias Apalodimas <ilias.apalodi...@linaro.org>
> > Signed-off-by: Jesper Dangaard Brouer <bro...@redhat.com>
> > Signed-off-by: Lorenzo Bianconi <lore...@kernel.org>
> > ---
> >  drivers/net/ethernet/marvell/mvneta.c | 198 ++++++++++++++------------
> >  1 file changed, 104 insertions(+), 94 deletions(-)
> > 
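For context, the per-descriptor rx flow after this patch boils down to
the following (simplified sketch, error handling and the multi-descriptor
case omitted; see the full patch for the real code):

	/* first descriptor of a frame: wrap the page_pool-backed page in
	 * an skb without copying; the buffer was DMA-mapped with
	 * NET_SKB_PAD of headroom in front of the frame
	 */
	void *data = page_address(page);

	rxq->skb = build_skb(data, PAGE_SIZE);
	if (unlikely(!rxq->skb))
		return -ENOMEM;

	skb_reserve(rxq->skb, MVNETA_MH_SIZE + NET_SKB_PAD);
	skb_put(rxq->skb, data_len);
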
> > diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> > index 31cecc1ed848..79a6bac0192b 100644
> > --- a/drivers/net/ethernet/marvell/mvneta.c
> > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > @@ -323,6 +323,11 @@
> >           ETH_HLEN + ETH_FCS_LEN,                        \
> >           cache_line_size())
> >  
> > +#define MVNETA_SKB_PAD     (SKB_DATA_ALIGN(sizeof(struct skb_shared_info) + \
> > +                    NET_SKB_PAD))
> > +#define MVNETA_SKB_SIZE(len)       (SKB_DATA_ALIGN(len) + MVNETA_SKB_PAD)
> > +#define MVNETA_MAX_RX_BUF_SIZE     (PAGE_SIZE - MVNETA_SKB_PAD)
> > +
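
For reference, on a 64-bit build with 4K pages and 64-byte cache lines
(so NET_SKB_PAD == 64, and assuming sizeof(struct skb_shared_info) is
320 bytes; worth re-checking on your config) these work out to:

	MVNETA_SKB_PAD         = SKB_DATA_ALIGN(320 + 64) = 384
	MVNETA_MAX_RX_BUF_SIZE = 4096 - 384               = 3712

i.e. each page carries 64 bytes of headroom, up to 3712 bytes of frame
data, and the tail is reserved for the skb_shared_info that build_skb()
expects at the end of the buffer.
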
> >  #define IS_TSO_HEADER(txq, addr) \
> >     ((addr >= txq->tso_hdrs_phys) && \
> >      (addr < txq->tso_hdrs_phys + txq->size * TSO_HEADER_SIZE))
> > @@ -646,7 +651,6 @@ static int txq_number = 8;
> >  static int rxq_def;
> >  
> >  static int rx_copybreak __read_mostly = 256;
> > -static int rx_header_size __read_mostly = 128;
> >  
> >  /* HW BM needs each port to be identified by a unique ID */
> >  static int global_port_id;
> 
> [...]
> 
> > +   if (rxq->left_size > MVNETA_MAX_RX_BUF_SIZE) {
> > +           len = MVNETA_MAX_RX_BUF_SIZE;
> > +           data_len = len;
> > +   } else {
> > +           len = rxq->left_size;
> > +           data_len = len - ETH_FCS_LEN;
> > +   }
> > +   dma_dir = page_pool_get_dma_dir(rxq->page_pool);
> > +   dma_sync_single_range_for_cpu(dev->dev.parent,
> > +                                 rx_desc->buf_phys_addr, 0,
> > +                                 len, dma_dir);
> > +   if (data_len > 0) {
> > +           /* refill descriptor with new buffer later */
> > +           skb_add_rx_frag(rxq->skb,
> > +                           skb_shinfo(rxq->skb)->nr_frags,
> > +                           page, NET_SKB_PAD, data_len,
> > +                           PAGE_SIZE);
> > +
> > +           page_pool_release_page(rxq->page_pool, page);
> > +           rx_desc->buf_phys_addr = 0;
> 
> Shouldn't we unmap and set the buf_phys_addr to 0 regardless of the data_len?

ack, right. I will fix it in v3.
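
Something along these lines, with the release moved out of the data_len
branch (untested sketch, I still need to run it on the board):

	dma_sync_single_range_for_cpu(dev->dev.parent,
				      rx_desc->buf_phys_addr, 0,
				      len, dma_dir);

	/* the descriptor is consumed even for a zero-length fragment
	 * (e.g. a tail that only carries the FCS), so always unmap the
	 * page and clear the descriptor cookie
	 */
	page_pool_release_page(rxq->page_pool, page);
	rx_desc->buf_phys_addr = 0;

	if (data_len > 0)
		skb_add_rx_frag(rxq->skb, skb_shinfo(rxq->skb)->nr_frags,
				page, NET_SKB_PAD, data_len, PAGE_SIZE);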

Regards,
Lorenzo

> 
> > +   }
> > +   rxq->left_size -= len;
> > +}
> > +
> 
> [...]
> 
> >             mvneta_rxq_buf_size_set(pp, rxq, PAGE_SIZE < SZ_64K ?
> > -                                   PAGE_SIZE :
> > +                                   MVNETA_MAX_RX_BUF_SIZE :
> >                                     MVNETA_RX_BUF_SIZE(pp->pkt_size));
> >             mvneta_rxq_bm_disable(pp, rxq);
> >             mvneta_rxq_fill(pp, rxq, rxq->size);
> > @@ -4656,7 +4666,7 @@ static int mvneta_probe(struct platform_device *pdev)
> >     SET_NETDEV_DEV(dev, &pdev->dev);
> >  
> >     pp->id = global_port_id++;
> > -   pp->rx_offset_correction = 0; /* not relevant for SW BM */
> > +   pp->rx_offset_correction = NET_SKB_PAD;
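
Note for reviewers: with build_skb() the frame has to start NET_SKB_PAD
bytes into the page so the resulting skb keeps its headroom, hence the
offset here. On refill the descriptor is programmed roughly as follows
(sketch, matching the page_pool conversion in the previous patch):

	/* point the descriptor past the headroom the skb will reserve */
	phys_addr = page_pool_get_dma_addr(page) + pp->rx_offset_correction;
	mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq);
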
> >  
> >     /* Obtain access to BM resources if enabled and already initialized */
> >     bm_node = of_parse_phandle(dn, "buffer-manager", 0);
> > -- 
> > 2.21.0
> > 
> 
> Regards
> /Ilias
