If we store the pgcnt of a few fragments while still in the middle of
gathering the whole frame and then stumble upon a descriptor whose DD
bit is not set, we terminate the NAPI Rx processing loop and come back
later. On the next NAPI execution we then work with the previously
stored pgcnt.

Imagine that the second half of the page was actively used by the
networking stack and that, by the time we come back, the stack is no
longer busy with this page and has decremented the refcnt. The page
reuse algorithm should then be free to reuse the page, but given the
stale refcnt it will not do so and will instead attempt to release the
page via page_frag_cache_drain() with pagecnt_bias as the argument.
This in turn results in a negative refcnt on the struct page, which was
initially observed by Xu Du.
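
To make the failure mode concrete, here is a simplified sketch (not the
literal driver code: the helper name below is made up and the reuse
condition is paraphrased) of a recycle-or-release decision that trusts a
stale pgcnt snapshot instead of the current page_count():

	/* Hypothetical helper; assumes the ice_rx_buf layout from
	 * ice_txrx.h.
	 */
	static void rx_buf_recycle_or_drain(struct ice_rx_buf *rx_buf)
	{
		/* Snapshot taken on an earlier NAPI poll, while the stack
		 * still held its reference to this page.
		 */
		unsigned int stale_pgcnt = rx_buf->pgcnt;

		/* Paraphrased reuse condition: recycle only when nobody
		 * beyond our own pagecnt_bias references still uses the
		 * page. The stale snapshot inflates the apparent number of
		 * users, so we wrongly fall through to the release path.
		 */
		if (stale_pgcnt - rx_buf->pagecnt_bias > 1) {
			/* Per the scenario above, draining with the full
			 * pagecnt_bias is what drives the page refcount
			 * negative.
			 */
			page_frag_cache_drain(rx_buf->page,
					      rx_buf->pagecnt_bias);
		}
		/* else: flip the half-page offset and reuse the buffer */
	}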

Therefore, move the page count storage from ice_get_rx_buf() to a place
where we are sure that the whole frame has been collected, but before
calling the XDP program, as it can internally also change the page count
of the fragments belonging to the xdp_buff.
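
The resulting ordering in ice_clean_rx_irq() then is (condensed from the
diff below):

	if (ice_is_non_eop(rx_ring, rx_desc))
		continue;	/* frame not complete yet, keep gathering */

	/* Whole frame gathered: snapshot page_count() of every frag now,
	 * before the XDP program gets a chance to modify it.
	 */
	ice_get_pgcnts(rx_ring);
	ice_run_xdp(rx_ring, xdp, xdp_prog, xdp_ring, rx_buf, rx_desc);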

Fixes: ac0753391195 ("ice: Store page count inside ice_rx_buf")
Reported-and-tested-by: Xu Du <x...@redhat.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kits...@intel.com>
Co-developed-by: Jacob Keller <jacob.e.kel...@intel.com>
Signed-off-by: Jacob Keller <jacob.e.kel...@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkow...@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index f2134ad57ead..9aa53ad2d8f2 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -924,7 +924,6 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size,
        struct ice_rx_buf *rx_buf;
 
        rx_buf = &rx_ring->rx_buf[ntc];
-       rx_buf->pgcnt = page_count(rx_buf->page);
        prefetchw(rx_buf->page);
 
        if (!size)
@@ -940,6 +939,22 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size,
        return rx_buf;
 }
 
+static void ice_get_pgcnts(struct ice_rx_ring *rx_ring)
+{
+       u32 nr_frags = rx_ring->nr_frags + 1;
+       u32 idx = rx_ring->first_desc;
+       struct ice_rx_buf *rx_buf;
+       u32 cnt = rx_ring->count;
+
+       for (int i = 0; i < nr_frags; i++) {
+               rx_buf = &rx_ring->rx_buf[idx];
+               rx_buf->pgcnt = page_count(rx_buf->page);
+
+               if (++idx == cnt)
+                       idx = 0;
+       }
+}
+
 /**
  * ice_build_skb - Build skb around an existing buffer
  * @rx_ring: Rx descriptor ring to transact packets on
@@ -1230,6 +1245,7 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
                if (ice_is_non_eop(rx_ring, rx_desc))
                        continue;
 
+               ice_get_pgcnts(rx_ring);
                ice_run_xdp(rx_ring, xdp, xdp_prog, xdp_ring, rx_buf, rx_desc);
                if (rx_buf->act == ICE_XDP_PASS)
                        goto construct_skb;
-- 
2.43.0
