On Fri, Feb 27, 2026 at 12:22:41PM +0100, Maciej Fijalkowski wrote:
> On Tue, Feb 17, 2026 at 02:24:42PM +0100, Larysa Zaremba wrote:
> > The only user of the frag_size field in XDP RxQ info is
> > bpf_xdp_frags_increase_tail(). It clearly expects the whole buffer size,
> > not the DMA write size. Different assumptions in the ice driver
> > configuration lead to negative tailroom.
> > 
> > This makes it possible to trigger a kernel panic with the
> > XDP_ADJUST_TAIL_GROW_MULTI_BUFF xskxceiver test by changing the packet
> > size to 6912 and the requested offset to a huge value, e.g.
> > XSK_UMEM__MAX_FRAME_SIZE * 100.
> > 
> > Due to other quirks of the ZC configuration in ice, the panic is not
> > observed in ZC mode, but growing the tailroom still fails when it should
> > not.
> > 
> > Use the fill queue buffer truesize instead of the DMA write size in XDP
> > RxQ info. Fix ZC mode too by using the new helper.
> > 
> > Fixes: 2fba7dc5157b ("ice: Add support for XDP multi-buffer on Rx side")
> > Reviewed-by: Aleksandr Loktionov <[email protected]>
> > Signed-off-by: Larysa Zaremba <[email protected]>
> > ---
> >  drivers/net/ethernet/intel/ice/ice_base.c | 9 ++++-----
> >  1 file changed, 4 insertions(+), 5 deletions(-)
> > 
> > diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
> > index 511d803cf0a4..27ab899a4052 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_base.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_base.c
> > @@ -659,7 +659,6 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
> >  {
> >     struct device *dev = ice_pf_to_dev(ring->vsi->back);
> >     u32 num_bufs = ICE_DESC_UNUSED(ring);
> > -   u32 rx_buf_len;
> >     int err;
> >  
> >     if (ring->vsi->type == ICE_VSI_PF || ring->vsi->type == ICE_VSI_SF) {
> > @@ -669,12 +668,12 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
> >                     return err;
> >  
> >             if (ring->xsk_pool) {
> > -                   rx_buf_len =
> > -                           xsk_pool_get_rx_frame_size(ring->xsk_pool);
> 
> ice_setup_rx_ctx() consumes ring->rx_buf_len. This can't come from
> page_pool when you have configured xsk_pool on a given rxq. I believe we
> need to set:
> 
>       ring->rx_buf_len =
>               xsk_pool_get_rx_frame_size(ring->xsk_pool);
> 

Yes, but doing this via xsk_pool_get_rx_frame_size() as it is now would
introduce a regression due to the lack of tailroom, so I decided not to
touch this logic for now, as you intend to improve
xsk_pool_get_rx_frame_size() for mbuf soon.

> > +                   u32 frag_size =
> > +                           xsk_pool_get_rx_frag_step(ring->xsk_pool);
> >                     err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
> >                                              ring->q_index,
> >                                              ring->q_vector->napi.napi_id,
> > -                                            rx_buf_len);
> > +                                            frag_size);
> >                     if (err)
> >                             return err;
> >                     err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
> > @@ -694,7 +693,7 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
> >                     err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
> >                                              ring->q_index,
> >                                              ring->q_vector->napi.napi_id,
> > -                                            ring->rx_buf_len);
> > +                                            ring->truesize);
> >                     if (err)
> >                             goto err_destroy_fq;
> >  
> > -- 
> > 2.52.0
> > 
