On Thu, Aug 22, 2024 at 08:53:09AM -0700, Shradha Gupta wrote:
> Currently the values of WQs for RX and TX queues for MANA devices
> are hardcoded to default sizes.
> Allow configuring these values for MANA devices as ringparam
> configuration (get/set) through ethtool_ops.
> Pre-allocate buffers at the beginning of this operation, to
> prevent complete network loss in low-memory conditions.
> 
> Signed-off-by: Shradha Gupta <shradhagu...@linux.microsoft.com>
> Reviewed-by: Haiyang Zhang <haiya...@microsoft.com>
> ---
>  Changes in v4:
>  * Roundup the ring parameter value to a power of 2
>  * Skip the max value check for parameters
>  * Use extack to log errors
> ---
>  Changes in v3:
>  * pre-allocate buffers before changing the queue sizes
>  * rebased to latest net-next
> ---
>  Changes in v2:
>  * Removed unnecessary validations in mana_set_ringparam()
>  * Fixed codespell error
>  * Improved error message to indicate issue with the parameter
> ---
>  drivers/net/ethernet/microsoft/mana/mana_en.c | 24 +++---
>  .../ethernet/microsoft/mana/mana_ethtool.c    | 74 +++++++++++++++++++
>  include/net/mana/mana.h                       | 23 +++++-
>  3 files changed, 108 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
> index d2f07e179e86..4e3ade5926bc 100644
> --- a/drivers/net/ethernet/microsoft/mana/mana_en.c
> +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
> @@ -511,7 +511,7 @@ static u16 mana_select_queue(struct net_device *ndev, struct sk_buff *skb,
>  }
>  
>  /* Release pre-allocated RX buffers */
> -static void mana_pre_dealloc_rxbufs(struct mana_port_context *mpc)
> +void mana_pre_dealloc_rxbufs(struct mana_port_context *mpc)
>  {
>       struct device *dev;
>       int i;
> @@ -604,7 +604,7 @@ static void mana_get_rxbuf_cfg(int mtu, u32 *datasize, u32 *alloc_size,
>       *datasize = mtu + ETH_HLEN;
>  }
>  
> -static int mana_pre_alloc_rxbufs(struct mana_port_context *mpc, int new_mtu)
> +int mana_pre_alloc_rxbufs(struct mana_port_context *mpc, int new_mtu)
>  {
>       struct device *dev;
>       struct page *page;
> @@ -618,7 +618,7 @@ static int mana_pre_alloc_rxbufs(struct mana_port_context *mpc, int new_mtu)
>  
>       dev = mpc->ac->gdma_dev->gdma_context->dev;
>  
> -     num_rxb = mpc->num_queues * RX_BUFFERS_PER_QUEUE;
> +     num_rxb = mpc->num_queues * mpc->rx_queue_size;
>  
>       WARN(mpc->rxbufs_pre, "mana rxbufs_pre exists\n");
>       mpc->rxbufs_pre = kmalloc_array(num_rxb, sizeof(void *), GFP_KERNEL);
> @@ -1899,14 +1899,15 @@ static int mana_create_txq(struct mana_port_context *apc,
>               return -ENOMEM;
>  
>       /*  The minimum size of the WQE is 32 bytes, hence
> -      *  MAX_SEND_BUFFERS_PER_QUEUE represents the maximum number of WQEs
> +      *  apc->tx_queue_size represents the maximum number of WQEs
>        *  the SQ can store. This value is then used to size other queues
>        *  to prevent overflow.
> +      *  Also note that the txq_size is always going to be MANA_PAGE_ALIGNED,
> +      *  as tx_queue_size is always a power of 2.
>        */

        MANA_PAGE_ALIGNED means aligned to 0x1000. tx_queue_size being a
        power of 2 multiplied by 32 is not a sufficient condition for the
        result to be 0x1000 aligned; for example, tx_queue_size = 64 gives
        64 * 32 = 2048, which is not 0x1000 aligned. We could possibly
        explain this more in the comment.
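
        To make the point concrete, here is a small standalone check of the
        arithmetic (the 32-byte WQE size and the 0x1000 MANA page size are
        from the driver; the program itself is only an illustration, not
        proposed code):

        #include <stdio.h>

        int main(void)
        {
                /* power-of-2 ring sizes; only sizes >= 128 give a WQ byte
                 * size that is 0x1000 aligned once multiplied by the
                 * 32-byte WQE size
                 */
                unsigned int ring_sizes[] = { 32, 64, 128, 256 };

                for (int i = 0; i < 4; i++) {
                        unsigned int txq_size = ring_sizes[i] * 32;

                        printf("tx_queue_size=%u txq_size=%u aligned=%s\n",
                               ring_sizes[i], txq_size,
                               txq_size % 0x1000 == 0 ? "yes" : "no");
                }
                return 0;
        }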


> -     txq_size = MAX_SEND_BUFFERS_PER_QUEUE * 32;
> -     BUILD_BUG_ON(!MANA_PAGE_ALIGNED(txq_size));
> +     txq_size = apc->tx_queue_size * 32;
>  
> -     cq_size = MAX_SEND_BUFFERS_PER_QUEUE * COMP_ENTRY_SIZE;
> +     cq_size = apc->tx_queue_size * COMP_ENTRY_SIZE;
>       cq_size = MANA_PAGE_ALIGN(cq_size);

        COMP_ENTRY_SIZE is 64, which means cq_size is double txq_size.
        If we are certain that txq_size is always MANA_PAGE aligned, then
        cq_size is already MANA_PAGE aligned as well, and the
        MANA_PAGE_ALIGN() call here would be redundant.
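
        A minimal compile-and-run sketch of that relationship (the values are
        illustrative; COMP_ENTRY_SIZE and the 32-byte WQE size are from the
        driver):

        #include <assert.h>

        int main(void)
        {
                unsigned int tx_queue_size = 128;            /* power of 2, >= 128 */
                unsigned int txq_size = tx_queue_size * 32;  /* 32-byte WQEs */
                unsigned int cq_size = tx_queue_size * 64;   /* COMP_ENTRY_SIZE */

                /* cq_size is exactly twice txq_size, so if txq_size is
                 * 0x1000 aligned then cq_size is as well, and the extra
                 * MANA_PAGE_ALIGN() would be a no-op.
                 */
                assert(cq_size == 2 * txq_size);
                assert(txq_size % 0x1000 == 0 && cq_size % 0x1000 == 0);
                return 0;
        }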

- Saurabh
