On 4/15/2024 3:40 PM, Alan Elder wrote:
> The previous code allowed the number of Tx queues to be set higher than the
> number of Rx queues.  If a packet was sent on a Tx queue with index
> >= number Rx queues there was a segfault due to accessing beyond the end of
> the dev->data->rx_queues[] array.
> 
> #0  rte_spinlock_trylock (sl = invalid address) at /include/rte_spinlock.h:63
> #1  hn_process_events at /drivers/net/netvsc/hn_rxtx.c:1129
> #2  hn_xmit_pkts at /drivers/net/netvsc/hn_rxtx.c:1553
> 
> This commit fixes the issue by creating an Rx queue for every Tx queue 
> meaning that an event buffer is allocated to handle receiving Tx completion 
> messages.
> 
> mbuf pool and Rx ring are not allocated for these additional Rx queues and 
> RSS configuration ensures that no packets are received on them.
> 
> Fixes: 4e9c73e96e83 ("net/netvsc: add Hyper-V network device")
> Cc: sthem...@microsoft.com
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Alan Elder <alan.el...@microsoft.com>
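
For readers following the thread, my understanding of the approach above is
roughly the sketch below. This is only an illustration of the idea, not the
actual patch; the helper name hn_rx_queue_alloc, the fields used, and the
variables assumed in scope (dev, hv, queue_idx, socket_id in the Tx queue
setup path) are assumptions on my side:

	/* Sketch: when a Tx queue has no matching Rx queue, allocate a bare
	 * Rx queue so Tx completion events can still be processed for it.
	 * No mbuf pool or Rx ring is attached, and the RSS configuration
	 * keeps real traffic away from this queue.
	 */
	if (dev->data->rx_queues[queue_idx] == NULL) {
		struct hn_rx_queue *rxq;

		rxq = hn_rx_queue_alloc(hv, queue_idx, socket_id);
		if (rxq == NULL)
			return -ENOMEM;

		rxq->mb_pool = NULL;	/* nothing is ever received here */
		dev->data->rx_queues[queue_idx] = rxq;
	}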

<...>

> @@ -552,10 +595,12 @@ static void hn_rxpkt(struct hn_rx_queue *rxq, struct hn_rx_bufinfo *rxb,
>                    const struct hn_rxinfo *info)
>  {
>       struct hn_data *hv = rxq->hv;
> -     struct rte_mbuf *m;
> +     struct rte_mbuf *m = NULL;
>       bool use_extbuf = false;
>  
> -     m = rte_pktmbuf_alloc(rxq->mb_pool);
> +     if (likely(rxq->mb_pool != NULL))
> +             m = rte_pktmbuf_alloc(rxq->mb_pool);
> +
>

This introduces an additional check in the Rx path, and I am not sure what
the performance impact is.

I can see Long has already acked v3; I just want to double check.
If Tx queue number > Rx queue number is not a common use case, perhaps it
would be an option to forbid it instead of taking the performance hit.
Or it may be possible to have a dedicated Rx queue, like queue_id 0, for
Tx completion events for Tx queues with queue_id > Rx queue number, etc.
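
To illustrate the first option, the check could live at configure time
rather than in the per-packet path, something like the sketch below (just an
illustration of the idea, untested; the exact placement, log macro usage and
message wording are up for discussion):

	/* Sketch: reject the configuration up front instead of paying for a
	 * NULL check on every received packet.
	 */
	if (dev->data->nb_tx_queues > dev->data->nb_rx_queues) {
		PMD_DRV_LOG(ERR,
			    "Tx queue count %u may not exceed Rx queue count %u",
			    dev->data->nb_tx_queues,
			    dev->data->nb_rx_queues);
		return -EINVAL;
	}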

But Long, if you prefer to continue with this patch, please ack it and I
will continue with it.
