On 4/7/21 12:17 PM, Balazs Nemeth wrote:
> Now that all allocation and freeing has been moved together, use the
> faster bulk versions instead of handling packets one by one.
> 
> Signed-off-by: Balazs Nemeth <bnem...@redhat.com>
> ---
>  lib/librte_vhost/virtio_net.c | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 496f750e3..2f0c97b91 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -2469,10 +2469,10 @@ virtio_dev_tx_packed(struct virtio_net *dev,
>  {
>       uint32_t pkt_idx = 0;
>       uint32_t remained = count;
> -     uint16_t i;
>  
> -     for (i = 0; i < count; ++i)
> -             pkts[i] = rte_pktmbuf_alloc(mbuf_pool);
> +     if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count)) {
> +             return 0;
> +     }

OK, I get it now :)
Maybe just add a comment in the previous patches noting that the missing
allocation error handling is going to be addressed when moving to bulk
allocation, as sketched below.
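For instance (just a sketch, exact wording up to you), the comment could
sit right above the per-packet loop in the earlier patch:

	/*
	 * Allocation failures are deliberately not handled here:
	 * this loop is replaced by rte_pktmbuf_alloc_bulk() in a
	 * later patch of the series, which checks the return value
	 * and returns early on failure.
	 */
	for (i = 0; i < count; ++i)
		pkts[i] = rte_pktmbuf_alloc(mbuf_pool);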

>  
>       do {
>               rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
> @@ -2497,8 +2497,9 @@ virtio_dev_tx_packed(struct virtio_net *dev,
>  
>       } while (remained);
>  
> -     for (i = pkt_idx; i < count; ++i)
> -             rte_pktmbuf_free(pkts[i]);
> +     if (pkt_idx != count) {
> +             rte_pktmbuf_free_bulk(&pkts[pkt_idx], count - pkt_idx);
> +     }
>  
>       if (vq->shadow_used_idx) {
>               do_data_copy_dequeue(vq);
> 

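On the semantics, it may also be worth a line in the commit message that
rte_pktmbuf_alloc_bulk() is all-or-nothing (it is built on
rte_mempool_get_bulk()): on failure no mbufs are allocated at all, so the
early return above cannot leak anything. Roughly:

	/* Either all 'count' entries of pkts[] are valid mbufs, or
	 * none are allocated and a negative errno is returned. */
	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count) != 0)
		return 0;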