> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coque...@redhat.com]
> Sent: Friday, February 9, 2018 10:27 PM
> To: Bie, Tiwei <tiwei....@intel.com>; y...@fridaylinux.org; Yigit, Ferruh
> <ferruh.yi...@intel.com>; vict...@redhat.com
> Cc: dev@dpdk.org; sta...@dpdk.org; Wang, Zhihong
> <zhihong.w...@intel.com>; Xu, Qian Q <qian.q...@intel.com>; Yao, Lei A
> <lei.a....@intel.com>; Maxime Coquelin <maxime.coque...@redhat.com>
> Subject: [PATCH 1/2] virtio: fix resuming traffic with rx vector path
>
> This patch fixes a traffic resuming issue seen when using the
> Rx vector path.
>
> Fixes: efc83a1e7fc3 ("net/virtio: fix queue setup consistency")
>
> Signed-off-by: Tiwei Bie <tiwei....@intel.com>
> Signed-off-by: Maxime Coquelin <maxime.coque...@redhat.com>

Tested-by: Lei Yao <lei.a....@intel.com>

This patch has been tested by the regression test suite. It fixes the
traffic resume issue with the vector path. No performance drop was seen
during the PVP test.
The following tests were also checked and passed:
Vhost/virtio multi queue
Virtio-user
Virtio-user as exception path
Vhost/virtio reconnect

My server info:
OS: Ubuntu 16.04
Kernel: 4.4.0-110
CPU: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz

BR
Lei

> ---
>  drivers/net/virtio/virtio_rxtx.c        | 34 +++++++++++++++++++---------------
>  drivers/net/virtio/virtio_rxtx_simple.c |  2 +-
>  drivers/net/virtio/virtio_rxtx_simple.h |  2 +-
>  3 files changed, 21 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> index 854af399e..505283edd 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -30,6 +30,7 @@
>  #include "virtio_pci.h"
>  #include "virtqueue.h"
>  #include "virtio_rxtx.h"
> +#include "virtio_rxtx_simple.h"
>
>  #ifdef RTE_LIBRTE_VIRTIO_DEBUG_DUMP
>  #define VIRTIO_DUMP_PACKET(m, len) rte_pktmbuf_dump(stdout, m, len)
> @@ -446,25 +447,28 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
>  			&rxvq->fake_mbuf;
>  	}
>
> -	while (!virtqueue_full(vq)) {
> -		m = rte_mbuf_raw_alloc(rxvq->mpool);
> -		if (m == NULL)
> -			break;
> +	if (hw->use_simple_rx) {
> +		while (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> +			virtio_rxq_rearm_vec(rxvq);
> +			nbufs += RTE_VIRTIO_VPMD_RX_REARM_THRESH;
> +		}
> +	} else {
> +		while (!virtqueue_full(vq)) {
> +			m = rte_mbuf_raw_alloc(rxvq->mpool);
> +			if (m == NULL)
> +				break;
>
> -		/* Enqueue allocated buffers */
> -		if (hw->use_simple_rx)
> -			error = virtqueue_enqueue_recv_refill_simple(vq, m);
> -		else
> +			/* Enqueue allocated buffers */
>  			error = virtqueue_enqueue_recv_refill(vq, m);
> -
> -		if (error) {
> -			rte_pktmbuf_free(m);
> -			break;
> +			if (error) {
> +				rte_pktmbuf_free(m);
> +				break;
> +			}
> +			nbufs++;
>  		}
> -		nbufs++;
> -	}
>
> -	vq_update_avail_idx(vq);
> +		vq_update_avail_idx(vq);
> +	}
>
>  	PMD_INIT_LOG(DEBUG, "Allocated %d bufs", nbufs);
>
> diff --git a/drivers/net/virtio/virtio_rxtx_simple.c b/drivers/net/virtio/virtio_rxtx_simple.c
> index 7247a0822..0a79d1d5b 100644
> --- a/drivers/net/virtio/virtio_rxtx_simple.c
> +++ b/drivers/net/virtio/virtio_rxtx_simple.c
> @@ -77,7 +77,7 @@ virtio_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
>  	rte_compiler_barrier();
>
>  	if (nb_used >= VIRTIO_TX_FREE_THRESH)
> -		virtio_xmit_cleanup(vq);
> +		virtio_xmit_cleanup_simple(vq);
>
>  	nb_commit = nb_pkts = RTE_MIN((vq->vq_free_cnt >> 1), nb_pkts);
>  	desc_idx = (uint16_t)(vq->vq_avail_idx & desc_idx_max);
> diff --git a/drivers/net/virtio/virtio_rxtx_simple.h b/drivers/net/virtio/virtio_rxtx_simple.h
> index 2d8e6b14a..303904d64 100644
> --- a/drivers/net/virtio/virtio_rxtx_simple.h
> +++ b/drivers/net/virtio/virtio_rxtx_simple.h
> @@ -60,7 +60,7 @@ virtio_rxq_rearm_vec(struct virtnet_rx *rxvq)
>  #define VIRTIO_TX_FREE_NR 32
>  /* TODO: vq->tx_free_cnt could mean num of free slots so we could avoid shift */
>  static inline void
> -virtio_xmit_cleanup(struct virtqueue *vq)
> +virtio_xmit_cleanup_simple(struct virtqueue *vq)
>  {
>  	uint16_t i, desc_idx;
>  	uint32_t nb_free = 0;
> --
> 2.14.3