vhost removes the limit on the RX burst size (32 pkts) and makes a best
effort to receive pkts.
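
Not part of the patch itself, but for context, a minimal sketch of how an
application might benefit: with this change a single rte_eth_rx_burst() call
on a vhost port can return more than 32 packets. The APP_RX_BURST value and
the poll_vhost_queue() helper below are hypothetical, illustrative only.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Hypothetical application burst size, larger than the old 32-pkt cap. */
    #define APP_RX_BURST 512

    /*
     * Poll one vhost RX queue. The PMD now keeps dequeueing in
     * VHOST_MAX_PKT_BURST-sized chunks until the guest ring is drained
     * or APP_RX_BURST mbufs have been filled.
     */
    static uint16_t
    poll_vhost_queue(uint8_t port_id, uint16_t queue_id)
    {
            struct rte_mbuf *pkts[APP_RX_BURST];
            uint16_t nb_rx, i;

            nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, APP_RX_BURST);

            for (i = 0; i < nb_rx; i++)
                    rte_pktmbuf_free(pkts[i]); /* placeholder for real processing */

            return nb_rx;
    }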

Cc: yuanhan....@linux.intel.com
Cc: maxime.coque...@redhat.com
Signed-off-by: Zhiyong Yang <zhiyong.y...@intel.com>
Acked-by: Konstantin Ananyev <konstantin.anan...@intel.com>
---
 doc/guides/rel_notes/release_17_05.rst |  4 ++++
 drivers/net/vhost/rte_eth_vhost.c      | 17 +++++++++++++++--
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 0c33f7b..226ecd6 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -127,6 +127,10 @@ New Features
   performance enhancements viz. configurable TX data ring, Receive
   Data Ring, ability to register memory regions.
 
+* **Kept consistent PMD batching behaviour.**
+
+  Removed the limit on the fm10k/i40e/ixgbe TX burst size and the vhost RX/TX burst
+  size so that these PMDs follow the same "make a best effort to RX/TX pkts" policy.
 
 Resolved Issues
 ---------------
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 2a38b19..7f5cd7e 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -392,6 +392,7 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 {
        struct vhost_queue *r = q;
        uint16_t i, nb_rx = 0;
+       uint16_t nb_receive = nb_bufs;
 
        if (unlikely(rte_atomic32_read(&r->allow_queuing) == 0))
                return 0;
@@ -402,8 +403,20 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
                goto out;
 
        /* Dequeue packets from guest TX queue */
-       nb_rx = rte_vhost_dequeue_burst(r->vid,
-                       r->virtqueue_id, r->mb_pool, bufs, nb_bufs);
+       while (nb_receive) {
+               uint16_t nb_pkts;
+               uint16_t num = (uint16_t)RTE_MIN(nb_receive,
+                                                VHOST_MAX_PKT_BURST);
+
+               nb_pkts = rte_vhost_dequeue_burst(r->vid, r->virtqueue_id,
+                                                 r->mb_pool, &bufs[nb_rx],
+                                                 num);
+
+               nb_rx += nb_pkts;
+               nb_receive -= nb_pkts;
+               if (nb_pkts < num)
+                       break;
+       }
 
        r->stats.pkts += nb_rx;
 
-- 
2.7.4
