The current implementation of rte_kni_rx_burst polls the tx_q fifo for buffers.
Irrespective of whether that poll returns any buffers, it then allocates fresh
mbufs and tries to put them into alloc_q; any mbufs that cannot be added to
alloc_q are freed again. This wastes a lot of CPU cycles allocating and freeing
buffers whenever alloc_q is full.
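
For illustration, the wasteful path described above looks roughly like the
sketch below. This is not the literal kni_allocate_mbufs() body, only a
simplified reconstruction of the behaviour described; the helper name
kni_refill_alloc_q() is made up for this sketch, while MAX_MBUF_BURST_NUM,
kni_fifo_put(), rte_pktmbuf_alloc()/rte_pktmbuf_free() and the pktmbuf_pool
field are assumed to be the ones already used inside rte_kni.c:

    /* Hypothetical helper inside rte_kni.c illustrating the current
     * per-poll refill: allocate a full burst, try to push it to the
     * kernel side, and free whatever did not fit.
     */
    static void
    kni_refill_alloc_q(struct rte_kni *kni)
    {
            struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];
            unsigned i, allocated, put;

            /* Allocate a full burst of mbufs on every call. */
            for (allocated = 0; allocated < MAX_MBUF_BURST_NUM; allocated++) {
                    pkts[allocated] = rte_pktmbuf_alloc(kni->pktmbuf_pool);
                    if (pkts[allocated] == NULL)
                            break;
            }

            /* Try to hand the new mbufs to the kernel side. */
            put = kni_fifo_put(kni->alloc_q, (void **)pkts, allocated);

            /* If alloc_q was already full, everything just allocated is
             * freed again - pure wasted work on each rte_kni_rx_burst.
             */
            for (i = put; i < allocated; i++)
                    rte_pktmbuf_free(pkts[i]);
    }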

The logic has been changed to:
1. Initially allocate and add a burst of buffers to alloc_q (when the KNI
   device is allocated).
2. Add buffers to alloc_q only when buffers have actually been pulled out
   of the fifo, as sketched below.
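
With both changes applied, rte_kni_rx_burst() ends up looking roughly as
below. This is just the post-patch function pieced together from the hunk
further down, shown here for readability; the surrounding declarations are
assumed to be as in lib/librte_kni/rte_kni.c:

    unsigned
    rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned num)
    {
            unsigned ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);

            /* If buffers were removed, allocate mbufs and put them into
             * alloc_q; the initial fill now happens once in rte_kni_alloc().
             */
            if (ret)
                    kni_allocate_mbufs(kni);

            return ret;
    }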

Signed-off-by: Hemant Agrawal <Hemant@freescale.com>
---
 lib/librte_kni/rte_kni.c |    8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
index 76feef4..01e85f8 100644
--- a/lib/librte_kni/rte_kni.c
+++ b/lib/librte_kni/rte_kni.c
@@ -263,6 +263,9 @@ rte_kni_alloc(struct rte_mempool *pktmbuf_pool,

        ctx->in_use = 1;

+       /* Allocate mbufs and then put them into alloc_q */
+       kni_allocate_mbufs(ctx);
+
        return ctx;

 fail:
@@ -369,8 +372,9 @@ rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned num)
 {
        unsigned ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);

-       /* Allocate mbufs and then put them into alloc_q */
-       kni_allocate_mbufs(kni);
+       /* If buffers removed, allocate mbufs and then put them into alloc_q */
+       if (ret)
+               kni_allocate_mbufs(kni);

        return ret;
 }
-- 
1.7.9.6
