Thank you Hemant. I think there might be one issue left with the patch, though: the alloc_q must initially be filled with mbufs before any mbufs can come back on the tx_q.
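For context, here is roughly what the current receive path looks like (a sketch reconstructed from the hunk context quoted further down, not a verbatim copy of rte_kni.c): the only place alloc_q is ever refilled is after a successful dequeue from tx_q, so an alloc_q that starts out empty is never filled, the kernel side has no mbufs to receive into, and nothing ever shows up on tx_q.

    /* Sketch of the current rte_kni_rx_burst(), reconstructed from the
     * diff context quoted below -- the exact body may differ slightly. */
    unsigned
    rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned num)
    {
            unsigned ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);

            /* alloc_q is only refilled here, i.e. after something was
             * actually pulled off tx_q. */
            if (ret)
                    kni_allocate_mbufs(kni);

            return ret;
    }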
So the patch should also make rte_kni_rx_burst check whether alloc_q is empty; if it is, it should call kni_allocate_mbufs(kni, 0) to fill the alloc_q with MAX_MBUF_BURST_NUM mbufs. The patch for rte_kni_rx_burst would then look like this:

@@ -575,7 +575,7 @@ rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned num)

 	/* If buffers removed, allocate mbufs and then put them into alloc_q */
 	if (ret)
-		kni_allocate_mbufs(kni);
+		kni_allocate_mbufs(kni, ret);
+	else if (unlikely(kni->alloc_q->write == kni->alloc_q->read))
+		kni_allocate_mbufs(kni, 0);

Olivier.

On 25/02/15 11:48, Hemant Agrawal wrote:
> From: Hemant Agrawal <hemant at freescale.com>
>
> If any buffer is read from the tx_q, MAX_BURST buffers will be allocated
> and attempted to be added to the alloc_q.
> This seems terribly inefficient, and it also looks like the alloc_q will
> quickly fill to its maximum capacity. If the system is low on buffers,
> it will reach an "out of memory" situation.
>
> This patch allocates only as many buffers as were dequeued from tx_q.
>
> Signed-off-by: Hemant Agrawal <hemant at freescale.com>
> ---
>  lib/librte_kni/rte_kni.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
> index 4e70fa0..4cf8e30 100644
> --- a/lib/librte_kni/rte_kni.c
> +++ b/lib/librte_kni/rte_kni.c
> @@ -128,7 +128,7 @@ struct rte_kni_memzone_pool {
>
>
>  static void kni_free_mbufs(struct rte_kni *kni);
> -static void kni_allocate_mbufs(struct rte_kni *kni);
> +static void kni_allocate_mbufs(struct rte_kni *kni, int num);
>
>  static volatile int kni_fd = -1;
>  static struct rte_kni_memzone_pool kni_memzone_pool = {
> @@ -575,7 +575,7 @@ rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned num)
>
>  	/* If buffers removed, allocate mbufs and then put them into alloc_q */
>  	if (ret)
> -		kni_allocate_mbufs(kni);
> +		kni_allocate_mbufs(kni, ret);
>
>  	return ret;
>  }
> @@ -594,7 +594,7 @@ kni_free_mbufs(struct rte_kni *kni)
>  }
>
>  static void
> -kni_allocate_mbufs(struct rte_kni *kni)
> +kni_allocate_mbufs(struct rte_kni *kni, int num)
>  {
>  	int i, ret;
>  	struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];
> @@ -620,7 +620,10 @@ kni_allocate_mbufs(struct rte_kni *kni)
>  		return;
>  	}
>
> -	for (i = 0; i < MAX_MBUF_BURST_NUM; i++) {
> +	if (num == 0 || num > MAX_MBUF_BURST_NUM)
> +		num = MAX_MBUF_BURST_NUM;
> +
> +	for (i = 0; i < num; i++) {
>  		pkts[i] = rte_pktmbuf_alloc(kni->pktmbuf_pool);
>  		if (unlikely(pkts[i] == NULL)) {
>  			/* Out of memory */
> @@ -636,7 +639,7 @@ kni_allocate_mbufs(struct rte_kni *kni)
>  	ret = kni_fifo_put(kni->alloc_q, (void **)pkts, i);
>
>  	/* Check if any mbufs not put into alloc_q, and then free them */
> -	if (ret >= 0 && ret < i && ret < MAX_MBUF_BURST_NUM) {
> +	if (ret >= 0 && ret < i && ret < num) {
>  		int j;
>
>  		for (j = ret; j < i; j++)

-- 
Olivier Demé
Druid Software Ltd.
Tel: +353 1 202 1831
Email: odeme at druidsoftware.com
URL: http://www.druidsoftware.com