I guess it would be unusual but possible for the kernel to enqueue to tx_q 
faster than the application dequeues.
But that would also be possible with a real NIC, so I think it is 
acceptable for the kernel to drop egress packets in that case.


On 25/02/15 12:24, Hemant at freescale.com wrote:
> Hi Olivier,
>        Comments inline.
> Regards,
> Hemant
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Olivier Deme
>> Sent: 25/Feb/2015 5:44 PM
>> To: dev at dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] kni:optimization of rte_kni_rx_burst
>>
>> Thank you Hemant. I think there might be one issue left with the patch
>> though.
>> The alloc_q must initially be filled with mbufs before getting mbufs back
>> on the tx_q.
>>
>> So the patch should allow rte_kni_rx_burst to check whether alloc_q is empty.
>> If so, it should invoke kni_allocate_mbufs(kni, 0) to fill the alloc_q with
>> MAX_MBUF_BURST_NUM mbufs.
>>
>> The patch for rte_kni_rx_burst would then look like:
>>
>> @@ -575,7 +575,9 @@ rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned num)
>>
>>        /* If buffers removed, allocate mbufs and then put them into alloc_q */
>>        if (ret)
>> -              kni_allocate_mbufs(kni);
>> +              kni_allocate_mbufs(kni, ret);
>> +      else if (unlikely(kni->alloc_q->write == kni->alloc_q->read))
>> +              kni_allocate_mbufs(kni, 0);
>>
> [hemant]  This will introduce a run-time check.
>
> I missed including the other change in the patch.
>   I am doing it in kni_alloc, i.e. initialising the alloc_q with the default
> burst size:
>       kni_allocate_mbufs(ctx, 0);
>
> In a way, we are now suggesting reducing the size of alloc_q to only the
> default burst size.
>
> Can we reach a situation where the kernel is adding packets to tx_q faster
> than the application is able to dequeue them?
>   alloc_q can be empty in this case and the kernel will be starving.
>
>> Olivier.
>>
>> On 25/02/15 11:48, Hemant Agrawal wrote:
>>> From: Hemant Agrawal <hemant at freescale.com>
>>>
>>> If any buffer is read from the tx_q, MAX_BURST buffers will be allocated
>>> and an attempt made to add them to the alloc_q.
>>> This seems terribly inefficient, and it also looks like the alloc_q will
>>> quickly fill to its maximum capacity. If the system is low on free buffers,
>>> it will reach an "out of memory" situation.
>>> This patch allocates only as many buffers as were dequeued from tx_q.
>>>
>>> Signed-off-by: Hemant Agrawal <hemant at freescale.com>
>>> ---
>>>    lib/librte_kni/rte_kni.c | 13 ++++++++-----
>>>    1 file changed, 8 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c index
>>> 4e70fa0..4cf8e30 100644
>>> --- a/lib/librte_kni/rte_kni.c
>>> +++ b/lib/librte_kni/rte_kni.c
>>> @@ -128,7 +128,7 @@ struct rte_kni_memzone_pool {
>>>
>>>
>>>    static void kni_free_mbufs(struct rte_kni *kni);
>>> -static void kni_allocate_mbufs(struct rte_kni *kni);
>>> +static void kni_allocate_mbufs(struct rte_kni *kni, int num);
>>>
>>>    static volatile int kni_fd = -1;
>>>    static struct rte_kni_memzone_pool kni_memzone_pool = {
>>> @@ -575,7 +575,7 @@ rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned num)
>>>
>>>     /* If buffers removed, allocate mbufs and then put them into alloc_q */
>>>     if (ret)
>>> -           kni_allocate_mbufs(kni);
>>> +           kni_allocate_mbufs(kni, ret);
>>>
>>>     return ret;
>>>    }
>>> @@ -594,7 +594,7 @@ kni_free_mbufs(struct rte_kni *kni)
>>>    }
>>>
>>>    static void
>>> -kni_allocate_mbufs(struct rte_kni *kni)
>>> +kni_allocate_mbufs(struct rte_kni *kni, int num)
>>>    {
>>>     int i, ret;
>>>     struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];
>>> @@ -620,7 +620,10 @@ kni_allocate_mbufs(struct rte_kni *kni)
>>>             return;
>>>     }
>>>
>>> -   for (i = 0; i < MAX_MBUF_BURST_NUM; i++) {
>>> +   if (num == 0 || num > MAX_MBUF_BURST_NUM)
>>> +           num = MAX_MBUF_BURST_NUM;
>>> +
>>> +   for (i = 0; i < num; i++) {
>>>             pkts[i] = rte_pktmbuf_alloc(kni->pktmbuf_pool);
>>>             if (unlikely(pkts[i] == NULL)) {
>>>                     /* Out of memory */
>>> @@ -636,7 +639,7 @@ kni_allocate_mbufs(struct rte_kni *kni)
>>>     ret = kni_fifo_put(kni->alloc_q, (void **)pkts, i);
>>>
>>>     /* Check if any mbufs not put into alloc_q, and then free them */
>>> -   if (ret >= 0 && ret < i && ret < MAX_MBUF_BURST_NUM) {
>>> +   if (ret >= 0 && ret < i && ret < num) {
>>>             int j;
>>>
>>>             for (j = ret; j < i; j++)
>> --
>>      *Olivier Demé*
>> *Druid Software Ltd.*
>> *Tel: +353 1 202 1831*
>> *Email: odeme at druidsoftware.com <mailto:odeme at druidsoftware.com>*
>> *URL: http://www.druidsoftware.com*
>>      *Hall 7, stand 7F70.*
>> Druid Software: Monetising enterprise small cells solutions.

