On Wed, Feb 25, 2015 at 6:38 AM, Marc Sune <marc.sune at bisdn.de> wrote:

>
> On 25/02/15 13:24, Hemant at freescale.com wrote:
>
>> Hi Olivier
>>          Comments inline.
>> Regards,
>> Hemant
>>
>>  -----Original Message-----
>>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Olivier Deme
>>> Sent: 25/Feb/2015 5:44 PM
>>> To: dev at dpdk.org
>>> Subject: Re: [dpdk-dev] [PATCH] kni:optimization of rte_kni_rx_burst
>>>
>>> Thank you Hemant, I think there might be one issue left with the patch
>>> though.
>>> The alloc_q must initially be filled with mbufs before mbufs can come
>>> back on the tx_q.
>>>
>>> So the patch should allow rte_kni_rx_burst to check if alloc_q is empty.
>>> If so, it should invoke kni_allocate_mbufs(kni, 0) to fill the alloc_q
>>> with MAX_MBUF_BURST_NUM mbufs.
>>>
>>> The patch for rte_kni_rx_burst would then look like:
>>>
>>> @@ -575,7 +575,7 @@ rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned num)
>>>
>>>      /* If buffers removed, allocate mbufs and then put them into alloc_q */
>>>      if (ret)
>>> -        kni_allocate_mbufs(kni);
>>> +        kni_allocate_mbufs(kni, ret);
>>> +    else if (unlikely(kni->alloc_q->write == kni->alloc_q->read))
>>> +        kni_allocate_mbufs(kni, 0);
>>>
>>>  [hemant]  This will introduce a run-time check.
>>
>> I missed including the other change in the patch.
>>   I am doing it in kni_alloc, i.e. seeding the alloc_q with the default
>> burst size:
>>         kni_allocate_mbufs(ctx, 0);
>>
>> In a way, we are now suggesting reducing the size of alloc_q to just the
>> default burst size.
>>
>
> As an aside comment here, I think we should allow tweaking the
> userspace <-> kernel queue sizes (rx_q, tx_q, free_q and alloc_q). Whether
> this should be a build configuration option or a parameter to
> rte_kni_init() is not completely clear to me, but I guess
> rte_kni_init() is the better option.
>

rte_kni_init() is definitely a better option. It allows things to be tuned
based on individual system config rather than requiring different builds.
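
To make that concrete, something along these lines could work. To be clear,
this is purely hypothetical: neither the struct nor the function below exists
today, and the current rte_kni_init() only takes the maximum number of KNI
interfaces, with the FIFO sizes fixed at build time in rte_kni.c:

    /* Hypothetical only: per-FIFO element counts for KNI. */
    struct rte_kni_fifo_conf {
        unsigned rx_q_len;      /* userspace -> kernel packets       */
        unsigned tx_q_len;      /* kernel -> userspace packets       */
        unsigned alloc_q_len;   /* empty mbufs handed to the kernel  */
        unsigned free_q_len;    /* mbufs returned to us for freeing  */
    };

    /* Hypothetical variant of rte_kni_init(); a NULL conf would keep
     * the current built-in defaults, so existing apps need no change. */
    int rte_kni_init_conf(unsigned int max_kni_ifaces,
                          const struct rte_kni_fifo_conf *fifo_conf);

A build-time option would also work, but it forces a rebuild per deployment,
which is exactly what we want to avoid.
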


> Having said that, the original mail from Hemant described KNI giving an
> out-of-memory error. To me this indicates that the pool is incorrectly
> dimensioned. Even if KNI does not pre-allocate into the alloc_q, or only
> partially does so, you will hit the same "out of memory" under high load.
>
> We can reduce the number of buffers used by the KNI subsystem in kernel
> space and in userspace, but the kernel will always need a small cache of
> pre-allocated buffers (coming from user space), since the KNI kernel module
> does not know which pool to grab packets from. So my guess is that the
> dimensioning problem experienced by Hemant would remain the same, even
> with the proposed changes.
>
>
>> Can we reach a situation where the kernel is adding packets to the tx_q
>> faster than the application is able to dequeue them?
>>
>
> I think so. We cannot control much how the kernel will schedule the KNI
> thread(s), especially if the number of threads relative to the cores is
> wrong (not enough), hence we need at least a reasonable amount of
> buffering to prevent early drops due to those "internal" burst side effects.
>
> Marc


Strongly agree with Marc here. We *really* don't want just a single burst's
worth of mbufs available to the kernel in alloc_q. That's just asking for
congestion when there's no need for it.

The original problem reported by Olivier is more of a resource tuning
problem than anything else. The number of mbufs you need in the system has
to take into account internal queue depths.
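
To put a rough number on that, here is a hedged back-of-the-envelope sizing
that assumes every queue able to hold mbufs is full at the same time. Every
constant below is an illustrative assumption, not a value from any particular
NIC, application or DPDK build:

    /* Illustrative sizing only; replace each constant with the real
     * configuration values. */
    #define NB_RX_DESC       512U   /* NIC RX ring entries               */
    #define NB_TX_DESC       512U   /* NIC TX ring entries               */
    #define KNI_FIFO_LEN    1024U   /* assumed length of each KNI FIFO   */
    #define NB_KNI_FIFOS       4U   /* rx_q, tx_q, alloc_q, free_q       */
    #define APP_BURST         32U   /* mbufs the app holds per poll loop */
    #define MEMPOOL_CACHE    256U   /* per-lcore mempool cache           */
    #define NB_LCORES          4U

    static unsigned
    kni_mbuf_pool_size(void)
    {
        /* Worst case: NIC rings, all KNI FIFOs and the per-core caches
         * are full while each lcore also holds one burst in flight. */
        return NB_RX_DESC + NB_TX_DESC
             + NB_KNI_FIFOS * KNI_FIFO_LEN
             + NB_LCORES * (MEMPOOL_CACHE + APP_BURST);
    }

Dimension the pool well below a sum like that and the out-of-memory error
mentioned earlier in the thread is exactly what you'd expect to see under
load.
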

Jay
