On 1/4/2023 11:57 AM, Matt wrote:
> Hi Ferruh,
>
> In my case, the traffic is not large, so I can't see the impact.
> I also tested under high load (>2 Mpps with 2 DPDK cores and 2 kernel
> threads) and found no significant difference in performance either.
> I think the reason is that 'kni_fifo_count(kni->alloc_q) == 0' is
> rarely true under high load.
>
I agree, the additional check is most likely only hit at low bandwidth.
Thanks for checking the performance impact.

> On Tue, Jan 3, 2023 at 8:47 PM Ferruh Yigit <ferruh.yi...@amd.com> wrote:
>
>     On 12/30/2022 4:23 AM, Yangchao Zhou wrote:
>     > In some scenarios, mbufs returned by rte_kni_rx_burst are not freed
>     > immediately, so kni_allocate_mbufs may fail without our knowing.
>     >
>     > Even worse, when alloc_q is completely exhausted, kni_net_tx in
>     > rte_kni.ko drops all tx packets, and kni_allocate_mbufs is never
>     > called again, even if the mbufs are eventually freed.
>     >
>     > In this patch, we try to allocate mbufs for alloc_q when it is empty.
>     >
>     > Historically, the performance bottleneck of KNI has often been the
>     > usleep_range of the kni thread in rte_kni.ko. The kni_fifo_count
>     > check is trivial and its cost should be acceptable.
>
>     Hi Yangchao,
>
>     Are you observing any performance impact with this change in your
>     use case?
>
>     > Fixes: 3e12a98fe397 ("kni: optimize Rx burst")
>     > Cc: sta...@dpdk.org
>     >
>     > Signed-off-by: Yangchao Zhou <zhouya...@gmail.com>

Acked-by: Ferruh Yigit <ferruh.yi...@amd.com>