Hi Ferruh,

On Wednesday 31 May 2017 09:51 PM, Ferruh Yigit wrote:
<cut>
I have sampled the data below on x86_64 for KNI on the ixgbe PMD. The
iperf server runs on the remote interface connected to the PMD and the
iperf client runs on the KNI interface, so as to create more egress
from KNI into DPDK (without and with this patch) for 1MB and 100MB of
data. The rx and tx stats are from the kni app (USR1).

100MB w/o patch: 1.28Gbps

rx      tx      alloc_call  alloc_call_mt1tx  freembuf_call
3933    72464   51042       42472             1560540
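
For reference, a minimal sketch of how such per-interface counters can
be dumped on SIGUSR1, in the style of the kni sample app; the handler
and counter names below are illustrative, not the app's actual
variables:

#include <inttypes.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Illustrative counters; the kni sample app keeps similar per-port
 * stats and prints them from its SIGUSR1 handler. */
static volatile uint64_t rx_packets;
static volatile uint64_t tx_packets;

static void
dump_stats(int signum)
{
	if (signum == SIGUSR1)
		printf("rx: %" PRIu64 "  tx: %" PRIu64 "\n",
		       rx_packets, tx_packets);
}

int
main(void)
{
	signal(SIGUSR1, dump_stats);

	/* EAL init, KNI setup and the rx/tx loops would go here;
	 * this stub only waits for the signal. */
	for (;;)
		pause();

	return 0;
}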
Some math:

alloc was called 51042 times, allocating 32 mbufs each time:
51042 * 32 = 1633344

freed mbufs: 1560540

used mbufs: 1633344 - 1560540 = 72804

72804 =~ 72464, so the numbers look correct.

Which means rte_kni_rx_burst() was called 51042 times and 72464 buffers
were received.
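
To make the relation between those two numbers concrete, below is a
rough sketch of the kind of egress loop the kni app runs; the function
and counter names (kni_egress_once, rx_calls, rx_packets) are
illustrative, and the refill comment reflects my understanding that
rte_kni_rx_burst() tops up alloc_q internally:

#include <rte_ethdev.h>
#include <rte_kni.h>
#include <rte_mbuf.h>

#define PKT_BURST_SZ 32

static uint64_t rx_calls;    /* ~51042 in the 100MB run above    */
static uint64_t rx_packets;  /* ~72464 buffers actually dequeued */

/* Dequeue packets the kernel queued on the KNI tx fifo and forward
 * them out of the DPDK port. */
static void
kni_egress_once(struct rte_kni *kni, uint16_t port_id)
{
	struct rte_mbuf *pkts[PKT_BURST_SZ];
	unsigned int nb_rx, i;
	uint16_t nb_tx;

	/* A call that dequeues packets also triggers an internal refill
	 * of alloc_q with up to 32 fresh mbufs, which is where the
	 * "51042 * 32" term in the math above comes from. */
	nb_rx = rte_kni_rx_burst(kni, pkts, PKT_BURST_SZ);
	rx_calls++;
	rx_packets += nb_rx;

	nb_tx = rte_eth_tx_burst(port_id, 0, pkts, (uint16_t)nb_rx);

	/* Free anything the NIC could not accept. */
	for (i = nb_tx; i < nb_rx; i++)
		rte_pktmbuf_free(pkts[i]);
}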

As you already mentioned, for each call the kernel is able to put only
1-2 packets into the fifo. This number is close to 3 in my test with
the KNI PMD.

And for this case, I agree your patch looks reasonable.

But what if KNI has more egress traffic, enough to put >= 32 packets
into the fifo between each rte_kni_rx_burst() call?
For that case this patch introduces an extra cost to get the
allocq_free count.
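
To illustrate where that cost comes from, this is roughly how I read
the idea (not the exact patch): read the free-slot count of alloc_q and
allocate only that many mbufs instead of a fixed burst of 32. Field and
helper names (alloc_q, len, pktmbuf_pool, kni_fifo_put) follow my
reading of librte_kni internals, and this sketch would sit inside
lib/librte_kni/rte_kni.c where they are visible:

/* Sketch only: refill alloc_q with no more mbufs than it has free
 * slots, instead of unconditionally allocating MAX_MBUF_BURST_NUM. */
static void
kni_allocate_mbufs_capped(struct rte_kni *kni)
{
	struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];
	unsigned int allocq_free, n, i, ret;

	/* Extra read of the shared fifo indexes -- the added cost
	 * mentioned above. fifo->len is a power of two, so (len - 1)
	 * works as a wrap-around mask. */
	allocq_free = (kni->alloc_q->read - kni->alloc_q->write - 1)
			& (kni->alloc_q->len - 1);
	n = RTE_MIN(allocq_free, (unsigned int)MAX_MBUF_BURST_NUM);

	for (i = 0; i < n; i++) {
		pkts[i] = rte_pktmbuf_alloc(kni->pktmbuf_pool);
		if (pkts[i] == NULL)
			break;
	}

	ret = kni_fifo_put(kni->alloc_q, (void **)pkts, i);

	/* The kernel only consumes from alloc_q, so everything should
	 * fit, but free any leftovers defensively to avoid a leak. */
	for (; ret < i; ret++)
		rte_pktmbuf_free(pkts[ret]);
}

The extra fifo read is cheap per call, but it sits on the path of every
rte_kni_rx_burst() that dequeues packets, which is the trade-off being
weighed here.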

Are there cases where we see the kernel thread writing to the txq at a
rate higher than the KNI application can dequeue it? In my
understanding, KNI is supposed to be a slow path, as it puts packets
back into the network stack (control plane?).

Regards,
Gowrishankar

Overall I do not disagree with the patch, but I am concerned it could
cause a performance loss in some cases while improving this one. It
would help a lot if KNI users could test and comment.

For me, applying the patch didn't make any difference in the final
performance numbers, but if there is no objection, I am OK to take this
patch.



<cut>
