Hi, thanks for the quick answer. I ran the same test with a single kni port, using the following configuration:
sudo insmod rte_kni.ko lo_mode=lo_mode_fifo_skb
sudo ./kni -c 0xf -n 4 -- -P -p 0x1 --config="(0,1,2,3)"

and transmitted traffic towards the single port at a rate of 300Mbps. As you
can see below, I have packet drops. What is the difference between what you
tested and what I see below? Is there some other configuration or adjustment
to be done on the kni that I am missing?

====== ============== ============ ============ ============ ============
                   **KNI example application statistics**
====== ============== ============ ============ ============ ============
 Port   Lcore(RX/TX)   rx_packets   rx_dropped   tx_packets   tx_dropped
------ -------------- ------------ ------------ ------------ ------------
  0         1/ 2        106059534        21978    106058868            0
====== ============== ============ ============ ============ ============

====== ============== ============ ============ ============ ============
                   **KNI example application statistics**
====== ============== ============ ============ ============ ============
 Port   Lcore(RX/TX)   rx_packets   rx_dropped   tx_packets   tx_dropped
------ -------------- ------------ ------------ ------------ ------------
  0         1/ 2        106137710        21995    106137054            0
====== ============== ============ ============ ============ ============

====== ============== ============ ============ ============ ============
                   **KNI example application statistics**
====== ============== ============ ============ ============ ============
 Port   Lcore(RX/TX)   rx_packets   rx_dropped   tx_packets   tx_dropped
------ -------------- ------------ ------------ ------------ ------------
  0         1/ 2        106217479        21995    106216603            0
====== ============== ============ ============ ============ ============

====== ============== ============ ============ ============ ============
                   **KNI example application statistics**
====== ============== ============ ============ ============ ============
 Port   Lcore(RX/TX)   rx_packets   rx_dropped   tx_packets   tx_dropped
------ -------------- ------------ ------------ ------------ ------------
  0         1/ 2        116349769        24150    116349796            0
====== ============== ============ ============ ============ ============

====== ============== ============ ============ ============ ============
                   **KNI example application statistics**
====== ============== ============ ============ ============ ============
 Port   Lcore(RX/TX)   rx_packets   rx_dropped   tx_packets   tx_dropped
------ -------------- ------------ ------------ ------------ ------------
  0         1/ 2        117954991        24530    117954361            0
====== ============== ============ ============ ============ ============
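For reference, here is a rough sketch of the kernel-side receive loop and the
two constants (KNI_RX_LOOP_NUM, KNI_KTHREAD_RESCHEDULE_INTERVAL) mentioned in
the quoted thread below, as I understand it from kni_misc.c in the kni kernel
module sources. Names and default values are from my tree and may differ
between DPDK releases; this is illustrative, not the exact code:

    /* Sketch of the single "kni_single" kthread loop from kni_misc.c:
     * it drains every registered KNI device KNI_RX_LOOP_NUM times, then
     * yields briefly so the core is not held in a pure busy-wait. */
    #include <linux/kthread.h>
    #include <linux/jiffies.h>
    #include <linux/list.h>

    #include "kni_dev.h" /* module-private: struct kni_dev, kni_net_rx() */

    #define KNI_RX_LOOP_NUM                 1000 /* increased to 10000 below */
    #define KNI_KTHREAD_RESCHEDULE_INTERVAL 5    /* microseconds */

    static int
    kni_thread_single(void *data)
    {
            struct kni_dev *dev;
            int j;

            while (!kthread_should_stop()) {
                    for (j = 0; j < KNI_RX_LOOP_NUM; j++) {
                            list_for_each_entry(dev, &kni_list_head, list) {
                                    kni_net_rx(dev);        /* fifo -> skb -> stack */
                                    kni_net_poll_resp(dev); /* answer app requests */
                            }
                    }
                    /* sleep a few microseconds between rounds; a larger loop
                     * count means more work per wakeup, less sleeping */
                    schedule_timeout_interruptible(
                            usecs_to_jiffies(KNI_KTHREAD_RESCHEDULE_INTERVAL));
            }

            return 0;
    }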
On Tue, Jan 17, 2017 at 7:57 PM, Ferruh Yigit <ferruh.yi...@intel.com> wrote:

> On 1/17/2017 5:46 PM, Ferruh Yigit wrote:
> > On 1/16/2017 2:58 PM, Shirley Avishour wrote:
> >> I am currently using the kernel interface for recording the received
> >> traffic, by duplicating the received packets and sending a copy to the
> >> kni (and performing pcap_open_live on the kni).
> >> My goal rate is around 500Mbps. Is it possible to achieve it via the kni?
> >
> > According to a quick experiment:
> > - with the kni module in lo_mode_fifo_skb (which sends all received
> >   packets back to tx, but allocates and copies data into an skb, to be
> >   more realistic)
> > - a single kernel thread
> > - the kernel thread bound to a core
> > - using the kni sample app
> > - with small packets
> >
> > The best numbers are achieved when the rx, tx and kernel cores are on
> > the same socket as the NIC: ~1.7Mpps (million packets per second).
> >
> > When KNI_RX_LOOP_NUM is increased to 10000, it becomes ~1.9Mpps.
>
> And again, a very quick test between two KNI ports, with the kni sample
> app, using iperf default values, gives ~3 Gbits/sec.
>
> >>
> >> On Mon, Jan 16, 2017 at 4:55 PM, Ferruh Yigit <ferruh.yi...@intel.com> wrote:
> >>
> >>     On 1/16/2017 2:47 PM, Shirley Avishour wrote:
> >>     > Hi,
> >>     > As I wrote, the kernel thread runs on a dedicated lcore.
> >>     > Running top while my application is running, I see kni_single and
> >>     > the cpu usage is really low...
> >>     > Is there any rate limitation for transmitting to the kernel
> >>     > interface (since packets are being copied in the kernel)?
> >>
> >>     Yes, kind of: the kernel thread sleeps periodically, with a value
> >>     defined by KNI_KTHREAD_RESCHEDULE_INTERVAL. You can try tweaking
> >>     this value if you want the thread to do more work and sleep less :)
> >>
> >>     Also, KNI_RX_LOOP_NUM can be updated for the same purpose.
> >>
> >>     > On Mon, Jan 16, 2017 at 4:42 PM, Ferruh Yigit <ferruh.yi...@intel.com> wrote:
> >>     >
> >>     >     On 1/16/2017 12:20 PM, Shirley Avishour wrote:
> >>     >     > Hi,
> >>     >     > I have an application over dpdk which consists of the
> >>     >     > following threads, each running on a separate core:
> >>     >     > 1) an rx thread which listens in poll mode for traffic
> >>     >     > 2) 2 packet processing threads (for load balancing)
> >>     >     > 3) a kni thread (which also runs on a separate core)
> >>     >
> >>     >     This is a kernel thread, right? Is it bound to any specific
> >>     >     core? Is it possible that this thread shares the core with
> >>     >     the 2nd processing thread when enabled?
> >>     >
> >>     >     > The rx thread receives packets, clones them and transmits
> >>     >     > a copy to the kni, and the other packet is sent to the
> >>     >     > packet processing unit (hashing over 2 threads).
> >>     >     > The receive traffic rate is 100Mbps.
> >>     >     > When working with a single packet processing thread I am
> >>     >     > able to get all the 100Mbps towards the kni with no drops,
> >>     >     > but when I activate my application with 2 packet processing
> >>     >     > threads I start facing drops towards the kni.
> >>     >     > The way I see it, the only difference now is that I have
> >>     >     > another thread which handles an mbuf and frees it once
> >>     >     > processing is completed.
> >>     >     > Can anyone assist with this case please?
> >>     >     >
> >>     >     > Thanks!
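For completeness, here is a minimal sketch of the clone-to-kni rx pattern
described earlier in this thread: receive a burst, push a clone of each packet
to the kni device for capture, and hand the original to the processing
threads. kni, clone_pool, proc_ring and BURST_SZ are placeholder names, not
the real application's code, and setup (ports, pools, rings) is assumed to
happen elsewhere:

    #include <rte_ethdev.h>
    #include <rte_kni.h>
    #include <rte_mbuf.h>
    #include <rte_ring.h>

    #define BURST_SZ 32

    static void
    rx_loop(uint8_t port, struct rte_kni *kni,
            struct rte_mempool *clone_pool, struct rte_ring *proc_ring)
    {
            struct rte_mbuf *pkts[BURST_SZ], *copies[BURST_SZ];
            unsigned i, n, nb_rx, sent;

            for (;;) {
                    nb_rx = rte_eth_rx_burst(port, 0, pkts, BURST_SZ);

                    /* Clone each packet; the clone shares the data buffer
                     * and the kernel side later copies it into an skb.
                     * A clone can fail if the pool runs empty. */
                    for (i = 0, n = 0; i < nb_rx; i++) {
                            copies[n] = rte_pktmbuf_clone(pkts[i], clone_pool);
                            if (copies[n] != NULL)
                                    n++;
                    }

                    /* Push the clones to the kernel interface; whatever the
                     * fifo cannot take shows up as drops, so free it here. */
                    sent = rte_kni_tx_burst(kni, copies, n);
                    for (i = sent; i < n; i++)
                            rte_pktmbuf_free(copies[i]);

                    /* Originals go to the packet processing threads. */
                    for (i = 0; i < nb_rx; i++)
                            if (rte_ring_enqueue(proc_ring, pkts[i]) != 0)
                                    rte_pktmbuf_free(pkts[i]);

                    /* Service ifconfig-style requests from the kernel side. */
                    rte_kni_handle_request(kni);
            }
    }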