I've observed the CPU stats with the top command and found that ksoftirqd is busy processing software interrupts. These seem to be raised on behalf of the dpdk-kni application: packets handed over to KNI are processed by the KNI kernel module and the kernel network stack.
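For context, here is a rough sketch (not my exact application) of the rx-to-KNI loop I'm describing; port_id, kni and PKT_BURST are placeholders for illustration:

/* Rough sketch, assuming a single rx queue: forward packets from a DPDK
 * port into a KNI device and count what the KNI rx_q cannot absorb. */

#include <rte_ethdev.h>
#include <rte_kni.h>
#include <rte_mbuf.h>

#define PKT_BURST 32

static uint64_t kni_drops;  /* packets rejected because rx_q was full */

static void
rx_to_kni_loop(uint8_t port_id, struct rte_kni *kni)
{
        struct rte_mbuf *pkts[PKT_BURST];
        uint16_t nb_rx;
        unsigned nb_tx, i;

        for (;;) {
                /* poll-mode receive: no hardware interrupt here */
                nb_rx = rte_eth_rx_burst(port_id, 0, pkts, PKT_BURST);
                if (nb_rx == 0)
                        continue;

                /* enqueue into the KNI rx_q FIFO; the kni kernel thread
                 * dequeues, builds an skb and hands it to netif_rx(),
                 * and that softirq work is what top shows as ksoftirqd */
                nb_tx = rte_kni_tx_burst(kni, pkts, nb_rx);

                /* if the kernel side drains slower than we fill, the FIFO
                 * fills up and the remainder must be dropped here -- this
                 * is where the ~50% loss I mention below shows up */
                if (nb_tx < nb_rx) {
                        kni_drops += nb_rx - nb_tx;
                        for (i = nb_tx; i < nb_rx; i++)
                                rte_pktmbuf_free(pkts[i]);
                }

                /* service MTU/link-state requests from the kernel side */
                rte_kni_handle_request(kni);
        }
}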
My observations:

1. The dpdk-kni application drops about half of the rx packets (i.e., it fails to deliver them to an skb). The rx_q on the KNI side appears to be full; processing in KNI and the IP stack is much slower than receiving packets from the device via DPDK.

2. Bonding multiple KNI interfaces to spread the load across multiple kernel threads does not reduce that processing time. In addition, packets are transmitted out of order across the multiple KNIs, which requires reordering at the communication endpoint.

3. NAT with the native kernel performs twice as well as KNI + native kernel, even though the latter does not incur hardware interrupts.

My experiment was done in a limited environment, so it does not reflect the general case. Still, my wish for a simple NAT solution does not seem feasible with KNI, so I should change my approach from KNI to a pure DPDK application.

On Fri, Sep 18, 2015 at 8:53 PM, Moon-Sang Lee <sang0627 at gmail.com> wrote:

>
> I'm a newbie and testing DPDK KNI with a 1G Intel NIC.
>
> According to my understanding of the DPDK documents,
> KNI should not raise interrupts when sending/receiving packets.
>
> But when I transmit a bunch of packets to my KNI ports,
> 'top' shows ksoftirqd with 50% CPU load.
>
> Would you give me some comments about this situation?
>
>
>
> --
> Moon-Sang Lee, SW Engineer
> Email: sang0627 at gmail.com
> Wisdom begins in wonder. *Socrates*
>

--
Moon-Sang Lee, SW Engineer
Email: sang0627 at gmail.com

Wisdom begins in wonder. *Socrates*