> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of JP M.
> Sent: Sunday, March 1, 2015 6:40 AM
> To: dev at dpdk.org
> Subject: [dpdk-dev] KNI with multiple kthreads per port
>
> Howdy! First time posting; please be gentle. :-)
>
> Environment:
> * DPDK 1.8.0 release
> * Linux kernel 3.0.3x-ish
> * 32-bit (yes, KNI works fine, after a few tweaks to the hugepage init strategy)

Interesting! How did you get it to work?
> I'm trying to use the KNI example app with a configuration where multiple
> kthreads are created for a physical port. Per the user guide and code, the first
> such kthread is the "master", and the only one configurable; I'll refer to the
> additional kthread(s) as "slaves", although their relationship to the master
> kthread isn't discussed anywhere that I've looked thus far.
>
> # insmod rte_kni.ko kthread_mode=multiple
> # kni [....] --config="(0,0,1,2,3)"
> # ifconfig vEth0_0 10.0.0.1 netmask 255.255.255.0
>
> From the above: PMD-bound physical port 0. Rx/Tx on cores 0 and 1,
> respectively. Master kthread on core 2, one slave kthread on core 3. Upon
> startup, KNI devices vEth0_0 (master) and vEth0_1 (slave) are created.
> After ifconfig, vEth0_0 works fine; by design, vEth0_1 cannot be configured.

What do you mean by "vEth0_1 cannot be configured"?

> The problem I'm encountering is that the subset of packets hitting vEth0_1 is
> being dropped... somewhere. They're definitely getting as far as the call to
> netif_rx(skb). I'll try on a newer system for comparison. But before I go too
> much further, I'd like to establish the correct set-up and expectations.

You can check the receiving side in the KNI kernel receive function; see the sketch at the end of this mail.

> Should I be bonding vEth0_0 and vEth0_1? Because I tried doing so (via sysfs);
> however, attempts to add either as slaves to bond0 were ignored.

What do you mean by bonding here? Basically, KNI has no relationship to bonding.

> Any ideas appreciated. (Though it may end up being a moot point, with the
> other work this past week on KNI performance.)
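For reference, here is a rough debug sketch of the receive side I mentioned. It assumes the skb hand-off in the KNI kernel module's rx path (kni_net.c), where the skb is passed to the stack with netif_rx(); the surrounding variable and field names (dev, kni->stats, len) follow the 1.8.0 sources but may differ in your tree, so treat it as illustrative only, not as the exact code:

        /* Inside the KNI rx function, after the skb has been filled in
         * (sketch only -- adapt to the actual code in kni_net.c). */
        skb->dev = dev;
        skb->protocol = eth_type_trans(skb, dev);
        skb->ip_summed = CHECKSUM_UNNECESSARY;

        /* netif_rx() can return NET_RX_DROP, and even on NET_RX_SUCCESS the
         * stack may drop the packet later (for example when the interface is
         * not UP); such drops show up in dev->stats.rx_dropped or in
         * /proc/net/softnet_stat rather than at this call site. */
        if (netif_rx(skb) == NET_RX_DROP)
                printk(KERN_DEBUG "%s: netif_rx dropped a packet\n",
                       dev->name);

        /* update the per-device counters as the module already does */
        kni->stats.rx_bytes += len;
        kni->stats.rx_packets++;

From user space it is also worth checking whether vEth0_1 is administratively UP and whether its rx_dropped counter is increasing, e.g. with "ip -s link show vEth0_1"; that would tell you whether the packets die in the stack after netif_rx() rather than in the KNI module itself.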