Tim,

Thanks for sharing this. If nothing else, I wanted to at least provide some feedback on the parts that look useful to me for my applications/product. Bits that make me interested in the release:
> 2.0 (Q1 2015) DPDK Features:
>
> Bifurcated Driver: With the Bifurcated Driver, the kernel will retain direct control of the NIC, and will assign specific queue pairs to DPDK. Configuration of the NIC is controlled by the kernel via ethtool.

Having NIC configuration, port stats, etc. available via the normal Linux tools is very helpful - particularly on new products just getting started with DPDK.

> Packet Reordering: Assign a sequence number to packets on Rx, and then provide the ability to reorder on Tx to preserve the original order.

This could be extremely useful, but it depends on where it goes. The current design being discussed seems fundamentally flawed to me. See the thread on the RFC for details.

> Packet Distributor (phase 2): Implement the following enhancements to the Packet Distributor that was originally delivered in the DPDK 1.7 release: performance improvements; the ability for packets from a flow to be processed by multiple worker cores in parallel and then reordered on Tx using the Packet Reordering feature; the ability to have multiple Distributors which share Worker cores.

TBD on this for me. The 1.0 version of our product is based on DPDK 1.6 and I haven't had a chance to look at what is happening with the Packet Distributor yet. An area of potential interest at least.

> Cuckoo Hash: A new hash algorithm was implemented as part of the Cuckoo Switch project (see http://www.cs.cmu.edu/~dongz/papers/cuckooswitch.pdf), and shows some promising performance results. This needs to be modified to make it more generic, and then incorporated into DPDK.

More performance == creamy goodness, especially if it is in the plumbing and doesn't require significant app changes.

> Interrupt mode for PMD: Allow DPDK process to transition to interrupt mode when load is low so that other processes can run, or else power can be saved. This will increase latency/jitter.

Yes!
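As a concrete illustration of the Rx-sequence / Tx-reorder idea, here is a minimal sketch: stamp each packet with a sequence number on Rx, then buffer out-of-order packets on Tx and release them in the original order. All names here (`rx_stamp`, `tx_reorder`, `reorder_buf`) are hypothetical and not the API under discussion on the list.

```c
#include <stddef.h>

#define WINDOW 8            /* reorder window size; power of two */

struct pkt {
    unsigned seq;           /* sequence number stamped on Rx */
    /* payload omitted */
};

struct reorder_buf {
    struct pkt *slots[WINDOW];   /* indexed by seq % WINDOW */
    unsigned next_tx;            /* next sequence number to release */
};

static unsigned rx_seq;          /* global Rx sequence counter */

/* Stamp a packet with the next Rx sequence number. */
static void rx_stamp(struct pkt *p)
{
    p->seq = rx_seq++;
}

/* Insert a possibly out-of-order packet, then drain every packet
 * that is now in order into out[]; returns the number released. */
static int tx_reorder(struct reorder_buf *b, struct pkt *p,
                      struct pkt **out, int max_out)
{
    int n = 0;

    b->slots[p->seq % WINDOW] = p;
    while (n < max_out) {
        struct pkt **slot = &b->slots[b->next_tx % WINDOW];
        if (*slot == NULL || (*slot)->seq != b->next_tx)
            break;
        out[n++] = *slot;
        *slot = NULL;
        b->next_tx++;
    }
    return n;
}
```

If workers complete out of order - say packets 1, 0, 2 reach Tx in that order - the buffer holds packet 1 until packet 0 arrives, then releases both, preserving the original Rx order.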
I don't care about power savings, but I do care about giving a good product impression in the lab during evals without having to sacrifice overall system performance under load. Hybrid drivers that use interrupts when load is low and poll mode when loaded are ideal, IMO. It seems an odd thing, but during lab testing it is normal for customers to fire the box up and just start running pings or some other low-volume traffic through it. If the PMDs are configured to batch in sizes optimal for best performance under load, the system can look *really* bad in these initial tests. We go through a fair bit of gymnastics right now to work around this without just giving up on batching in the PMDs.

> DPDK Headroom: Provide a mechanism to indicate how much headroom (spare capacity) exists in a DPDK process.

Very helpful in the field. Anything that helps customers understand how much headroom is left on their box before they need to take action is a huge win. CPU utilization is a bad indicator, especially with a PMD architecture.

Hope this type of feedback is helpful.

Regards,
Jay