Manual pinning of RX queues to PMD threads is required for performance
optimisation. It lets the user reach maximum performance with fewer
CPUs, because only the user knows which ports are heavily loaded and
which are not.
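As an illustration only, such pinning could be driven through the appctl
commands added later in this series (patches #5 and #6). The command names
come from the patch list below; the argument syntax shown here is an
assumption, not the final interface:

    # Hypothetical syntax: pin rx queue 1 of port dpdk0 to the PMD thread
    # running on core 3, then trigger a reconfiguration to apply it.
    ovs-appctl dpif-netdev/pmd-rxq-set dpdk0 1 3
    ovs-appctl dpif-netdev/pmd-reconfigure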

To give the user full control over the ports, TX queue manipulation
mechanisms are also required, for example to avoid the issue described
in 'dpif-netdev: XPS (Transmit Packet Steering) implementation.', which
becomes worse with the ability to pin queues manually.
( http://openvswitch.org/pipermail/dev/2016-March/067152.html )

First 3 patches: prerequisites for the XPS implementation.
Patch #4: XPS implementation.
Patches #5 and #6: manual pinning implementation.

Version 2:
        * Rebased on current master.
        * Fixed initialization of newly allocated memory in
          'port_reconfigure()'.

Ilya Maximets (6):
  netdev-dpdk: Use instant sending instead of queueing of packets.
  dpif-netdev: Allow configuration of number of tx queues.
  netdev-dpdk: Mandatory locking of TX queues.
  dpif-netdev: XPS (Transmit Packet Steering) implementation.
  dpif-netdev: Add dpif-netdev/pmd-reconfigure appctl command.
  dpif-netdev: Add dpif-netdev/pmd-rxq-set appctl command.

 INSTALL.DPDK.md            |  44 +++--
 NEWS                       |   4 +
 lib/dpif-netdev.c          | 393 ++++++++++++++++++++++++++++++++++-----------
 lib/netdev-bsd.c           |   1 -
 lib/netdev-dpdk.c          | 198 ++++++-----------------
 lib/netdev-dummy.c         |   1 -
 lib/netdev-linux.c         |   1 -
 lib/netdev-provider.h      |  18 +--
 lib/netdev-vport.c         |   1 -
 lib/netdev.c               |  30 ----
 lib/netdev.h               |   1 -
 vswitchd/ovs-vswitchd.8.in |  10 ++
 12 files changed, 400 insertions(+), 302 deletions(-)

-- 
2.5.0
