Additional configuration is required if you want to run ovs-vswitchd with a DPDK backend inside a QEMU virtual machine. This is because, by default, the virtio NIC provided to the guest does not support the multiple TX queues required by ovs-vswitchd/dpdk. This commit updates INSTALL.DPDK.md with guidelines on how to enable multiple TX queues via the QEMU command line and the Libvirt config file.
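For illustration, with plain QEMU the multiqueue knobs can be set on the command line roughly as follows (a minimal sketch, not part of the patch: the netdev id 'net0', the tap backend, and N=4 queues are illustrative assumptions; 'vectors' follows the usual 2*N+2 rule for virtio-net):

    # Sketch: enable 4 queue pairs on a vhost-backed tap device (vectors = 2*4+2)
    qemu-system-x86_64 ... \
        -netdev tap,id=net0,vhost=on,queues=4 \
        -device virtio-net-pci,netdev=net0,mq=on,vectors=10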
Signed-off-by: Oleg Strikov <oleg.stri...@canonical.com>
---
 INSTALL.DPDK.md | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/INSTALL.DPDK.md b/INSTALL.DPDK.md
index a05367a..068d631 100644
--- a/INSTALL.DPDK.md
+++ b/INSTALL.DPDK.md
@@ -560,6 +560,24 @@ steps in the previous section before proceeding with the following steps:
 
 5. Use virt-manager to launch the VM
 
+Running ovs-vswitchd with DPDK backend inside a VM
+--------------------------------------------------
+
+Please note that additional configuration is required if you want to run
+ovs-vswitchd with DPDK backend inside a QEMU virtual machine. Ovs-vswitchd
+creates a separate DPDK TX queue for each CPU core available. This operation
+fails inside a QEMU virtual machine because, by default, the VirtIO NIC
+provided to the guest is configured to support only a single TX queue and a
+single RX queue. To change this behavior, you need to turn on the 'mq'
+(multiqueue) property of all virtio-net-pci devices emulated by QEMU and
+used by DPDK. You may do this manually (by changing the QEMU command line)
+or, if you use Libvirt, by adding the following string:
+
+`<driver name='vhost' queues='N'/>`
+
+to <interface> sections of all network devices used by DPDK. Parameter 'N'
+determines how many queues can be used by the guest.
+
 Restrictions:
 -------------
-- 
2.1.4
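For reference, a complete Libvirt <interface> section with multiqueue enabled might look like the following (a sketch only, not part of the patch above; the bridge name 'br0' and queues='4' are illustrative assumptions):

    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
      <!-- Sketch: 4 queue pairs; 'br0' is a hypothetical bridge name -->
      <driver name='vhost' queues='4'/>
    </interface>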