On 5/21/15 3:49 AM, Ouyang Changchun wrote:
> This patch set supports multiple queues for each virtio device in vhost.
> The vhost-user path is used to enable the multiple queues feature; it is
> not yet ready for vhost-cuse.
Thanks. I tried it and verified that this patch set applies cleanly to
master. Could you also notify the list when the QEMU patch is available?
Thanks again!
>
> One prerequisite for enabling this feature is a QEMU patch plus a fix that
> must be applied on QEMU 2.2/2.3; please refer to this link for the details
> of the patch and the fix:
> http://lists.nongnu.org/archive/html/qemu-devel/2015-04/msg00917.html
>
> A formal v3 patch for the code change and the fix will be sent to the QEMU
> community soon.
>
> Basically, the vhost sample leverages VMDq+RSS in hardware to receive
> packets and distribute them into the different queues of a pool according
> to their 5-tuples.
>
> On the virtio side, it enables multiple queues mode in the vhost/virtio
> layer by setting the queue number to a value larger than 1; a port
> configuration sketch follows.
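>
> The port setup could look roughly like the minimal sketch below. It assumes
> the ETH_MQ_RX_VMDQ_RSS mq_mode that this series enables for ixgbe outside
> SR-IOV; configure_vmdq_rss and the 8-pool/queue counts are illustrative,
> not the sample's actual code:
>
>     #include <string.h>
>     #include <rte_ethdev.h>
>
>     /* Minimal sketch: configure a port for VMDq+RSS so that each pool
>      * (one per virtio device) owns 'rxq' HW queues, with RSS spreading
>      * flows across them by 5-tuple hash. */
>     static int
>     configure_vmdq_rss(uint8_t port, uint16_t rxq)
>     {
>         struct rte_eth_conf conf;
>
>         memset(&conf, 0, sizeof(conf));
>         conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;    /* VMDq + RSS combined */
>         conf.rx_adv_conf.vmdq_rx_conf.nb_queue_pools = ETH_8_POOLS;
>         conf.rx_adv_conf.rss_conf.rss_hf =
>             ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP;  /* hash on 5-tuple */
>
>         /* Total RX queues = pools * queues-per-pool; one TX queue is
>          * enough for this sketch. */
>         return rte_eth_dev_configure(port, 8 * rxq, 1, &conf);
>     }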
>
> The number of HW queues in each pool is required to be exactly the same as
> the queue number in each virtio device; e.g. with rxq = 4 the queue number
> is 4, meaning there are 4 HW queues in each VMDq pool and 4 queues in each
> virtio device/port, and every queue in the pool maps to one queue in the
> virtio device.
>
> =======================================
> |==================|==================|
> |      vport0      |      vport1      |
> | q0 | q1 | q2 | q3| q0 | q1 | q2 | q3|
> | /\ | /\ | /\ | /\| /\ | /\ | /\ | /\|
> | || | || | || | ||| || | || | || | |||
> | || | || | || | ||| || | || | || | |||
> | q0 | q1 | q2 | q3| q0 | q1 | q2 | q3|
> |------------------|------------------|
> |    VMDq pool0    |    VMDq pool1    |
> |==================|==================|
>
> On the RX side, it first polls each queue of the pool, gets packets from
> it, and enqueues them into the corresponding queue of the virtio
> device/port. On the TX side, it dequeues packets from each queue of the
> virtio device/port and sends them to either the physical port or another
> virtio device according to their destination MAC address; see the sketch
> after this paragraph.
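>
> For reference, one per-queue forwarding step could look like the minimal
> sketch below. The q * VIRTIO_QNUM + VIRTIO_RXQ/VIRTIO_TXQ indexing is an
> assumption about how this series extends the single-queue vhost API, and
> forward_queue/hw_q_base are illustrative names, not the sample's actual
> code:
>
>     #include <rte_ethdev.h>
>     #include <rte_mbuf.h>
>     #include <rte_virtio_net.h>
>
>     #define BURST 32
>
>     /* Forward one queue pair: HW pool queue q <-> virtio queue q.
>      * hw_q_base is the first HW queue of this device's VMDq pool. */
>     static void
>     forward_queue(uint8_t port, uint16_t hw_q_base, uint16_t q,
>                   struct virtio_net *dev, struct rte_mempool *mp)
>     {
>         struct rte_mbuf *pkts[BURST];
>         uint16_t i, n, sent;
>
>         /* RX: poll HW queue q of the pool, copy into virtio RX queue q. */
>         n = rte_eth_rx_burst(port, hw_q_base + q, pkts, BURST);
>         if (n) {
>             rte_vhost_enqueue_burst(dev, q * VIRTIO_QNUM + VIRTIO_RXQ,
>                                     pkts, n);
>             for (i = 0; i < n; i++)    /* vhost copies; free host mbufs */
>                 rte_pktmbuf_free(pkts[i]);
>         }
>
>         /* TX: drain virtio TX queue q and send to the wire (the VM2VM
>          * MAC lookup of the real sample is omitted here). */
>         n = rte_vhost_dequeue_burst(dev, q * VIRTIO_QNUM + VIRTIO_TXQ,
>                                     mp, pkts, BURST);
>         sent = rte_eth_tx_burst(port, q, pkts, n);
>         while (sent < n)               /* free anything the NIC refused */
>             rte_pktmbuf_free(pkts[sent++]);
>     }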
>
> A workaround is included here in virtio because the control queue does not
> work with vhost-user multiple queues. Further investigation is needed to
> find the root cause; hopefully it can be addressed in the next version.
>
> Here is some test guidance.
> 1. On the host, first mount hugepages, load the uio and igb_uio modules,
> and bind one NIC to igb_uio; then run the vhost sample. Key steps:
> sudo mount -t hugetlbfs nodev /mnt/huge
> sudo modprobe uio
> sudo insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
>
> $RTE_SDK/tools/dpdk_nic_bind.py --bind igb_uio 0000:08:00.0
> sudo $RTE_SDK/examples/vhost/build/vhost-switch -c 0xf0 -n 4 \
>     --huge-dir /mnt/huge --socket-mem 1024,0 \
>     -- -p 1 --vm2vm 0 --dev-basename usvhost --rxq 2
>
> 2. After step 1, on the host, load kvm and kvm_intel, and use the QEMU
> command line to start one guest:
> modprobe kvm
> modprobe kvm_intel
> sudo mount -t hugetlbfs nodev /dev/hugepages -o pagesize=1G
>
> $QEMU_PATH/qemu-system-x86_64 -enable-kvm -m 4096 \
>     -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
>     -numa node,memdev=mem -mem-prealloc -smp 10 \
>     -cpu core2duo,+sse3,+sse4.1,+sse4.2 -name <vm-name> \
>     -drive file=<img-path>/vm.img \
>     -chardev socket,id=char0,path=<usvhost-path>/usvhost \
>     -netdev type=vhost-user,id=hostnet2,chardev=char0,vhostforce=on,queues=2 \
>     -device virtio-net-pci,mq=on,vectors=6,netdev=hostnet2,id=net2,mac=52:54:00:12:34:56,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
>     -chardev socket,id=char1,path=<usvhost-path>/usvhost \
>     -netdev type=vhost-user,id=hostnet3,chardev=char1,vhostforce=on,queues=2 \
>     -device virtio-net-pci,mq=on,vectors=6,netdev=hostnet3,id=net3,mac=52:54:00:12:34:57,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
>
> 3. Log in to the guest and use testpmd (DPDK-based) to test, using
> multiple virtio queues to RX and TX packets.
> modprobe uio
> insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
> echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> ./tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 00:04.0
>
> $RTE_SDK/$RTE_TARGET/app/testpmd -c 1f -n 4 -- --rxq=2 --txq=2 --nb-cores=4 \
>     --rx-queue-stats-mapping="(0,0,0),(0,1,1),(1,0,2),(1,1,3)" \
>     --tx-queue-stats-mapping="(0,0,0),(0,1,1),(1,0,2),(1,1,3)" \
>     -i --disable-hw-vlan --txqflags 0xf00
>
> 4. Use a packet generator to send packets with destination MAC
> 52:54:00:12:34:57 and VLAN tag 1001, select IPv4 as the protocol, and use
> continuously incrementing IP addresses.
>
> 5. testpmd on the guest can display the packets received/transmitted on
> both queues of each virtio port, as in the example below.
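>
> For example, at the interactive testpmd prompt (standard testpmd commands,
> shown here for illustration):
>
>     testpmd> start
>     testpmd> show port stats all
>
> With the queue-stats mappings given in step 3, the per-queue RX/TX counters
> of both virtio ports appear in the output.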
>
> Changchun Ouyang (6):
> ixgbe: Support VMDq RSS in non-SRIOV environment
> lib_vhost: Support multiple queues in virtio dev
> lib_vhost: Set memory layout for multiple queues mode
> vhost: Add new command line option: rxq
> vhost: Support multiple queues
> virtio: Resolve for control queue
>
>  examples/vhost/main.c                         | 199 +++++++++++++++++---------
>  lib/librte_ether/rte_ethdev.c                 |  40 ++++++
>  lib/librte_pmd_ixgbe/ixgbe_rxtx.c             |  82 +++++++++--
>  lib/librte_pmd_virtio/virtio_ethdev.c         |   6 +
>  lib/librte_vhost/rte_virtio_net.h             |  25 +++-
>  lib/librte_vhost/vhost_cuse/virtio-net-cdev.c |  57 ++++----
>  lib/librte_vhost/vhost_rxtx.c                 |  53 +++----
>  lib/librte_vhost/vhost_user/vhost-net-user.c  |   4 +-
>  lib/librte_vhost/vhost_user/virtio-net-user.c | 156 ++++++++++++++------
>  lib/librte_vhost/vhost_user/virtio-net-user.h |   2 +
>  lib/librte_vhost/virtio-net.c                 | 158 ++++++++++++--------
>  11 files changed, 545 insertions(+), 237 deletions(-)
>