https://bugs.dpdk.org/show_bug.cgi?id=806
Bug ID: 806
Summary: Throughput with dpdk + vhost-user + vIOMMU is lower with its first connection
Product: DPDK
Version: 20.11
Hardware: x86
OS: Linux
Status: UNCONFIRMED
Severity: major
Priority: Normal
Component: core
Assignee: dev@dpdk.org
Reporter: pezh...@redhat.com
Target Milestone: ---

Versions:
OS: RHEL8
DPDK: We tested DPDK 19.11 and 20.11; both hit this issue.
Kernel: 4.18
Qemu: 6.0

Steps:

1. On the host, start testpmd with IOMMU support enabled:

   /home/dpdk-stable/build/app/testpmd \
       -l 2,4,6,8,10,12,14,16,18 \
       --socket-mem 1024,1024 \
       -n 4 \
       --vdev 'net_vhost0,iface=/tmp/vhost-user1,queues=2,client=1,iommu-support=1' \
       --vdev 'net_vhost1,iface=/tmp/vhost-user2,queues=2,client=1,iommu-support=1' \
       -b 0000:3b:00.0 -b 0000:3b:00.1 \
       -d /home/dpdk-stable/build/lib/librte_pmd_vhost.so \
       -- \
       --portmask=f \
       -i \
       --rxd=512 --txd=512 \
       --rxq=2 --txq=2 \
       --nb-cores=8 \
       --forward-mode=io

   testpmd> set portlist 0,2,1,3
   testpmd> start

2. Boot the VM with vhost-user interfaces and vIOMMU enabled:

   <interface type="vhostuser">
     <mac address="88:66:da:5f:dd:02" />
     <source mode="server" path="/tmp/vhost-user1" type="unix" />
     <model type="virtio" />
     <driver ats="on" iommu="on" name="vhost" queues="2" rx_queue_size="1024" />
     <address bus="0x6" domain="0x0000" function="0x0" slot="0x00" type="pci" />
   </interface>
   <interface type="vhostuser">
     <mac address="88:66:da:5f:dd:03" />
     <source mode="server" path="/tmp/vhost-user2" type="unix" />
     <model type="virtio" />
     <driver ats="on" iommu="on" name="vhost" queues="2" rx_queue_size="1024" />
     <address bus="0x7" domain="0x0000" function="0x0" slot="0x00" type="pci" />
   </interface>

3. In the VM, start testpmd to receive and send packets.

4. On another server, start MoonGen as the packet generator.

5. Check the throughput; it is much lower than expected.

Additional info:

1. With the vIOMMU disabled, we get the expected throughput.

--
You are receiving this mail because:
You are the assignee for the bug.
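
Step 3 of the report does not give the exact guest-side command. A minimal sketch of what the in-VM testpmd invocation could look like, assuming the two virtio-net devices are bound to vfio-pci and hugepages are already set up in the guest; the PCI addresses follow from the libvirt XML above (bus 0x6 and 0x7, slot 0x00), while the core list and forward mode are illustrative assumptions, not taken from the report:

   # Bind the two virtio-net devices to vfio-pci inside the VM.
   # 0000:06:00.0 / 0000:07:00.0 correspond to the <address> elements
   # in the libvirt XML above; adjust if the guest enumerates differently.
   dpdk-devbind.py --bind=vfio-pci 0000:06:00.0 0000:07:00.0

   # Forward traffic between the two virtio ports, mirroring the host
   # side's queue and descriptor configuration (2 queues, 512 descriptors).
   # Core list (-l) and forward mode are assumptions for illustration.
   testpmd -l 1,2,3,4,5 -n 4 -- \
       -i \
       --rxq=2 --txq=2 \
       --rxd=512 --txd=512 \
       --nb-cores=4 \
       --forward-mode=mac

Inside the interactive prompt, "start" then begins forwarding, after which MoonGen traffic from the external server should flow host testpmd -> guest testpmd -> host testpmd.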