Hi,

I am using the 32-bit virtio PMD from dpdk-1.6.0r1 and seeing a basic packet I/O issue under some VM configurations when testing with the l2fwd application. The issue is that Tx on the virtio NIC is not working: packets enqueued by the virtio PMD on the Tx queue are never dequeued by the vhost-net backend. I confirmed this by checking the RX counter of the corresponding vnetX interface on the KVM host (e.g. /sys/class/net/vnetX/statistics/rx_packets), which stays at zero. As a result, the Tx queue becomes full after the first 128 packets are enqueued and no more packets can be sent (each packet uses 2 descriptors, so the 256-entry Tx queue holds at most 128 packets).

The issue is not seen with the 64-bit l2fwd application, which uses the 64-bit virtio PMD. With the 32-bit l2fwd application I see the issue for some combinations of cores and RAM allocated to the VM, but not for others:

Failure cases:
* 8 cores and 16G/12G RAM allocated to the VM

Some of the working cases:
* 8 cores and 8G/9G/10G/11G/13G RAM allocated to the VM
* 2 cores and any RAM allocation, including 16G and 12G

One more observation: by default I reserve 128 2MB hugepages for DPDK. After hitting the failure scenario above, if I just kill l2fwd and reduce the number of hugepages to 64 with

    echo 64 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

the same l2fwd application starts working. I believe the issue has something to do with the physical address of the memzone in which the virtqueue is allocated each time: my guess is that with more RAM or more hugepages the virtqueue memzone can land above the 4GB physical boundary, which the 32-bit build may not be handling correctly.

I am using igb_uio.ko built from the x86_64-default-linuxapp-gcc config and all other DPDK libraries built from i686-default-linuxapp-gcc, because my kernel is 64-bit and my application is 32-bit.

Details of my setup:
Linux kernel : 2.6.32-220.el6.x86_64
DPDK version : dpdk-1.6.0r1
Hugepages : 128 2MB hugepages
DPDK binaries used:
* 64-bit igb_uio.ko
* 32-bit l2fwd application

I'd appreciate it if you could give me some pointers on debugging this issue.

Thanks,
Vijay
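
P.S. If I am reading the dpdk-1.6.0 virtio PMD sources correctly (please correct me if I am paraphrasing this wrong), the host learns the virtqueue's location from a page frame number derived from the memzone's physical address, roughly:

    /* paraphrased from the virtio queue setup path in
     * lib/librte_pmd_virtio; the exact code may differ */
    VIRTIO_WRITE_REG_4(hw, VIRTIO_PCI_QUEUE_PFN,
            mz->phys_addr >> VIRTIO_PCI_QUEUE_ADDR_SHIFT);

So if mz->phys_addr gets truncated to 32 bits somewhere in the i686 build, the PFN written to the device would point at the wrong page and vhost-net would never see the descriptors, which would match what I am observing.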
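
To check whether the hugepage segments (and therefore the virtqueue memzone) really land above 4GB in the failing configurations, I am planning to add something like the following right after rte_eal_init() in l2fwd. This is only a minimal sketch, assuming the dpdk-1.6.0 rte_memory.h API (rte_eal_get_physmem_layout(), RTE_MAX_MEMSEG, struct rte_memseg):

    #include <stdio.h>
    #include <rte_memory.h>

    /* Print each hugepage memory segment with its physical address range,
     * flagging segments that extend past the 4GB boundary. */
    static void
    dump_physmem_layout(void)
    {
            const struct rte_memseg *ms = rte_eal_get_physmem_layout();
            unsigned i;

            for (i = 0; i < RTE_MAX_MEMSEG; i++) {
                    if (ms[i].addr == NULL)
                            break;
                    printf("memseg %u: phys=0x%llx len=0x%llx%s\n", i,
                           (unsigned long long)ms[i].phys_addr,
                           (unsigned long long)ms[i].len,
                           ((unsigned long long)ms[i].phys_addr + ms[i].len)
                                   > 0xffffffffULL ? "  <-- above 4GB" : "");
            }
    }

If the failing configurations show segments above 4GB while the working ones do not, that would at least confirm the theory.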
I am using 32bit VIRTIO PMD from dpdk-1.6.0r1 and seeing a basic packet I/O issue under some VM configurations when testing with l2fwd application. The issue is that Tx on virtio NIC is not working. Packets enqueued by virtio pmd on Tx queue are not dequeued by the backend vhost-net for some reason. I confirmed this after seeing that the RX counter on the corresponding vnetx interface on the KVM host is zero. As a result, after enqueuing the first 128(half of 256 total size) packets the Tx queue becomes full and no more packets can be enqueued. Each packet using 2 descriptors in the Tx queue allows 128 packets to be enqueued. The issue is not seen when using 64bit l2fwd application that uses 64 bit virtio pmd. With 32bit l2fwd application I see this issue for some combination of core and RAM allocated to the VM, but works in other cases as below: Failure cases: 8 cores and 16G/12G RAM allocated to VM Some of the Working cases: 8 cores and 8G/9G/10G/11G/13G allocated to VM 2 cores and any RAM allocation including 16G&12G One more observation is: By default I reserve 128 2MB hugepages for DPDK. After seeing the above failure scenario, if I just kill l2fwd and reduce the number of hugepages to 64 with the command, echo 64 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages the same l2fwd app starts working. I believe the issue has something to do with the physical memzone virtqueue is allocated each time. I am using igb_uio.ko built from x86_64-default-linuxapp-gcc config and all other dpdk libs built from i686-default-linuxapp-gcc. This is because my kernel is 64bit and my application is 32 bit. Below are the details of my setup: Linux kernel : 2.6.32-220.el6.x86_64 DPDK version : dpdk-1.6.0r1 Hugepages : 128 2MB hugepages DPDK Binaries used: * 64bit igb_uio.ko * 32bit l2fwd application I'd appreciate if you could give me some pointers on debugging the issue ? Thanks, Vijay