OK. Finally I got it.

The rx queues are not well distributed between the pmd
threads for the dpdk0 port.

> # ./ovs/utilities/ovs-appctl dpif-netdev/pmd-rxq-show
> pmd thread numa_id 0 core_id 13:
>         port: vhost-user1       queue-id: 1
>         port: dpdk0     queue-id: 3
> pmd thread numa_id 0 core_id 14:
>         port: vhost-user1       queue-id: 2
> pmd thread numa_id 0 core_id 16:
>         port: dpdk0     queue-id: 0
> pmd thread numa_id 0 core_id 17:
>         port: dpdk0     queue-id: 1
> pmd thread numa_id 0 core_id 12:
>         port: vhost-user1       queue-id: 0
>         port: dpdk0     queue-id: 2
> pmd thread numa_id 0 core_id 15:
>         port: vhost-user1       queue-id: 3
> ------------------------------------------------------

As we can see above, the dpdk0 port is polled by the threads on
cores 12, 13, 16 and 17.
By design of dpif-netdev, only one TX queue-id is assigned to each
pmd thread. These queue-ids are sequential, similar to the core-ids,
and a thread will send packets to the queue with exactly this
queue-id regardless of the port.
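
To make the design concrete, here is a minimal sketch in C (with
hypothetical names, not the actual dpif-netdev code): each pmd
thread carries one static TX queue-id, handed out sequentially at
thread creation, and uses that same id when sending on any port.

struct pmd_thread {
    unsigned core_id;   /* CPU core this thread is pinned to. */
    int tx_qid;         /* The single TX queue-id used for every port. */
};

/* Queue-ids follow thread creation order, i.e. they are sequential
 * just like the core-ids: core 12 -> 0, core 13 -> 1, ..., core 17 -> 5. */
static void
assign_tx_qids(struct pmd_thread *pmds, int n_pmds)
{
    for (int i = 0; i < n_pmds; i++) {
        pmds[i].tx_qid = i;
    }
}

/* On transmit, a thread uses its own tx_qid no matter which port
 * the packet leaves from. */
static int
pick_tx_queue(const struct pmd_thread *pmd)
{
    return pmd->tx_qid;
}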

In our case:
pmd thread on core 12 will send packets to tx queue 0
pmd thread on core 13 will send packets to tx queue 1
...
pmd thread on core 17 will send packets to tx queue 5

So, for the dpdk0 port:
core 12 --> TX queue-id 0
core 13 --> TX queue-id 1
core 16 --> TX queue-id 4
core 17 --> TX queue-id 5

After truncation in netdev-dpdk (queue-id modulo the number of
configured TX queues, 4 here):
core 12 --> TX queue-id 0 % 4 == 0
core 13 --> TX queue-id 1 % 4 == 1
core 16 --> TX queue-id 4 % 4 == 0
core 17 --> TX queue-id 5 % 4 == 1
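
The collision can be checked with a tiny standalone program that
just repeats the arithmetic above (4 is the number of TX queues
configured on dpdk0):

#include <stdio.h>

int
main(void)
{
    const int n_txq = 4;                       /* TX queues on dpdk0. */
    const int cores[]   = { 12, 13, 16, 17 };  /* cores polling dpdk0. */
    const int tx_qids[] = {  0,  1,  4,  5 };  /* per-thread TX queue-ids. */

    for (int i = 0; i < 4; i++) {
        printf("core %d --> TX queue-id %d %% %d == %d\n",
               cores[i], tx_qids[i], n_txq, tx_qids[i] % n_txq);
    }
    return 0;   /* Only queues 0 and 1 are ever used. */
}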

As a result, only 2 of the 4 TX queues are used.
This is not good behaviour. Thanks for reporting.
I'll try to fix the rx queue distribution in dpif-netdev.
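
Just to illustrate the direction (this is only a sketch, not the
actual patch): if a port's rx queues were handed out round-robin
over the pmd threads taken in core-id order, the four dpdk0 queues
would land on cores 12-15, whose TX queue-ids 0-3 do not collide
after the modulo:

#include <stdio.h>

int
main(void)
{
    const int cores[] = { 12, 13, 14, 15, 16, 17 };  /* pmd threads */
    const int n_pmds = 6;
    const int n_rxq = 4;                             /* rx queues on dpdk0 */

    /* Round-robin the port's rx queues over the threads in core-id order. */
    for (int q = 0; q < n_rxq; q++) {
        printf("dpdk0 rxq %d --> pmd on core %d\n", q, cores[q % n_pmds]);
    }
    return 0;
}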

Best regards, Ilya Maximets.

P.S. There will be no packet loss at low speeds, only a 2x
     performance drop.