> Hello Jun,
>
> Are the 20G, 17G, and 11G figures a speed? Or the volume of data sent?
> How are you measuring this?
>
> DPDK can be very sensitive to which numa node, or even core a PMD is
> running on. But I don't know what is causing this issue specifically.
>
> Cheers,
> M

Yes, I am referring to speed. I used iperf3 to send TCP traffic from a
virtual machine, which then flows into the physical machine. I captured
packets on the vhost-user-client interface on the physical machine and
compared that with capturing the same traffic after mirroring it to the
veth virtio-user interface. The forwarding performance degradation was
noticeably larger in the mirrored case.
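
The traffic was generated with a plain iperf3 TCP test, roughly like the
following (the address, duration, and stream count here are placeholders
for illustration, not my exact options):

    # on the receiving side
    iperf3 -s

    # inside the VM; 192.0.2.10 stands in for the receiver's address
    iperf3 -c 192.0.2.10 -t 60 -P 4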

The result I expected, namely that capturing on the virtio-user interface
would perform better than capturing directly on the vhost-user-client
interface, was not achieved. I'm not sure whether the issue is related to
the mirroring itself. The PMD and Rx queue distribution is as follows:

[root@compute0-dpdk /]# ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 1 core_id 21:
  isolated : false
  port: tun_port_p0       queue-id:  0 (enabled)   pmd usage:  0 %
  port: veth1             queue-id:  0 (enabled)   pmd usage:  0 %
  port: vh-userclient-08a1f95c-3b  queue-id:  3 (enabled)   pmd usage:  0 %
  port: vh-userclient-08a1f95c-3b  queue-id:  7 (enabled)   pmd usage:  0 %
  port: vh-userclient-08a1f95c-3b  queue-id: 11 (enabled)   pmd usage:  0 %
  port: vh-userclient-08a1f95c-3b  queue-id: 15 (enabled)   pmd usage:  0 %
  port: vh-userclient-e5281ddf-c8  queue-id:  0 (enabled)   pmd usage:  0 %
  overhead:  0 %
pmd thread numa_id 1 core_id 22:
  isolated : false
  port: tun_port_p0       queue-id:  1 (enabled)   pmd usage:  0 %
  port: veth1             queue-id:  1 (enabled)   pmd usage:  0 %
  port: vh-userclient-08a1f95c-3b  queue-id:  0 (enabled)   pmd usage:  0 %
  port: vh-userclient-08a1f95c-3b  queue-id:  4 (enabled)   pmd usage:  0 %
  port: vh-userclient-08a1f95c-3b  queue-id:  8 (enabled)   pmd usage:  0 %
  port: vh-userclient-08a1f95c-3b  queue-id: 12 (enabled)   pmd usage:  0 %
  overhead:  0 %
pmd thread numa_id 1 core_id 23:
  isolated : false
  port: tun_port_p1       queue-id:  0 (enabled)   pmd usage:  0 %
  port: veth1             queue-id:  2 (enabled)   pmd usage:  0 %
  port: vh-userclient-08a1f95c-3b  queue-id:  1 (enabled)   pmd usage:  0 %
  port: vh-userclient-08a1f95c-3b  queue-id:  5 (enabled)   pmd usage:  0 %
  port: vh-userclient-08a1f95c-3b  queue-id:  9 (enabled)   pmd usage:  0 %
  port: vh-userclient-08a1f95c-3b  queue-id: 13 (enabled)   pmd usage:  0 %
  overhead:  0 %
pmd thread numa_id 1 core_id 24:
  isolated : false
  port: tun_port_p1       queue-id:  1 (enabled)   pmd usage:  0 %
  port: veth1             queue-id:  3 (enabled)   pmd usage:  0 %
  port: vh-userclient-08a1f95c-3b  queue-id:  2 (enabled)   pmd usage:  0 %
  port: vh-userclient-08a1f95c-3b  queue-id:  6 (enabled)   pmd usage:  0 %
  port: vh-userclient-08a1f95c-3b  queue-id: 10 (enabled)   pmd usage:  0 %
  port: vh-userclient-08a1f95c-3b  queue-id: 14 (enabled)   pmd usage:  0 %
  overhead:  0 %
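
The pmd usage column above shows 0 % everywhere, so this snapshot was
presumably taken without traffic running. To see where the cycles actually
go during the iperf3 run, I can also collect PMD statistics around the
test, along these lines (standard ovs-appctl commands; output omitted):

    ovs-appctl dpif-netdev/pmd-stats-clear     # reset per-PMD counters
    # ... run the iperf3 test ...
    ovs-appctl dpif-netdev/pmd-stats-show      # per-PMD cycles and hit/miss stats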

>>
>> When capturing directly on the vhost-user interface using:
>>
>>     ovs-tcpdump -i vh-userclient-08a1f95c-3b -w test.pcap
>>
>> the performance dropped to around 17G.
>>
>> However, when using --mirror-to to mirror to a virtio-user interface:
>>
>>     ovs-tcpdump -i vh-userclient-08a1f95c-3b --mirror-to veth1 -w test.pcap
>>
>> the performance dropped significantly to around 11G.
>>
>> This is quite strange. I'm not sure if the performance drop is caused by the 
>> mirroring mechanism itself. Any thoughts?
>>
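
For context, my understanding is that --mirror-to makes ovs-tcpdump set up
an OVS mirror that selects the vhost-user-client port and outputs to veth1,
roughly equivalent to the sketch below (br-int and the mirror name are my
assumptions for this setup; ovs-tcpdump generates its own names):

    ovs-vsctl -- --id=@p get Port vh-userclient-08a1f95c-3b \
              -- --id=@out get Port veth1 \
              -- --id=@m create Mirror name=m-capture \
                   select-src-port=@p select-dst-port=@p output-port=@out \
              -- set Bridge br-int mirrors=@m

    # remove the mirror again once the capture is done
    ovs-vsctl clear Bridge br-int mirrors

If that is right, every selected packet gets an extra copy out through the
veth/virtio-user port on top of the normal forwarding path.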



Jun Wang
