Hello,

Yes, this would result in better ovs-tcpdump performance. The reason
it hasn't been added so far is that we can't guarantee, or even
check for, the existence of any given DPDK driver at runtime in a
generic fashion.

One option would be to select this type of interface with a
non-default command line flag.
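
To sketch what that opt-in could look like (note: the flag name below
does not exist in ovs-tcpdump today, it is purely illustrative):

```shell
# Hypothetical invocation; --mirror-port-type is NOT an existing
# ovs-tcpdump option, just an illustration of the opt-in scheme.
# Trailing arguments are passed through to tcpdump as usual.
ovs-tcpdump -i dpdk0 --mirror-port-type=virtio-user -w capture.pcap
```

The default would stay the current kernel tap, so nothing changes for
users who don't have the virtio-user vdev available.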

What do you think?
M

On Mon, Feb 24, 2025 at 2:25 AM Jun Wang via discuss
<ovs-discuss@openvswitch.org> wrote:
>
> Hi, team.
>     In OVS-DPDK scenarios, using ovs-tcpdump for packet capture can 
> significantly degrade forwarding performance, because every mirrored 
> packet has to cross between user space and kernel space.
> The impact is especially noticeable under high traffic or when 
> multiple capture ports are started, in which case performance degrades 
> drastically.
> My question is therefore whether improving the packet capture 
> capability of ovs-tcpdump in OVS-DPDK scenarios has been considered. 
> Based on my analysis, replacing the default ovs-tcpdump mirror port 
> with a DPDK virtio-user interface would greatly improve capture 
> performance.
>
> https://doc.dpdk.org/guides-24.11/howto/virtio_user_as_exception_path.html
>
> Command to create a virtio-user port on an OVS-DPDK bridge:
>
> ovs-vsctl --may-exist add-port br-tun veth1 -- set interface veth1 type=dpdk 
> -- set interface veth1 
> options:dpdk-devargs="vdev:virtio_user0,path=/dev/vhost-net,iface=veth1"
>
> After creation, the corresponding interface veth1 is bound to a PMD thread:
> [root@compute0-dpdk /]# ovs-appctl dpif-netdev/pmd-rxq-show
> pmd thread numa_id 1 core_id 21:
>   isolated : false
>   port: tun_port_p0       queue-id:  0 (enabled)   pmd usage:  0 %
>   port: vh-userclient-e5281ddf-c8  queue-id:  0 (enabled)   pmd usage:  0 %
>   overhead:  0 %
> pmd thread numa_id 1 core_id 22:
>   isolated : false
>   port: tun_port_p0       queue-id:  1 (enabled)   pmd usage:  0 %
>   port: veth1             queue-id:  0 (enabled)   pmd usage:  0 %
>   overhead:  0 %
> pmd thread numa_id 1 core_id 23:
>   isolated : false
>   port: tun_port_p1       queue-id:  0 (enabled)   pmd usage:  0 %
>   overhead:  0 %
> pmd thread numa_id 1 core_id 24:
>   isolated : false
>   port: tun_port_p1       queue-id:  1 (enabled)   pmd usage:  0 %
>   overhead:  0 %
>
> And the corresponding veth1 tap interface is visible on the kernel side:
> [root@compute0-dpdk /]# ip ad|grep veth1
> 700: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group 
> default qlen 1000
>
> So we would only need to redirect the mirrored traffic to the 
> virtio-user port, which should significantly improve performance 
> compared to the default kernel-space mirror port.
>
> ________________________________
> Jun Wang
> _______________________________________________
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
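
For anyone who wants to experiment before any ovs-tcpdump change
lands, the mirror can be wired up to the vdev port by hand with the
standard ovs-vsctl mirror commands. A rough sketch, reusing the bridge
and port names from Jun's example (the mirror name cap0 is arbitrary):

```shell
# Create a mirror on br-tun that copies all traffic to veth1
# (the virtio-user backed port created earlier).
ovs-vsctl -- --id=@p get port veth1 \
  -- --id=@m create mirror name=cap0 select-all=true output-port=@p \
  -- set bridge br-tun mirrors=@m

# Bring up the kernel side of the tap and capture on it.
ip link set veth1 up
tcpdump -i veth1 -nn

# Clean up: removing the reference from the bridge also garbage
# collects the mirror record in OVSDB.
ovs-vsctl clear bridge br-tun mirrors
```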
