Hi!

Thanks for your contribution!

I have been following the discussion in the linked forum thread and
briefly discussed this proposal with a colleague, but I haven't yet
found the time to take a closer look at VPP itself and form an
opinion. I'll do that in the coming days and give the patches a spin
on my machine.


On 3/16/26 11:27 PM, Ryosuke Nakayama wrote:
> From: ryskn <[email protected]>
> 
> This RFC series integrates VPP (Vector Packet Processor, fd.io) as an
> optional userspace dataplane alongside OVS in Proxmox VE.
> 
> VPP is a DPDK-based, userspace packet processing framework that
> provides VM networking via vhost-user sockets. It is already used in
> production by several cloud/telecom stacks. The motivation here is to
> expose VPP bridge domains natively in the PVE WebUI and REST API,
> following the same pattern as OVS integration.
> 
> Background and prior discussion:
>   
> https://forum.proxmox.com/threads/interest-in-vpp-vector-packet-processing-as-a-dataplane-option-for-proxmox.181530/
> 
> Note: the benchmark figures quoted in that forum thread are slightly
> off due to test configuration differences. Please use the numbers in
> this cover letter instead.
> 
> --- What the patches do ---
> 
> Patch 1 (pve-manager):
>   - Detect VPP bridges via 'vppctl show bridge-domain' and expose
>     them as type=VPPBridge in the network interface list
>   - Create/delete VPP bridge domains via vppctl
>   - Persist bridge domains to /etc/vpp/pve-bridges.conf (exec'd at
>     VPP startup) so they survive reboots
>   - Support vpp_vlan_aware flag (maps to bridge-domain learn flag)
>   - VPP VLAN subinterface create/delete/list, persisted to
>     /etc/vpp/pve-vlans.conf
>   - Exclude VPP bridges from the SDN-only access guard so they appear
>     in the WebUI NIC selector
>   - Vhost-user socket convention:
>     /var/run/vpp/qemu-<vmid>-<net>.sock
>   - pve8to9: add upgrade checker for VPP dependencies
> 
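To check my understanding of the detection and socket-naming logic
above, here is a rough Python sketch. This is my own illustration, not
code from the patch: the exact 'show bridge-domain' column layout
varies between VPP releases, so the parser only relies on data rows
starting with a numeric BD-ID, and both helper names are made up.

```python
import re

def parse_bridge_domain_ids(vppctl_output: str) -> list[int]:
    # Extract BD-IDs from 'vppctl show bridge-domain' output.
    # Assumption: data rows begin with a numeric BD-ID column, while
    # the header row begins with the text "BD-ID" and is skipped.
    ids = []
    for line in vppctl_output.splitlines():
        m = re.match(r"\s*(\d+)\b", line)
        if m:
            ids.append(int(m.group(1)))
    return ids

def vhost_socket_path(vmid: int, net: str) -> str:
    # Socket naming convention from Patch 1:
    # /var/run/vpp/qemu-<vmid>-<net>.sock
    return f"/var/run/vpp/qemu-{vmid}-{net}.sock"
```

If the detection really works like this, it would be robust against
future column additions in the vppctl table output.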
> Patch 2 (proxmox-widget-toolkit):
>   - Add VPPBridge/VPPVlan to network_iface_types (Utils.js)
>   - NetworkView: VPPBridge and VPPVlan entries in the Create menu;
>     render vlan-raw-device in Ports/Slaves column for VPPVlan;
>     vpp_vlan_aware support in VLAN aware column
>   - NetworkEdit: vppbrN name validator; vpp_bridge field for VPPVlan;
>     hide MTU/Autostart/IP fields for VPP types; use VlanName vtype
>     for VPPVlan (allows dot notation, e.g. tap0.100)
> 
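Regarding the vppbrN name validator: I assume it mirrors the vmbrN
rule, i.e. roughly the pattern below. This is my own guess at the
pattern, not taken from the patch.

```python
import re

# Guessed pattern for the vppbrN naming rule: 'vppbr' followed by a
# number, analogous to PVE's vmbrN bridge names.
VPPBR_RE = re.compile(r"^vppbr\d+$")

def is_valid_vppbr_name(name: str) -> bool:
    # Full-string match only; 'vppbr' without a digit is rejected.
    return VPPBR_RE.match(name) is not None
```

If the actual validator also caps the number of digits (as the vmbr
validator does), the regex above would need a bound like \d{1,4}.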
> --- Testing ---
> 
> Due to the absence of physical NICs in my test environment, all
> benchmarks were performed as VM-to-VM communication over the
> hypervisor's virtual switch (vmbr1 or a VPP bridge domain). The
> results therefore reflect virtual-switching overhead rather than
> physical NIC performance, where VPP's DPDK polling would likely
> show a larger advantage.
> 
> Host: Proxmox VE 8.x (Intel Xeon), VMs: Debian 12 (virtio-net q=1)
> VPP: 24.06, coalescing: frames=32 time=0.5ms, polling mode
> 
> iperf3 / netperf (single queue, VM-to-VM):
> 
>   Metric             vmbr1          VPP (vhost-user)
>   iperf3             31.0 Gbits/s   13.2 Gbits/s
>   netperf TCP_STREAM 32,243 Mbps    13,181 Mbps
>   netperf TCP_RR     15,734 trans/s 989 trans/s
> 
> VPP's raw throughput is lower than vmbr1 in this VM-to-VM setup due
> to vhost-user coalescing latency. Physical NIC testing (DPDK PMD) is
> expected to close or reverse this gap.
> 
> gRPC (unary, grpc-flow-bench, single queue, VM-to-VM):
> 
>   Flows  Metric    vmbr1     VPP
>   100    RPS       32,847    39,742
>   100    p99 lat   7.28 ms   6.16 ms
>   1000   RPS       40,315    41,139
>   1000   p99 lat   48.96 ms  31.96 ms
> 
> VPP's userspace polling removes kernel scheduler jitter, which is
> visible in the gRPC latency results even in the VM-to-VM scenario.
> 
> --- Known limitations / TODO ---
> 
> - No ifupdown2 integration yet; VPP config is managed separately via
>   /etc/vpp/pve-bridges.conf and pve-vlans.conf
> - No live migration path for vhost-user sockets (sockets must be
>   pre-created on the target host)
> - OVS and VPP cannot share the same physical NIC in this
>   implementation
> - VPP must be installed and running independently (not managed by PVE)
> 
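On the live-migration limitation: since the socket path is
deterministic, a target-side precheck could be as small as the sketch
below (a hypothetical helper of mine, not part of the series).

```python
import os

def vhost_socket_ready(vmid: int, net: str,
                       run_dir: str = "/var/run/vpp") -> bool:
    # True if the vhost-user socket for this NIC already exists on the
    # (migration target) host, per the qemu-<vmid>-<net>.sock
    # convention from Patch 1.
    path = os.path.join(run_dir, f"qemu-{vmid}-{net}.sock")
    return os.path.exists(path)
```

Wiring something like this into the migration prechecks would at
least turn the current failure mode into an early, explicit error.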
> --- CLA ---
> 
> Individual CLA has been submitted to [email protected].
> 
> ---
> 
> ryskn (2):
>   api: network: add VPP (fd.io) dataplane bridge support
>   ui: network: add VPP (fd.io) bridge type support
> 
>  PVE/API2/Network.pm                 | 413 ++++++++++++++++++++++++++-
>  PVE/API2/Nodes.pm                   |  19 ++
>  PVE/CLI/pve8to9.pm                  |  48 ++++
>  www/manager6/form/BridgeSelector.js |   5 +
>  www/manager6/lxc/Network.js         |  34 +++
>  www/manager6/node/Config.js         |   1 +
>  www/manager6/qemu/NetworkEdit.js    |  27 ++
>  www/manager6/window/Migrate.js      |  48 ++++
>  src/Utils.js                        |   2 +
>  src/node/NetworkEdit.js             |  64 ++++-
>  src/node/NetworkView.js             |  35 +++
>  11 files changed, 675 insertions(+), 21 deletions(-)
> 
