Hi all,

VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
Are there any benchmarks on the cost of converting from one format
to the other during Rx/Tx operations?
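
To make the question concrete, the cost I have in mind is the per-packet
metadata translation on the Rx path, along the lines of the sketch below.
The helper name and the exact field mapping are my own illustration for
discussion, not the actual code of the VPP dpdk plugin:

        /* Illustrative sketch only: the kind of per-packet field copy done
         * when translating an rte_mbuf into a vlib_buffer_t on Rx.
         * The helper name and field mapping are assumptions, not the
         * actual dpdk-input node code. */
        #include <rte_mbuf.h>
        #include <vlib/vlib.h>

        static inline void
        mbuf_to_vlib_buffer_sketch (struct rte_mbuf *mb, vlib_buffer_t *b)
        {
          /* where the packet data starts and how long it is */
          b->current_data = mb->data_off - RTE_PKTMBUF_HEADROOM;
          b->current_length = mb->data_len;

          /* multi-segment packets also need the chain walked (not shown) */
          if (mb->nb_segs > 1)
            b->flags |= VLIB_BUFFER_NEXT_PRESENT;
        }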

I'm sure there would be some benefits to switching VPP to natively use
DPDK mbufs allocated in mempools.
What would be the drawbacks?

Last time I asked this question, the answer was about compatibility with
other driver backends, especially ODP. What happened to that effort?
Are DPDK drivers still the only external drivers used by VPP?

When using DPDK, more than 40 networking drivers are available:
        https://core.dpdk.org/supported/
After 4 years of Open Source VPP, there are fewer than 10 native drivers:
        - virtual drivers: virtio, vmxnet3, af_packet, netmap, memif
        - hardware drivers: ixge, avf, pp2
And looking at the ixge driver, we can read:
"
        This driver is not intended for production use and it is unsupported.
        It is provided for educational use only.
        Please use supported DPDK driver instead.
"

So why not improve the DPDK integration in VPP to make it faster?

The DPDK mbuf now has dynamic fields, which make it possible to register metadata on demand.
And it is still possible to statically reserve some extra space for
application-specific metadata in each packet.
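
For illustration, registering such a field at runtime looks roughly like
this (the "vpp_metadata" name and the uint32_t payload are just an example,
not a proposed API):

        /* Example use of the rte_mbuf_dyn.h API (dynamic mbuf fields).
         * "vpp_metadata" is a hypothetical field name for illustration. */
        #include <rte_mbuf.h>
        #include <rte_mbuf_dyn.h>

        static int vpp_meta_offset = -1;

        static int
        register_vpp_metadata_field (void)
        {
          static const struct rte_mbuf_dynfield desc = {
            .name = "vpp_metadata",
            .size = sizeof (uint32_t),
            .align = __alignof__ (uint32_t),
          };

          /* returns the byte offset of the new field inside the mbuf */
          vpp_meta_offset = rte_mbuf_dynfield_register (&desc);
          return (vpp_meta_offset < 0) ? -1 : 0;
        }

        /* per packet, once registered */
        static inline void
        set_vpp_metadata (struct rte_mbuf *mb, uint32_t value)
        {
          *RTE_MBUF_DYNFIELD (mb, vpp_meta_offset, uint32_t *) = value;
        }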

Other improvements, such as meson packaging usable with pkg-config,
have been made over the last few years and may deserve consideration.

