03/12/2019 20:01, Damjan Marion:
> On 3 Dec 2019, at 17:06, Thomas Monjalon wrote:
> > 03/12/2019 13:12, Damjan Marion:
> >> On 3 Dec 2019, at 09:28, Thomas Monjalon wrote:
> >>> 03/12/2019 00:26, Damjan Marion:
> >>>> On 2 Dec 2019, at 23:35, Thomas Monjalon wrote:
> >>>>> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> >>>>> Are there any benchmarks of the cost of converting from one format
> >>>>> to the other during Rx/Tx operations?
> >>>> 
> >>>> We are benchmarking both dpdk i40e PMD performance and native VPP AVF 
> >>>> driver performance and we are seeing significantly better performance 
> >>>> with native AVF.
> >>>> If you take a look at [1] you will see that the DPDK i40e driver provides 
> >>>> 18.62 Mpps, and exactly the same test with the native AVF driver gives us 
> >>>> around 24.86 Mpps.
> > [...]
> >>>> 
> >>>>> So why not improve DPDK integration in VPP to make it faster?
> >>>> 
> >>>> Yes, if we can get the freedom to use the parts of DPDK we want instead of 
> >>>> being forced to adopt the whole DPDK ecosystem.
> >>>> For example, you cannot use dpdk drivers without using EAL, mempool, 
> >>>> rte_mbuf... rte_eal_init is a monster which I have been hoping would 
> >>>> disappear for a long time...

As stated below, I take this feedback, thanks.
However it won't change VPP's choice of not using rte_mbuf natively.
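For context, the conversion cost asked about at the top of the thread is
essentially a per-packet metadata copy between the two buffer headers on rx
(and the inverse on tx). A rough sketch follows; the struct and field names
below are simplified and illustrative, not the real vlib_buffer_t layout:

#include <stdint.h>
#include <rte_mbuf.h>

/* Illustrative stand-in for a VPP-style buffer header -- NOT the real
 * vlib_buffer_t definition. */
typedef struct {
    int16_t  current_data;     /* offset of first byte of packet data */
    uint16_t current_length;   /* bytes in this segment */
    uint32_t flags;
    uint32_t total_length_not_including_first_buffer;
} vlib_buffer_like_t;

/* Per-packet metadata translation on rx; doing this (and the inverse on tx)
 * for every packet is the conversion cost in question. */
static inline void
mbuf_to_vlib_like(const struct rte_mbuf *m, vlib_buffer_like_t *b)
{
    b->current_data   = (int16_t) m->data_off;
    b->current_length = m->data_len;
    b->flags          = 0;
    b->total_length_not_including_first_buffer =
        m->pkt_len - m->data_len;   /* bytes carried by chained segments */
}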

[...]
> >> At the moment we have good coverage of native drivers, and there is still 
> >> an option for people to use dpdk. It is now mainly up to driver vendors to 
> >> decide if they are happy with the performance they will get from the dpdk 
> >> pmd or whether they want better...
> > 
> > Yes, it is possible to use DPDK in VPP with degraded performance.
> > If a user wants the best performance with VPP and a real NIC,
> > a new driver must be implemented for VPP only.
> > 
> > Anyway, the real performance benefits are in hardware device offloads,
> > which will be hard to implement in VPP native drivers.
> > Support (investment) would be needed from vendors to make it happen.
> > Regarding offloads, VPP is not using the crypto or compression drivers
> > that DPDK provides (plus regex coming).
> 
> Nice marketing pitch for your company :)

I guess you mean Mellanox has a good offload offering.
But my point is about the end of Moore's law,
and the trend towards offloads among most device vendors.
However, I truly respect the choice of avoiding device offloads.

> > VPP is CPU-based packet processing software.
> > If users want to leverage hardware device offloads,
> > truly DPDK-based software is required.
> > If I understand your replies correctly, such software cannot be VPP.
> 
> Yes, DPDK is the centre of the universe.

DPDK is where most networking devices are supported in userspace.
That's all.


> So, dear Thomas, I could continue this discussion forever, but that is not 
> something I'm going to do, as it has started to turn into a trolling contest.

I agree.

> I can understand that you may be passionate about your project and that you 
> may think it is the greatest thing since sliced bread, but please accept 
> that other people have a different opinion. Instead of lecturing other people 
> on what they should do, if you are interested in dpdk being better consumed, 
> please take the feedback provided to you. I assume that you are interested, 
> as you showed up on this mailing list; if not, there was no reason to start 
> this thread in the first place.

Thank you for the feedback; this discussion was needed:
1/ it gives more motivation to improve the EAL API (see the sketch after this 
list)
2/ it confirms the VPP design choice of not being DPDK-dependent (at a 
performance cost)
3/ it confirms the VPP design choice of being focused on CPU-based processing
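To make point 1/ concrete, the sketch below shows roughly what any consumer
of a DPDK PMD has to go through today before receiving its first packet: EAL,
mempool and rte_mbuf are all mandatory dependencies. It is illustrative only,
with error handling trimmed and arbitrary sizing:

#include <stdint.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

int main(int argc, char **argv)
{
    /* The EAL init Damjan refers to: mandatory before any PMD can be used. */
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    /* A DPDK mempool of rte_mbuf is required to set up rx queues. */
    struct rte_mempool *mp =
        rte_pktmbuf_pool_create("pool", 8192, 256, 0,
                                RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (mp == NULL)
        rte_exit(EXIT_FAILURE, "mempool creation failed\n");

    /* Bring up the first probed port with one rx and one tx queue. */
    uint16_t port = 0;
    struct rte_eth_conf conf = { 0 };
    rte_eth_dev_configure(port, 1, 1, &conf);
    rte_eth_rx_queue_setup(port, 0, 1024, rte_socket_id(), NULL, mp);
    rte_eth_tx_queue_setup(port, 0, 1024, rte_socket_id(), NULL);
    rte_eth_dev_start(port);

    /* ... rte_eth_rx_burst()/rte_eth_tx_burst() on struct rte_mbuf ... */
    return 0;
}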

