> > Actually native drivers (like Mellanox or AVF) can be faster w/o buffer
> > conversion and tend to be faster than when used by DPDK. I suspect VPP is
> > not the only project to report this extra cost.
It would be good to know other projects that report this extra cost.
Hi Jerome,
Thanks for the clarification.
Regards,
Nitin
04/12/2019 16:29, Jerome Tollet (jtollet):
> Hi Thomas,
> I strongly disagree with your conclusions from this discussion:
>
> 1) Yes, VPP made the choice of not being DPDK dependent BUT certainly not at
> the cost of performance. (It's actually the opposite, i.e. the AVF driver.)
I mean the performance cost.
Actually, native drivers (like Mellanox or AVF) can be faster without buffer
conversion, and tend to be faster than the same NICs driven through DPDK. I
suspect VPP is not the only project to report this extra cost.
Jerome
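To make that "extra cost" concrete: with a DPDK PMD the driver fills rte_mbuf metadata and the application then derives its own descriptor from it, while a native driver writes the application's descriptor directly. Below is a minimal sketch of the two Rx paths; the structs and function names are simplified, hypothetical stand-ins (not the real rte_mbuf or vlib_buffer_t layouts), just to show where the extra metadata pass happens.

```c
/* Sketch only: simplified stand-ins for the two metadata formats.
 * Field names are illustrative, not the real rte_mbuf / vlib_buffer_t. */
#include <stdio.h>
#include <stdint.h>

typedef struct {            /* what the NIC Rx descriptor reports */
    void    *data;
    uint16_t len;
    uint16_t port;
} nic_rx_desc_t;

typedef struct {            /* stand-in for DPDK rte_mbuf metadata */
    void    *buf_addr;
    uint16_t data_off, data_len;
    uint32_t pkt_len;
    uint16_t port;
} mbuf_meta_t;

typedef struct {            /* stand-in for VPP vlib_buffer_t metadata */
    int16_t  current_data;
    uint16_t current_length;
    uint16_t rx_sw_if_index;
    uint32_t flags;
} vlib_meta_t;

/* DPDK path: the PMD fills the mbuf, then the plugin translates it into
 * the vlib descriptor -- an extra metadata write/read pass per packet.  */
static void rx_via_dpdk(const nic_rx_desc_t *d, mbuf_meta_t *m, vlib_meta_t *b)
{
    m->buf_addr = d->data;                        /* PMD writes mbuf      */
    m->data_off = 0;
    m->data_len = m->pkt_len = d->len;
    m->port     = d->port;

    b->current_data   = (int16_t) m->data_off;    /* plugin converts it   */
    b->current_length = m->data_len;
    b->rx_sw_if_index = m->port;
    b->flags          = 0;
}

/* Native-driver path: the driver writes the vlib descriptor directly.   */
static void rx_native(const nic_rx_desc_t *d, vlib_meta_t *b)
{
    b->current_data   = 0;
    b->current_length = d->len;
    b->rx_sw_if_index = d->port;
    b->flags          = 0;
}

int main(void)
{
    uint8_t pkt[64] = { 0 };
    nic_rx_desc_t d = { pkt, sizeof(pkt), 1 };
    mbuf_meta_t m;
    vlib_meta_t b1, b2;

    rx_via_dpdk(&d, &m, &b1);
    rx_native(&d, &b2);
    printf("dpdk path len=%u, native path len=%u\n",
           b1.current_length, b2.current_length);
    return 0;
}
```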
Hi Thomas,
I strongly disagree with your conclusions from this discussion:
1) Yes, VPP made the choice of not being DPDK dependent, BUT certainly not at
the cost of performance. (It's actually the opposite, i.e. the AVF driver.)
2) VPP is NOT exclusively CPU centric. I gave you the example of crypto offload…
04/12/2019 15:25, Ole Troan:
> Thomas,
>
> > 2/ it confirms the VPP design choice of not being DPDK-dependent (at a
> > performance cost)
>
> Do you have any examples/features where a DPDK/offload solution would be
> performing better than VPP?
> Any numbers?
No, sorry, I am not benchmarking VPP.
Thomas,
> 2/ it confirms the VPP design choice of not being DPDK-dependent (at a
> performance cost)
Do you have any examples/features where a DPDK/offload solution would be
performing better than VPP?
Any numbers?
Best regards,
Ole
> On Dec 3, 2019, at 12:56 PM, Ole Troan wrote:
>
> If you don't want that, wouldn't you just build something with a Trident 4?
> ;-)

Or Tofino, if you want to go that direction. Even then, the amount of
packet processing (especially the edge/exception conditions) can overwhelm a
hardware-based…
Thomas,
I am afraid you may be missing the point. VPP is a framework where plugins are
first class citizens. If a plugin requires leveraging offload (inline or
lookaside), it is more than welcome to do it.
There are multiple examples including hw crypto accelerators
(https://software.intel.com/e…
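For the lookaside case mentioned above, the general pattern is that one graph node hands packets to the accelerator and a later poll collects completions, so the CPU never runs the cipher itself. Here is a hedged sketch of that flow; crypto_dev_enqueue()/crypto_dev_dequeue() are hypothetical placeholder names (stubbed so the example is self-contained), not a real VPP or DPDK API.

```c
/* Lookaside-offload pattern sketch. crypto_dev_enqueue()/crypto_dev_dequeue()
 * stand in for an accelerator driver API and are stubbed here so the example
 * compiles and runs; they are not real VPP or DPDK calls. */
#include <stdio.h>
#include <stdint.h>

#define BURST 32

typedef struct { uint8_t *data; uint32_t len; } pkt_t;

static pkt_t *ring[BURST];        /* pretend device queue */
static int    ring_n;

static int crypto_dev_enqueue(pkt_t **pkts, int n)   /* stub: accept what fits */
{
    int accepted = 0;
    while (accepted < n && ring_n < BURST)
        ring[ring_n++] = pkts[accepted++];
    return accepted;
}

static int crypto_dev_dequeue(pkt_t **pkts, int n)   /* stub: complete in order */
{
    int done = ring_n < n ? ring_n : n;
    for (int i = 0; i < done; i++)
        pkts[i] = ring[i];
    for (int i = done; i < ring_n; i++)               /* keep leftovers queued  */
        ring[i - done] = ring[i];
    ring_n -= done;
    return done;
}

int main(void)
{
    pkt_t p = { (uint8_t *) "payload", 7 };
    pkt_t *in[1] = { &p }, *out[BURST];

    /* Graph node A: hand packets to the accelerator and move on;
     * anything the device rejects would take a software fallback path. */
    int queued = crypto_dev_enqueue(in, 1);

    /* Graph node B (a later poll): pick up completions and forward them. */
    int done = crypto_dev_dequeue(out, BURST);
    printf("queued %d, completed %d, len %u\n", queued, done, out[0]->len);
    return 0;
}
```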
Interesting discussion.

> Yes it is possible to use DPDK in VPP with degraded performance.
> If a user wants best performance with VPP and a real NIC,
> a new driver must be implemented for VPP only.
>
> Anyway real performance benefits are in hardware device offloads
> which will be hard to implement…
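As a concrete example of such a device offload: with DPDK, Tx checksum computation can be pushed to the NIC by setting mbuf offload flags instead of computing checksums on the CPU. The sketch below uses the DPDK 19.11-era flag names (later releases renamed them to RTE_MBUF_F_TX_*) and assumes the Tx queue was configured with the matching DEV_TX_OFFLOAD_IPV4_CKSUM / DEV_TX_OFFLOAD_TCP_CKSUM capabilities.

```c
/* Sketch: request IPv4 + TCP checksum offload on Tx via mbuf flags
 * (DPDK 19.11-era names; newer releases use RTE_MBUF_F_TX_*). */
#include <rte_mbuf.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_tcp.h>

static void
request_tx_cksum_offload(struct rte_mbuf *m)
{
    struct rte_ipv4_hdr *ip;
    struct rte_tcp_hdr *tcp;

    /* Tell the NIC where the headers start. */
    m->l2_len = sizeof(struct rte_ether_hdr);
    ip = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *, m->l2_len);
    m->l3_len = (ip->version_ihl & 0x0f) * 4;
    tcp = rte_pktmbuf_mtod_offset(m, struct rte_tcp_hdr *,
                                  m->l2_len + m->l3_len);

    /* Ask for IP and TCP checksum computation in hardware. */
    m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
    ip->hdr_checksum = 0;                                /* NIC fills this in */
    tcp->cksum = rte_ipv4_phdr_cksum(ip, m->ol_flags);   /* pseudo-hdr cksum  */
}
```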
Thanks for bringing up the discussion
Hi Thomas!
Inline...

> On 2 Dec 2019, at 23:35, Thomas Monjalon wrote:
>
> Hi all,
>
> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> Are there some benchmarks about the cost of converting, from one format
> to the other one, during Rx/Tx operations?

We are benchmarking both…
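For readers who want to get a rough feel for such numbers themselves, the conversion cost can be approximated by timing a metadata translation loop in isolation. A minimal, self-contained sketch follows; translate() is a simplified stand-in for the mbuf-to-vlib_buffer_t field copying, not VPP's actual code, and the result only reflects the hot-cache translation work, not a full Rx path.

```c
/* Rough micro-benchmark of a per-packet metadata translation loop.
 * translate() is a simplified stand-in for mbuf -> vlib_buffer_t copying. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define N_PKTS (1u << 20)

typedef struct { uint16_t data_off, data_len; uint32_t pkt_len; uint16_t port; } src_meta_t;
typedef struct { int16_t current_data; uint16_t current_length;
                 uint16_t sw_if_index; uint32_t flags; } dst_meta_t;

static src_meta_t src[N_PKTS];
static dst_meta_t dst[N_PKTS];

static void translate(const src_meta_t *s, dst_meta_t *d)
{
    d->current_data   = (int16_t) s->data_off;
    d->current_length = s->data_len;
    d->sw_if_index    = s->port;
    d->flags          = 0;
}

int main(void)
{
    struct timespec t0, t1;

    for (uint32_t i = 0; i < N_PKTS; i++)
        src[i] = (src_meta_t){ 128, 64, 64, (uint16_t) i };

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (uint32_t i = 0; i < N_PKTS; i++)
        translate(&src[i], &dst[i]);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* Read the results back so the compiler cannot drop the loop. */
    uint32_t check = 0;
    for (uint32_t i = 0; i < N_PKTS; i++)
        check += dst[i].current_length;

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.2f ns per packet (translation only, hot caches; check=%u)\n",
           ns / N_PKTS, check);
    return 0;
}
```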
Hi all,
VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
Are there some benchmarks about the cost of converting, from one format
to the other one, during Rx/Tx operations?
I'm sure there would be some benefits of switching VPP to natively use
the DPDK mbuf allocated in mempools.
Wh…
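One way to make such a "conversion" nearly free, whichever format is treated as native, is to co-locate both metadata headers in the same buffer, so moving between the two views is pointer arithmetic plus a handful of field updates rather than a copy into a separate object. Below is a sketch of that idea; the structs and helper names are simplified, hypothetical stand-ins, not the actual VPP or DPDK buffer layouts.

```c
/* Sketch: co-locating two metadata headers in one buffer, so "conversion"
 * is pointer arithmetic plus a few field updates.  The structs below are
 * simplified stand-ins, not the real rte_mbuf / vlib_buffer_t. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct { void *buf_addr; uint16_t data_off, data_len; } mbuf_meta_t;
typedef struct { int16_t current_data; uint16_t current_length; } vlib_meta_t;

typedef struct {
    mbuf_meta_t mbuf;         /* first header: what the PMD sees  */
    vlib_meta_t vlib;         /* second header: what the app sees */
    uint8_t     payload[2048];
} buffer_t;

/* Going from one view to the other is just pointer arithmetic. */
static inline vlib_meta_t *vlib_from_mbuf(mbuf_meta_t *m)
{ return (vlib_meta_t *) (m + 1); }

static inline mbuf_meta_t *mbuf_from_vlib(vlib_meta_t *b)
{ return ((mbuf_meta_t *) b) - 1; }

int main(void)
{
    buffer_t *buf = calloc(1, sizeof(*buf));

    /* "PMD" fills the mbuf view on Rx ... */
    buf->mbuf.buf_addr = buf->payload;
    buf->mbuf.data_off = 128;
    buf->mbuf.data_len = 64;

    /* ... and only the fields that changed are synced into the app view. */
    vlib_meta_t *b = vlib_from_mbuf(&buf->mbuf);
    b->current_data   = (int16_t) buf->mbuf.data_off;
    b->current_length = buf->mbuf.data_len;

    printf("len=%u, same buffer: %d\n", (unsigned) b->current_length,
           mbuf_from_vlib(b) == &buf->mbuf);
    free(buf);
    return 0;
}
```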