Mathias,

 

Working with IPv6 always incurs a memcpy.  I see the quic VPP plugin using 
clib_memcpy.  Please replace any clib_memcpy with clib_memcpy_fast(), re-test 
quic, and let us know what performance gain you see.
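
For example, a copy like the one sketched below could be switched over. This is
an illustration only; the function and variable names are placeholders, not
actual call sites in the plugin.

#include <vppinfra/string.h>
#include <vnet/ip/ip6_packet.h>

/* Placeholder example, not an actual quic plugin call site. */
static void
quic_example_copy_ip6 (ip6_address_t * dst, const ip6_header_t * ip6)
{
  /* was: clib_memcpy (dst, &ip6->src_address, sizeof (ip6_address_t)); */
  clib_memcpy_fast (dst, &ip6->src_address, sizeof (ip6_address_t));
}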

 

Hemant

 

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Mathias Raoul -X 
(mraoul - LIANEO at Cisco) via lists.fd.io
Sent: Monday, November 30, 2020 5:57 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP / Quic perfomances

 

Hello,

Following the work already done this year, we are currently updating our quic 
plugin in vpp to benefit from the latest features and optimizations introduced 
in quicly, the library we rely on for the quic protocol.

An article on the Cisco blog by Aloys Augustin [1] describes our integration 
with quicly and how we optimized it with the vpp crypto API and packet batching.

A simplified API for crypto offloading has been implemented after some 
discussions with the quicly team. This new API, together with all the 
optimizations done inside quicly, gives us better throughput and simplifies 
the code of the quic plugin.
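
For those who have not used the vpp crypto API, here is a rough sketch of what
batched packet encryption through it can look like. This is a simplified
illustration, not the plugin's actual code: the vnet_crypto_* calls are the
real API, but the packet structure, names and sizes below are placeholders.

#include <vlib/vlib.h>
#include <vnet/crypto/crypto.h>

#define DEMO_MAX_BATCH 32

/* Placeholder for one prepared QUIC packet; the real plugin keeps this
   state in its own per-connection structures. */
typedef struct
{
  u8 *payload;     /* plaintext, encrypted in place */
  u32 payload_len;
  u8 *header;      /* additional authenticated data */
  u32 header_len;
  u8 iv[12];
  u8 tag[16];
} demo_quic_pkt_t;

/* Encrypt up to DEMO_MAX_BATCH packets with a single call into the vpp
   crypto API instead of one AEAD call per packet. */
static void
demo_encrypt_batch (vlib_main_t * vm, demo_quic_pkt_t * pkts, u32 n_pkts,
                    u32 key_index)
{
  vnet_crypto_op_t ops[DEMO_MAX_BATCH];
  u32 i;

  ASSERT (n_pkts <= DEMO_MAX_BATCH);

  for (i = 0; i < n_pkts; i++)
    {
      vnet_crypto_op_t *op = &ops[i];
      vnet_crypto_op_init (op, VNET_CRYPTO_OP_AES_128_GCM_ENC);
      op->key_index = key_index;   /* from vnet_crypto_key_add () */
      op->iv = pkts[i].iv;
      op->aad = pkts[i].header;
      op->aad_len = pkts[i].header_len;
      op->src = op->dst = pkts[i].payload;   /* in-place encryption */
      op->len = pkts[i].payload_len;
      op->tag = pkts[i].tag;
      op->tag_len = 16;
    }

  /* one batched call; the crypto engine can vectorize across packets */
  vnet_crypto_process_ops (vm, ops, n_pkts);
}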

Here are some numbers obtained with patch 27845 [2], compared to vpp master 
and to quicly without vpp on the linux stack.
All tests are done on a single thread/worker.
The qperf tool [3] is used for the quicly/linux tests.
VPP's tests are done using the host stack and the internal client/server.

Quicly / Linux: 3.5 Gb/s
VPP (master), quicly crypto engine: 3.8 Gb/s
VPP (master), vpp crypto engine: 4 Gb/s
VPP (27845), quicly crypto engine: 6.5 Gb/s
VPP (27845), vpp crypto engine: 6.7 Gb/s

[1] https://blogs.cisco.com/cloud/building-fast-quic-sockets-in-vpp
[2] https://gerrit.fd.io/r/c/vpp/+/27845
[3] https://github.com/rbruenig/qperf

Best,
Mathias 
