> On Jul 1, 2018, at 11:05 PM, Vamsi Krishna <vamsi...@gmail.com> wrote:
>
> How is the performance of this code in terms of throughput? Are there any
> benchmarks that can be referred to?
Four-host setup (two hosts as IPsec tunnel endpoints, two hosts outside the tunnel as traffic source and sink):

  Source/sink:       Xeon E3-1275 v3, 40G xl710 NIC
  Tunnel endpoints:  Intel i7-6950X (10-core i7 @ 3.0 GHz), 40G xl710 NICs,
                     Coleto Creek QAT card
                     (https://store.netgate.com/ADI/QuickAssist8955.aspx)

Context: "Kernel" is the Linux kernel IPsec stack, "User" is VPP. Testing was performed in April 2017. The "iperf3 TCP" column is iperf3 TCP throughput at a 1500-byte MTU; "# samples" is the number of test runs.

  Context  Crypto processing           Crypto/AEAD alg  Integrity  # SAs  # streams  iperf3 TCP  # samples
  Kernel   AES-NI                      AES-CBC-128      SHA1       1      1          2.09 Gbps   16
  Kernel   AES-NI                      AES-CBC-128      SHA1       1      4          2.07 Gbps   16
  Kernel   AES-NI                      AES-CBC-128      SHA1       8      8          10.85 Gbps  6
  Kernel   AES-NI                      AES-GCM-128-16   -          1      1          5.06 Gbps   16
  Kernel   AES-NI                      AES-GCM-128-16   -          1      4          5.06 Gbps   16
  Kernel   AES-NI                      AES-GCM-128-16   -          8      8          25.25 Gbps  6
  Kernel   QAT                         AES-CBC-128      SHA1       1      1          8.74 Gbps   16
  Kernel   QAT                         AES-CBC-128      SHA1       1      4          8.74 Gbps   16
  Kernel   QAT                         AES-CBC-128      SHA1       8      8          27.08 Gbps  6
  User     VPP native (OpenSSL 1.0.1)  AES-CBC-128      SHA1       1      1          2.03 Gbps   16
  User     VPP native (OpenSSL 1.0.1)  AES-CBC-128      SHA1       1      4          3.39 Gbps   16
  User     VPP native (OpenSSL 1.0.1)  AES-CBC-128      SHA1       8      8          9.45 Gbps   5
  User     VPP AESNI MB cryptodev      AES-CBC-128      SHA1       1      1          7.42 Gbps   6
  User     VPP AESNI MB cryptodev      AES-CBC-128      SHA1       1      4          8.28 Gbps   6
  User     VPP AESNI GCM cryptodev     AES-GCM-128-16   -          1      1          13.70 Gbps  6
  User     VPP AESNI GCM cryptodev     AES-GCM-128-16   -          1      4          15.93 Gbps  6
  User     VPP QAT cryptodev           AES-CBC-128      SHA1       1      1          32.68 Gbps  15
  User     VPP QAT cryptodev           AES-CBC-128      SHA1       1      4          35.72 Gbps  16
  User     VPP QAT cryptodev           AES-CBC-128      SHA1       8      8          36.32 Gbps  6
  User     VPP QAT cryptodev           AES-GCM-128-16   -          1      1          32.73 Gbps  6
  User     VPP QAT cryptodev           AES-GCM-128-16   -          1      4          32.98 Gbps  5

  (AES-GCM-128-16 is an AEAD, so there is no separate integrity algorithm.)

36.32 Gbps is as close as you're going to get to filling a 40 Gbps NIC with IPsec, due to framing overheads (rough line-rate arithmetic at the end of this mail). VPP GCM tests were not run at 8 SAs / 8 streams because of the issues I posted about last week. We plan to repeat these tests with Skylake Xeon CPUs, a more recent VPP, 100 Gbps NICs, and a Lewisburg QAT device.

L3 forwarding (no IPsec, minimal routes, no ACLs) using the same setup:

           # streams  64B         512B        1500B
  Kernel   1          804 kpps    808 kpps    806 kpps
  Kernel   4          2.93 Mpps   2.92 Mpps   2.91 Mpps
  Kernel   8          5.16 Mpps   5.14 Mpps   3.28 Mpps
  VPP      1          14.05 Mpps  8.84 Mpps   3.28 Mpps
  VPP      4          32.23 Mpps  9.39 Mpps   3.28 Mpps
  VPP      8          42.60 Mpps  9.39 Mpps   3.28 Mpps
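For reference on why several of those numbers flatline: each Ethernet frame costs its own size plus 20 bytes on the wire (7B preamble + 1B start-of-frame delimiter + 12B inter-frame gap), so taking the quoted sizes as full frames (FCS included), 40 Gbps line rate works out to:

  64B frames:    40e9 / ((64 + 20) * 8)   = 59.52 Mpps
  512B frames:   40e9 / ((512 + 20) * 8)  =  9.40 Mpps
  1500B frames:  40e9 / ((1500 + 20) * 8) =  3.29 Mpps

In other words, the VPP 512B and 1500B forwarding results are the link topping out, not VPP. The same arithmetic gives the IPsec ceiling: at a 1500-byte MTU each frame is about 1538B on the wire and carries roughly 1400B of TCP payload once outer IP, ESP header/IV/padding/ICV and inner IP/TCP headers are subtracted (the exact figure depends on the cipher), so iperf3 goodput tops out around 40 Gbps x 1400/1538, i.e. about 36.4 Gbps, which is why 36.32 Gbps is effectively a full pipe.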
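The configurations aren't shown above, so for anyone wanting to reproduce something similar: on the kernel side, a tunnel-mode AES-GCM-128-16 SA of the kind tested here can be created with ip xfrm along these lines. Addresses, subnets, SPI and key below are made-up placeholders, and the return-direction SA/policy is omitted:

  # One direction only; repeat with src/dst swapped for the return path.
  # rfc4106 key = 16B AES key + 4B salt (40 hex digits); the trailing 128 is
  # the ICV length in bits, i.e. the 16-byte tag in "AES-GCM-128-16".
  ip xfrm state add src 192.0.2.1 dst 192.0.2.2 proto esp spi 0x1000 \
      mode tunnel aead "rfc4106(gcm(aes))" \
      0x0123456789abcdef0123456789abcdef01234567 128
  ip xfrm policy add src 10.1.0.0/24 dst 10.2.0.0/24 dir out \
      tmpl src 192.0.2.1 dst 192.0.2.2 proto esp mode tunnel

The multi-stream results are the kind of thing you get from e.g. "iperf3 -c <sink> -P 4 -t 30", with -P matching the "# streams" column; the exact invocations used aren't recorded here.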
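On the VPP side, in the 17.x timeframe the DPDK cryptodevs were configured through the dpdk stanza of startup.conf. PMD names and syntax vary by VPP/DPDK release, so treat this as a sketch of the shape of the config rather than something to copy-paste; the PCI addresses are examples:

  # startup.conf fragment
  dpdk {
    dev 0000:02:00.0        # xl710 NIC
    dev 0000:03:00.0        # QAT endpoint, whitelisted so EAL can claim it
                            # (bind it to igb_uio/vfio-pci with dpdk-devbind.py first)
    vdev crypto_aesni_mb    # software AESNI multi-buffer crypto PMD
    vdev crypto_aesni_gcm   # software AESNI GCM crypto PMD
  }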