Hi Gabi,

It looks like aesni_mb and aesni_gcm are disabled in VPP's DPDK build configuration; see build/external/packages/dpdk.mk. You would need to remove them from DPDK_DRIVERS_DISABLED and rebuild if you want to use them. That said, I doubt you would see much improvement from them: VPP's ipsecmb crypto plugin uses the same optimized crypto library that those vdevs use. I think VPP's native crypto plugin is assigned the highest priority, so that plugin is likely handling crypto operations for your tunnels by default. If you want to use the ipsecmb crypto plugin instead, you can run a command like "vppctl set crypto handler <cipher> ipsecmb" for each cipher used by your tunnels. I don't know whether you'll see any difference in performance between ipsecmb and native, but it doesn't hurt to try.
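For reference, the sequence might look something like this (the cipher names below are only examples; "show crypto handlers" lists the exact names VPP expects, so substitute whatever your SAs actually use):

  # list the registered crypto engines and the current handler per algorithm
  vppctl show crypto engines
  vppctl show crypto handlers

  # hand the ciphers your tunnels use over to ipsecmb (example cipher names)
  vppctl set crypto handler aes-128-gcm ipsecmb
  vppctl set crypto handler aes-256-gcm ipsecmb

Running "show crypto handlers" again afterwards should show ipsecmb as the active handler for those algorithms.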
Here are some thoughts and questions on tuning to improve IPsec throughput:

- If you haven't already, configure at least one worker thread so that crypto operations are not executed on the same CPU as the main thread (see the startup.conf sketch below).

- Are you using one tunnel or multiple tunnels? An SA is bound to a particular thread in order to keep packets in order. With synchronous crypto, all of the operations for that SA are handled by that one thread, and throughput is limited by how much crypto the CPU that thread is bound to can handle. You might get higher throughput by distributing traffic across multiple tunnels, if that is possible in your setup. Alternatively, if you enable asynchronous crypto, the sw_scheduler plugin tries to distribute crypto operations to other threads, which might help (see the async crypto example below).

- With multiple workers, you can have encrypt and decrypt operations handled by different threads/cores. If you have a LAN interface and a WAN interface, and your tunnel is terminated on the WAN interface so that VMs on your LAN subnet can communicate with remote systems on the other side of the tunnel, you can bind the RX queues of the two interfaces to different threads. Outbound packets are then encrypted by the threads that handle the LAN interface's queues, and inbound packets are decrypted by the threads that handle the WAN interface's queues (see the rx-placement example below).

- You mentioned that you can't get better throughput from VPP than from kernel IPsec. Is the kernel getting the same throughput as VPP, or higher? If it's close to the same, you may be hitting some external resource limit; e.g. the other end of the tunnel could be the bottleneck, or AWS's traffic shaping might be preventing you from sending any faster.

- Are you using policy-based IPsec or routed IPsec (i.e. a tunnel interface)? Patches intended to improve performance for policy-based IPsec have been merged recently, but if you are using policy-based IPsec you might try a tunnel interface instead and see whether your measurements improve (see the tunnel-interface sketch below).

- Fragmentation and reassembly can hurt IPsec throughput. If your packets are close to the MTU of the hardware interface they will be sent out on, the ESP encapsulation and crypto padding may push the packet size over that MTU, and the encrypted packet may need to be fragmented before being sent. The other end of the tunnel then has to wait for all the fragments to arrive and reassemble them before it can decrypt the packet. If you are using a tunnel interface, you can set the MTU on the tunnel interface lower than the MTU on the hardware interface; packets are then fragmented by the tunnel interface before being encrypted, and the other end does not need to reassemble them (the tunnel-interface sketch below includes an example MTU setting).
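For the worker-thread item, a minimal startup.conf cpu stanza might look like this (the core numbers are only an example; pick cores that exist on your instance and keep the main core out of the worker list):

  cpu {
    # main thread pinned to core 0, two workers on cores 1 and 2
    main-core 0
    corelist-workers 1-2
  }

"vppctl show threads" will confirm which cores the workers ended up on.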
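For the asynchronous crypto suggestion, the rough idea is to make sure the crypto_sw_scheduler plugin is loaded and then switch IPsec crypto dispatch to async mode. This is a sketch; depending on your VPP version the plugin may already be enabled by default, so check "vppctl show plugins" first:

  # startup.conf - make sure the sw_scheduler crypto plugin is enabled
  plugins {
    plugin crypto_sw_scheduler_plugin.so { enable }
  }

  # at runtime, switch IPsec crypto dispatch to asynchronous mode
  vppctl set ipsec async mode on

With async mode on, crypto work can be queued to other workers instead of being done inline on the thread that owns the SA.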
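For splitting encrypt and decrypt work across cores, you can check and change which worker polls each RX queue. The interface names and worker indices below are placeholders for whatever your LAN and WAN interfaces are called:

  # see the current queue-to-worker assignment
  vppctl show interface rx-placement

  # e.g. LAN interface queue on worker 0, WAN interface queue on worker 1
  vppctl set interface rx-placement GigabitEthernet0/6/0 queue 0 worker 0
  vppctl set interface rx-placement GigabitEthernet0/7/0 queue 0 worker 1

With that layout, outbound (LAN-ingress) packets are encrypted on worker 0 and inbound (WAN-ingress) packets are decrypted on worker 1.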
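For the tunnel-interface and MTU items, here is a rough sketch of routed IPsec. It assumes the inbound and outbound SAs (ids 20 and 10 here) already exist, e.g. created with "ipsec sa add" or by your IKE daemon, that 10.20.0.0/16 stands in for the remote subnet, and that the WAN interface has a 1500-byte MTU; exact CLI syntax can vary between VPP versions:

  # create a routed IPsec tunnel interface (instance 0 is named ipsec0)
  vppctl ipsec itf create instance 0
  vppctl ipsec tunnel protect ipsec0 sa-in 20 sa-out 10
  vppctl set interface state ipsec0 up

  # route the remote subnet over the tunnel instead of matching policies
  vppctl ip route add 10.20.0.0/16 via ipsec0

  # keep ESP overhead (roughly 50-75 bytes in tunnel mode, depending on the
  # cipher and IP version) from pushing encrypted packets over the WAN MTU
  vppctl set interface mtu packet 1400 ipsec0

The 1400 is just a conservative example; anything that leaves room for the outer IP header, ESP header/IV, padding, and ICV below the hardware MTU avoids post-encryption fragmentation. You may also need to give ipsec0 an address or mark it unnumbered, depending on your routing setup.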
-Matt

On Fri, Jun 3, 2022 at 7:52 AM <gv.flor...@gmail.com> wrote:
> Hi,
> I am a beginner in VPP and DPDK stuff. I am trying to create a high
> performance AWS VM which should do IPsec tunneling.
>
> The IPsec traffic is running well, but I cannot exceed 8 Gbps of
> throughput and I cannot convince VPP to beat "ip xfrm" in terms of
> IPsec throughput.
>
> When VPP starts, I always get this warning:
>
> dpdk/cryptodev [warn ]: dpdk_cryptodev_init: Not enough cryptodev
> resources
>
> whatever CPUs I have enabled.
>
> If I specify
> vdev crypto_aesni_mb
> or
> vdev crypto_aesni_gcm
> in the dpdk section of the startup.conf file, I always hit this error:
> 0: dpdk_config: rte_eal_init returned -1
>
> I am using Ubuntu 20.04 LTS and the CPU flags are:
>
> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
> pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm
> constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid
> aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1
> sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand
> hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust
> bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx
> smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1
> xsaves ida arat pku ospke
>
> Can somebody tell me what I am missing? Or how can I find the right
> configuration?
>
> Thank you a lot,
> Gabi