> -----Original Message-----
> From: Florian Westphal <f...@strlen.de>
> Sent: Monday, April 22, 2019 11:16 PM
> To: Vakul Garg <vakul.g...@nxp.com>
> Cc: netdev@vger.kernel.org
> Subject: Re: ipsec tunnel performance degrade
> 
> Vakul Garg <vakul.g...@nxp.com> wrote:
> > Post kernel 4.9, I am experiencing more than 50% degradation in ipsec
> > performance on my arm64 based systems (with on-chip crypto accelerator).
> > (We use only LTS kernels.) My understanding is that it is mainly due to
> > the xfrm flow cache removal in version 4.12.
> 
> Yes, likely.
> 
> > I am not sure whether any subsequent work could recover the lost
> > performance.
> > With kernel 4.19, I see that xfrm_state_find() is taking a lot of cpu
> > (more than 15%).
> 
> Can you share details about the setup?
> 
> I.e., how many policies, states etc.?

My setup has two Ethernet interfaces. I am creating 64 IPsec tunnels for
encapsulation, using 64 policies and 64 SAs.
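
(As a quick sanity check of those numbers I just count the entries that
iproute2 dumps; the grep pattern assumes each dumped entry starts with a
"src ..." line:)

# count installed SAs and policies
ip xfrm state  | grep -c '^src'
ip xfrm policy | grep -c '^src'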

> Do you use xfrm interfaces?

I don't think so. I use setkey to create the policies/SAs.
Can you please give me a hint on how to check for them?
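For reference, each tunnel is installed with setkey entries along these
lines (only the outbound direction shown; the addresses, SPI, algorithms
and keys below are placeholders, not my real values):

setkey -c <<'EOF'
# ESP SA in tunnel mode between the two gateways (placeholder keys)
add 192.0.2.1 198.51.100.1 esp 0x201 -m tunnel
    -E aes-cbc   0x000102030405060708090a0b0c0d0e0f
    -A hmac-sha1 0x0102030405060708090a0b0c0d0e0f1011121314;

# policy steering traffic between the inner subnets into that tunnel
spdadd 10.0.1.0/24 10.0.2.0/24 any -P out ipsec
    esp/tunnel/192.0.2.1-198.51.100.1/require;
EOF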

> 
> > Further, perf shows that a lot of atomic primitives such as
> > __ll_sc___cmpxchg_case_mb_4() and
> > __ll_sc_atomic_sub_return() are being invoked. On a 16-core system, they
> > consume more than 30% of the cpu.
> 
> That's not good; perhaps we should look at pcpu refcounts for the xfrm state
> structs.

What other data can I collect?
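For reference, the perf numbers above came from system-wide runs roughly
like this while traffic was flowing (options from memory, not verbatim):

# profile with call graphs, then inspect hot symbols
perf record -a -g -- sleep 10
perf report --no-children

# coarse counter view
perf stat -a -e cycles,instructions,cache-misses -- sleep 10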
Thanks.
