Valery Smyslov writes:
> > There is no point in one end having, for example, 10 fast CPUs sending
> > traffic over 10 Child SAs when the receiving end only has two CPUs
> > which are about the same as the other end's CPUs. The receiving end will
> > not be able to keep up with the traffic it is getting in, thus it will
> > drop packets as it can't decrypt them fast enough.
> 
> I'm not so sure. Consider the situation where one host has a single HSM
> which is optimized for high-performance crypto operations,
> while the other is a general-purpose server with several tens of CPUs.
> In this situation the HSM beats any CPU in performance, so if the
> HSM can handle several SAs, it's beneficial to create as many SAs as
> it can handle and distribute those SAs over the CPUs on the other peer.

Whether an HSM is faster than a CPU depends on many factors. I have
seen many cases where, when a new CPU architecture came out, it was
again faster to do the crypto on the CPU than to offload it to the HSM,
because the interaction between the CPU and the HSM was too slow. Then
the HSM was upgraded and it was faster again, and so on.

I think they were always within an order of magnitude of each other,
i.e., the HSM was at most about 10 times faster than the CPU. There was
usually no need to make them any faster than that, as the line speeds
in use limited the throughput needed anyway.

But my experience with them is more than 10 years old, so this might
have changed lately.

The question is how many CPUs you need to saturate a 100 Gbit/s network
link compared to how many HSM CPUs you need. Is the difference between
them more than a factor of 10?
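As a back-of-the-envelope illustration of what I mean (the per-core
throughput here is just an assumed figure for the sake of the
arithmetic, not a measurement):

    # Rough sketch; per-core AES-GCM throughput is an assumed
    # illustrative value, not a measured one.
    link_gbit = 100                          # target link speed in Gbit/s
    per_core_gbyte_s = 2.0                   # assumed crypto throughput per core, GB/s
    per_core_gbit_s = per_core_gbyte_s * 8   # = 16 Gbit/s per core
    print(link_gbit / per_core_gbit_s)       # ~6.25 cores for the raw crypto alone

With those assumed numbers you would need on the order of 6-7 cores
just for the raw crypto, before counting any IPsec or network stack
overhead, and the corresponding figure for an HSM is what I do not
have.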

Do you have any real-world values for those? I.e., how fast can one
modern CPU do crypto (just plain crypto, no IPsec etc.), and how fast
can some modern crypto hardware do the same?
-- 
kivi...@iki.fi

_______________________________________________
IPsec mailing list
IPsec@ietf.org
https://www.ietf.org/mailman/listinfo/ipsec
