On Wed, 24 Jul 2024 21:14:30 +0200
Mattias Rönnblom <hof...@lysator.liu.se> wrote:

> >> Ideally, you want to avoid system calls on lcore workers doing packet
> >> processing. If you have to make system calls (which I believe is the
> >> case here), it's better to make a single, simple call, and not too often.
> >>
> >> getentropy() seems to need about 800 core clock cycles on my x86_64, on
> >> average. (rte_rand() needs ~11 cc/call.) 800 cc is not too horrible, but
> >> system calls tend to have some pretty bad tail latencies.
> >>
> >> To improve efficiency, one could do a getentropy() on a relatively large
> >> buffer, and cache the result on a per-lcore basis, amortizing the system
> >> call overhead over many calls.
> >>
> >> You still have the tail latency issue to deal with. We could have a
> >> control thread providing entropy for the lcores, but that seems like
> >> massive overkill.  
> > 
> > 
> > getrandom() is a vDSO call on current kernels, and it manages use of
> > entropy across multiple sources. If you are doing lots of key
> > generation, you don't want to hit the hardware every time.
> > 
> > https://lwn.net/Articles/974468/
> 
> If I understand things correctly, the getrandom() vDSO support was 
> mainlined *today*, so you need to be current indeed to have a vDSO 
> getrandom(). :)

Yes, it is headed for 6.11, but I doubt that any reasonable workload
is going to be constrained by crypto key generation.

> 
> The above benchmark (rand_perf_autotest with rte_rand() implemented with 
> getentropy()) was run on Linux 5.15 and glibc 2.35, so a regular system 
> call was used.
> 
> (getentropy() delegates to getrandom(), so the performance is the same.)

I would trust the upstream kernel support for secure random more than
anything DPDK could develop. As soon as we get deeper into crypto it
opens up a whole new security domain and attack surface.
