I don't see any general solution that is both correct and easy.
I don't think there is one.

In an ideal world all our computers would have a trusted source of true 
randomness. In practice that is not the case. Older computers don't have a 
hardware random number generator at all, and newer computers have one hidden 
inside a big chip which the more paranoid suspect could have been backdoored.

So the Linux kernel must monitor a set of events that it thinks are probably 
random (interrupt timings, for example) and for each event add some "entropy" 
to the pool. Once enough entropy has been collected the pool is considered 
ready. This immediately raises two failure modes.

1. If the "random" events don't come then the pool may never be considered 
ready.
2. If the "random" events aren't actually random then the "random" numbers 
returned to user-land may be predictable.
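
The kernel's tally of collected entropy is visible from user-land, which 
makes the "is the pool ready yet?" question something you can poke at. A 
minimal C sketch, assuming a Linux system with /proc mounted; note that 
exactly what this number means has changed across kernel versions, so treat 
it as a rough indicator rather than a promise:

    /* Sketch: peek at the kernel's entropy accounting via
     * /proc/sys/kernel/random/entropy_avail (see random(4)).
     * What the value means has varied across kernel versions. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");
        int bits = 0;

        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        if (fscanf(f, "%d", &bits) != 1) {
            fprintf(stderr, "unexpected format\n");
            fclose(f);
            return 1;
        }
        fclose(f);
        printf("kernel reports %d bits of entropy in the pool\n", bits);
        return 0;
    }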

There is no generally "correct" solution here. Every solution is a compromise 
between the risk of "random" data being predictable and the risk of systems 
failing to operate in a timely manner (or even at all) due to the 
unavailability of randomness.
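
From user-land the same compromise shows up as a choice of flags to 
getrandom(2). A hedged sketch, not a recommendation: it assumes Linux 3.17 
or later for getrandom() (glibc 2.25 for the wrapper) and Linux 5.6 or 
later for GRND_INSECURE.

    /* Sketch only: picking between the two risks from user-land.
     * getrandom(2) needs Linux >= 3.17; GRND_INSECURE needs >= 5.6. */
    #include <errno.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/random.h>

    #ifndef GRND_INSECURE
    #define GRND_INSECURE 0x0004    /* older glibc headers lack the flag */
    #endif

    int main(void)
    {
        unsigned char buf[16];

        /* Ask without blocking; EAGAIN means the pool isn't ready yet. */
        ssize_t n = getrandom(buf, sizeof buf, GRND_NONBLOCK);

        if (n < 0 && errno == EAGAIN) {
            /* Pick your poison:
             *   flags = 0             -> wait for the pool, maybe forever
             *   flags = GRND_INSECURE -> proceed with possibly weak data */
            n = getrandom(buf, sizeof buf, GRND_INSECURE);
        }
        if (n < 0) {
            perror("getrandom");
            return 1;
        }
        printf("got %zd bytes\n", n);
        return 0;
    }

Neither branch is "correct": the blocking call is the availability risk and 
the insecure call is the predictability risk, and all a program can do is 
decide which failure it would rather have.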


