>>>>> "Ian" == Ian Jackson <ijack...@chiark.greenend.org.uk> writes:
    Ian> Sam Hartman writes ("Re: FYI/RFC: early-rng-init-tools"):
    >> Ben Hutchings <b...@decadent.org.uk> writes:
    >> > [Someone:]
    >> > > The additional entropy gathered is for extra safety; it is
    >> > > not *depended* on for basic security assumptions.
    >> > [...]
    >> > It is, because the kernel is told to treat it as providing
    >> > a certain number of bits of entropy.
    >>
    >> I see no problem crediting the secret stored across the reboot
    >> with the entropy in the pool at the time of shutdown.

    Ian> Indeed.

    Ian> AIUI the reason given for not doing this by default is that
    Ian> nowadays many installations are VMs of some kind which may be
    Ian> cloned between shutdown and startup.

Right, and I'm not talking about changing the default; I'm simply
saying that having a simple way to change that default on a given
system, by installing a package, is important.

    >> I agree that the credits for the entropy of the additional
    >> information added may be too high.
    >>
    >> I'm skeptical that the actual entropy credits matter much once
    >> you have *enough*, but I agree that the /dev/random interface
    >> does depend on that, and the proposal as described may be
    >> violating that assumption.

    Ian> Linux /dev/random's notion that there is any difference
    Ian> between `enough' entropy and `more' is wrong.  In particular
    Ian> its idea that taking PRNG output out of /dev/random could
    Ian> cause degradation of any kind is wrong.

I absolutely agree with you, and I think a lot of people in the Linux
community agree with you; hence the getrandom syscall.

I'm not a designer of cryptographic primitives, but I am qualified to
evaluate their use in protocols.  Having learned from my own mistakes
and others', I'm very nervous about violating the explicit security
assumptions or claims of an interface.  If you asked me whether we
should make available an interface like /dev/random that cared about
entropy beyond "enough," I'd say "absolutely not."

However, given that we have such an interface, should we violate its
assumptions?  I can't see the harm, but I've been burned too many
times before by harm I didn't see.  I'd do it on my own system.  I
might or might not flag it in a security review of someone else's
design, depending on circumstances.  Once someone flags it (and Ben
has flagged it), I'm reluctant to dismiss it without careful
consideration.

But since we think good enough is good enough, what's the big deal?
Credit the secret seed carried across the boot, but don't credit much
for the low-entropy stuff we're mixing in additionally.
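
To make the mechanics concrete: the crediting Ben objects to happens
through the RNDADDENTROPY ioctl, whereas a plain write(2) to
/dev/random or /dev/urandom mixes the bytes in without crediting
anything.  Here's a minimal sketch of the crediting path; the seed
file location and the full 8-bits-per-byte credit are my own
illustrative assumptions, not necessarily what early-rng-init-tools
does:

/* seed-credit.c -- sketch: load a saved seed and credit it.
 * Assumed for illustration: the seed lives in /var/lib/random-seed,
 * and we credit the full 8 bits per byte, which is only defensible
 * if the file is secret, written at shutdown, and never replayed
 * (e.g. by a cloned VM image).
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/random.h>     /* RNDADDENTROPY, struct rand_pool_info */

#define SEED_BYTES 512

int main(void)
{
    unsigned char seed[SEED_BYTES];

    int sfd = open("/var/lib/random-seed", O_RDONLY);
    if (sfd < 0 || read(sfd, seed, sizeof seed) != (ssize_t)sizeof seed) {
        perror("read seed");
        return 1;
    }
    close(sfd);

    /* struct rand_pool_info ends in a flexible buffer. */
    struct rand_pool_info *info = malloc(sizeof *info + sizeof seed);
    if (!info) {
        perror("malloc");
        return 1;
    }
    info->entropy_count = 8 * SEED_BYTES;   /* credit, in bits */
    info->buf_size = sizeof seed;
    memcpy(info->buf, seed, sizeof seed);

    /* A plain write(2) here would mix without crediting; the ioctl
     * (which needs CAP_SYS_ADMIN) mixes *and* credits. */
    int rfd = open("/dev/urandom", O_WRONLY);
    if (rfd < 0 || ioctl(rfd, RNDADDENTROPY, info) < 0) {
        perror("RNDADDENTROPY");
        return 1;
    }
    close(rfd);
    free(info);
    return 0;
}

The entropy_count there is exactly the judgment call under
discussion: it's safe only if the seed really was secret and is never
replayed, which is where Ian's cloned-VM caveat bites.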
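
And for contrast, the getrandom interface I mentioned, which drops
entropy accounting entirely: it blocks only until the pool has been
initialised once, then never blocks or "depletes".  A sketch,
assuming glibc 2.25 or later for <sys/random.h>:

#include <stdio.h>
#include <sys/random.h>

int main(void)
{
    unsigned char key[32];

    /* flags == 0: wait for initial seeding, then return strong
     * bytes with no entropy accounting at all. */
    if (getrandom(key, sizeof key, 0) != (ssize_t)sizeof key) {
        perror("getrandom");
        return 1;
    }
    printf("got %zu key bytes\n", sizeof key);
    return 0;
}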