nia wrote:
> For years, the development hivemind's advice has been "/dev/random bad!
> always urandom!", because having interfaces that unpredictably block on
> you is a terrible idea.
Yes, that has been the advice.  I think it's bad advice, but it's
understandable given the historical /dev/random behavior of "exhausting"
entropy, which has made it block too often and for no good reason, making
it more or less unusable in practice.  I'm hoping that now that NetBSD's
/dev/random no longer does that, we can get past that and start actually
using it in more places.

> As such, there's a hell of a lot of software out there that relies on
> urandom (... "GRND_INSECURE", KERN_ARND equivalent in behaviour on
> NetBSD) for tasks such as key generation, under the assumption it's
> Good Enough.  Tracking down all the instances of this (e.g. in scripts)
> would take you a long time.

Yes.  But I don't see how that's relevant to the question at hand:
whether getentropy() should behave like /dev/random or /dev/urandom.

> After some pushing, I'm fairly confident that NetBSD should enter
> userland with a good-as-possible state for all observable random
> number generators.  After several reboots, on a long lived system,
> it'll be in an even better state.  Blocking should /only/ be observed
> by applications that are started early in the life of a system that
> /cannot/ provide enough entropy (because the hardware is old or very
> low end, with few useful sources), or a system that's been initialized
> improperly (with a bad on-disk seed).  In this case the kernel makes
> sure the operator is aware that the system is in a dangerous state.

I actually agree with all of this.  I also think one of the ways the
kernel should make sure the operator is aware that the system is in a
dangerous state is by making getentropy() block.

> I think the naming (INSECURE is deeply scary) and general design of
> the Linux interface is unfortunate.  It pushes people towards bad
> things, and misconceptions about how randomness actually works.

The Linux interfaces confuse me.  I just looked at the getrandom(2) man
page on a Debian GNU/Linux 10 system, and it doesn't even mention an
INSECURE flag.
The man page says getrandom() uses "the same source as the /dev/urandom
device" by default, and that you need to specify GRND_RANDOM to get
/dev/random.  This seems to be at odds with what Taylor wrote earlier:

  /*
   * Recommended default usage -- may block once at boot time;
   * otherwise never blocks.  Limited only to 33554431 bytes.
   */
  if (getrandom(buf, buflen, 0) == -1)
          err(...);

When I suggested making getentropy() equivalent to getrandom(..., 0)
earlier, I was referring to the getrandom() semantics from Taylor's
mail, not those in the Debian 10 man page.

> If implemented as described, GRND_INSECURE would not be particularly
> scary on a typical NetBSD system.

Maybe, but an interface documented as providing "high quality entropy"
needs to be non-scary even on non-typical systems.

> Of course, none of this is at all helpful if there's doubt as to
> whether the trusted sources of randomness actually can't be trusted
> (plausible when running a VM on an untrusted host machine, less
> plausible but still possible otherwise).
>
> In which case, we're all screwed, regardless of our interfaces.

Of course.  But the way I read the specification of getentropy(), it
implies "trusted", and that of /dev/urandom implies "non-trusted", so
you can't implement the former in terms of the latter.

-- 
Andreas Gustafsson, g...@gson.org