Hi,

On 12.02.2013 23:49, Mike Mestnik wrote:
> As indicated, this happens after the connection and thus there "can"
> be plenty of entropy even if the daemon is started when there is
> not. You can even create or push entropy by pinging the host at
> irregular intervals or a variety of other activities. You can have
> the initrd hit random.org a few times.

I don't doubt that there can be enough entropy. I still want to make
sure there actually is enough.
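(For what it's worth, the kind of crude check I have in mind is
something like the following Python sketch; the 192-bit threshold is
an arbitrary assumption of mine, not a recommendation:)

    # Read the kernel's entropy estimate for the input pool.
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        avail = int(f.read())
    if avail < 192:  # assumed threshold
        print("entropy estimate looks low: %d bits" % avail)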
> A really good solution can be employed if you have an HA setup: once
> past the point of loading the stored entropy, urandom can be
> securely served out to the other node over a local network or serial
> connection.

Agreed, but this is more complicated than just restoring a seed file.

> I'm not sure that having static entropy in an initrd would be good
> either. You wouldn't gain entropy, only randomness. I'd be concerned
> about using the same initrd image more than 100 times or so. You
> could regenerate the initrd, though, but this starts to fall into
> the category of custom solutions.

A static seed is useless, I know that much. Regenerating the initrd
is not necessary, as a second cpio archive containing the seed file
can simply be appended to the original initrd archive (see
http://www.kernel.org/doc/Documentation/early-userspace/buffer-format.txt
and the first sketch at the end of this mail). Some bootloaders even
support handing multiple archive files to the kernel as the initrd
(syslinux/extlinux for some time now, grub since version 2.00).

> entropy_avail has to do with the number of bytes one can read from
> random. Values read from urandom are based off a 1k seed; knowing
> how much this seed has ever been populated is not exported, nor is
> the number of times the current seed has been given out to users.

I've been reading parts of drivers/char/random.c and I haven't seen
what you describe. Can you point me to your source? From my
understanding, entropy is collected in the input_pool and from there
distributed to both the blocking and the nonblocking pool (in a way
that won't allow one to starve the other). entropy_avail gives an
entropy estimate for the input_pool and is thus also important for
urandom.

>>> Only a dropbear developer would be able to insist that urandom
>>> is only used when appropriate. Only you can prevent the
>>> re-generation of ssh host keys.
>>
>> dropbear is especially targeted at embedded devices. I assume that
>> gathering enough randomness from /dev/random is especially hard
>> for those devices. The dropbear changelog
>> (https://matt.ucc.asn.au/dropbear/CHANGES) contains an entry
>> regarding the switch from /dev/random to /dev/urandom in version
>> 0.50:
>>
>>   - Use /dev/urandom by default, since that's what everyone does
>>     anyway
>
> This is where the need for the kernel to export information on the
> viability of urandom would come into play. For example, dropbear
> could kick out new connections if there was not yet enough seed
> data; after a few attempts there would be.

Integrating mechanisms that allow dropbear to check whether enough
entropy is available (and otherwise try to generate some) might be
possible. However, I would rather rely on well-known methods of
ensuring good randomness. Restoring a seed file and then using
urandom fulfills that requirement.

> Since startup the system is constantly collecting entropy; network
> traffic sounds like the biggest source of entropy for your
> configuration. If this host is on a segment with a handful of
> Windows machines, then waiting a minute or two should generate more
> than enough entropy.

I agree that waiting some time should solve the problem. However,
there is no way for me to know when enough time has passed and
opening a connection is secure (short of polling entropy_avail; see
the second sketch at the end of this mail).

Regards
Lukas
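P.S. To make the appended-cpio idea concrete, here is a rough Python
sketch of the first approach. The seed file path, the in-archive name
etc/random-seed, and the file mode are assumptions of mine; the newc
header layout follows the buffer-format.txt document linked above. I
have only considered the uncompressed case here; whether an
uncompressed archive may be appended to a compressed initrd depends
on the kernel.

    import os, sys

    def newc_entry(name, data, mode=0o100600):
        # One newc cpio entry: "070701" magic followed by thirteen
        # 8-digit hex fields, then the NUL-terminated name and the
        # data, each padded to a multiple of 4 bytes.
        namebytes = name.encode() + b"\0"
        header = b"070701" + b"".join(b"%08X" % v for v in (
            0,               # c_ino
            mode,            # c_mode
            0, 0,            # c_uid, c_gid
            1,               # c_nlink
            0,               # c_mtime
            len(data),       # c_filesize
            0, 0, 0, 0,      # c_devmajor/minor, c_rdevmajor/minor
            len(namebytes),  # c_namesize (includes trailing NUL)
            0,               # c_check (unused with the 070701 magic)
        ))
        out = header + namebytes
        out += b"\0" * (-len(out) % 4)   # align name field
        out += data
        out += b"\0" * (-len(data) % 4)  # align file data
        return out

    def seed_archive(seedfile):
        # Wrap the seed file in a stand-alone cpio archive.
        with open(seedfile, "rb") as f:
            data = f.read()
        return (newc_entry("etc/random-seed", data)
                + newc_entry("TRAILER!!!", b"", mode=0))

    if __name__ == "__main__":
        # usage: append_seed.py <seedfile> <initrd>
        size = os.path.getsize(sys.argv[2])
        with open(sys.argv[2], "ab") as initrd:
            # zero-pad to a 4-byte boundary; the kernel skips zero
            # bytes between concatenated archives
            initrd.write(b"\0" * (-size % 4))
            initrd.write(seed_archive(sys.argv[1]))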
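As for knowing when enough time has passed: the best I can come up
with is watching entropy_avail, along the lines of this second
sketch. Again, the seed file path and the 256-bit threshold are
assumptions of mine. Note that writing the seed to /dev/urandom mixes
it into the pools but does not credit any entropy to the kernel's
estimate; crediting would require the RNDADDENTROPY ioctl (root
only).

    import time

    ENTROPY_AVAIL = "/proc/sys/kernel/random/entropy_avail"
    SEED_FILE = "/var/lib/random-seed"  # assumed path

    def restore_seed():
        # Mix the saved seed into the kernel pools. This perturbs
        # /dev/urandom output but does NOT raise entropy_avail.
        with open(SEED_FILE, "rb") as seed, \
             open("/dev/urandom", "wb") as rnd:
            rnd.write(seed.read())

    def wait_for_entropy(threshold=256, poll=1.0):
        # Block until the kernel's estimate for the input pool
        # reaches the threshold (in bits).
        while True:
            with open(ENTROPY_AVAIL) as f:
                if int(f.read()) >= threshold:
                    return
            time.sleep(poll)

    if __name__ == "__main__":
        restore_seed()
        wait_for_entropy()
        # only now start the service that needs good randomness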