Hi there,

This is a general idea rather than a specific suggestion. Since I don't think I can make tomorrow's meeting, perhaps people might like to discuss it there.
Reading the recent OpenVPN man page (--key-method 2 is the default in 2.x), it occurred to me that some types of daemons may allow a remote peer to exhaust system entropy as a form of DoS. I am aware of entropy gathering daemons, but not of anything that treats entropy as an input to overall system management (connection throttling, control group freezing, etc.). Has there been any attempt to integrate system entropy management into any major Linux distribution? Has Gentoo Hardened looked at the entropy impact of various packages? I think there could be some research potential here.

A first step might be to establish suitably automated mechanisms for formal testing of the entropy impact of a package (or package revision) under various test loads, with a view towards enhancing the visibility and ease of entropy management across Hardened Gentoo systems.

- I'm not sure whether there is an automated test suite within the Gentoo infrastructure at present. Some packages have built-in tests, and some have a USE flag to enable/disable them, but it seems difficult to re-run these tests after packages are emerged.
- There are also more general testing tools, such as the Apache project's 'ab', or 'bonnie++', that could be used for some types of packages.

While it is easy to monitor entropy over time (e.g. using rrdtool configured to source data from /proc/sys/kernel/random/entropy_avail), I have never heard of any attempt to monitor this availability over time and integrate it into system management. I have, however, heard of problems with relatively commonly deployed daemons (I forget which now, but not so long ago it happened to me too) where entropy exhaustion causes a complete functional DoS: the process blocks waiting for entropy that never becomes available. Perhaps a new control group implementation limiting the maximum entropy draw rate per namespace might also be a useful contribution to the kernel?
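To make the monitoring idea concrete, here is a minimal sketch (my own illustration, not an existing tool) of a sampler that reads the kernel's entropy estimate from /proc/sys/kernel/random/entropy_avail at a fixed interval; its output could be fed into rrdtool or any other time-series store:

```python
#!/usr/bin/env python3
"""Sample the kernel's available-entropy estimate over time (sketch)."""
import time

# Linux exposes its entropy-pool estimate (in bits) here.
ENTROPY_PATH = "/proc/sys/kernel/random/entropy_avail"


def read_entropy(path=ENTROPY_PATH):
    """Return the current entropy estimate in bits as an integer."""
    with open(path) as f:
        return int(f.read().strip())


def sample(n=5, interval=1.0, path=ENTROPY_PATH):
    """Collect n readings, one every `interval` seconds."""
    readings = []
    for _ in range(n):
        readings.append(read_entropy(path))
        time.sleep(interval)
    return readings


if __name__ == "__main__":
    # Print a few samples; a real collector would push these to rrdtool.
    print(sample(n=3, interval=1.0))
```

Graphing this alongside per-service test loads (ab runs against an OpenVPN or web daemon, say) would give a rough per-package entropy-draw profile.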
I suppose this sort of thing will tend to increase in importance as the density of services per physical system grows with the continuing trend towards high-density virtual hosting (whether container-based or paravirtualized).

First post here ... hope I'm not completely off track with this; just throwing the idea out there.

Cheers,
Walter