Nobody <nob...@nowhere.com> writes:

>>> If you're going to read from /dev/urandom, limit it to a few bytes per
>>> minute, not per request.

>> That's really not going to help you.

> In what way?
>
> If I need security, I'll use /dev/random or /dev/urandom. If I don't, I'll
> save the real entropy for something which needs it.
I just mean that if /dev/urandom has enough internal state, then within
practical bounds its output is effectively random no matter how much you read
from it. Did you look at the paper I linked?

"Saving" the "real entropy" isn't feasible, since the maximum capacity of the
two "real" entropy pools is 4096 bits each. They will both fill pretty quickly
on an active system. Reading /dev/urandom will empty the primary pool, but
/dev/random is fed by the secondary pool, which receives entropy from both the
primary pool and physical sources. If you read too fast from /dev/urandom, the
worst that happens (if I understand correctly) is that the rate at which you
can read from /dev/random is cut in half and it will block more often.

If that's a serious issue for your application, you should probably rethink
your approach and get an HSM.
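If you want to watch this happen, here's a rough sketch (Linux-only, and it
assumes the old /proc/sys/kernel/random interface; the entropy_avail() helper
is just something I made up for illustration): read a chunk from the
non-blocking pool via os.urandom() and see how the kernel's entropy estimate
dips and then recovers as events come in.

    # Minimal sketch, Linux only: watch the kernel's entropy estimate while
    # pulling bytes from the non-blocking pool via os.urandom().
    # Assumes the old /proc/sys/kernel/random interface; the numbers you see
    # (and whether they move at all) depend on your kernel version.
    import os
    import time

    ENTROPY_AVAIL = "/proc/sys/kernel/random/entropy_avail"

    def entropy_avail():
        """Return the kernel's current entropy estimate, in bits."""
        with open(ENTROPY_AVAIL) as f:
            return int(f.read())

    print("before:", entropy_avail(), "bits")
    data = os.urandom(4096)        # read 4 KiB from the non-blocking pool
    print("after: ", entropy_avail(), "bits")
    time.sleep(5)                  # let interrupt/disk events refill the pool
    print("later: ", entropy_avail(), "bits")

On an active box you should see the estimate climb back toward its maximum
within a few seconds, which is the "they will both fill pretty quickly" point
above.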