On Mon, Mar 18, 2019 at 1:16 PM Andy Lutomirski <l...@kernel.org> wrote:
> On Mon, Mar 18, 2019 at 2:41 AM Elena Reshetova
> <elena.reshet...@intel.com> wrote:
> > Performance:
> >
> > 1) lmbench: ./lat_syscall -N 1000000 null
> > base:                      Simple syscall: 0.1774 microseconds
> > random_offset (rdtsc):     Simple syscall: 0.1803 microseconds
> > random_offset (rdrand):    Simple syscall: 0.3702 microseconds
> >
> > 2) Andy's tests, misc-tests: ./timing_test_64 10M sys_enosys
> > base:                      10000000 loops in 1.62224s = 162.22 nsec / loop
> > random_offset (rdtsc):     10000000 loops in 1.64660s = 164.66 nsec / loop
> > random_offset (rdrand):    10000000 loops in 3.51315s = 351.32 nsec / loop
>
> Egads!  RDTSC is nice and fast but probably fairly easy to defeat.
> RDRAND is awful.  I had hoped for better.
RDRAND can also fail.

> So perhaps we need a little percpu buffer that collects 64 bits of
> randomness at a time, shifts out the needed bits, and refills the
> buffer when we run out.

I'd like to avoid saving the _exact_ details of where the next offset
will be, but if nothing else works, this should be okay. We can use 8
bits at a time and call prandom_u32() every 4th call. Something like
prandom_bytes(), but where it doesn't throw away the unused bytes.

-- 
Kees Cook
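
For illustration only, a rough sketch of the per-CPU buffer approach being
discussed, combined with the "8 bits per call, refill from prandom_u32()
every 4th call" idea. This is untested, and the names (kstack_rand_buf,
kstack_rand_left, kstack_rand8) are hypothetical, not existing kernel APIs:

/*
 * Hypothetical sketch: a per-CPU buffer that hands out 8 random bits
 * per system call and refills from prandom_u32() every 4th call.
 */
#include <linux/percpu.h>
#include <linux/random.h>

static DEFINE_PER_CPU(u32, kstack_rand_buf);	/* cached random bits */
static DEFINE_PER_CPU(u8, kstack_rand_left);	/* bytes left in the buffer */

/* Assumed to run from syscall entry, i.e. with preemption disabled. */
static u8 kstack_rand8(void)
{
	u32 buf = this_cpu_read(kstack_rand_buf);
	u8 left = this_cpu_read(kstack_rand_left);
	u8 ret;

	if (!left) {
		/* Refill: one prandom_u32() covers the next four calls. */
		buf = prandom_u32();
		left = sizeof(buf);
	}

	ret = buf & 0xff;			/* hand out the low 8 bits */
	this_cpu_write(kstack_rand_buf, buf >> 8);
	this_cpu_write(kstack_rand_left, left - 1);

	return ret;
}

The hot path is a couple of per-CPU reads and writes plus a shift, but the
upcoming offsets do sit in memory until they are consumed, which is exactly
the exposure concern raised above.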