On Wed, Oct 18, 2017 at 11:30 PM, Tobin C. Harding <m...@tobin.cc> wrote:

> +static siphash_key_t ptr_secret __read_mostly;
> +static atomic_t have_key = ATOMIC_INIT(0);
> +
> +static void initialize_ptr_secret(void)
> +{
> +	if (atomic_read(&have_key) == 1)
> +		return;
> +
> +	get_random_bytes(&ptr_secret, sizeof(ptr_secret));
> +	atomic_set(&have_key, 1);
> +}
> +	case -EALREADY:
> +		initialize_ptr_secret();
> +		break;

Unfortunately the above is racy, and the spinlock you had before was
actually correct (though using an atomic inside a spinlock wasn't
strictly necessary). The race is that two callers might hit
initialize_ptr_secret() at the same time, and have_key will be zero at
the beginning for both. Then they'll both scribble over ptr_secret, and
might wind up using different values afterward if one finishes before
the other.

I see two ways of correcting this:

1) Go back to the spinlock yourself.

2) Use get_random_bytes_once(&ptr_secret, sizeof(ptr_secret)). I don't
   know lib/once.c especially well, but from a cursory look, it appears
   to be taking a spinlock too, which means you're probably good.

> +	if (atomic_read(&have_key) == 0) {
> +		random_ready.owner = NULL;
> +		random_ready.func = schedule_async_key_init;

You can probably take care of this part in the initialization:

static struct random_ready_callback random_ready = {
	.func = schedule_async_key_init
};

Alternatively, you could put the actual call to
add_random_ready_callback() in an init function, but maybe how you have
it is easier.

Jason