On 03/26/2014 06:12 PM, Sasha Levin wrote:
Commit 4af712e8df ("random32: add prandom_reseed_late() and call when
nonblocking pool becomes initialized") added a late reseed stage that
happens as soon as the nonblocking pool is marked as initialized.

This fails when the nonblocking pool gets initialized during
__prandom_reseed()'s call to get_random_bytes(). In that case we'd
double back into __prandom_reseed() in an attempt to do a late
reseed, deadlocking on 'lock' early in the boot process.

Instead, just avoid even attempting a reseed if a reseed is already
occurring.

Signed-off-by: Sasha Levin <sasha.le...@oracle.com>

Thanks for catching this! (If you want Dave to pick it up, please
also Cc netdev.)
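
To spell out the recursion that deadlocks, roughly (the random.c side
is paraphrased here, not exact call sites):

  __prandom_reseed(false)
    spin_lock_irqsave(&lock, flags)
    get_random_bytes(...)
      -> nonblocking pool becomes marked as initialized
         prandom_reseed_late()
           __prandom_reseed(true)
             spin_lock_irqsave(&lock, flags)   /* 'lock' already held */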

Why not use spin_trylock_irqsave() instead? That way, if the lock is
already held, we don't bother doing the same work twice and just
return.

I.e. like:

static void __prandom_reseed(bool late)
{
    int i;
    unsigned long flags;
    static bool latch = false;
    static DEFINE_SPINLOCK(lock);

    /* Asking for random bytes might result in bytes getting
     * moved into the nonblocking pool and thus marking it
     * as initialized. In this case we would double back into
     * this function and attempt to do a late reseed.
     * Ignore the pointless attempt to reseed again if we're
     * already waiting for bytes when the nonblocking pool
     * got initialized.
     */

    /* only allow initial seeding (late == false) once */
    if (!spin_trylock_irqsave(&lock, flags))
        return;

    if (latch && !late)
        goto out;

    latch = true;

    for_each_possible_cpu(i) {
        struct rnd_state *state = &per_cpu(net_rand_state, i);
        u32 seeds[4];

        get_random_bytes(&seeds, sizeof(seeds));
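        /* __seed() bumps each word up to at least the per-component
         * minimum (2, 8, 16, 128 below), keeping each LFSR component
         * out of its degenerate range. */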
        state->s1 = __seed(seeds[0],   2U);
        state->s2 = __seed(seeds[1],   8U);
        state->s3 = __seed(seeds[2],  16U);
        state->s4 = __seed(seeds[3], 128U);

        prandom_warmup(state);
    }
out:
    spin_unlock_irqrestore(&lock, flags);
}
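
This way the late reseed triggered from inside get_random_bytes()
simply fails the trylock and returns, so the recursion is broken
without carrying an extra 'reseeding' latch around. The only
behavioural difference I can see is that a caller racing against a
concurrent reseed skips its attempt instead of spinning on the lock,
which should be fine for a best-effort reseed.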

---
  lib/random32.c | 16 +++++++++++++++-
  1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/lib/random32.c b/lib/random32.c
index 1e5b2df..b59da12 100644
--- a/lib/random32.c
+++ b/lib/random32.c
@@ -241,14 +241,27 @@ static void __prandom_reseed(bool late)
  {
        int i;
        unsigned long flags;
-       static bool latch = false;
+       static bool latch = false, reseeding = false;
        static DEFINE_SPINLOCK(lock);

+       /*
+        * Asking for random bytes might result in bytes getting
+        * moved into the nonblocking pool and thus marking it
+        * as initialized. In this case we would double back into
+        * this function and attempt to do a late reseed.
+        * Ignore the pointless attempt to reseed again if we're
+        * already waiting for bytes when the nonblocking pool
+        * got initialized
+        */
+       if (reseeding)
+               return;
+
        /* only allow initial seeding (late == false) once */
        spin_lock_irqsave(&lock, flags);
        if (latch && !late)
                goto out;
        latch = true;
+       reseeding = true;

        for_each_possible_cpu(i) {
                struct rnd_state *state = &per_cpu(net_rand_state,i);
@@ -263,6 +276,7 @@ static void __prandom_reseed(bool late)
                prandom_warmup(state);
        }
  out:
+       reseeding = false;
        spin_unlock_irqrestore(&lock, flags);
  }

