On Fri, 22 Jun 2018, Davidlohr Bueso wrote:

This slightly changes the gfp flags passed on to nested_table_alloc(), as it will now also use GFP_ATOMIC | __GFP_NOWARN. However, I consider this a positive consequence, for the same reasons we want nowarn semantics in bucket_table_alloc().

If this is not acceptable, we can just keep the caller's current semantics - the atomic flag could also be labeled 'rehash' or something, considering that it comes only from insert_rehash() when we get EAGAIN after trying to insert the first time:

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 9427b5766134..18740b052aec 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -172,17 +172,15 @@ static struct bucket_table *bucket_table_alloc(struct rhashtable *ht,
{
        struct bucket_table *tbl = NULL;
        size_t size, max_locks;
+       bool atomic = (gfp == GFP_ATOMIC);
        int i;

        size = sizeof(*tbl) + nbuckets * sizeof(tbl->buckets[0]);
-       if (gfp != GFP_KERNEL)
-               tbl = kzalloc(size, gfp | __GFP_NOWARN | __GFP_NORETRY);
-       else
-               tbl = kvzalloc(size, gfp);
+       tbl = kvzalloc(size, atomic ? gfp | __GFP_NOWARN : gfp);

        size = nbuckets;

-       if (tbl == NULL && gfp != GFP_KERNEL) {
+       if (tbl == NULL && atomic) {
                tbl = nested_bucket_table_alloc(ht, nbuckets, gfp);
                nbuckets = 0;
        }

