On Fri, Aug 26, 2016 at 8:51 AM, Eric Dumazet <eric.duma...@gmail.com> wrote:
> From: Eric Dumazet <eduma...@google.com>
>
> If vmalloc() was successful, do not attempt a kmalloc_array()
>
> Fixes: 4cf0b354d92e ("rhashtable: avoid large lock-array allocations")
> Reported-by: CAI Qian <caiq...@redhat.com>
> Signed-off-by: Eric Dumazet <eduma...@google.com>
> Cc: Florian Westphal <f...@strlen.de>
> ---
>  lib/rhashtable.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/lib/rhashtable.c b/lib/rhashtable.c
> index 5ba520b544d7..56054e541a0f 100644
> --- a/lib/rhashtable.c
> +++ b/lib/rhashtable.c
> @@ -77,17 +77,18 @@ static int alloc_bucket_locks(struct rhashtable *ht, struct bucket_table *tbl,
>  	size = min_t(unsigned int, size, tbl->size >> 1);
>
>  	if (sizeof(spinlock_t) != 0) {
> +		tbl->locks = NULL;
>  #ifdef CONFIG_NUMA
>  		if (size * sizeof(spinlock_t) > PAGE_SIZE &&
>  		    gfp == GFP_KERNEL)
>  			tbl->locks = vmalloc(size * sizeof(spinlock_t));
> -		else
>  #endif
Not directly about your patch, but why do we have this CONFIG_NUMA guard around the vmalloc() path in the first place? I think that #ifdef is the real cause of the bug. :-P
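
If I'm reading the pre-fix code correctly, the problem is that the `else`
sitting just above the #endif only covers the single statement that follows
it, so the later kmalloc_array() still ran and overwrote (and leaked) a
pointer that vmalloc() had already returned -- which is what the changelog
("If vmalloc() was successful, do not attempt a kmalloc_array()") describes.
Below is a minimal user-space sketch of that control-flow shape, not the
kernel code itself: CONFIG_BIG_ALLOC, alloc_locks_buggy() and
alloc_locks_fixed() are made-up names, and plain malloc() stands in for
vmalloc()/kmalloc_array().

#include <stdio.h>
#include <stdlib.h>

#define CONFIG_BIG_ALLOC 1	/* stand-in for CONFIG_NUMA */

/* Buggy shape: the `else` under the #ifdef binds only to the next
 * statement, so the final allocation still runs even when the first
 * one succeeded, overwriting and leaking it. */
static void *alloc_locks_buggy(size_t size)
{
	void *locks = NULL;

#ifdef CONFIG_BIG_ALLOC
	if (size > 4096)
		locks = malloc(size);	/* "vmalloc" path */
	else
#endif
	if (size == 0)			/* the `else` attaches only to this if */
		size = 1;

	locks = malloc(size);		/* always executed: leaks the buffer above */
	return locks;
}

/* Fixed shape mirroring the patch: start from NULL, drop the #ifdef'd
 * `else`, and take the fallback allocation only if the pointer is
 * still NULL. */
static void *alloc_locks_fixed(size_t size)
{
	void *locks = NULL;

#ifdef CONFIG_BIG_ALLOC
	if (size > 4096)
		locks = malloc(size);	/* "vmalloc" path */
#endif
	if (size == 0)
		size = 1;

	if (!locks)
		locks = malloc(size);	/* fallback only when needed */
	return locks;
}

int main(void)
{
	void *a = alloc_locks_buggy(8192);	/* two allocations, one leaked */
	void *b = alloc_locks_fixed(8192);	/* single allocation */

	printf("buggy=%p fixed=%p\n", a, b);
	free(a);
	free(b);
	return 0;
}

The fixed variant follows the same idea as the patch: initialising the
pointer to NULL and guarding the fallback with a NULL check means the two
allocation paths no longer depend on a dangling `else` hidden behind an
#ifdef.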