gen_pool_alloc_algo() iterates over the chunks of a pool trying to find
a contiguous block of memory that satisfies the allocation request.

The shortcut

        if (size > atomic_read(&chunk->avail))
                continue;

makes the loop skip over chunks that do not have enough bytes left to
fulfill the request. There are two situations, though, where an
allocation might still fail:

(1) The available memory is not contiguous, i.e. the request cannot be
fulfilled due to external fragmentation.

(2) A race condition: another thread runs the same code concurrently and
is quicker to grab the available memory.

In those situations, the loop calls pool->algo() to search the entire
chunk, and pool->algo() indicates a failed search by returning a value
that is >= end_bit. This return value is then assigned to start_bit.
The variables start_bit and end_bit describe the range that should be
searched, and this range must be reset for every chunk that is
searched. Today, the code fails to reset start_bit to 0. As a result,
the stale value carries over and the prefixes of subsequent chunks are
never searched, so memory allocations can fail even though there is
plenty of room left in those prefixes.
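
To illustrate the effect, here is a stand-alone, user-space sketch of
the problem (this is not the kernel code itself: first_fit() and the
plain bitmaps are simplified stand-ins for pool->algo() and the real
chunk bitmaps, but the loop mirrors the structure of
gen_pool_alloc_algo()):

        /*
         * Each "chunk" is a 32-slot occupancy bitmap; a set bit means
         * the slot is taken.  first_fit() mimics the search done by
         * pool->algo(): it returns the first index in
         * [start_bit, end_bit) that begins a run of nbits free slots,
         * or end_bit on failure.
         */
        #include <stdio.h>

        static int first_fit(unsigned int bits, int end_bit,
                             int start_bit, int nbits)
        {
                for (int i = start_bit; i + nbits <= end_bit; i++) {
                        int j;

                        for (j = 0; j < nbits; j++)
                                if (bits & (1u << (i + j)))
                                        break;
                        if (j == nbits)
                                return i;       /* found a free run */
                }
                return end_bit;                 /* search failed */
        }

        int main(void)
        {
                unsigned int chunks[] = {
                        0x00ffffff,     /* chunk 0: only bits 24..31 free */
                        0x00000000,     /* chunk 1: completely free */
                };
                int nbits = 16;         /* request 16 contiguous slots */
                int start_bit = 0;      /* buggy: set once, outside the loop */

                for (int c = 0; c < 2; c++) {
                        int end_bit = 32;

                        /* start_bit = 0;  <-- the fix: reset per chunk */
                        start_bit = first_fit(chunks[c], end_bit,
                                              start_bit, nbits);
                        if (start_bit >= end_bit) {
                                printf("chunk %d: no fit (start_bit is now %d)\n",
                                       c, start_bit);
                                continue;
                        }
                        printf("chunk %d: allocated at bit %d\n",
                               c, start_bit);
                        return 0;
                }
                printf("allocation failed although chunk 1 was empty\n");
                return 1;
        }

As written, the program reports a failed allocation: chunk 0 cannot
satisfy the request because its free space is too small, and the stale
start_bit of 32 then makes the search skip chunk 1 entirely.
Uncommenting the reset (the equivalent of the one-line fix below) lets
the second iteration start its search at bit 0 again, and the
allocation from chunk 1 succeeds.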

Reviewed-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Fixes: 7f184275aa30 ("lib, Make gen_pool memory allocator lockless")
Cc: Andi Kleen <a...@linux.intel.com>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Arnd Bergmann <a...@arndb.de>
Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Dan Williams <dan.j.willi...@intel.com>
Cc: David Riley <davidri...@chromium.org>
Cc: Eric Miao <eric.y.m...@gmail.com>
Cc: Grant Likely <grant.lik...@linaro.org>
Cc: Greg Kroah-Hartman <gre...@linuxfoundation.org>
Cc: Haojian Zhuang <haojian.zhu...@gmail.com>
Cc: Huang Ying <ying.hu...@intel.com>
Cc: Jaroslav Kysela <pe...@perex.cz>
Cc: Kevin Hilman <khil...@deeprootsystems.com>
Cc: Laura Abbott <lau...@codeaurora.org>
Cc: Liam Girdwood <lgirdw...@gmail.com>
Cc: Mark Brown <broo...@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Cc: Mauro Carvalho Chehab <m.che...@samsung.com>
Cc: Olof Johansson <o...@lixom.net>
Cc: Ritesh Harjain <ritesh.harj...@gmail.com>
Cc: Russell King <li...@arm.linux.org.uk>
Cc: Sekhar Nori <nsek...@ti.com>
Cc: Takashi Iwai <ti...@suse.de>
Cc: Thadeu Lima de Souza Cascardo <casca...@linux.vnet.ibm.com>
Cc: Thierry Reding <thierry.red...@gmail.com>
Cc: Vinod Koul <vinod.k...@intel.com>
Cc: Vladimir Zapolskiy <vladimir_zapols...@mentor.com>
Cc: Will Deacon <will.dea...@arm.com>
Signed-off-by: Daniel Mentz <danielme...@google.com>
---
 lib/genalloc.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/lib/genalloc.c b/lib/genalloc.c
index 0a11396..144fe6b 100644
--- a/lib/genalloc.c
+++ b/lib/genalloc.c
@@ -292,7 +292,7 @@ unsigned long gen_pool_alloc_algo(struct gen_pool *pool, size_t size,
        struct gen_pool_chunk *chunk;
        unsigned long addr = 0;
        int order = pool->min_alloc_order;
-       int nbits, start_bit = 0, end_bit, remain;
+       int nbits, start_bit, end_bit, remain;
 
 #ifndef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG
        BUG_ON(in_nmi());
@@ -307,6 +307,7 @@ unsigned long gen_pool_alloc_algo(struct gen_pool *pool, size_t size,
                if (size > atomic_read(&chunk->avail))
                        continue;
 
+               start_bit = 0;
                end_bit = chunk_size(chunk) >> order;
 retry:
                start_bit = algo(chunk->bits, end_bit, start_bit,
-- 
2.8.0.rc3.226.g39d4020
