SLUB's implementation of kmem_cache_shrink skips nodes that have
nr_partial=0, because such nodes surely have no empty slabs to free.
However, this check is done without holding any locks, so it can race
with a concurrent kfree adding an empty slab to a partial list. As a
result, a just-shrunk cache can still contain empty slabs.
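
For illustration, a rough (non-literal) timeline of the window; the
free-side details are simplified:

	__kmem_cache_shrink()             kfree() on another CPU
	---------------------             ----------------------
	if (!n->nr_partial)
	        continue;  /* node skipped */
	                                  frees the last object in a slab,
	                                  takes n->list_lock and puts the
	                                  now-empty slab on n->partial

The shrink pass has already moved past the node, so the empty slab
survives the shrink.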

This is unacceptable for kmemcg, which needs to be sure that no empty
slabs remain on dead memcg caches after kmem_cache_shrink has been
called; otherwise we may leak a dead cache.

Let's fix this race by checking whether the node's partial list is
empty under node->list_lock. Since the nr_partial!=0 branch of
kmem_cache_shrink does nothing if the list is empty, we can simply
remove the nr_partial=0 check.
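
For reference, a sketch of the per-node body after the change
(simplified, not the literal patched function): the partial list is only
walked under n->list_lock, so an empty list just means the traversal
does nothing, and the unlocked nr_partial check saved nothing but a
lock acquisition:

	spin_lock_irqsave(&n->list_lock, flags);
	/*
	 * Walk the partial list under the lock.  A slab that a racing
	 * kfree has just emptied is visible here; if the list is empty,
	 * the loop body never runs, so no separate nr_partial check is
	 * needed.
	 */
	list_for_each_entry_safe(page, t, &n->partial, lru)
		list_move(&page->lru, slabs_by_inuse + page->inuse);
	spin_unlock_irqrestore(&n->list_lock, flags);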

Signed-off-by: Vladimir Davydov <vdavy...@parallels.com>
Reported-by: Joonsoo Kim <iamjoonsoo....@lge.com>
---
 mm/slub.c |    3 ---
 1 file changed, 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 67da14d9ec70..891ac6cd78cc 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3397,9 +3397,6 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 
        flush_all(s);
        for_each_kmem_cache_node(s, node, n) {
-               if (!n->nr_partial)
-                       continue;
-
                for (i = 0; i < objects; i++)
                        INIT_LIST_HEAD(slabs_by_inuse + i);
 
-- 
1.7.10.4
