From: Sahara <keun-o.p...@digital14.com>

Slab cache flags are exported to sysfs and may be modified from
userspace. Changing some of those flags triggers a call to the
calculate_sizes() function, because the new flag value can affect the
slab object size and layout, which means the kmem_cache may end up with
a different order and object count.
If such a flag is modified while CONFIG_SLAB_FREELIST_RANDOM is
enabled, freelist pointer corruption occurs.

 $ echo 0 > /sys/kernel/slab/zs_handle/store_user
 $ echo 0 > /sys/kernel/slab/zspage/store_user
 $ mkswap /dev/block/zram0
 $ swapon /dev/block/zram0 -p 32758

 =============================================================================
 BUG zs_handle (Not tainted): Freepointer corrupt
 -----------------------------------------------------------------------------

 Disabling lock debugging due to kernel taint
 INFO: Slab 0xffffffbf29603600 objects=102 used=102 fp=0x0000000000000000 flags=0x0200
 INFO: Object 0xffffffca580d8d78 @offset=3448 fp=0xffffffca580d8ed0

 Redzone 00000000f3cddd6c: bb bb bb bb bb bb bb bb  ........
 Object 0000000082d5d74e: 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkk.
 Redzone 000000008fd80359: bb bb bb bb bb bb bb bb  ........
 Padding 00000000c7f56047: 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZ

In this example, an Android device uses zram as swap and turns off
store_user in order to reduce the slub object size.
When calculate_sizes() is called from kmem_cache_open(), the size,
order, and objects for zs_handle are:
 size:360, order:0, objects:22
However, after the SLAB_STORE_USER bit is cleared in store_user_store():
 size: 56, order:1, objects:73

The size, order, and objects are all updated by calculate_sizes(), but
the random_seq array is still sized for the old value (22). As a
result, an out-of-bounds array access can occur in shuffle_freelist()
when a slab allocation is requested.

Fix the problem by re-allocating the random_seq array with the
re-calculated objects value.

Fixes: 210e7a43fa905 ("mm: SLUB freelist randomization")
Reported-by: Ari-Pekka Verta <ari-pekka.ve...@digital14.com>
Reported-by: Timo Simola <timo.sim...@digital14.com>
Signed-off-by: Sahara <keun-o.p...@digital14.com>
---
 mm/slub.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index f226d66408ee..be1e4d6682b8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3704,7 +3704,22 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
        if (oo_objects(s->oo) > oo_objects(s->max))
                s->max = s->oo;
 
-       return !!oo_objects(s->oo);
+       if (!oo_objects(s->oo))
+               return 0;
+
+       /*
+        * Initialize the pre-computed randomized freelist if slab is up.
+        * If the randomized freelist random_seq is already initialized,
+        * free and re-initialize it with re-calculated value.
+        */
+       if (slab_state >= UP) {
+               if (s->random_seq)
+                       cache_random_seq_destroy(s);
+               if (init_cache_random_seq(s))
+                       return 0;
+       }
+
+       return 1;
 }
 
 static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
@@ -3748,12 +3763,6 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
        s->remote_node_defrag_ratio = 1000;
 #endif
 
-       /* Initialize the pre-computed randomized freelist if slab is up */
-       if (slab_state >= UP) {
-               if (init_cache_random_seq(s))
-                       goto error;
-       }
-
        if (!init_kmem_cache_nodes(s))
                goto error;
 
-- 
2.17.1