On Thu, Apr 25, 2024 at 01:55:23PM -0700, Kees Cook wrote:
> The system will immediately fill up the stack and crash when both
> CONFIG_DEBUG_KMEMLEAK and CONFIG_MEM_ALLOC_PROFILING are enabled.
> Avoid allocation tagging of kmemleak caches, otherwise recursive
> allocation tracking occurs.
> 
> Fixes: 279bb991b4d9 ("mm/slab: add allocation accounting into slab allocation and free paths")
> Signed-off-by: Kees Cook <keesc...@chromium.org>
> ---
> Cc: Suren Baghdasaryan <sur...@google.com>
> Cc: Kent Overstreet <kent.overstr...@linux.dev>
> Cc: Catalin Marinas <catalin.mari...@arm.com>
> Cc: Andrew Morton <a...@linux-foundation.org>
> Cc: Christoph Lameter <c...@linux.com>
> Cc: Pekka Enberg <penb...@kernel.org>
> Cc: David Rientjes <rient...@google.com>
> Cc: Joonsoo Kim <iamjoonsoo....@lge.com>
> Cc: Vlastimil Babka <vba...@suse.cz>
> Cc: Roman Gushchin <roman.gushc...@linux.dev>
> Cc: Hyeonggon Yoo <42.hye...@gmail.com>
> Cc: linux...@kvack.org
> ---
>  mm/kmemleak.c | 4 ++--
>  mm/slub.c     | 2 +-
>  2 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
> index c55c2cbb6837..fdcf01f62202 100644
> --- a/mm/kmemleak.c
> +++ b/mm/kmemleak.c
> @@ -463,7 +463,7 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
>  
>       /* try the slab allocator first */
>       if (object_cache) {
> -             object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
> +             object = kmem_cache_alloc_noprof(object_cache, gfp_kmemleak_mask(gfp));

What do these get accounted to, or does this now pop a warning with
CONFIG_MEM_ALLOC_PROFILING_DEBUG?

>               if (object)
>                       return object;
>       }
> @@ -947,7 +947,7 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
>       untagged_objp = (unsigned long)kasan_reset_tag((void *)object->pointer);
>  
>       if (scan_area_cache)
> -             area = kmem_cache_alloc(scan_area_cache, gfp_kmemleak_mask(gfp));
> +             area = kmem_cache_alloc_noprof(scan_area_cache, gfp_kmemleak_mask(gfp));
>  
>       raw_spin_lock_irqsave(&object->lock, flags);
>       if (!area) {
> diff --git a/mm/slub.c b/mm/slub.c
> index a94a0507e19c..9ae032ed17ed 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2016,7 +2016,7 @@ prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p)
>       if (!p)
>               return NULL;
>  
> -     if (s->flags & SLAB_NO_OBJ_EXT)
> +     if (s->flags & (SLAB_NO_OBJ_EXT | SLAB_NOLEAKTRACE))
>               return NULL;
>  
>       if (flags & __GFP_NO_OBJ_EXT)
> -- 
> 2.34.1
> 
