Introduce type-aware kmalloc-family helpers to replace the common
idioms for single, array, and flexible object allocations:

        ptr = kmalloc(sizeof(*ptr), gfp);
        ptr = kzalloc(sizeof(*ptr), gfp);
        ptr = kmalloc_array(count, sizeof(*ptr), gfp);
        ptr = kcalloc(count, sizeof(*ptr), gfp);
        ptr = kmalloc(struct_size(ptr, flex_member, count), gfp);

These become, respectively:

        kmalloc_obj(p, gfp);
        kzalloc_obj(p, gfp);
        kmalloc_obj(p, count, gfp);
        kzalloc_obj(p, count, gfp);
        kmalloc_obj(p, flex_member, count, gfp);
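
For instance, allocating a single zeroed object looks like this (struct
foo here is purely illustrative, not taken from existing code):

        struct foo {
                unsigned long flags;
                int refcount;
        };

        struct foo *p;

        kzalloc_obj(p, GFP_KERNEL);
        if (!p)
                return -ENOMEM;

The allocation size comes from the type of p itself, so there is no
sizeof() expression to keep in sync with the pointer.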

These each return the size of the allocation, so that other common
idioms can be converted easily as well. For example:

        info->size = struct_size(ptr, flex_member, count);
        ptr = kmalloc(info->size, gfp);

becomes:

        info->size = kmalloc_obj(ptr, flex_member, count, gfp);
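
Because a failed allocation yields a size of 0, the stored size can
also serve as the error check. A sketch, with struct blob, info, and
count being illustrative stand-ins:

        struct blob {
                u32 flex_count;
                u8 flex_member[];
        };

        struct blob *ptr;

        info->size = kmalloc_obj(ptr, flex_member, count, GFP_KERNEL);
        if (!info->size)
                return -ENOMEM;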

Internal introspection of the allocated type also becomes possible,
allowing for alignment-aware choices and future hardening work. For
example, __alignof(*ptr) could be passed as an argument to the internal
allocators so that appropriate/efficient alignment choices can be made,
or per-allocation offset randomization within a bucket could be chosen
in a way that does not break alignment requirements.
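
As a rough sketch of the first idea (purely illustrative:
__kmalloc_aligned() is a hypothetical internal helper, not an existing
API), the wrapper has the type in hand and could forward its alignment:

        #define __alloc_obj3_aligned(P, COUNT, FLAGS)                   \
        ({                                                              \
                size_t __obj_size = size_mul(sizeof(*P), COUNT);        \
                void *__obj_ptr;                                        \
                /* hypothetical: pass the type's alignment through */   \
                (P) = __obj_ptr = __kmalloc_aligned(__obj_size,         \
                                                    __alignof(*P),      \
                                                    FLAGS);             \
                if (!__obj_ptr)                                         \
                        __obj_size = 0;                                 \
                __obj_size;                                             \
        })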

Additionally, once __builtin_get_counted_by() is added by GCC[1] and
Clang[2], it will be possible to automatically set the counted member
of a struct with a counted_by FAM, further eliminating open-coded
redundant initializations, and to internally check for "too large"
allocations based on the type size of the counter variable:

        if (count > type_max(ptr->flex_count))
                fail...;
        info->size = struct_size(ptr, flex_member, count);
        ptr = kmalloc(info->size, gfp);
        ptr->flex_count = count;

becomes (i.e. unchanged from earlier example):

        info->size = kmalloc_obj(ptr, flex_member, count, gfp);
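
For reference, the struct blob sketched earlier would only need the
existing __counted_by() annotation; with __builtin_get_counted_by()
available, the macro could then locate and set flex_count itself:

        struct blob {
                u32 flex_count;
                u8 flex_member[] __counted_by(flex_count);
        };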

Converting all of the existing simple code patterns found via
Coccinelle[3] shows what could be replaced immediately (saving roughly
1,500 lines):

 7040 files changed, 14128 insertions(+), 15557 deletions(-)

Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=116016 [1]
Link: https://github.com/llvm/llvm-project/issues/99774 [2]
Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/kmalloc_obj-assign-size.cocci [3]
Signed-off-by: Kees Cook <k...@kernel.org>
---
Cc: Vlastimil Babka <vba...@suse.cz>
Cc: Christoph Lameter <c...@linux.com>
Cc: Pekka Enberg <penb...@kernel.org>
Cc: David Rientjes <rient...@google.com>
Cc: Joonsoo Kim <iamjoonsoo....@lge.com>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Roman Gushchin <roman.gushc...@linux.dev>
Cc: Hyeonggon Yoo <42.hye...@gmail.com>
Cc: Gustavo A. R. Silva <gustavo...@kernel.org>
Cc: Bill Wendling <mo...@google.com>
Cc: Justin Stitt <justinst...@google.com>
Cc: Jann Horn <ja...@google.com>
Cc: Przemek Kitszel <przemyslaw.kits...@intel.com>
Cc: Marco Elver <el...@google.com>
Cc: linux...@kvack.org
---
 include/linux/slab.h | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index eb2bf4629157..46801c28908e 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -686,6 +686,44 @@ static __always_inline __alloc_size(1) void *kmalloc_noprof(size_t size, gfp_t f
 }
 #define kmalloc(...)                           alloc_hooks(kmalloc_noprof(__VA_ARGS__))
 
+#define __alloc_obj3(ALLOC, P, COUNT, FLAGS)                   \
+({                                                             \
+       size_t __obj_size = size_mul(sizeof(*P), COUNT);        \
+       void *__obj_ptr;                                        \
+       (P) = __obj_ptr = ALLOC(__obj_size, FLAGS);             \
+       if (!__obj_ptr)                                         \
+               __obj_size = 0;                                 \
+       __obj_size;                                             \
+})
+
+#define __alloc_obj2(ALLOC, P, FLAGS)  __alloc_obj3(ALLOC, P, 1, FLAGS)
+
+#define __alloc_obj4(ALLOC, P, FAM, COUNT, FLAGS)              \
+({                                                             \
+       size_t __obj_size = struct_size(P, FAM, COUNT);         \
+       void *__obj_ptr;                                        \
+       (P) = __obj_ptr = ALLOC(__obj_size, FLAGS);             \
+       if (!__obj_ptr)                                         \
+               __obj_size = 0;                                 \
+       __obj_size;                                             \
+})
+
+#define kmalloc_obj(...)                                       \
+       CONCATENATE(__alloc_obj,                                \
+                   COUNT_ARGS(__VA_ARGS__))(kmalloc, __VA_ARGS__)
+
+#define kzalloc_obj(...)                                       \
+       CONCATENATE(__alloc_obj,                                \
+                   COUNT_ARGS(__VA_ARGS__))(kzalloc, __VA_ARGS__)
+
+#define kvmalloc_obj(...)                                      \
+       CONCATENATE(__alloc_obj,                                \
+                   COUNT_ARGS(__VA_ARGS__))(kvmalloc, __VA_ARGS__)
+
+#define kvzalloc_obj(...)                                      \
+       CONCATENATE(__alloc_obj,                                \
+                   COUNT_ARGS(__VA_ARGS__))(kvzalloc, __VA_ARGS__)
+
 #define kmem_buckets_alloc(_b, _size, _flags)  \
        alloc_hooks(__kmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), _flags, NUMA_NO_NODE))
 
-- 
2.34.1

