There is a regular need in the kernel to provide a way to declare a
dynamically sized set of trailing elements in a structure. Kernel code
should always use “flexible array members”[1] for these cases. The older
style of one-element or zero-length arrays should no longer be used[2].

Refactor the code to use a flexible-array member in struct stack_record
instead of a one-element array, and use the struct_size() helper to
calculate the allocation size.
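For illustration only (not part of this patch, and using a hypothetical
struct and helper name): struct_size(), from <linux/overflow.h>, replaces
the open-coded offsetof()/sizeof() arithmetic and saturates at SIZE_MAX
instead of silently wrapping if the size calculation would overflow.

    #include <linux/overflow.h>
    #include <linux/types.h>

    struct example_record {
            u32 hash;
            u32 size;
            unsigned long entries[];        /* flexible array member */
    };

    static size_t example_alloc_size(struct example_record *rec, int nr)
    {
            /*
             * Old pattern with a one-element array:
             *   offsetof(struct example_record, entries) +
             *           sizeof(unsigned long) * nr;
             *
             * New pattern, overflow-checked:
             */
            return struct_size(rec, entries, nr);
    }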

[1] https://en.wikipedia.org/wiki/Flexible_array_member
[2] https://www.kernel.org/doc/html/v5.9-rc1/process/deprecated.html#zero-length-and-one-element-arrays

Build-tested-by: kernel test robot <l...@intel.com>
Link: https://lore.kernel.org/lkml/5f75876b.x9zdn10esic0qlhv%25...@intel.com/
Signed-off-by: Gustavo A. R. Silva <gustavo...@kernel.org>
---
 lib/stackdepot.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 2caffc64e4c8..c6106cfb7950 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -62,7 +62,7 @@ struct stack_record {
        u32 hash;                       /* Hash in the hastable */
        u32 size;                       /* Number of frames in the stack */
        union handle_parts handle;
-       unsigned long entries[1];       /* Variable-sized array of entries. */
+       unsigned long entries[];        /* Variable-sized array of entries. */
 };
 
 static void *stack_slabs[STACK_ALLOC_MAX_SLABS];
@@ -104,9 +104,8 @@ static bool init_stack_slab(void **prealloc)
 static struct stack_record *depot_alloc_stack(unsigned long *entries, int size,
                u32 hash, void **prealloc, gfp_t alloc_flags)
 {
-       int required_size = offsetof(struct stack_record, entries) +
-               sizeof(unsigned long) * size;
        struct stack_record *stack;
+       size_t required_size = struct_size(stack, entries, size);
 
        required_size = ALIGN(required_size, 1 << STACK_ALLOC_ALIGN);
 
-- 
2.27.0
