It was found that under some .config options (notably L1_CACHE_SHIFT=7, which makes SMP_CACHE_BYTES 128) and compiler combinations, this on-stack alignment leads to a 320-byte stack usage, which then triggers a KASAN stack warning elsewhere.
Using 320 bytes of stack space for a 40-byte structure is ludicrous and
clearly not right.

Fixes: 515ab7c41306 ("x86/mm: Align TLB invalidation info")
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
---
Index: linux-2.6/arch/x86/mm/tlb.c
===================================================================
--- linux-2.6.orig/arch/x86/mm/tlb.c
+++ linux-2.6/arch/x86/mm/tlb.c
@@ -728,7 +728,7 @@ void flush_tlb_mm_range(struct mm_struct
 {
 	int cpu;
 
-	struct flush_tlb_info info __aligned(SMP_CACHE_BYTES) = {
+	struct flush_tlb_info info = {
 		.mm = mm,
 		.stride_shift = stride_shift,
 		.freed_tables = freed_tables,
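
For reference, here is a minimal userspace sketch (not part of the patch) of
why the attribute is so expensive: to guarantee a 128-byte-aligned slot
(SMP_CACHE_BYTES with L1_CACHE_SHIFT=7) regardless of the caller's stack
alignment, the compiler has to over-allocate and realign the frame, and KASAN
redzones inflate it further. All names below (struct info, consume(), the two
*_frame() functions) are illustrative stand-ins, not the kernel's actual
flush_tlb_info code.

#include <stdio.h>

/* Layout mirrors the ~40-byte flush_tlb_info on x86-64; illustrative only. */
struct info {
	void *mm;
	unsigned long start;
	unsigned long end;
	unsigned long long new_tlb_gen;
	unsigned int stride_shift;
	_Bool freed_tables;
};

/* Opaque sink so the on-stack structure is not optimized away. */
__attribute__((noinline))
static void consume(struct info *p)
{
	__asm__ volatile("" : : "r"(p) : "memory");
}

/* 128-byte alignment, as __aligned(SMP_CACHE_BYTES) with L1_CACHE_SHIFT=7 */
__attribute__((noinline))
static void aligned_frame(void)
{
	struct info info __attribute__((aligned(128))) = { 0 };
	consume(&info);
}

__attribute__((noinline))
static void plain_frame(void)
{
	struct info info = { 0 };
	consume(&info);
}

int main(void)
{
	printf("sizeof(struct info) = %zu\n", sizeof(struct info));
	aligned_frame();
	plain_frame();
	return 0;
}

Building with "gcc -O2 -fstack-usage" and reading the generated .su file
shows aligned_frame() needing a frame several times the size of the 40-byte
structure, while plain_frame() stays close to sizeof(struct info).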