On Wed, Feb 25, 2026 at 08:08:34AM +0000, Fuad Tabba wrote:
> On Tue, 24 Feb 2026 at 21:07, Kees Cook <[email protected]> wrote:
> > On Tue, Feb 24, 2026 at 05:04:27PM +0000, Fuad Tabba wrote:
> > > sized_strscpy() performs word-at-a-time writes to the destination
> > > buffer. If the destination buffer is not aligned to unsigned long,
> > > direct assignment causes UBSAN misaligned-access errors.
> >
> > Is this via CONFIG_UBSAN_ALIGNMENT=y ? Note this in the Kconfig:
> >
> >   Enabling this option on architectures that support unaligned
> >   accesses may produce a lot of false positives.
> >
> > Which architecture are you checking this on?
>
> This is with CONFIG_UBSAN_ALIGNMENT=y on arm64. Although the
> architecture supports unaligned accesses, I was running the UBSAN
> checks (including the alignment ones) the other day while debugging an
> unrelated issue. That said, the alignment checks ensure C standard
> compliance and prevent the compiler from optimizing unaligned UB casts
> into alignment-strict instructions (like ldp/stp or vector
> instructions on arm64, which cause hardware faults).
>
> > > Use put_unaligned() to safely write the words to the destination.
> >
> > Also, I thought the word-at-a-time work in sized_strscpy() was
> > specifically to take advantage of aligned word writes? This doesn't
> > seem like the right solution, and I think we're already disabling the
> > unaligned access by using "max=0" in the earlier checks.
>
> The max=0 check is heavily guarded. Both x86 and arm64 select
> CONFIG_DCACHE_WORD_ACCESS, bypassing it:
>
>   #ifndef CONFIG_DCACHE_WORD_ACCESS
>   // ... alignment checks that set max = 0 ...
>   #endif
>
> I also noticed that the read path already expects and handles
> unaligned addresses. If you look at load_unaligned_zeropad() (called
> above the write), it explicitly loads an unaligned word and handles
> potential page-crossing faults. The write path lacked the equivalent
> put_unaligned() wrapper, leaving it exposed to UB.
Probably it needs to be reworked differently to provide a
write_at_a_time() helper?

> I checked the disassembly on both x86 and aarch64: put_unaligned()
> (via __builtin_memcpy) compiles to the same instructions (mov and
> str), preserving the optimization while making the code UBSAN-clean.

You need to check this on _all_ supported architectures, with all
possible combinations of the related configuration options.

> > I think the bug may be that you got CONFIG_UBSAN_ALIGNMENT enabled for
> > an arch that doesn't suffer from unaligned access problems. :) We should
> > fix the Kconfig!
>
> Does that reasoning make sense for keeping the fix here rather than in
> the Kconfig?

-- 
With Best Regards,
Andy Shevchenko

