Hi Kees,
On Tue, 24 Feb 2026 at 21:07, Kees Cook <[email protected]> wrote:
>
> On Tue, Feb 24, 2026 at 05:04:27PM +0000, Fuad Tabba wrote:
> > sized_strscpy() performs word-at-a-time writes to the destination
> > buffer. If the destination buffer is not aligned to unsigned long,
> > direct assignment causes UBSAN misaligned-access errors.
>
> Is this via CONFIG_UBSAN_ALIGNMENT=y ? Note this in the Kconfig:
>
> Enabling this option on architectures that support unaligned
> accesses may produce a lot of false positives.
>
> which architecture are you checking this on?
This is with CONFIG_UBSAN_ALIGNMENT=y on arm64. Although the
architecture supports unaligned accesses, I was running the UBSAN
checks (including the alignment ones) the other day while debugging an
unrelated issue. That said, the alignment checks do more than enforce
C-standard compliance: because an unaligned dereference is UB, the
compiler is entitled to assume alignment and may optimize such casts
into alignment-strict instructions (like ldp/stp or vector
instructions on arm64), which do cause hardware faults.
> > Use put_unaligned() to safely write the words to the destination.
>
> Also, I thought the word-at-a-time work in sized_strscpy() was
> specifically to take advantage of aligned word writes? This doesn't seem
> like the right solution, and I think we're already disabling the
> unaligned access by using "max=0" in the earlier checks.
That max = 0 fallback is compiled out here: it only exists when
CONFIG_DCACHE_WORD_ACCESS is not set, and both x86 and arm64 select
it, so the check is bypassed:
#ifndef CONFIG_DCACHE_WORD_ACCESS
// ... alignment checks that set max = 0 ...
#endif
I also noticed that the read path already expects and handles
unaligned addresses: load_unaligned_zeropad(), called just above the
write, explicitly loads an unaligned word and handles a potential
fault when the load crosses into an unmapped page. The write path
lacked the equivalent put_unaligned() wrapper, leaving it exposed to
UB.
I checked the disassembly on both x86 and arm64: put_unaligned()
(implemented via __builtin_memcpy) compiles to the same single
instruction as the direct store (a mov on x86, a str on arm64),
preserving the word-at-a-time optimization while making the code
UBSAN-clean.
> I think the bug may be that you got CONFIG_UBSAN_ALIGNMENT enabled for
> an arch that doesn't suffer from unaligned access problems. :) We should
> fix the Kconfig!
Does that reasoning make sense for keeping the fix here rather than in
the Kconfig?
Cheers,
/fuad
> -Kees
>
> >
> > Fixes: 30035e45753b7 ("string: provide strscpy()")
> > Signed-off-by: Fuad Tabba <[email protected]>
> > ---
> > lib/string.c | 6 +++---
> > 1 file changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/lib/string.c b/lib/string.c
> > index b632c71df1a5..a1697bf72078 100644
> > --- a/lib/string.c
> > +++ b/lib/string.c
> > @@ -157,16 +157,16 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
> >  		if (has_zero(c, &data, &constants)) {
> >  			data = prep_zero_mask(c, data, &constants);
> >  			data = create_zero_mask(data);
> > -			*(unsigned long *)(dest+res) = c & zero_bytemask(data);
> > +			put_unaligned(c & zero_bytemask(data), (unsigned long *)(dest+res));
> >  			return res + find_zero(data);
> >  		}
> >  		count -= sizeof(unsigned long);
> >  		if (unlikely(!count)) {
> >  			c &= ALLBUTLAST_BYTE_MASK;
> > -			*(unsigned long *)(dest+res) = c;
> > +			put_unaligned(c, (unsigned long *)(dest+res));
> >  			return -E2BIG;
> >  		}
> > -		*(unsigned long *)(dest+res) = c;
> > +		put_unaligned(c, (unsigned long *)(dest+res));
> >  		res += sizeof(unsigned long);
> >  		max -= sizeof(unsigned long);
> >  	}
> > --
> > 2.53.0.371.g1d285c8824-goog
> >
> >
>
> --
> Kees Cook