Hi,

[Andy]
> You need to check this on _all_ supported architectures with all possible
> related configuration option combinations.

[David]
> That very much depends on exactly how get/put_unaligned are implemented
> (and the behaviour of the compiler).
> ISTR something about not using 'casts to packed types' for them,
> which might cause the compiler to generate other code.
> (Brain can't quite remember...)

Fair points all around. To be honest, proving the behaviour across all
supported architecture/config combinations is more than I can take on
right now.

I initially stumbled across this UBSAN splat on arm64 while debugging
an unrelated issue, and I thought a targeted put_unaligned() swap
would be a straightforward fix. Given the complexity of the
architectural and compiler quirks you've raised, I agree this needs a
much deeper investigation, or potentially a new write_word_at_a_time()
abstraction, as Andy suggested.

I'll drop this patch for the time being. If I have the bandwidth in
the future, and if this splat starts causing real problems, I might
give it another go.

Thanks for the thorough review and insights! At least I learned a bit from this.

Cheers,
/fuad

>         David
>
> >
> > So this patch shouldn't introduce memcpy fallback penalties on sparc,
> > but it still fixes the UB on architectures like x86 and arm64.
> >
> > Cheers,
> > /fuad
> >
> > >         David
> > >
> > > >
> > > > > Have you read the comment near to
> > > > >
> > > > >         if (IS_ENABLED(CONFIG_KMSAN))
> > > >
> > > > Not until now, to be honest. However, are you asking whether
> > > > put_unaligned() breaks KMSAN? I don't think it does: max is set to 0
> > > > when KMSAN is enabled, so this entire while loop is bypassed.
> > > >
> > > > Thanks,
> > > > /fuad
> > > >
> > > > > ?
> > > > >
> > > > > --
> > > > > With Best Regards,
> > > > > Andy Shevchenko
> > > > >
> > > > >
> > > >
> > >
> >
>
