Hi Ayan,

> On 8 Nov 2022, at 09:45, Ayan Kumar Halder <ayan.kumar.hal...@amd.com> wrote:
>
> From: Ayan Kumar Halder <ayank...@amd.com>
>
> Xen provides helpers to atomically read/write memory (see {read,
> write}_atomic()). Those helpers can only work if the address is aligned
> to the size of the access (see B2.2.1 ARM DDI 0487I.a).
>
> On Arm32, the alignment is already enforced by the processor because
> the HSCTLR.A bit is set (it enforces alignment for every access). On
> Arm64, this bit is not set because memcpy()/memset() can use unaligned
> accesses for performance reasons (the implementation is taken from the
> Cortex library).
>
> To avoid any overhead in production builds, the alignment will only be
> checked using an ASSERT. Note that it might be possible to do it in
> production builds using the acquire/exclusive version of load/store. But
> this is left to a follow-up (if wanted).
>
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.hal...@amd.com>
> Signed-off-by: Julien Grall <jul...@xen.org>
> Reviewed-by: Michal Orzel <michal.or...@amd.com>
> Reviewed-by: Bertrand Marquis <bertrand.marq...@arm.com>
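As a quick illustration of what the new check catches (the buffer and
offset below are made up for this note, they are not from the patch):

    /* Hypothetical example: read_atomic()/write_atomic() infer the
     * access size from the pointer type, so a 4-byte access through a
     * misaligned pointer now trips the new ASSERT on debug builds. */
    uint8_t buf[8] __aligned(4);

    write_atomic((uint32_t *)buf, 0xdeadbeef);  /* OK: 4-byte aligned */
    (void)read_atomic((uint32_t *)(buf + 1));   /* ASSERT: misaligned */
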
I confirm my Reviewed-by.

Side note: you should actually have removed it :-)

Cheers
Bertrand

> ---
>
> Changes from :-
> v1 - 1. Referred to the latest Arm Architecture Reference Manual in the
>         commit message.
>
> v2 - 1. Updated commit message to specify the reason for using ASSERT().
>      2. Added Julien's SoB.
>
>  xen/arch/arm/include/asm/atomic.h | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/xen/arch/arm/include/asm/atomic.h b/xen/arch/arm/include/asm/atomic.h
> index 1f60c28b1b..64314d59b3 100644
> --- a/xen/arch/arm/include/asm/atomic.h
> +++ b/xen/arch/arm/include/asm/atomic.h
> @@ -78,6 +78,7 @@ static always_inline void read_atomic_size(const volatile void *p,
>                                              void *res,
>                                              unsigned int size)
>  {
> +    ASSERT(IS_ALIGNED((vaddr_t)p, size));
>      switch ( size )
>      {
>      case 1:
> @@ -102,6 +103,7 @@ static always_inline void write_atomic_size(volatile void *p,
>                                               void *val,
>                                               unsigned int size)
>  {
> +    ASSERT(IS_ALIGNED((vaddr_t)p, size));
>      switch ( size )
>      {
>      case 1:
> --
> 2.17.1
>
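On the follow-up hinted at in the commit message (enforcing alignment
in production builds as well), a load-acquire based read is one possible
shape: LDAR generates an Alignment fault on an unaligned address
regardless of SCTLR_ELx.A, so the check would hold even without the
ASSERT. A rough sketch of the idea, not code from this series:

    /* Hypothetical sketch, arm64 only: LDAR is architecturally required
     * to use a naturally aligned address and faults otherwise, so the
     * alignment check holds even in release builds. */
    static inline uint32_t read_u32_acquire(const volatile uint32_t *p)
    {
        uint32_t val;

        asm volatile ( "ldar %w0, %1" : "=r" (val) : "Q" (*p) : "memory" );

        return val;
    }

The trade-off is the acquire ordering this imposes on every read, which
is presumably why the patch sticks to the zero-cost ASSERT for now.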