I think we also need an _atomic_unlock() that issues a DMB, so stores made inside the critical section can't be reordered past the unlock.

On Wed, Jul 31, 2013 at 8:37 AM, Artturi Alm <artturi....@gmail.com> wrote:
> On 07/31/13 08:57, Richard Allen wrote:
>>
>> Hi,
>>
>> I just wanted to let you know that _atomic_lock(), from _atomic_lock.c,
>> as used by librthread should probably have a barrier instruction added
>>   to prevent the processor from reordering loads/stores around the
>> atomic_lock.
>>
>> For more information about barriers on ARM, see:
>>
>> http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0489c/CIHGHHIE.html
>>
>> For some examples, see sections 7.2.1 and 7.2.2
>>
>> http://infocenter.arm.com/help/topic/com.arm.doc.genc007826/Barrier_Litmus_Tests_and_Cookbook_A08.pdf
>>
>> -Richard
>>
>
> Hi,
>
> I'm not sure what the diff against _atomic_lock.c would look like,
> since I'm guessing it might not be supported, and I don't like
> inline asm, so I'll leave that to someone else. However, a diff for
> using it in cpufunc_asm_armv7.S would look something like the one below.
>
>
> -Artturi
>
>
>
> Index: cpufunc_asm_armv7.S
> ===================================================================
> RCS file: /cvs/src/sys/arch/arm/arm/cpufunc_asm_armv7.S,v
> retrieving revision 1.6
> diff -u -p -r1.6 cpufunc_asm_armv7.S
> --- cpufunc_asm_armv7.S 30 Mar 2013 01:30:30 -0000      1.6
> +++ cpufunc_asm_armv7.S 31 Jul 2013 13:26:18 -0000
> @@ -19,6 +19,7 @@
>  #include <machine/asm.h>
>
>  #define        DSB     .long   0xf57ff040
> +#define        DMB     .long   0xf57ff050
>  #define        ISB     .long   0xf57ff060
>  #define        WFI     .long   0xe320f003
