On 25/05/16 11:52, Alex Bennée wrote:
> Sergey Fedorov <serge.f...@gmail.com> writes:
>
>> On 24/05/16 22:56, Emilio G. Cota wrote:
>>> On Tue, May 24, 2016 at 09:08:01 +0200, Paolo Bonzini wrote:
>>>> On 23/05/2016 19:09, Emilio G. Cota wrote:
>>>>> PS. And really equating smp_wmb/rmb to release/acquire as we have under
>>>>> #ifdef __ATOMIC is hard to justify, other than to please tsan.
>>>> That only makes a difference on arm64, right?
>>>>
>>>>            acquire         release         rmb             wmb
>>>> x86        --              --              --              --
>>>> power      lwsync          lwsync          lwsync          lwsync
>>>> armv7      dmb             dmb             dmb             dmb
>>>> arm64      dmb ishld       dmb ish         dmb ishld       dmb ishst
>>>> ia64       --              --              --              --
>>> Yes. I now see why we're defining rmb/wmb based on acquire/release:
>>> it's quite convenient given that the compiler provides them, and
>>> the (tiny) differences in practice are not worth the trouble of
>>> adding asm for them. So I take back my comment =)
>>>
>>> The gains of getting rid of the consume barrier from atomic_rcu_read
>>> are clear though; updated patch to follow.
>> However, maybe it's not such a pain to maintain an optimized version for
>> AArch64 in assembly :P
> Please don't. The advantage of the builtins is they are known by things
> like tsan.
>

We can always do:

    #if defined(__aarch64__) && !defined(__SANITIZE_THREAD__)
    /* AArch64 asm variant */
    #else
    /* GCC __atomic variant */
    #endif
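
To make the idea concrete, here is a minimal sketch of what the two branches
could look like for smp_rmb()/smp_wmb(), matching the arm64 row of the table
above (just an illustration of the shape, not an actual patch; the real macro
layout in include/qemu/atomic.h may differ):

    #if defined(__aarch64__) && !defined(__SANITIZE_THREAD__)
    /* AArch64 asm variant: the weaker one-way dmb flavours */
    #define smp_wmb()   asm volatile("dmb ishst" ::: "memory") /* stores vs later stores */
    #define smp_rmb()   asm volatile("dmb ishld" ::: "memory") /* loads vs later loads/stores */
    #else
    /* GCC __atomic variant: release/acquire fences, visible to tsan */
    #define smp_wmb()   __atomic_thread_fence(__ATOMIC_RELEASE)
    #define smp_rmb()   __atomic_thread_fence(__ATOMIC_ACQUIRE)
    #endif

That keeps the builtins (and thus tsan instrumentation) everywhere except the
one case where the hand-written dmb variants are strictly weaker.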


Kind regards,
Sergey