On 10/13/22 13:30, Uros Bizjak wrote:
OTOH, for x86 (same default toggles) there are no barriers at all.

     _Z10bar_seqcstiPi:
          endbr64
          movl    g(%rip), %eax    # load the plain global g
          movl    %eax, (%rsi)     # non-atomic store through the pointer arg
          movl    a(%rip), %eax    # seq_cst load of a: a plain mov, no fence
          addl    %edi, %eax       # add the first integer argument
          ret

Regarding the x86 memory model, please see the Intel® 64 and IA-32
Architectures Software Developer’s Manual, Volume 3A, section 8.2 [1].

[1] https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html
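
For reference, a C source along these lines (my reconstruction; the
original test case isn't quoted here) compiles to essentially that output
with gcc -O2 on x86-64:

    #include <stdatomic.h>

    int g;             /* plain (non-atomic) global */
    _Atomic int a;     /* atomic global */

    int bar_seqcst(int x, int *y)
    {
        *y = g;        /* the non-atomic store through %rsi */
        return x + atomic_load_explicit(&a, memory_order_seq_cst);
    }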

My naive intuition was that x86 TSO would require a fence before a
load(seq_cst) that follows a prior store, even if that store was
non-atomic, to ensure the load didn't bubble up ahead of the store.
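
In fact, the usual C11 mapping on x86 puts the full barrier on the
seq_cst store instead: a seq_cst load compiles to a plain mov, while a
seq_cst store compiles to a lock-prefixed xchg (or mov plus mfence). A
minimal sketch (the function names are mine):

    #include <stdatomic.h>

    _Atomic int a;

    /* On x86-64, GCC compiles the seq_cst load to a plain mov, while
       the seq_cst store becomes a lock-prefixed xchg (or mov + mfence):
       the full barrier is paid on the store side. */
    int load_seqcst(void)
    {
        return atomic_load_explicit(&a, memory_order_seq_cst);
    }

    void store_seqcst(int v)
    {
        atomic_store_explicit(&a, v, memory_order_seq_cst);
    }
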
As documented in the SDM above, the x86 memory model guarantees that

• Reads are not reordered with other reads.
• Writes are not reordered with older reads.
• Writes to memory are not reordered with other writes, with the
following exceptions:
...
• Reads may be reordered with older writes to different locations but
not with older writes to the same location.

So my example falls under the last case: the older (non-atomic) write is
followed by a read from a different location, and the two could therefore
potentially be reordered.
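
The classic store-buffering litmus test shows that reordering concretely;
a sketch (the harness and names are mine, not from the thread):

    #include <stdatomic.h>

    atomic_int x, y;

    /* Store-buffering: each thread performs an older write followed by
       a read from a different location.  With relaxed atomics all four
       operations compile to plain movs on x86, the reads may be
       reordered ahead of the writes, and r0 == 0 && r1 == 0 is an
       allowed outcome.  Making the operations seq_cst forbids it. */
    void thread0(int *r0)
    {
        atomic_store_explicit(&x, 1, memory_order_relaxed);
        *r0 = atomic_load_explicit(&y, memory_order_relaxed);
    }

    void thread1(int *r1)
    {
        atomic_store_explicit(&y, 1, memory_order_relaxed);
        *r1 = atomic_load_explicit(&x, memory_order_relaxed);
    }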
