On 18/05/16 02:18, Emilio G. Cota wrote:
> On Tue, May 17, 2016 at 23:35:57 +0300, Sergey Fedorov wrote:
>> On 17/05/16 22:38, Emilio G. Cota wrote:
>>> On Tue, May 17, 2016 at 20:13:24 +0300, Sergey Fedorov wrote:
>>>> On 14/05/16 06:34, Emilio G. Cota wrote:
>> (snip)
>>>>> +        while (atomic_read(&spin->value)) {
>>>>> +            cpu_relax();
>>>>> +        }
>>>>> +    }
>>>> Looks like relaxed atomic access can be subject to various
>>>> optimisations according to
>>>> https://gcc.gnu.org/wiki/Atomic/GCCMM/AtomicSync#Relaxed.
>>> The important thing here is that the read actually happens
>>> on every iteration; this is achieved with atomic_read().
>>> Barriers etc. do not matter here because once we exit
>>> the loop, we try to acquire the lock -- and if we succeed,
>>> we then emit the right barrier.
>> I just can't find where it is stated that an expression like
>> "__atomic_load(ptr, &_val, __ATOMIC_RELAXED)" has _compiler_ barrier
>> or volatile-access semantics. Hopefully, cpu_relax() serves as a compiler
>> barrier. If we rely on that, we'd better put a comment about it.
> I treat atomic_read/set as ACCESS_ONCE[1], i.e. a volatile cast.
> From docs/atomics.txt:
>
>     COMPARISON WITH LINUX KERNEL MEMORY BARRIERS
>     ============================================
>     [...]
>     - atomic_read and atomic_set in Linux give no guarantee at all;
>       atomic_read and atomic_set in QEMU include a compiler barrier
>       (similar to the ACCESS_ONCE macro in Linux).
>
> [1] https://lwn.net/Articles/508991/
But actually (cf. include/qemu/atomic.h) we can have:

#define atomic_read(ptr)                                  \
    ({                                                    \
    QEMU_BUILD_BUG_ON(sizeof(*ptr) > sizeof(void *));     \
    typeof(*ptr) _val;                                    \
    __atomic_load(ptr, &_val, __ATOMIC_RELAXED);          \
    _val;                                                 \
    })

I can't find anywhere whether this __atomic_load() has volatile/compiler-barrier
semantics...

Kind regards,
Sergey
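
For comparison, here is a minimal sketch of the ACCESS_ONCE-style volatile
cast referenced above. The names READ_ONCE_SKETCH and spin_wait_sketch are
hypothetical (not QEMU code), and GNU C typeof is assumed; a volatile access
forces the compiler to reload the value on every iteration of the loop, but
implies no CPU memory ordering.

/* Illustration only (hypothetical names): a volatile cast in the style of
 * Linux's ACCESS_ONCE, cf. the LWN article referenced above [1]. The
 * volatile access makes the compiler reload the value on every iteration,
 * so the load cannot be hoisted out of the loop; it provides no CPU
 * memory ordering whatsoever. */
#define READ_ONCE_SKETCH(x) (*(volatile typeof(x) *)&(x))

static inline void spin_wait_sketch(int *value)
{
    while (READ_ONCE_SKETCH(*value)) {
        /* busy-wait; a pause hint such as cpu_relax() could go here */
    }
}

Whether the __ATOMIC_RELAXED-based atomic_read() above gives the same
per-iteration reload guarantee is exactly the question being asked here.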