Use 'volatile' to force a new memory access on each lockless atomic store
and read.  Without this, a loop consisting only of an atomic_read with
memory_order_relaxed could simply be optimized away.  A volatile access is
also cheaper than adding a full compiler barrier in that case.
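As an illustration of the problem (not part of this patch; READ_ONCE and
'flag' below are made-up names), a volatile cast in the ACCESS_ONCE style
forces the compiler to reload the value on every loop iteration, whereas a
plain load may be hoisted out of the loop entirely:

    #include <stdbool.h>

    /* Illustrative only; not an OVS macro. */
    #define READ_ONCE(x) (*(typeof(x) volatile *)&(x))

    static bool flag;              /* Set to true by another thread. */

    static void
    wait_for_flag(void)
    {
        /* A plain "while (!flag) ;" may legally be compiled into a single
         * load followed by an unconditional loop; the volatile access
         * forces a fresh load of 'flag' on each iteration. */
        while (!READ_ONCE(flag)) {
            continue;
        }
    }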
This use of a volatile cast mirrors the Linux kernel ACCESS_ONCE macro.
Without this change the more rigorous atomic test cases introduced in a
following patch will hang due to the atomic accesses being optimized away
(see the sketch of such a loop after the patch below).

Signed-off-by: Jarno Rajahalme <jrajaha...@nicira.com>
---
 lib/ovs-atomic-gcc4+.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/ovs-atomic-gcc4+.h b/lib/ovs-atomic-gcc4+.h
index 756696b..bb08ff9 100644
--- a/lib/ovs-atomic-gcc4+.h
+++ b/lib/ovs-atomic-gcc4+.h
@@ -83,7 +83,7 @@ atomic_signal_fence(memory_order order)
                                                         \
         if (IS_LOCKLESS_ATOMIC(*dst__)) {               \
             atomic_thread_fence(ORDER);                 \
-            *dst__ = src__;                             \
+            *(typeof(*DST) volatile *)dst__ = src__;    \
             atomic_thread_fence_if_seq_cst(ORDER);      \
         } else {                                        \
             atomic_store_locked(dst__, src__);          \
@@ -99,7 +99,7 @@ atomic_signal_fence(memory_order order)
                                                         \
         if (IS_LOCKLESS_ATOMIC(*src__)) {               \
             atomic_thread_fence_if_seq_cst(ORDER);      \
-            *dst__ = *src__;                            \
+            *dst__ = *(typeof(*SRC) volatile *)src__;   \
         } else {                                        \
             atomic_read_locked(src__, dst__);           \
         }                                               \
-- 
1.7.10.4
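For reference, the kind of relaxed-read loop that the new test cases
exercise looks roughly like this (the real tests arrive in a later patch;
the names here are invented, and the sketch assumes the ATOMIC() and
atomic_read_explicit() interface from lib/ovs-atomic.h):

    #include <stdbool.h>
    #include "ovs-atomic.h"

    static ATOMIC(bool) ready;     /* Stored to by a writer thread. */

    static void
    spin_until_ready(void)
    {
        bool done = false;

        while (!done) {
            /* Before this patch, the plain "*dst__ = *src__;" in the
             * lockless atomic_read() path allowed the compiler to hoist
             * the load out of this loop, so it could spin forever. */
            atomic_read_explicit(&ready, &done, memory_order_relaxed);
        }
    }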