Use 'volatile' to force a new memory access on each lockless atomic store and read. Without this, a loop consisting solely of an atomic_read with memory_order_relaxed could be optimized away entirely. Using volatile is also cheaper in that case than adding a full compiler barrier.
Without this change the more rigorous atomic test cases introduced in a
following patch will hang due to the atomic accesses being optimized away.

Signed-off-by: Jarno Rajahalme <jrajaha...@nicira.com>
---
 lib/ovs-atomic-gcc4+.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/ovs-atomic-gcc4+.h b/lib/ovs-atomic-gcc4+.h
index 9a79f7e..a4ed86c 100644
--- a/lib/ovs-atomic-gcc4+.h
+++ b/lib/ovs-atomic-gcc4+.h
@@ -84,7 +84,7 @@ atomic_signal_fence(memory_order order OVS_UNUSED)
                                                                     \
         if (IS_LOCKLESS_ATOMIC(*dst__)) {                           \
             atomic_thread_fence(order__);                           \
-            *dst__ = src__;                                         \
+            *(typeof(*DST) volatile *)dst__ = src__;                \
             atomic_thread_fence_if_seq_cst(order__);                \
         } else {                                                    \
             atomic_store_locked(dst__, src__);                      \
@@ -101,7 +101,7 @@ atomic_signal_fence(memory_order order OVS_UNUSED)
                                                                     \
         if (IS_LOCKLESS_ATOMIC(*src__)) {                           \
             atomic_thread_fence_if_seq_cst(order__);                \
-            *dst__ = *src__;                                        \
+            *dst__ = *(typeof(*SRC) volatile *)src__;               \
         } else {                                                    \
             atomic_read_locked(src__, dst__);                       \
         }                                                           \
--
1.7.10.4

_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev