On 05/03/2018 06:26 AM, Peter Maydell wrote:
> On 27 April 2018 at 01:26, Richard Henderson
> <richard.hender...@linaro.org> wrote:
>> Given that this atomic operation will be used by both risc-v
>> and aarch64, let's not duplicate code across the two targets.
>>
>> Signed-off-by: Richard Henderson <richard.hender...@linaro.org>
>> ---
>>  accel/tcg/atomic_template.h | 71 +++++++++++++++++++++++++++++++++++++++++++++
>>  accel/tcg/tcg-runtime.h     |  8 +++++
>>  tcg/tcg-op.h                | 34 ++++++++++++++++++++++
>>  tcg/tcg.h                   |  8 +++++
>>  tcg/tcg-op.c                |  8 +++++
>>  5 files changed, 129 insertions(+)
>
>> @@ -233,6 +270,39 @@ ABI_TYPE ATOMIC_NAME(add_fetch)(CPUArchState *env, target_ulong addr,
>>          ldo = ldn;
>>      }
>>  }
>> +
>> +/* These helpers are, as a whole, full barriers.  Within the helper,
>> + * the leading barrier is explicit and the trailing barrier is within
>> + * cmpxchg primitive.
>> + */
>> +#define GEN_ATOMIC_HELPER_FN(X, FN, XDATA_TYPE, RET)                \
>> +ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr,       \
>> +                        ABI_TYPE xval EXTRA_ARGS)                   \
>> +{                                                                   \
>> +    ATOMIC_MMU_DECLS;                                               \
>> +    XDATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;                          \
>> +    XDATA_TYPE ldo, ldn, old, new, val = xval;                      \
>> +    smp_mb();                                                       \
>> +    ldn = atomic_read__nocheck(haddr);                              \
>
> I see you're using the __nocheck function here. How does this
> work for the 32-bit host case where you don't necessarily have
> a 64-bit atomic primitive?
It won't be compiled for the 32-bit host.  Translation will not attempt
to use this helper and will instead call exit_atomic.

>> +    do {                                                            \
>> +        ldo = ldn; old = BSWAP(ldo); new = FN(old, val);            \
>> +        ldn = atomic_cmpxchg__nocheck(haddr, ldo, BSWAP(new));      \
>> +    } while (ldo != ldn);                                           \
>> +    ATOMIC_MMU_CLEANUP;                                             \
>> +    return RET;                                                     \
>> +}
>
> I was going to suggest that you could also now use this to
> implement the currently-hand-coded fetch_add and add_fetch
> for the reverse-host-endian case, but those don't have a leading
> smp_mb() and this does. Do you know why those are different?

That would seem to be a bug...


r~
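
P.S. In case it helps to see the loop without the macro plumbing, below
is a small standalone sketch of roughly what GEN_ATOMIC_HELPER_FN
expands to for a 64-bit fetch_add on a value stored in the opposite
endianness to the host.  The function name and the use of raw gcc/clang
builtins are mine for the illustration, not what the template actually
compiles to in the tree.

    /* Hand-expanded sketch of GEN_ATOMIC_HELPER_FN for a byte-swapped
     * 64-bit fetch_add; do_fetch_add_swapped64 is a made-up name.
     */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t do_fetch_add_swapped64(uint64_t *haddr, uint64_t val)
    {
        uint64_t ldo, ldn, old, new;

        __atomic_thread_fence(__ATOMIC_SEQ_CST);        /* leading smp_mb() */
        ldn = __atomic_load_n(haddr, __ATOMIC_RELAXED); /* atomic_read__nocheck */
        do {
            ldo = ldn;                     /* memory-order (swapped) value */
            old = __builtin_bswap64(ldo);  /* BSWAP to host order */
            new = old + val;               /* FN(old, val) */
            /* atomic_cmpxchg__nocheck: swap back to memory order; the
             * cmpxchg provides the trailing barrier.  On failure it
             * returns the value now in memory, seeding the next try.
             */
            ldn = __sync_val_compare_and_swap(haddr, ldo,
                                              __builtin_bswap64(new));
        } while (ldo != ldn);

        return old;                        /* RET for the fetch_add flavour */
    }

    int main(void)
    {
        uint64_t mem = __builtin_bswap64(40);   /* 40, stored byte-swapped */
        uint64_t prev = do_fetch_add_swapped64(&mem, 2);
        printf("old=%" PRIu64 " new=%" PRIu64 "\n",
               prev, __builtin_bswap64(mem));
        return 0;
    }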