According to the ARM ARM, exclusive loads require the same alignment as
exclusive stores. Update the memop used for the load to match the one
used for the store, adding the alignment requirement to the load's memop.
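
For reference, a minimal standalone sketch of how the memop flags
compose (the enum values below are illustrative stand-ins, not QEMU's
actual TCGMemOp bit layout):

    /* Illustrative only: simplified stand-ins for TCG's memop flags,
     * showing how size, endianness, and alignment OR together into a
     * single memop value, as in the hunk below. */
    #include <stdio.h>

    enum {
        MO_8     = 0,
        MO_16    = 1,
        MO_32    = 2,
        MO_64    = 3,          /* low bits encode log2(access size) */
        MO_BE    = (1 << 3),   /* big-endian data (bit position illustrative) */
        MO_ALIGN = (1 << 4),   /* require natural alignment (illustrative) */
    };

    int main(void)
    {
        int size    = MO_64;   /* 8-byte exclusive access */
        int be_data = MO_BE;   /* stand-in for s->be_data */
        int memop   = size | MO_ALIGN | be_data;

        printf("memop = 0x%x (alignment required: %s)\n",
               memop, (memop & MO_ALIGN) ? "yes" : "no");
        return 0;
    }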

Signed-off-by: Alistair Francis <alistair.fran...@xilinx.com>
Reviewed-by: Richard Henderson <richard.hender...@linaro.org>
---

 target/arm/translate-a64.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 58ed4c6d05..245175e2f1 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -1854,7 +1854,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
                                TCGv_i64 addr, int size, bool is_pair)
 {
     TCGv_i64 tmp = tcg_temp_new_i64();
-    TCGMemOp memop = s->be_data + size;
+    TCGMemOp memop = size | MO_ALIGN | s->be_data;
 
     g_assert(size <= 3);
     tcg_gen_qemu_ld_i64(tmp, addr, get_mem_index(s), memop);
-- 
2.11.0

