On 8/29/22 07:23, Ricky Zhou wrote:
> Many instructions which load/store 128-bit values are supposed to
> raise #GP when the memory operand isn't 16-byte aligned. This includes:
> - Instructions explicitly requiring memory alignment (Exceptions Type 1
>   in the "AVX and SSE Instruction Exception Specification" section of
>   the SDM)
> - Legacy SSE instructions that load/store 128-bit values (Exceptions
>   Types 2 and 4).
>
> This change adds a raise_gp_if_unaligned helper which raises #GP if an
> address is not properly aligned. This helper is called before 128-bit
> loads/stores where appropriate.
>
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/217
> Signed-off-by: Ricky Zhou <ri...@rzhou.org>
> ---
>  target/i386/helper.h         |  1 +
>  target/i386/tcg/mem_helper.c |  8 ++++++++
>  target/i386/tcg/translate.c  | 38 +++++++++++++++++++++++++++++++++---
>  3 files changed, 44 insertions(+), 3 deletions(-)
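(For reference, since the patch body isn't quoted above, the helper
described is presumably something of this shape -- name, signature and
call site are my guess, not the actual patch:

  void helper_raise_gp_if_unaligned(CPUX86State *env, target_ulong addr,
                                    uint32_t align_mask)
  {
      /* #GP(0) on a misaligned 16-byte memory operand. */
      if (addr & align_mask) {
          raise_exception_err_ra(env, EXCP0D_GPF, 0, GETPC());
      }
  }

called from generated code before each 128-bit access.)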
This trap should be raised via the memory operation:
-static inline void gen_ldo_env_A0(DisasContext *s, int offset)
+static inline void gen_ldo_env_A0(DisasContext *s, int offset, bool aligned)
 {
     int mem_index = s->mem_index;
-    tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0, mem_index, MO_LEUQ);
+    tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0, mem_index,
+                        MO_LEUQ | (aligned ? MO_ALIGN_16 : 0));
     tcg_gen_st_i64(s->tmp1_i64, cpu_env, offset + offsetof(ZMMReg, ZMM_Q(0)));
     tcg_gen_addi_tl(s->tmp0, s->A0, 8);
     tcg_gen_qemu_ld_i64(s->tmp1_i64, s->tmp0, mem_index, MO_LEUQ);
     tcg_gen_st_i64(s->tmp1_i64, cpu_env, offset + offsetof(ZMMReg, ZMM_Q(1)));
 }
Only the first of the two loads/stores needs to be aligned; the second is known to be at +8.
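The store side wants the same treatment. A sketch, assuming
gen_sto_env_A0 keeps its current shape and grows the same aligned flag:

-static inline void gen_sto_env_A0(DisasContext *s, int offset)
+static inline void gen_sto_env_A0(DisasContext *s, int offset, bool aligned)
 {
     int mem_index = s->mem_index;
     tcg_gen_ld_i64(s->tmp1_i64, cpu_env, offset + offsetof(ZMMReg, ZMM_Q(0)));
-    tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0, mem_index, MO_LEUQ);
+    tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0, mem_index,
+                        MO_LEUQ | (aligned ? MO_ALIGN_16 : 0));
     tcg_gen_addi_tl(s->tmp0, s->A0, 8);
     tcg_gen_ld_i64(s->tmp1_i64, cpu_env, offset + offsetof(ZMMReg, ZMM_Q(1)));
     tcg_gen_qemu_st_i64(s->tmp1_i64, s->tmp0, mem_index, MO_LEUQ);
 }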
You must then fill in the x86_tcg_ops.do_unaligned_access hook to raise #GP.
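A minimal sketch of that hook -- the file placement and the use of
raise_exception_err_ra are assumptions on my part; the signature itself
comes from TCGCPUOps:

  /* e.g. in target/i386/tcg/sysemu/excp_helper.c -- placement assumed */
  void x86_cpu_do_unaligned_access(CPUState *cs, vaddr addr,
                                   MMUAccessType access_type,
                                   int mmu_idx, uintptr_t retaddr)
  {
      X86CPU *cpu = X86_CPU(cs);

      /* Misaligned SSE memory operands raise #GP(0), not #AC. */
      raise_exception_err_ra(&cpu->env, EXCP0D_GPF, 0, retaddr);
  }

wired up with

      .do_unaligned_access = x86_cpu_do_unaligned_access,

in x86_tcg_ops.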
r~