On 2/9/22 16:48, Alistair Francis wrote:
Note that this only fixes the fault for memory regions. I/O and
non-existent regions will still trigger a load fault.
...
+void helper_atomic_check(CPURISCVState *env, target_ulong address,
+ int width, int mmu_idx)
+{
+ probe_write(env, address, width, mmu_idx, GETPC());
+}
Note that you could use probe_access_flags, whose return value is a mask. If it includes
TLB_MMIO, you know that the physical address does not contain RAM, but you still do not
know whether it is unmapped or true MMIO, since unmapped regions get unassigned_io_ops.
It's probably not *that* hard to find out one way or the other, but it might also be
reasonable to set a flag to communicate with your do_transaction_failed hook.
static bool gen_amo(DisasContext *ctx, arg_atomic *a,
void(*func)(TCGv, TCGv, TCGv, TCGArg, MemOp),
- MemOp mop)
+ TCGv_i32 width, MemOp mop)
{
TCGv dest = dest_gpr(ctx, a->rd);
TCGv src1 = get_address(ctx, a->rs1, 0);
TCGv src2 = get_gpr(ctx, a->rs2, EXT_NONE);
+ TCGv_i32 mem_idx = tcg_constant_i32(ctx->mem_idx);
+
+ gen_helper_atomic_check(cpu_env, src1, width, mem_idx);
func(dest, src1, src2, ctx->mem_idx, mop);
@@ -105,55 +108,64 @@ static bool trans_sc_w(DisasContext *ctx, arg_sc_w *a)
static bool trans_amoswap_w(DisasContext *ctx, arg_amoswap_w *a)
{
REQUIRE_EXT(ctx, RVA);
- return gen_amo(ctx, a, &tcg_gen_atomic_xchg_tl, (MO_ALIGN | MO_TESL));
+ return gen_amo(ctx, a, &tcg_gen_atomic_xchg_tl, tcg_constant_i32(4),
+ (MO_ALIGN | MO_TESL));
The width is already stored in the MemOp argument that you're passing around.
I think you should do
TCGv_i32 width = tcg_constant_i32(memop_size(mop));
there in gen_amo, and not change the callers at all.
r~