On Tue, Jan 2, 2018 at 11:12 PM, Richard Henderson
<richard.hender...@linaro.org> wrote:
>> +    case CSR_MISA: {
>> +        if (!(val_to_write & (1L << ('F' - 'A')))) {
>> +            val_to_write &= ~(1L << ('D' - 'A'));
>> +        }
>> +
>> +        /* allow MAFDC bits in MISA to be modified */
>> +        target_ulong mask = 0;
>> +        mask |= 1L << ('M' - 'A');
>> +        mask |= 1L << ('A' - 'A');
>> +        mask |= 1L << ('F' - 'A');
>> +        mask |= 1L << ('D' - 'A');
>> +        mask |= 1L << ('C' - 'A');
>> +        mask &= env->misa_mask;
>> +
>> +        env->misa = (val_to_write & mask) | (env->misa & ~mask);
>
> Does this not affect the set of instructions that are allowable?  If so,
> you'd want something like
>
>     new_misa = (val_to_write & mask) | (env->misa & ~mask);
>     if (env->misa != new_misa) {
>         env->misa = new_misa;
>         tb_flush(CPU(riscv_env_get_cpu(env)));
>     }
>
> so that we start with all new translations, which would then check the new
> value of misa, and would then raise INST_ADDR_MIS (or not).
This does not seem quite right.  misa can legally differ between
cores/threads, but tb_flush is a global operation.  The way this is
supposed to work is that the relevant misa bits are extracted into
tb_flags:

static inline void cpu_riscv_set_tb_flags(CPURISCVState *env)
{
    env->tb_flags = 0;
    if (env->misa & MISA_A) {
        env->tb_flags |= RISCV_TF_MISA_A;
    }
    if (env->misa & MISA_D) {
        env->tb_flags |= RISCV_TF_MISA_D;
    }
    if (env->misa & MISA_F) {
        env->tb_flags |= RISCV_TF_MISA_F;
    }
    if (env->misa & MISA_M) {
        env->tb_flags |= RISCV_TF_MISA_M;
    }
    if (env->misa & MISA_C) {
        env->tb_flags |= RISCV_TF_MISA_C;
    }
    env->tb_flags |= cpu_mmu_index(env, true) << RISCV_TF_IAT_SHIFT;
    env->tb_flags |= cpu_mmu_index(env, false) << RISCV_TF_DAT_SHIFT;
}

static inline void cpu_get_tb_cpu_state(CPURISCVState *env,
                                        target_ulong *pc,
                                        target_ulong *cs_base,
                                        uint32_t *flags)
{
    *pc = env->pc;
    *cs_base = 0;
    *flags = env->tb_flags;
}

but this code appears to be missing in the tree submitted for
upstreaming?

-s