On 10/20/2016 06:44 AM, Marek Vasut wrote:
+typedef struct Nios2Instruction {
+    void (*handler)(DisasContext *dc, uint32_t code, TCGMemOp flags);
+    uint32_t flags;
+} Nios2Instruction;
I gave you some bad advice wrt the type of flags beforehand. I had failed to divine that it was also used for EXCP_* and TCG_COND_*. I think you were right the first time with unsigned.
My bad, sorry.
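Concretely, all I mean is something like this -- only the type of the
handler's flags argument changes back to a plain integer, so that the same
field can carry MO_*, EXCP_* and TCG_COND_* values alike; the rest is the
struct quoted above:

  typedef struct Nios2Instruction {
      void (*handler)(DisasContext *dc, uint32_t code, uint32_t flags);
      uint32_t flags;    /* MO_*, EXCP_* or TCG_COND_*, depending on the insn */
  } Nios2Instruction;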
+/* Load instructions */
+static void gen_ldx(DisasContext *dc, uint32_t code, TCGMemOp flags)
+{
+    I_TYPE(instr, code);
+
+    TCGv addr = tcg_temp_new();
+    TCGv data = tcg_temp_new();
+    tcg_gen_addi_tl(addr, load_gpr(dc, instr.a), instr.imm16s);
+    tcg_gen_qemu_ld_tl(data, addr, dc->mem_idx, flags);
+
+    /*
+     * WARNING: Loads into R_ZERO are ignored, but we must generate the
+     *          memory access itself to emulate the CPU precisely. Load
+     *          from a protected page to R_ZERO will cause SIGSEGV on
+     *          the Nios2 CPU.
+     */
+    if (likely(instr.b != R_ZERO)) {
+        tcg_gen_mov_tl(dc->cpu_R[instr.b], data);
+    }
Consider

  TCGv data;

  if (unlikely(instr.b == R_ZERO)) {
      /* The writeback to R_ZERO is ignored, but we must generate the
       * memory access itself to emulate the CPU precisely. Load from
       * a protected page to R_ZERO will cause SIGSEGV on the Nios2 CPU. */
      data = tcg_temp_new();
  } else {
      data = dc->cpu_R[instr.b];
  }
  tcg_gen_qemu_ld_tl(data, addr, dc->mem_idx, flags);
  if (unlikely(instr.b == R_ZERO)) {
      tcg_temp_free(data);
  }

so that you don't require the mov opcode.  That's really what I do on Alpha
with dest_gpr.
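For nios2 such a helper could be as small as this -- a sketch only, and the
dest_gpr/free_dest_gpr names are ones I'm making up here, not something in
the posted patch:

  /* Hand back the real destination register, or a throw-away temp for
   * R_ZERO so the memory access is still generated but the writeback
   * is silently discarded.  */
  static TCGv dest_gpr(DisasContext *dc, unsigned reg)
  {
      return likely(reg != R_ZERO) ? dc->cpu_R[reg] : tcg_temp_new();
  }

  static void free_dest_gpr(TCGv dest, unsigned reg)
  {
      if (unlikely(reg == R_ZERO)) {
          tcg_temp_free(dest);
      }
  }

With that, gen_ldx collapses to dest_gpr / tcg_gen_qemu_ld_tl / free_dest_gpr
and the R_ZERO special-casing stays in one place.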
+#define gen_r_div(fname, insn)                                               \
+static void (fname)(DisasContext *dc, uint32_t code, TCGMemOp flags)         \
+{                                                                            \
+    R_TYPE(instr, (code));                                                   \
+    if (likely(instr.c != R_ZERO)) {                                         \
+        TCGv val = tcg_const_tl(0);                                          \
+        tcg_gen_setcond_tl(TCG_COND_EQ, val, load_gpr((dc), instr.b), val);  \
+        tcg_gen_or_tl(val, val, load_gpr((dc), instr.b));                    \
+        tcg_gen_##insn((dc)->cpu_R[instr.c], load_gpr((dc), instr.a), val);  \
+        tcg_temp_free(val);                                                  \
+    }                                                                        \
+}
+
+gen_r_div(divs, div_tl)
For signed division, you have to protect against 0x80000000 / -1 as well, which raises an overflow exception on the x86 host.
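One way to cover both cases for divs, sketched in the style of the macro
above (untested; the temp names are mine, and this would replace the
setcond/or trick inside the existing likely(instr.c != R_ZERO) guard): fold
the b == 0 and the a == INT_MIN && b == -1 cases into one flag and use it to
force the divisor to 1 before the host division ever sees it.

    TCGv dividend = load_gpr(dc, instr.a);
    TCGv divisor  = tcg_temp_new();
    TCGv special  = tcg_temp_new();    /* 1 if the division must be fixed up */
    TCGv tmp      = tcg_temp_new();
    TCGv zero     = tcg_const_tl(0);

    /* special = (a == INT_MIN && b == -1) || (b == 0) */
    tcg_gen_setcondi_tl(TCG_COND_EQ, special, dividend, INT32_MIN);
    tcg_gen_setcondi_tl(TCG_COND_EQ, tmp, load_gpr(dc, instr.b), -1);
    tcg_gen_and_tl(special, special, tmp);
    tcg_gen_setcondi_tl(TCG_COND_EQ, tmp, load_gpr(dc, instr.b), 0);
    tcg_gen_or_tl(special, special, tmp);

    /* divisor = special ? 1 : b   (special is 0 or 1, so it doubles as 1) */
    tcg_gen_movcond_tl(TCG_COND_NE, divisor, special, zero,
                       special, load_gpr(dc, instr.b));

    tcg_gen_div_tl(dc->cpu_R[instr.c], dividend, divisor);

    tcg_temp_free(zero);
    tcg_temp_free(tmp);
    tcg_temp_free(special);
    tcg_temp_free(divisor);

For divu only the b == 0 half is needed, of course.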
+    /* Set up instruction counts */
+    num_insns = 0;
+    max_insns = tb->cflags & CF_COUNT_MASK;
+    if (max_insns == 0) {
+        max_insns = CF_COUNT_MASK;
+    }
+    if (max_insns > TCG_MAX_INSNS) {
+        max_insns = TCG_MAX_INSNS;
+    }
+    next_page_start = (tb->pc & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
...
+    } while (!dc->is_jmp &&
+             !tcg_op_buf_full() &&
+             !cs->singlestep_enabled &&
+             !singlestep &&
+             dc->pc < next_page_start &&
+             num_insns < max_insns);
Consider

  if (cs->singlestep_enabled || singlestep) {
      max_insns = 1;
  } else {
      int page_insns = (TARGET_PAGE_SIZE - (tb->pc & ~TARGET_PAGE_MASK)) / 4;

      max_insns = tb->cflags & CF_COUNT_MASK;
      if (max_insns == 0) {
          max_insns = CF_COUNT_MASK;
      }
      if (max_insns > page_insns) {
          max_insns = page_insns;
      }
      if (max_insns > TCG_MAX_INSNS) {
          max_insns = TCG_MAX_INSNS;
      }
  }

(note page_insns wants the bytes *remaining* in the page, hence the
~TARGET_PAGE_MASK) so that we collapse the last 4 loop conditions into:
num_insns < max_insns.
+    /* End off the block */
+    gen_tb_end(tb, num_insns);
+
+    /* Mark instruction starts for the final generated instruction */
+    tb->size = dc->pc - tb->pc;
+    tb->icount = num_insns;
No CPU_LOG_TB_IN_ASM disassembly?  I thought patch 1 added a nios2
disassembler.
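Something along these lines at the end of gen_intermediate_code is the usual
pattern -- a sketch only, and the final log_target_disas flags argument
depends on how the patch 1 disassembler is hooked up:

  #ifdef DEBUG_DISAS
      if (qemu_loglevel_mask(CPU_LOG_TB_IN_ASM)
          && qemu_log_in_addr_range(tb->pc)) {
          qemu_log("IN: %s\n", lookup_symbol(tb->pc));
          log_target_disas(cs, tb->pc, dc->pc - tb->pc, 0);
          qemu_log("\n");
      }
  #endif


r~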