In commit 4b1a3e1e34ad97 we added a check for whether the TLB entry we had following a tlb_fill had the INVALID bit set. This could happen in some circumstances because a stale or wrong TLB entry was pulled out of the victim cache. However, after commit 68fea038553039e (which prevents stale entries being in the victim cache) and the previous commit (which ensures we don't incorrectly hit in the victim cache), this should never be possible.
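(For context: the new assert can subsume the old TLB_INVALID_MASK check because tlb_hit() folds the invalid bit into its address comparison, so an entry with TLB_INVALID_MASK set can never satisfy it. A sketch of the helpers, paraphrased from include/exec/cpu-all.h as they stand at this point in the series; not part of this patch:

    /* A page-aligned address matches a TLB entry only if the page
     * addresses agree and TLB_INVALID_MASK is clear: the bit is kept
     * in the mask applied to the entry, and is never set in addr.
     */
    static inline bool tlb_hit_page(target_ulong tlb_addr, target_ulong addr)
    {
        return addr == (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK));
    }

    static inline bool tlb_hit(target_ulong tlb_addr, target_ulong addr)
    {
        return tlb_hit_page(tlb_addr, addr & TARGET_PAGE_MASK);
    }

So asserting tlb_hit() on the entry after tlb_fill() also checks everything the dropped TLB_INVALID_MASK test used to check.)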
Drop the check on TLB_INVALID_MASK from the "is this a TLB_RECHECK?"
condition, and instead assert that the tlb fill procedure has given us
a valid TLB entry (or longjumped out with a guest exception).

Signed-off-by: Peter Maydell <peter.mayd...@linaro.org>
Reviewed-by: Richard Henderson <richard.hender...@linaro.org>
Message-id: 20180713141636.18665-3-peter.mayd...@linaro.org
---
 accel/tcg/cputlb.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 2d5fb15d9a3..563fa30117e 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -970,10 +970,10 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
         if (!VICTIM_TLB_HIT(addr_code, addr)) {
             tlb_fill(ENV_GET_CPU(env), addr, 0, MMU_INST_FETCH, mmu_idx, 0);
         }
+        assert(tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr));
     }
 
-    if (unlikely((env->tlb_table[mmu_idx][index].addr_code &
-                  (TLB_RECHECK | TLB_INVALID_MASK)) == TLB_RECHECK)) {
+    if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
         /*
          * This is a TLB_RECHECK access, where the MMU protection
          * covers a smaller range than a target page, and we must
-- 
2.17.1