In commit 71b9a45330fe220d1 we changed the condition we use to
determine whether we need to refill the TLB in get_page_addr_code()
to

    if (unlikely(env->tlb_table[mmu_idx][index].addr_code !=
                 (addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK)))) {
This isn't the right check (it will falsely fail if the input addr
happens to have the low bit corresponding to TLB_INVALID_MASK set,
for instance). This patchset first factors out the "check for a hit"
logic into some new functions tlb_hit() and tlb_hit_page() (the
latter is for when the address is known to be page-aligned), uses
those functions in the various places that do TLB hit tests, and
then uses tlb_hit() to replace the erroneous code in
get_page_addr_code().

I noticed this while trying to debug Laurent's m68k test case: it
meant that we would come into get_page_addr_code() for a TLB hit,
falsely decide it was a miss, and then fish an older entry out of
the TLB victim cache...

Peter Maydell (2):
  tcg: Define and use new tlb_hit() and tlb_hit_page() functions
  accel/tcg: Correct "is this a TLB miss" check in get_page_addr_code()

 accel/tcg/softmmu_template.h | 16 ++++++----------
 include/exec/cpu-all.h       | 23 +++++++++++++++++++++++
 include/exec/cpu_ldst.h      |  3 +--
 accel/tcg/cputlb.c           | 18 ++++++------------
 4 files changed, 36 insertions(+), 24 deletions(-)

-- 
2.17.1
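
For context, a minimal sketch of what such helpers could look like,
given the semantics described above (this is an illustration, not the
exact contents of the patches; the real definitions are added to
include/exec/cpu-all.h):

    /* Sketch only: true if the page-aligned address @page matches the
     * TLB entry value @tlb_addr (same page, TLB_INVALID_MASK clear).
     */
    static inline bool tlb_hit_page(target_ulong tlb_addr, target_ulong page)
    {
        return page == (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK));
    }

    /* Sketch only: true if @addr (not necessarily page aligned) hits
     * the TLB entry value @tlb_addr.
     */
    static inline bool tlb_hit(target_ulong tlb_addr, target_ulong addr)
    {
        return tlb_hit_page(tlb_addr, addr & TARGET_PAGE_MASK);
    }

The point is that the flag-bit masking is applied to the TLB entry
value rather than to the incoming address, so the refill check in
get_page_addr_code() becomes something like

    if (unlikely(!tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr))) {
        /* TLB miss: refill as before */
    }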