On 10/27/18 1:16 AM, Emilio G. Cota wrote:
> On Tue, Oct 23, 2018 at 08:02:47 +0100, Richard Henderson wrote:
>> +static void tlb_flush_page_locked(CPUArchState *env, int midx,
>> +                                  target_ulong addr)
>> +{
>> +    target_ulong lp_addr = env->tlb_d[midx].large_page_addr;
>> +    target_ulong lp_mask = env->tlb_d[midx].large_page_mask;
>> +
>> +    /* Check if we need to flush due to large pages.  */
>> +    if ((addr & lp_mask) == lp_addr) {
>> +        tlb_debug("forcing full flush midx %d ("
>> +                  TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
>> +                  midx, lp_addr, lp_mask);
>> +        tlb_flush_one_mmuidx_locked(env, midx);
>> +    } else {
>> +        int pidx = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
>> +        tlb_flush_entry_locked(&env->tlb_table[midx][pidx], addr);
>> +        tlb_flush_vtlb_page_locked(env, midx, addr);
>
> Just noticed that we should use tlb_entry here, e.g.:
>
>      } else {
> -        int pidx = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
> -        tlb_flush_entry_locked(&env->tlb_table[midx][pidx], addr);
> +        CPUTLBEntry *entry = tlb_entry(env, midx, addr);
> +
> +        tlb_flush_entry_locked(entry, addr);
>          tlb_flush_vtlb_page_locked(env, midx, addr);
>      }
Fixed, thanks.

r~