On Thu, May 31, 2012 at 1:21 PM, 陳韋任 (Wei-Ren Chen) <che...@iis.sinica.edu.tw> wrote:
>> Hmmm, does it?
>>
>> void helper_invlpg(target_ulong addr)
>> {
>>     helper_svm_check_intercept_param(SVM_EXIT_INVLPG, 0);
>>     tlb_flush_page(env, addr);
>> }
>
> I could be wrong, so let the code speak. ;)
>
> ---
> void tlb_flush_page(CPUArchState *env, target_ulong addr)
> {
>     if ((addr & env->tlb_flush_mask) == env->tlb_flush_addr) {
tlb_flush_addr/tlb_flush_mask describe a region that covers all large pages;
this condition would be false if there are no large pages in the TLB, or if
the invalidated address is far enough away from them.

>         tlb_flush(env, 1);    --- (1)
>         return;
>     }
>
> ... snip ...
>
>     addr &= TARGET_PAGE_MASK;
>     i = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
>     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
>         tlb_flush_entry(&env->tlb_table[mmu_idx][i], addr);
>     }
>
>     tb_flush_jmp_cache(env, addr);
> }
> ---
>
> The comment of tlb_flush() at (1) says:
>
>     QEMU doesn't currently implement a global/not-global flag
>     for tlb entries, at the moment tlb_flush() will also flush all
>     tlb entries in the flush_global == false case.
>
> That's why I got the impression that QEMU flushes the entire TLB. So could it
> flush a particular TLB entry via tlb_flush_entry?

I'd say the probability is high with a 32-bit guest.

--
Thanks.
-- Max
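
Below is a minimal, self-contained toy model (plain C) of the decision
tlb_flush_page() makes, in case anyone wants to poke at the logic outside QEMU.
Everything here (the ToyTlb type, toy_flush_page(), the page and table sizes) is
made up for illustration and is not QEMU's actual cputlb code; only the shape of
the large-page region check and of the single-entry invalidation follows the
snippet quoted above.

---
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_BITS 12
#define PAGE_MASK (~((uint32_t)(1 << PAGE_BITS) - 1))
#define TLB_SIZE  256                     /* entries, power of two */

typedef struct {
    uint32_t vaddr;                       /* page-aligned, UINT32_MAX if empty */
} ToyTlbEntry;

typedef struct {
    ToyTlbEntry table[TLB_SIZE];          /* direct-mapped, like one mmu_idx */
    uint32_t flush_addr;                  /* region covering all large pages */
    uint32_t flush_mask;
} ToyTlb;

static void toy_flush_all(ToyTlb *tlb)
{
    memset(tlb->table, 0xff, sizeof(tlb->table));
    /* No large pages are cached any more, so make the region check below
     * always miss (QEMU resets tlb_flush_addr/tlb_flush_mask similarly). */
    tlb->flush_addr = UINT32_MAX;
    tlb->flush_mask = 0;
    printf("full flush\n");
}

static void toy_flush_page(ToyTlb *tlb, uint32_t addr)
{
    /* Same shape as the check in tlb_flush_page(): if the page might belong
     * to a cached large page, we cannot tell which entries map it, so drop
     * the whole table. */
    if ((addr & tlb->flush_mask) == tlb->flush_addr) {
        toy_flush_all(tlb);
        return;
    }

    addr &= PAGE_MASK;
    size_t i = (addr >> PAGE_BITS) & (TLB_SIZE - 1);
    if (tlb->table[i].vaddr == addr) {
        tlb->table[i].vaddr = UINT32_MAX; /* invalidate just this entry */
        printf("flushed single entry %zu\n", i);
    }
}

int main(void)
{
    ToyTlb tlb;
    memset(&tlb, 0xff, sizeof(tlb));

    /* Pretend a 4 MiB large page at 0x40000000 and a small page at
     * 0x12345000 are currently cached. */
    tlb.flush_addr = 0x40000000;
    tlb.flush_mask = ~((uint32_t)(4 << 20) - 1);
    tlb.table[(0x12345000 >> PAGE_BITS) & (TLB_SIZE - 1)].vaddr = 0x12345000;

    toy_flush_page(&tlb, 0x12345678); /* far from the large page: one entry */
    toy_flush_page(&tlb, 0x40012000); /* inside the large-page region: full flush */
    return 0;
}
---

As I understand it, the reason for the full flush is that the entries belonging
to one large guest page can sit at many different indices of the direct-mapped
table, so there is no cheap way to find and drop only the affected ones.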