On 10/9/24 18:20, Richard Henderson wrote:
> On 10/9/24 16:05, Pierrick Bouvier wrote:
>> @@ -720,13 +728,10 @@ static void tlb_flush_range_locked(CPUState *cpu, int midx,
>>          return;
>>      }
>> +    tlbfast_flush_range_locked(d, f, addr, len, mask);
>> +
>>      for (vaddr i = 0; i < len; i += TARGET_PAGE_SIZE) {
>>          vaddr page = addr + i;
>> -        CPUTLBEntry *entry = tlb_entry(cpu, midx, page);
>> -
>> -        if (tlb_flush_entry_mask_locked(entry, page, mask)) {
>> -            tlb_n_used_entries_dec(cpu, midx);
>> -        }
>>          tlb_flush_vtlb_page_mask_locked(cpu, midx, page, mask);
>>      }
>>  }

>> Why don't we have the same kind of change for tlb_flush_vtlb_page_mask_locked?
>>
>> We now have two loops (one for the entry mask, and one for the page mask).

> It goes away in patch 15.
>
> r~

Right, looks good.
Reviewed-by: Pierrick Bouvier <pierrick.bouv...@linaro.org>
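
(For context: a minimal sketch of what the factored-out helper might look like, reconstructed from the call site tlbfast_flush_range_locked(d, f, addr, len, mask) and the loop this hunk removes. The CPUTLBDesc/CPUTLBDescFast parameter types, the direct fast-table indexing, and the n_used_entries bookkeeping are assumptions based on QEMU's existing tlb_index()/tlb_entry() helpers in accel/tcg/cputlb.c, not copied from the actual patch.)

/*
 * Sketch only: signature and body inferred from the call site and the
 * removed loop above; the real helper in the series may differ.
 */
static void tlbfast_flush_range_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast,
                                       vaddr addr, vaddr len, vaddr mask)
{
    for (vaddr i = 0; i < len; i += TARGET_PAGE_SIZE) {
        vaddr page = addr + i;
        /*
         * Index the fast table directly, instead of going through
         * tlb_entry(cpu, midx, page) as the removed loop did.
         */
        uintptr_t idx = (page >> TARGET_PAGE_BITS)
                        & (fast->mask >> CPU_TLB_ENTRY_BITS);
        CPUTLBEntry *entry = &fast->table[idx];

        if (tlb_flush_entry_mask_locked(entry, page, mask)) {
            desc->n_used_entries--;
        }
    }
}

Presumably the point of taking the descriptor and fast table directly, rather than re-deriving them from CPUState on each use, is that the helper can then serve callers that only hold the per-mmu-idx structures.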
