In the dirty logging case (logging_active == true), we need to collapse
a block entry into a table if necessary. After dirty logging is
canceled, when merging tables back into a block entry, we should not
only free the non-huge page-table pages but also unmap the non-huge
mapping for the block. Without the unmap, inconsistent TLB entries for
the pages in the block will be created.
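In essence, each valid leaf under the anchor has to go through a
break-before-make sequence before the block entry can be installed.
Sketched as a standalone helper for clarity (stage2_coalesce_leaf is an
illustrative name only, not a function added by this patch; everything
it calls already exists in pgtable.c), the per-leaf work is:

static void stage2_coalesce_leaf(u64 addr, u32 level, kvm_pte_t *ptep,
				 struct stage2_map_data *data)
{
	kvm_pte_t pte = *ptep;

	if (!kvm_pte_valid(pte))
		return;

	/* Break: invalidate the leaf before the block entry goes in. */
	kvm_set_invalid_pte(ptep);

	/* Evict any stale translation for this leaf from the TLBs. */
	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);

	/* Drop the refcount on the page-table page holding the leaf. */
	put_page(virt_to_page(ptep));

	/* Keep the d-cache clean for cacheable mappings. */
	if (stage2_pte_cacheable(pte))
		stage2_flush_dcache(kvm_pte_follow(pte),
				    kvm_granule_size(level));
}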
We could also use the unmap_stage2_range API to unmap the non-huge
mapping, but that could potentially free the upper-level page-table
page, which will be useful later.

Signed-off-by: Yanan Wang <wangyana...@huawei.com>
---
 arch/arm64/kvm/hyp/pgtable.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 696b6aa83faf..fec8dc9f2baa 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -500,6 +500,9 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
 	return 0;
 }
 
+static void stage2_flush_dcache(void *addr, u64 size);
+static bool stage2_pte_cacheable(kvm_pte_t pte);
+
 static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 				struct stage2_map_data *data)
 {
@@ -507,9 +510,17 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	struct page *page = virt_to_page(ptep);
 
 	if (data->anchor) {
-		if (kvm_pte_valid(pte))
+		if (kvm_pte_valid(pte)) {
+			kvm_set_invalid_pte(ptep);
+			kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu,
+				     addr, level);
 			put_page(page);
 
+			if (stage2_pte_cacheable(pte))
+				stage2_flush_dcache(kvm_pte_follow(pte),
+						    kvm_granule_size(level));
+		}
+
 		return 0;
 	}
 
@@ -574,7 +585,7 @@ static int stage2_map_walk_table_post(u64 addr, u64 end, u32 level,
  * The behaviour of the LEAF callback then depends on whether or not the
  * anchor has been set. If not, then we're not using a block mapping higher
  * up the table and we perform the mapping at the existing leaves instead.
- * If, on the other hand, the anchor _is_ set, then we drop references to
+ * If, on the other hand, the anchor _is_ set, then we unmap the mapping of
  * all valid leaves so that the pages beneath the anchor can be freed.
  *
  * Finally, the TABLE_POST callback does nothing if the anchor has not
-- 
2.19.1