On Tue, Dec 01, 2020 at 02:05:03PM +0000, Marc Zyngier wrote:
> On 2020-12-01 13:46, Will Deacon wrote:
> > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > index 0271b4a3b9fe..12526d8c7ae4 100644
> > --- a/arch/arm64/kvm/hyp/pgtable.c
> > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > @@ -493,7 +493,7 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
> >             return 0;
> > 
> >     kvm_set_invalid_pte(ptep);
> > -   kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, 0);
> > +   /* TLB invalidation is deferred until the _post handler */
> >     data->anchor = ptep;
> >     return 0;
> >  }
> > @@ -547,11 +547,21 @@ static int stage2_map_walk_table_post(u64 addr, u64 end, u32 level,
> >                                   struct stage2_map_data *data)
> >  {
> >     int ret = 0;
> > +   kvm_pte_t pte = *ptep;
> > 
> >     if (!data->anchor)
> >             return 0;
> > 
> > -   free_page((unsigned long)kvm_pte_follow(*ptep));
> > +   kvm_set_invalid_pte(ptep);
> > +
> > +   /*
> > +    * Invalidate the whole stage-2, as we may have numerous leaf
> > +    * entries below us which would otherwise need invalidating
> > +    * individually.
> > +    */
> > +   kvm_call_hyp(__kvm_tlb_flush_vmid, data->mmu);
> 
> That's a big hammer, and so far we have been pretty careful not to
> over-invalidate. Is the block-replacing-table *without* an unmap
> in between the only case where this triggers?

Yes, this only happens in that case. The alternative would be to issue
an invalidation for every single entry we unmap, which I can implement if
you prefer, but that felt worse to me given that by-IPA invalidation
isn't really great either.
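
Roughly, that alternative would be a per-entry walker step along these
lines (just a sketch to illustrate the trade-off, not a concrete
proposal: stage2_unmap_walker_flush() is a made-up name, and the hyp
call simply mirrors the one removed from the _pre handler above):

static int stage2_unmap_walker_flush(u64 addr, u64 end, u32 level,
				     kvm_pte_t *ptep,
				     struct stage2_map_data *data)
{
	kvm_pte_t pte = *ptep;

	/* Nothing to invalidate for entries that were never valid. */
	if (!kvm_pte_valid(pte))
		return 0;

	kvm_set_invalid_pte(ptep);

	/*
	 * One by-IPA invalidation per entry we tear down: avoids the
	 * VMID-wide flush, but ends up being issued for every leaf
	 * sitting below the table being replaced.
	 */
	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, 0);

	return 0;
}

whereas the hunk above invalidates the table entry once in the _post
handler and issues a single __kvm_tlb_flush_vmid for the whole VM.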

Will
