On Thu, Feb 15, 2024 at 10:31:58AM +0000, Ryan Roberts wrote:
> Split __flush_tlb_range() into __flush_tlb_range_nosync() +
> __flush_tlb_range(), in the same way as the existing flush_tlb_page()
> arrangement. This allows calling __flush_tlb_range_nosync() to elide the
> trailing DSB. Forthcoming "contpte" code will take advantage of this
> when clearing the young bit from a contiguous range of ptes.
> 
> The ordering between dsb() and mmu_notifier_arch_invalidate_secondary_tlbs()
> has changed, but it now aligns with the ordering of __flush_tlb_page(). It
> has been discussed that __flush_tlb_page() may itself be wrong; regardless,
> both issues will be resolved separately if needed.
> 
> Reviewed-by: David Hildenbrand <da...@redhat.com>
> Tested-by: John Hubbard <jhubb...@nvidia.com>
> Signed-off-by: Ryan Roberts <ryan.robe...@arm.com>

Acked-by: Mark Rutland <mark.rutl...@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/tlbflush.h | 13 +++++++++++--
>  1 file changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 1deb5d789c2e..3b0e8248e1a4 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -422,7 +422,7 @@ do {                                                       \
>  #define __flush_s2_tlb_range_op(op, start, pages, stride, tlb_level) \
>       __flush_tlb_range_op(op, start, pages, stride, 0, tlb_level, false, kvm_lpa2_is_enabled());
>  
> -static inline void __flush_tlb_range(struct vm_area_struct *vma,
> +static inline void __flush_tlb_range_nosync(struct vm_area_struct *vma,
>                                    unsigned long start, unsigned long end,
>                                    unsigned long stride, bool last_level,
>                                    int tlb_level)
> @@ -456,10 +456,19 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
>               __flush_tlb_range_op(vae1is, start, pages, stride, asid,
>                                    tlb_level, true, lpa2_is_enabled());
>  
> -     dsb(ish);
>       mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end);
>  }
>  
> +static inline void __flush_tlb_range(struct vm_area_struct *vma,
> +                                  unsigned long start, unsigned long end,
> +                                  unsigned long stride, bool last_level,
> +                                  int tlb_level)
> +{
> +     __flush_tlb_range_nosync(vma, start, end, stride,
> +                              last_level, tlb_level);
> +     dsb(ish);
> +}
> +
>  static inline void flush_tlb_range(struct vm_area_struct *vma,
>                                  unsigned long start, unsigned long end)
>  {
> -- 
> 2.25.1
> 
