(catching up with email after holiday)

On Wed, Oct 29, 2014 at 07:47:39PM +0000, Will Deacon wrote:
> mmu_gather: move minimal range calculations into generic code
>
> On architectures with hardware broadcasting of TLB invalidation
> messages, it makes sense to reduce the range of the mmu_gather
> structure when unmapping page ranges based on the dirty address
> information passed to tlb_remove_tlb_entry.
>
> arm64 already does this by directly manipulating the start/end fields
> of the gather structure, but this confuses the generic code which
> does not expect these fields to change and can end up calculating
> invalid, negative ranges when forcing a flush in zap_pte_range.
>
> This patch moves the minimal range calculation out of the arm64 code
> and into the generic implementation, simplifying zap_pte_range in the
> process (which no longer needs to care about start/end, since they
> will point to the appropriate ranges already).
>
> Signed-off-by: Will Deacon <will.dea...@arm.com>
Nice to see this clean-up for arm64; however, I have a question below.

> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index 5672d7ea1fa0..340bc5c5ca2d 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -128,6 +128,46 @@ static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
>  	tlb_flush_mmu(tlb);
>  }
>
> +static inline void __tlb_adjust_range(struct mmu_gather *tlb,
> +				      unsigned long address)
> +{
> +	if (!tlb->fullmm) {
> +		tlb->start = min(tlb->start, address);
> +		tlb->end = max(tlb->end, address + PAGE_SIZE);
> +	}
> +}

Here __tlb_adjust_range() always assumes an end of (address + PAGE_SIZE),
i.e. that each entry only covers a single-page range.

> @@ -152,12 +193,14 @@ static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
>  #define tlb_remove_pmd_tlb_entry(tlb, pmdp, address)		\
>  	do {							\
>  		tlb->need_flush = 1;				\
> +		__tlb_adjust_range(tlb, address);		\
>  		__tlb_remove_pmd_tlb_entry(tlb, pmdp, address);	\
>  	} while (0)

[...]

>  #define pmd_free_tlb(tlb, pmdp, address)			\
>  	do {							\
>  		tlb->need_flush = 1;				\
> +		__tlb_adjust_range(tlb, address);		\
>  		__pmd_free_tlb(tlb, pmdp, address);		\
>  	} while (0)

This would work on arm64, but is a PAGE_SIZE range enough for all
architectures, even when we flush a huge page or free a pmd/pud table
entry? The approach Peter Z took with his patches was to use
pmd_addr_end(addr, TASK_SIZE) and to change __tlb_adjust_range() to take
start/end arguments:

https://lkml.org/lkml/2011/3/7/302
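Just to illustrate what I mean (an untested sketch based on my reading of
Peter's approach, not code taken from his series), the range helper and
two of its callers could look something like:

static inline void __tlb_adjust_range(struct mmu_gather *tlb,
				      unsigned long start,
				      unsigned long end)
{
	/* Grow the gather range to cover [start, end) rather than a page. */
	if (!tlb->fullmm) {
		tlb->start = min(tlb->start, start);
		tlb->end = max(tlb->end, end);
	}
}

/* A huge pmd entry maps a pmd-sized range, not PAGE_SIZE. */
#define tlb_remove_pmd_tlb_entry(tlb, pmdp, address)			\
	do {								\
		tlb->need_flush = 1;					\
		__tlb_adjust_range(tlb, address,			\
				   pmd_addr_end(address, TASK_SIZE));	\
		__tlb_remove_pmd_tlb_entry(tlb, pmdp, address);		\
	} while (0)

/* Freeing a pte table invalidates the whole pmd range it maps. */
#define pte_free_tlb(tlb, ptep, address)				\
	do {								\
		tlb->need_flush = 1;					\
		__tlb_adjust_range(tlb, address,			\
				   pmd_addr_end(address, TASK_SIZE));	\
		__pte_free_tlb(tlb, ptep, address);			\
	} while (0)

The pmd/pud table freeing cases would pass pud_addr_end()/pgd_addr_end()
respectively, so the flushed range always covers the full region mapped
by the entry being torn down.

-- 
Catalin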