With commit 9fee28baa601 ("powerpc: implement the new page table range
API") we added set_ptes to the powerpc architecture, but the
implementation missed calling the pte filter for all the ptes we are
setting in the range. set_pte_filter can be used to filter pte values;
on some platforms which don't support a coherent icache it clears the
exec bit so that we can flush the icache on an exec fault.
The patch also removes the usage of arch_enter/leave_lazy_mmu() because
set_pte is not supposed to be used when updating an existing pte entry.
The powerpc architecture uses this rule to skip the expensive tlb
invalidate, which is not needed when you are setting up the pte for the
first time. See commit 56eecdb912b5 ("mm: Use ptep/pmdp_set_numa() for
updating _PAGE_NUMA bit") for more details.

Fixes: 9fee28baa601 ("powerpc: implement the new page table range API")
Signed-off-by: Aneesh Kumar K.V <aneesh.ku...@linux.ibm.com>
---
 arch/powerpc/mm/pgtable.c | 33 ++++++++++++++++++++-------------
 1 file changed, 20 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index 3ba9fe411604..95ab20cca2da 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -191,28 +191,35 @@ void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
 		pte_t pte, unsigned int nr)
 {
 	/*
-	 * Make sure hardware valid bit is not set. We don't do
-	 * tlb flush for this update.
+	 * We don't need to call arch_enter/leave_lazy_mmu_mode()
+	 * because we expect set_ptes to only be used on not-present
+	 * and not hw-valid ptes. Hence there is no translation cache
+	 * flush involved that needs to be batched.
 	 */
-	VM_WARN_ON(pte_hw_valid(*ptep) && !pte_protnone(*ptep));
+	for (;;) {
 
-	/* Note: mm->context.id might not yet have been assigned as
-	 * this context might not have been activated yet when this
-	 * is called.
-	 */
-	pte = set_pte_filter(pte);
+		/*
+		 * Make sure hardware valid bit is not set. We don't do
+		 * tlb flush for this update.
+		 */
+		VM_WARN_ON(pte_hw_valid(*ptep) && !pte_protnone(*ptep));
 
-	/* Perform the setting of the PTE */
-	arch_enter_lazy_mmu_mode();
-	for (;;) {
+		/* Note: mm->context.id might not yet have been assigned as
+		 * this context might not have been activated yet when this
+		 * is called.
+		 */
+		pte = set_pte_filter(pte);
+
+		/* Perform the setting of the PTE */
 		__set_pte_at(mm, addr, ptep, pte, 0);
 		if (--nr == 0)
 			break;
 		ptep++;
-		pte = __pte(pte_val(pte) + (1UL << PTE_RPN_SHIFT));
 		addr += PAGE_SIZE;
+		/* increment the pfn */
+		pte = __pte(pte_val(pte) + PAGE_SIZE);
 	}
-	arch_leave_lazy_mmu_mode();
 }
 
 void unmap_kernel_page(unsigned long va)
-- 
2.41.0