On Fri, Aug 18, 2017 at 10:18:31PM +1000, Michael Ellerman wrote:
> Ram Pai <linux...@us.ibm.com> writes:
>
> > replace redundant code in __hash_page_64K(), __hash_page_huge(),
> > __hash_page_4K(), __hash_page_4K() and flush_hash_page() with
> > helper functions pte_get_hash_gslot() and pte_set_hash_slot()
>
> This seems out of order.
>
> At least some of these are patching or even entirely replacing code you
> just added.
>
> > diff --git a/arch/powerpc/mm/hugetlbpage-hash64.c b/arch/powerpc/mm/hugetlbpage-hash64.c
> > index 5964b6d..e6dcd50 100644
> > --- a/arch/powerpc/mm/hugetlbpage-hash64.c
> > +++ b/arch/powerpc/mm/hugetlbpage-hash64.c
> > @@ -112,18 +103,7 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
> >  		return -1;
> >  	}
> >
> > -#ifdef CONFIG_PPC_64K_PAGES
> > -	/*
> > -	 * Insert slot number & secondary bit in PTE second half.
> > -	 */
> > -	hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
> > -	rpte.hidx &= ~(0xfUL);
> > -	*hidxp = rpte.hidx | (slot & 0xfUL);
> > -	/*
> > -	 * check __real_pte for details on matching smp_rmb()
> > -	 */
> > -	smp_wmb();
> > -#endif /* CONFIG_PPC_64K_PAGES */
> > +	new_pte |= pte_set_hash_slot(ptep, rpte, 0, slot);
> >  	}
>
> Here for example. That entire chunk was just added in patch 2.
Had it that way in my earlier patch series, but reviewers found it difficult to verify its correctness. So in this series I introduced the code inline first and did the modularization in a later patch. Looks like you prefer the earlier approach; will do that in the next series.

RP