Michal Hocko <mho...@kernel.org> writes:

> On Tue 05-04-16 12:05:47, Sukadev Bhattiprolu wrote:
> [...]
>> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
>> index d991b9e..081f679 100644
>> --- a/arch/powerpc/mm/hugetlbpage.c
>> +++ b/arch/powerpc/mm/hugetlbpage.c
>> @@ -81,6 +81,13 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
>>  	if (! new)
>>  		return -ENOMEM;
>>  
>> +	/*
>> +	 * Make sure other cpus find the hugepd set only after a
>> +	 * properly initialized page table is visible to them.
>> +	 * For more details look for comment in __pte_alloc().
>> +	 */
>> +	smp_wmb();
>> +
>
> what is the pairing memory barrier?
>
>> 	spin_lock(&mm->page_table_lock);
>> #ifdef CONFIG_PPC_FSL_BOOK3E
>> 	/*
This is documented in __pte_alloc(); I didn't want to repeat the same comment here.

	/*
	 * Ensure all pte setup (eg. pte page lock and page clearing) are
	 * visible before the pte is made visible to other CPUs by being
	 * put into page tables.
	 *
	 * The other side of the story is the pointer chasing in the page
	 * table walking code (when walking the page table without locking;
	 * ie. most of the time). Fortunately, these data accesses consist
	 * of a chain of data-dependent loads, meaning most CPUs (alpha
	 * being the notable exception) will already guarantee loads are
	 * seen in-order. See the alpha page table accessors for the
	 * smp_read_barrier_depends() barriers in page table walking code.
	 */
	smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */
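To make the pairing concrete, below is a stripped-down userspace analogue of the publish/consume pattern (a sketch only, not the actual kernel code; the names shared_pmd, pte_page, publish and walk are made up for illustration). C11 memory_order_release stands in for the smp_wmb()-before-store on the allocation side, and memory_order_consume stands in for the data-dependent load ordering that smp_read_barrier_depends() supplies on alpha.

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for a freshly allocated and initialized page-table page. */
struct pte_page {
	int initialized;
};

/* Stand-in for the pmd/hugepd slot that publishes the new table. */
static _Atomic(struct pte_page *) shared_pmd;

/* Allocation side: fully initialize the page, then publish the pointer. */
static void publish(void)
{
	struct pte_page *new = malloc(sizeof(*new));

	new->initialized = 1;	/* all setup happens before the barrier */

	/*
	 * Release ordering plays the role of smp_wmb() followed by the
	 * store: no earlier store may be reordered past the publication.
	 */
	atomic_store_explicit(&shared_pmd, new, memory_order_release);
}

/*
 * Lockless walker: the load of the pointer and the load through it form
 * a data-dependent chain.  Consume ordering models
 * smp_read_barrier_depends(); on everything except alpha it costs nothing.
 */
static void walk(void)
{
	struct pte_page *p = atomic_load_explicit(&shared_pmd,
						  memory_order_consume);

	if (p)
		printf("initialized = %d\n", p->initialized);
}

int main(void)
{
	publish();
	walk();
	return 0;
}

That is why only the store side of the pairing needs an explicit barrier here: the lockless walkers get their ordering from the address dependency itself.

-aneesh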