On 07/06/2019 at 05:56, Nicholas Piggin wrote:
The change to pmdp_invalidate to mark the pmd with _PAGE_INVALID broke
the synchronisation against lock-free lookups: __find_linux_pte's
pmd_none check no longer returns true for such cases.

Fix this by adding a check for this condition as well.

Fixes: da7ad366b497 ("powerpc/mm/book3s: Update pmd_present to look at _PAGE_PRESENT bit")
Cc: Christophe Leroy <christophe.le...@c-s.fr>
Suggested-by: Aneesh Kumar K.V <aneesh.ku...@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npig...@gmail.com>
---
  arch/powerpc/mm/pgtable.c | 16 ++++++++++++++--
  1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index db4a6253df92..533fc6fa6726 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -372,13 +372,25 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
        pdshift = PMD_SHIFT;
        pmdp = pmd_offset(&pud, ea);
        pmd  = READ_ONCE(*pmdp);
+
        /*
-        * A hugepage collapse is captured by pmd_none, because
-        * it mark the pmd none and do a hpte invalidate.
+        * A hugepage collapse is captured by this condition, see
+        * pmdp_collapse_flush.
         */
        if (pmd_none(pmd))
                return NULL;
+#ifdef CONFIG_PPC_BOOK3S_64
+       /*
+        * A hugepage split is captured by this condition, see
+        * pmdp_invalidate.
+        *
+        * Huge page modification can be caught here too.
+        */
+       if (pmd_is_serializing(pmd))
+               return NULL;
+#endif
+
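
For context, the hunk above calls pmd_is_serializing(), which is not part of
this hunk. On book3s64 it might look roughly like the sketch below, assuming
pmdp_invalidate sets _PAGE_INVALID and clears _PAGE_PRESENT while a THP split
is in progress (pmd_raw() and the _PAGE_* flags are the existing book3s64
definitions):

static inline bool pmd_is_serializing(pmd_t pmd)
{
        /*
         * Sketch only: a pmd undergoing a THP split has been invalidated
         * by pmdp_invalidate(), i.e. _PAGE_PRESENT is clear but
         * _PAGE_INVALID is set, so a lockless walker must bail out
         * rather than dereference it.
         */
        return (pmd_raw(pmd) & cpu_to_be64(_PAGE_PRESENT | _PAGE_INVALID)) ==
                cpu_to_be64(_PAGE_INVALID);
}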

Could get rid of that #ifdef by adding the following in book3s32 and nohash pgtable.h:

static inline bool pmd_is_serializing(pmd_t pmd) { return false; }
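
With such stubs in place, the call site in __find_linux_pte() would need no
#ifdef at all, i.e. roughly:

        if (pmd_none(pmd))
                return NULL;

        /* Covers a THP split in progress, see pmdp_invalidate(). */
        if (pmd_is_serializing(pmd))
                return NULL;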

Christophe

        if (pmd_trans_huge(pmd) || pmd_devmap(pmd)) {
                if (is_thp)
                        *is_thp = true;
