When splitting a huge migrating PMD, we'll transfer the soft-dirty bit
from the huge page to the small pages.  However we may read the wrong
value, since we fetch the bit with pmd_soft_dirty() even when the old
pmd is a migration entry.  Fix it up.

CC: Andrea Arcangeli <aarca...@redhat.com>
CC: Andrew Morton <a...@linux-foundation.org>
CC: "Kirill A. Shutemov" <kirill.shute...@linux.intel.com>
CC: Matthew Wilcox <wi...@infradead.org>
CC: Michal Hocko <mho...@suse.com>
CC: Dave Jiang <dave.ji...@intel.com>
CC: "Aneesh Kumar K.V" <aneesh.ku...@linux.vnet.ibm.com>
CC: Souptick Joarder <jrdr.li...@gmail.com>
CC: Konstantin Khlebnikov <khlebni...@yandex-team.ru>
CC: linux...@kvack.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Peter Xu <pet...@redhat.com>
---

I noticed this during code reading.  Only compile tested.  I'm sending
the patch directly for review comments since the fix is relatively
straightforward but not easy to test.  Please have a look, thanks.
---
 mm/huge_memory.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f2d19e4fe854..fb0787c3dd3b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2161,7 +2161,10 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
                SetPageDirty(page);
        write = pmd_write(old_pmd);
        young = pmd_young(old_pmd);
-       soft_dirty = pmd_soft_dirty(old_pmd);
+       if (unlikely(pmd_migration))
+               soft_dirty = pmd_swp_soft_dirty(old_pmd);
+       else
+               soft_dirty = pmd_soft_dirty(old_pmd);
 
        /*
         * Withdraw the table only after we mark the pmd entry invalid.
-- 
2.17.1
