In copy_present_page(), after we mark the pte non-writable, check
whether the source pte has been dirtied in the meantime and carry the
dirty bit over, so that it is not lost when the saved pte value is
written back.
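
To make the race concrete, here is a small userspace model (plain C,
not kernel code; the TOY_PTE_* bits and toy_* helpers are invented for
illustration and only mimic the semantics of pte_wrprotect(),
pte_mkdirty() and pte_dirty()).  It shows a stale cached pte value
being written back after the live entry has been dirtied, and how
re-checking the live entry first keeps the dirty bit from being lost:

#include <stdint.h>
#include <stdio.h>

/* Toy flag bits standing in for the real pte bits. */
#define TOY_PTE_WRITE 0x1ULL
#define TOY_PTE_DIRTY 0x2ULL

static uint64_t toy_wrprotect(uint64_t pte) { return pte & ~TOY_PTE_WRITE; }
static uint64_t toy_mkdirty(uint64_t pte)   { return pte | TOY_PTE_DIRTY; }
static int toy_dirty(uint64_t pte)          { return (pte & TOY_PTE_DIRTY) != 0; }

int main(void)
{
        uint64_t src_pte = TOY_PTE_WRITE;       /* live entry: writable, clean */
        uint64_t pte = toy_wrprotect(src_pte);  /* fork path caches a wrprotected copy */

        /* Hardware dirties the page through the still-live entry. */
        src_pte = toy_mkdirty(src_pte);

        /*
         * Carry the dirty bit from the live entry into the cached value
         * before writing it back, mirroring the pte_dirty()/pte_mkdirty()
         * check this patch adds.
         */
        if (toy_dirty(src_pte))
                pte = toy_mkdirty(pte);

        src_pte = pte;                          /* write the cached value back */

        printf("dirty bit after write-back: %s\n",
               toy_dirty(src_pte) ? "preserved" : "lost");
        return 0;
}

Dropping the two-line re-check makes the program report the bit as
lost, which is the behaviour this patch fixes.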

Also, avoid write-protecting the pte a second time in copy_present_pte()
when copy_present_page() has already write-protected it.
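
Similarly, for the second point, a minimal sketch (again a toy model
with invented toy_* names, not the real copy_present_pte() /
copy_present_page() code) of why the caller has to refetch the entry
after the helper may have changed it, instead of acting on its stale
local copy:

#include <stdint.h>
#include <stdio.h>

#define TOY_PTE_WRITE 0x1ULL

/*
 * Stand-in for copy_present_page(): it may write-protect the live entry
 * and then tell the caller to carry on with the normal path.
 */
static int toy_copy_present_page(uint64_t *src_pte)
{
        *src_pte &= ~TOY_PTE_WRITE;
        return 1;
}

int main(void)
{
        uint64_t src_pte = TOY_PTE_WRITE;       /* live entry, writable */
        uint64_t pte = src_pte;                 /* caller's cached copy */

        if (toy_copy_present_page(&src_pte) <= 0)
                return 0;

        /*
         * The cached copy is stale: it still says the entry is writable,
         * so the caller would write-protect it a second time.  Refetching,
         * as the "pte = *src_pte" hunk does, avoids that.
         */
        pte = src_pte;

        if (pte & TOY_PTE_WRITE)
                printf("stale pte: would write-protect again\n");
        else
                printf("refetched pte: already write-protected, nothing to do\n");
        return 0;
}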

Cc: Peter Xu <pet...@redhat.com>
Cc: Jason Gunthorpe <j...@ziepe.ca>
Cc: John Hubbard <jhubb...@nvidia.com>
Cc: linux...@kvack.org
Cc: linux-ker...@vger.kernel.org
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Jan Kara <j...@suse.cz>
Cc: Michal Hocko <mho...@suse.com>
Cc: Kirill Shutemov <kir...@shutemov.name>
Cc: Hugh Dickins <hu...@google.com>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.ku...@linux.ibm.com>
---
 mm/memory.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index bfe202ef6244..f57b1f04d50a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -848,6 +848,9 @@ copy_present_page(struct mm_struct *dst_mm, struct mm_struct *src_mm,
        if (likely(!page_maybe_dma_pinned(page)))
                return 1;
 
+       if (pte_dirty(*src_pte))
+               pte = pte_mkdirty(pte);
+
        /*
         * Uhhuh. It looks like the page might be a pinned page,
         * and we actually need to copy it. Now we can set the
@@ -904,6 +907,11 @@ copy_present_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
                if (retval <= 0)
                        return retval;
 
+               /*
+                * Fetch the src pte value again, copy_present_page
+                * could modify it.
+                */
+               pte = *src_pte;
                get_page(page);
                page_dup_rmap(page, false);
                rss[mm_counter(page)]++;
-- 
2.26.2
