Now that conversion to device-exclusive no longer performs an rmap walk
and the main page_vma_mapped_walk() users have been taught to properly
handle nonswap entries, let's treat device-exclusive entries just as if
they were present, similar to how we already handle device-private
entries.
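
For illustration only (not part of this patch), a prepared
page_vma_mapped_walk() user can recognize such an entry roughly as
follows; the helper name is hypothetical:

  #include <linux/swapops.h>

  /* Illustration only: detect a device-exclusive (nonswap) PTE. */
  static bool pte_is_device_exclusive(pte_t pte)
  {
          /* Non-present and not none, i.e. some kind of swp entry. */
          if (!is_swap_pte(pte))
                  return false;
          return is_device_exclusive_entry(pte_to_swp_entry(pte));
  }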

This fixes swapout/migration of folios with device-exclusive entries.

There are likely still some page_vma_mapped_walk() callers that are not
fully prepared for these entries and that simply want to refuse
!pte_present() entries. They have to be fixed independently; the ones in
mm/rmap.c are prepared.
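
As a rough sketch (names hypothetical, not part of this patch), such an
unprepared caller can simply skip anything that is not present for now:

  /* Sketch: a not-yet-prepared caller bailing out on nonswap entries. */
  static void example_walk(struct folio *folio, struct vm_area_struct *vma,
                           unsigned long address)
  {
          DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);

          while (page_vma_mapped_walk(&pvmw)) {
                  /* PMD-mapped THP: no PTE to inspect at this level. */
                  if (!pvmw.pte)
                          continue;
                  /* Refuse nonswap entries such as device-exclusive. */
                  if (!pte_present(ptep_get(pvmw.pte)))
                          continue;
                  /* ... operate on the present mapping ... */
          }
  }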

Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: David Hildenbrand <da...@redhat.com>
---
 mm/memory.c | 17 +----------------
 mm/rmap.c   |  7 -------
 2 files changed, 1 insertion(+), 23 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index db38d6ae4e74..cd689cd8a7c8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -743,20 +743,6 @@ static void restore_exclusive_pte(struct vm_area_struct *vma,
 
        VM_BUG_ON_FOLIO(pte_write(pte) && (!folio_test_anon(folio) &&
                                           PageAnonExclusive(page)), folio);
-
-       /*
-        * No need to take a page reference as one was already
-        * created when the swap entry was made.
-        */
-       if (folio_test_anon(folio))
-               folio_add_anon_rmap_pte(folio, page, vma, address, RMAP_NONE);
-       else
-               /*
-                * Currently device exclusive access only supports anonymous
-                * memory so the entry shouldn't point to a filebacked page.
-                */
-               WARN_ON_ONCE(1);
-
        set_pte_at(vma->vm_mm, address, ptep, pte);
 
        /*
@@ -1628,8 +1614,7 @@ static inline int zap_nonpresent_ptes(struct mmu_gather *tlb,
                 */
                WARN_ON_ONCE(!vma_is_anonymous(vma));
                rss[mm_counter(folio)]--;
-               if (is_device_private_entry(entry))
-                       folio_remove_rmap_pte(folio, page, vma);
+               folio_remove_rmap_pte(folio, page, vma);
                folio_put(folio);
        } else if (!non_swap_entry(entry)) {
                /* Genuine swap entries, hence a private anon pages */
diff --git a/mm/rmap.c b/mm/rmap.c
index 9e2002d97d6f..4acc9f6d743a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2495,13 +2495,6 @@ struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
        /* The pte is writable, uffd-wp does not apply. */
        set_pte_at(mm, addr, fw.ptep, swp_pte);
 
-       /*
-        * TODO: The device-exclusive non-swap PTE holds a folio reference but
-        * does not count as a mapping (mapcount), which is wrong and must be
-        * fixed, otherwise RMAP walks don't behave as expected.
-        */
-       folio_remove_rmap_pte(folio, page, vma);
-
        folio_walk_end(&fw, vma);
        *foliop = folio;
        return page;
-- 
2.48.1
