Now that conversion to device-exclusive no longer performs an
rmap walk and all page_vma_mapped_walk() users were taught to
properly handle device-exclusive entries, let's treat device-exclusive
entries just as if they were present, similar to how we already handle
device-private entries.

This fixes swapout/migration/split/hwpoison of folios with
device-exclusive entries.

We only had to take care of page_vma_mapped_walk() users, because these
traditionally assume pte_present(). Other page table walkers already
have to handle !pte_present(), and some of them might simply skip such
entries (e.g., MADV_PAGEOUT) if they are not specialized to handle them.
This change doesn't modify the latter.

Note that while folios with device-exclusive PTEs can now get migrated,
khugepaged will not collapse a THP if there is a device-exclusive PTE.
Doing so might also not be desired if the device frequently performs
atomics on the same page. Similarly, KSM will never merge order-0 folios
that are device-exclusive.

Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: David Hildenbrand <da...@redhat.com>
---
 mm/memory.c | 17 +----------------
 mm/rmap.c   |  7 -------
 2 files changed, 1 insertion(+), 23 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index ba33ba3b7ea17..e9f54065b117f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -741,20 +741,6 @@ static void restore_exclusive_pte(struct vm_area_struct *vma,
 
        VM_BUG_ON_FOLIO(pte_write(pte) && (!folio_test_anon(folio) &&
                                           PageAnonExclusive(page)), folio);
-
-       /*
-        * No need to take a page reference as one was already
-        * created when the swap entry was made.
-        */
-       if (folio_test_anon(folio))
-               folio_add_anon_rmap_pte(folio, page, vma, address, RMAP_NONE);
-       else
-               /*
-                * Currently device exclusive access only supports anonymous
-                * memory so the entry shouldn't point to a filebacked page.
-                */
-               WARN_ON_ONCE(1);
-
        set_pte_at(vma->vm_mm, address, ptep, pte);
 
        /*
@@ -1626,8 +1612,7 @@ static inline int zap_nonpresent_ptes(struct mmu_gather *tlb,
                 */
                WARN_ON_ONCE(!vma_is_anonymous(vma));
                rss[mm_counter(folio)]--;
-               if (is_device_private_entry(entry))
-                       folio_remove_rmap_pte(folio, page, vma);
+               folio_remove_rmap_pte(folio, page, vma);
                folio_put(folio);
        } else if (!non_swap_entry(entry)) {
                /* Genuine swap entries, hence a private anon pages */
diff --git a/mm/rmap.c b/mm/rmap.c
index 7b737f0f68fb5..e2a543f639ce3 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2511,13 +2511,6 @@ struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
        /* The pte is writable, uffd-wp does not apply. */
        set_pte_at(mm, addr, fw.ptep, swp_pte);
 
-       /*
-        * TODO: The device-exclusive PFN swap PTE holds a folio reference but
-        * does not count as a mapping (mapcount), which is wrong and must be
-        * fixed, otherwise RMAP walks don't behave as expected.
-        */
-       folio_remove_rmap_pte(folio, page, vma);
-
        folio_walk_end(&fw, vma);
        mmu_notifier_invalidate_range_end(&range);
        *foliop = folio;
-- 
2.48.1
