Three mm paths outside the fault handler gate on the uffd PTE bit
today: khugepaged (skips collapse on ranges carrying markers), rmap
(caps unmap batching), and GUP (forces a fault via
gup_can_follow_protnone()). Extend each to treat VM_UFFD_RWP the same
as VM_UFFD_WP; otherwise per-PTE RWP state is silently destroyed or
bypassed.
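
For reference, a minimal sketch of the VMA predicate relied on below;
userfaultfd_protected() is introduced elsewhere in this series, so the
body here is an assumption, not the actual definition:

	/* Assumed shape of the series' helper: true when either uffd
	 * mode keeps per-PTE state that must not be discarded. */
	static inline bool userfaultfd_protected(struct vm_area_struct *vma)
	{
		return vma->vm_flags & (VM_UFFD_WP | VM_UFFD_RWP);
	}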

khugepaged: try_collapse_pte_mapped_thp() and
file_backed_vma_is_retractable() already refuse to collapse or
retract page tables on ranges carrying the uffd PTE bit. Broaden the
VMA predicate from userfaultfd_wp() to userfaultfd_protected() so
VM_UFFD_RWP ranges get the same protection. hpage_collapse_scan_pmd()
needs no change: its existing pte_uffd() check already catches an
RWP PTE, which carries the uffd bit.
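
A sketch of the scan-side check being relied on, with pte_uffd()
assumed from this series to test the shared uffd PTE bit:

	/* In hpage_collapse_scan_pmd(), roughly: */
	if (pte_uffd(pteval)) {
		result = SCAN_PTE_UFFD;
		goto out_unmap;
	}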

rmap: folio_unmap_pte_batch() caps batching at 1 for VM_UFFD_RWP so
the restore path handles each PTE with its own marker.
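
In other words (the actual one-liner is in the diff below):

	/* Each uffd-protected PTE gets its own marker on unmap, so
	 * process a single PTE per iteration instead of a batch. */
	if (userfaultfd_protected(vma))
		return 1;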

GUP: gup_can_follow_protnone() forces a fault on VM_UFFD_RWP VMAs
regardless of FOLL_HONOR_NUMA_FAULT. RWP uses protnone as an
access-tracking marker, not for NUMA hinting, so any GUP, read or
write, must go through the userfaultfd fault path.
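
For context, the GUP slow path consults the helper roughly like this
(cf. follow_page_pte() in mm/gup.c); with VM_UFFD_RWP set it now
always returns false, so a protnone PTE falls back to the fault path:

	if (pte_protnone(pte) && !gup_can_follow_protnone(vma, flags))
		goto no_page;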

Signed-off-by: Kiryl Shutsemau <[email protected]>
Assisted-by: Claude:claude-opus-4-6
---
 include/linux/mm.h | 10 +++++++++-
 mm/khugepaged.c    | 18 +++++++++++-------
 mm/rmap.c          |  2 +-
 3 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1f2b6c6cc572..675480c760a7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4605,11 +4605,19 @@ static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
 
 /*
  * Indicates whether GUP can follow a PROT_NONE mapped page, or whether
- * a (NUMA hinting) fault is required.
+ * a (NUMA hinting or userfaultfd RWP) fault is required.
  */
 static inline bool gup_can_follow_protnone(const struct vm_area_struct *vma,
                                           unsigned int flags)
 {
+       /*
+        * VM_UFFD_RWP uses protnone as an access-tracking marker, not for
+        * NUMA hinting. GUP must always take a fault so the access is
+        * delivered to userfaultfd, regardless of FOLL_HONOR_NUMA_FAULT.
+        */
+       if (vma->vm_flags & VM_UFFD_RWP)
+               return false;
+
        /*
         * If callers don't want to honor NUMA hinting faults, no need to
         * determine if we would actually have to trigger a NUMA hinting fault.
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index de0644bde400..a798c542c849 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1532,8 +1532,11 @@ static enum scan_result try_collapse_pte_mapped_thp(struct mm_struct *mm, unsign
        if (!thp_vma_allowable_order(vma, vma->vm_flags, TVA_FORCED_COLLAPSE, PMD_ORDER))
                return SCAN_VMA_CHECK;
 
-       /* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */
-       if (userfaultfd_wp(vma))
+       /*
+        * Keep pmd pgtable while the uffd bit is in use; see comment in
+        * retract_page_tables().
+        */
+       if (userfaultfd_protected(vma))
                return SCAN_PTE_UFFD;
 
        folio = filemap_lock_folio(vma->vm_file->f_mapping,
@@ -1746,13 +1749,14 @@ static bool file_backed_vma_is_retractable(struct vm_area_struct *vma)
                return false;
 
        /*
-        * When a vma is registered with uffd-wp, we cannot recycle
+        * When a vma is registered with uffd-wp or RWP, we cannot recycle
         * the page table because there may be pte markers installed.
-        * Other vmas can still have the same file mapped hugely, but
-        * skip this one: it will always be mapped in small page size
-        * for uffd-wp registered ranges.
+        * VM_UFFD_RWP additionally tracks accesses in the PTEs
+        * themselves, so they must stay intact. Other vmas can still
+        * have the same file mapped hugely, but skip this one: it will
+        * always be mapped in small page size for these registrations.
         */
-       if (userfaultfd_wp(vma))
+       if (userfaultfd_protected(vma))
                return false;
 
        /*
diff --git a/mm/rmap.c b/mm/rmap.c
index 05056c213203..1426d1ece917 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1965,7 +1965,7 @@ static inline unsigned int folio_unmap_pte_batch(struct folio *folio,
        if (pte_unused(pte))
                return 1;
 
-       if (userfaultfd_wp(vma))
+       if (userfaultfd_protected(vma))
                return 1;
 
        /*
-- 
2.51.2

