On 25.04.25 22:23, Peter Xu wrote:
On Fri, Apr 25, 2025 at 10:17:09AM +0200, David Hildenbrand wrote:
Let's use our new interface. In remap_pfn_range(), we'll now decide
whether we have to track (full VMA covered) or only sanitize the pgprot
(partial VMA covered).

Remember what we have to untrack by linking it from the VMA. When
duplicating VMAs (e.g., splitting, mremap, fork), we'll handle it similarly
to anon VMA names, and use a kref to share the tracking.

Once the last VMA un-refs our tracking data, we'll do the untracking,
which simplifies things a lot and should sort out the various issues we saw
recently, for example, when partially unmapping/zapping a tracked VMA.

This change implies that we'll keep tracking the original PFN range even
after splitting + partially unmapping it: not too bad, because it was
not working reliably before. The only thing that kind-of worked before
was shrinking such a mapping using mremap(): we managed to adjust the
reservation in a hacky way; now we won't adjust the reservation but
leave it around until all involved VMAs are gone.
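
In other words, the lifecycle boils down to roughly the following (a
simplified sketch, not the exact patch code; the dup/drop helper names
and the pfnmap_untrack() signature are only assumed here):

void pfnmap_track_ctx_release(struct kref *ref)
{
        struct pfnmap_track_ctx *ctx = container_of(ref, struct pfnmap_track_ctx, kref);

        /* The last VMA referencing the tracking data is gone: untrack now. */
        pfnmap_untrack(ctx->pfn, ctx->size);
        kfree(ctx);
}

/* On VMA duplication (split/mremap/fork), share the existing tracking data. */
static void pfnmap_track_ctx_dup(struct vm_area_struct *orig,
                                 struct vm_area_struct *new)
{
        if (orig->pfnmap_track_ctx) {
                kref_get(&orig->pfnmap_track_ctx->kref);
                new->pfnmap_track_ctx = orig->pfnmap_track_ctx;
        }
}

/* When a VMA goes away, drop its reference; untracking happens on the last put. */
static void pfnmap_track_ctx_drop(struct vm_area_struct *vma)
{
        if (vma->pfnmap_track_ctx) {
                kref_put(&vma->pfnmap_track_ctx->kref, pfnmap_track_ctx_release);
                vma->pfnmap_track_ctx = NULL;
        }
}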

Signed-off-by: David Hildenbrand <da...@redhat.com>
---
  include/linux/mm_inline.h |  2 +
  include/linux/mm_types.h  | 11 ++++++
  kernel/fork.c             | 54 ++++++++++++++++++++++++--
  mm/memory.c               | 81 +++++++++++++++++++++++++++++++--------
  mm/mremap.c               |  4 --
  5 files changed, 128 insertions(+), 24 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index f9157a0c42a5c..89b518ff097e6 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -447,6 +447,8 @@ static inline bool anon_vma_name_eq(struct anon_vma_name *anon_name1,
  #endif /* CONFIG_ANON_VMA_NAME */

+void pfnmap_track_ctx_release(struct kref *ref);
+
  static inline void init_tlb_flush_pending(struct mm_struct *mm)
  {
        atomic_set(&mm->tlb_flush_pending, 0);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 56d07edd01f91..91124761cfda8 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -764,6 +764,14 @@ struct vma_numab_state {
        int prev_scan_seq;
  };
+#ifdef __HAVE_PFNMAP_TRACKING
+struct pfnmap_track_ctx {
+       struct kref kref;
+       unsigned long pfn;
+       unsigned long size;
+};
+#endif
+
  /*
   * This struct describes a virtual memory area. There is one of these
   * per VM-area/task. A VM area is any part of the process virtual memory
@@ -877,6 +885,9 @@ struct vm_area_struct {
        struct anon_vma_name *anon_name;
  #endif
        struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+#ifdef __HAVE_PFNMAP_TRACKING
+       struct pfnmap_track_ctx *pfnmap_track_ctx;
+#endif

So this was originally the small concern (or is it small?) that this will
grow every vma on x86, am I right?

Yeah, and last time I looked into this, it would have grown it such that it
would require a bigger slab. Right now:

Before this change:

struct vm_area_struct {
        union {
                struct {
                        long unsigned int vm_start;      /*     0     8 */
                        long unsigned int vm_end;        /*     8     8 */
                };                                       /*     0    16 */
                freeptr_t          vm_freeptr;           /*     0     8 */
        };                                               /*     0    16 */
        struct mm_struct *         vm_mm;                /*    16     8 */
        pgprot_t                   vm_page_prot;         /*    24     8 */
        union {
                const vm_flags_t   vm_flags;             /*    32     8 */
                vm_flags_t         __vm_flags;           /*    32     8 */
        };                                               /*    32     8 */
        unsigned int               vm_lock_seq;          /*    40     4 */

        /* XXX 4 bytes hole, try to pack */

        struct list_head           anon_vma_chain;       /*    48    16 */
        /* --- cacheline 1 boundary (64 bytes) --- */
        struct anon_vma *          anon_vma;             /*    64     8 */
        const struct vm_operations_struct  * vm_ops;     /*    72     8 */
        long unsigned int          vm_pgoff;             /*    80     8 */
        struct file *              vm_file;              /*    88     8 */
        void *                     vm_private_data;      /*    96     8 */
        atomic_long_t              swap_readahead_info;  /*   104     8 */
        struct mempolicy *         vm_policy;            /*   112     8 */
        struct vma_numab_state *   numab_state;          /*   120     8 */
        /* --- cacheline 2 boundary (128 bytes) --- */
        refcount_t                 vm_refcnt __attribute__((__aligned__(64))); /*   128     4 */

        /* XXX 4 bytes hole, try to pack */

        struct {
                struct rb_node     rb __attribute__((__aligned__(8))); /*   136    24 */
                long unsigned int  rb_subtree_last;      /*   160     8 */
        } __attribute__((__aligned__(8))) shared __attribute__((__aligned__(8))); /*   136    32 */
        struct anon_vma_name *     anon_name;            /*   168     8 */
        struct vm_userfaultfd_ctx  vm_userfaultfd_ctx;   /*   176     0 */

        /* size: 192, cachelines: 3, members: 18 */
        /* sum members: 168, holes: 2, sum holes: 8 */
        /* padding: 16 */
        /* forced alignments: 2, forced holes: 1, sum forced holes: 4 */
} __attribute__((__aligned__(64)));

After this change:

struct vm_area_struct {
        union {
                struct {
                        long unsigned int vm_start;      /*     0     8 */
                        long unsigned int vm_end;        /*     8     8 */
                };                                       /*     0    16 */
                freeptr_t          vm_freeptr;           /*     0     8 */
        };                                               /*     0    16 */
        struct mm_struct *         vm_mm;                /*    16     8 */
        pgprot_t                   vm_page_prot;         /*    24     8 */
        union {
                const vm_flags_t   vm_flags;             /*    32     8 */
                vm_flags_t         __vm_flags;           /*    32     8 */
        };                                               /*    32     8 */
        unsigned int               vm_lock_seq;          /*    40     4 */

        /* XXX 4 bytes hole, try to pack */

        struct list_head           anon_vma_chain;       /*    48    16 */
        /* --- cacheline 1 boundary (64 bytes) --- */
        struct anon_vma *          anon_vma;             /*    64     8 */
        const struct vm_operations_struct  * vm_ops;     /*    72     8 */
        long unsigned int          vm_pgoff;             /*    80     8 */
        struct file *              vm_file;              /*    88     8 */
        void *                     vm_private_data;      /*    96     8 */
        atomic_long_t              swap_readahead_info;  /*   104     8 */
        struct mempolicy *         vm_policy;            /*   112     8 */
        struct vma_numab_state *   numab_state;          /*   120     8 */
        /* --- cacheline 2 boundary (128 bytes) --- */
        refcount_t                 vm_refcnt __attribute__((__aligned__(64))); /*   128     4 */

        /* XXX 4 bytes hole, try to pack */

        struct {
                struct rb_node     rb __attribute__((__aligned__(8))); /*   136    24 */
                long unsigned int  rb_subtree_last;      /*   160     8 */
        } __attribute__((__aligned__(8))) shared __attribute__((__aligned__(8))); /*   136    32 */
        struct anon_vma_name *     anon_name;            /*   168     8 */
        struct vm_userfaultfd_ctx  vm_userfaultfd_ctx;   /*   176     0 */
        struct pfnmap_track_ctx *  pfnmap_track_ctx;     /*   176     8 */

        /* size: 192, cachelines: 3, members: 19 */
        /* sum members: 176, holes: 2, sum holes: 8 */
        /* padding: 8 */
        /* forced alignments: 2, forced holes: 1, sum forced holes: 4 */
} __attribute__((__aligned__(64)));

Observe that we allocate 192 bytes with or without pfnmap_track_ctx. (IIRC,
slab sizes are ... 128, 192, 256, 512, ...)
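
The reason the extra pointer is free space-wise is the forced 64-byte
alignment of the whole struct: 168 vs. 176 bytes of members both pad out
to the same 192-byte object. A quick user-space illustration of that
effect (hypothetical stand-in structs, obviously not the real
vm_area_struct):

#include <stdio.h>

/*
 * Stand-ins for vm_area_struct before/after the change: 168 vs. 176
 * bytes of members, both forced to 64-byte alignment like the real
 * struct (via the __aligned__(64) shown in the pahole output above).
 */
struct before { char members[168]; } __attribute__((aligned(64)));
struct after  { char members[176]; } __attribute__((aligned(64)));

int main(void)
{
        /* Both round up to 192 bytes, i.e., the same object size. */
        printf("before: %zu, after: %zu\n",
               sizeof(struct before), sizeof(struct after));
        return 0;
}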


After all, pfnmap vmas are the minority, so I was wondering whether we could
work it out without extending the vma struct.

Heh, similar to userfaultfd on most systems, or ones with a mempolicy, or
anon vma names, ... :)

But yeah, pfnmap is certainly a minority as well.


I had a quick thought quite a while ago, but never tried it out (it was almost
off-track since vfio switched away from remap_pfn_range..), which is to
have x86 maintain its own mapping of vma <-> pfn tracking using a global
structure.  After all, the memtype code did it already with the
memtype_rbroot, so I was wondering whether the vma info could be remembered
there as well, so as to get rid of get_pat_info() too.

Maybe it also needs the 2nd layer like what you did with the track ctx, but
with the tree maintaining the mapping instead of adding the ctx pointer to
the vma.

Maybe it could work by squashing the two layers (or say, extending the
memtype rbtree), but maybe not..

It could make the pfn lookup slightly slower than dereferencing
vma->pfnmap_track_ctx while holding a vma ref, but I assume that's OK
considering that track/untrack should be a slow path for pfnmaps, and there
shouldn't be a huge lot of pfnmaps.

I didn't think further, but if that works it'll definitely avoid the
additional field on x86 vmas.  I'm curious whether you explored that
direction, or maybe it's a known decision that the extra 8 bytes aren't a concern.

When discussing this approach with Lorenzo, I raised that we could simply
map the VMA to that structure using an xarray.
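
Roughly something like this (illustrative sketch only; keying the xarray by
the VMA pointer and all the helper names here are made up):

/* Side lookup table instead of a pointer in the VMA. */
static DEFINE_XARRAY(pfnmap_track_ctxs);

static int pfnmap_track_ctx_attach(struct vm_area_struct *vma,
                                   struct pfnmap_track_ctx *ctx)
{
        return xa_err(xa_store(&pfnmap_track_ctxs, (unsigned long)vma,
                               ctx, GFP_KERNEL));
}

static struct pfnmap_track_ctx *pfnmap_track_ctx_lookup(struct vm_area_struct *vma)
{
        return xa_load(&pfnmap_track_ctxs, (unsigned long)vma);
}

static void pfnmap_track_ctx_detach(struct vm_area_struct *vma)
{
        struct pfnmap_track_ctx *ctx;

        ctx = xa_erase(&pfnmap_track_ctxs, (unsigned long)vma);
        if (ctx)
                kref_put(&ctx->kref, pfnmap_track_ctx_release);
}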

But then, if we're effectively not allocating any more space, it's probably
not worth adding more complexity right now.

--
Cheers,

David / dhildenb
