On 2/2/26 12:36, Jordan Niethe wrote:
> A future change will remove device private pages from the physical
> address space. This will mean that device private pages no longer have a
> normal PFN and must be handled separately.
> 
> Prepare for this by adding a PVMW_DEVICE_PRIVATE flag to
> page_vma_mapped_walk::flags. This indicates that
> page_vma_mapped_walk::pfn contains a device private offset rather than a
> normal pfn.
> 
> Once the device private pages are removed from the physical address
> space this flag will be used to ensure a device private offset is
> returned.
> 
> Reviewed-by: Zi Yan <[email protected]>
> Signed-off-by: Jordan Niethe <[email protected]>
> Signed-off-by: Alistair Popple <[email protected]>
> ---
> v1:
>   - Update for HMM huge page support
> v2:
>   - Move adding device_private param to check_pmd() until final patch
> v3:
>   - Track device private offset in pvmw::flags instead of pvmw::pfn
> v4:
>   - No change
> ---
>  include/linux/rmap.h | 24 ++++++++++++++++++++++--
>  mm/page_vma_mapped.c |  4 ++--
>  mm/rmap.c            |  4 ++--
>  mm/vmscan.c          |  2 +-
>  4 files changed, 27 insertions(+), 7 deletions(-)
> 
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index daa92a58585d..1b03297f13dc 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -921,6 +921,8 @@ struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
>  #define PVMW_SYNC            (1 << 0)
>  /* Look for migration entries rather than present PTEs */
>  #define PVMW_MIGRATION               (1 << 1)
> +/* pvmw::pfn is a device private offset */
> +#define PVMW_DEVICE_PRIVATE  (1 << 2)
>  
>  /* Result flags */
>  
> @@ -939,14 +941,32 @@ struct page_vma_mapped_walk {
>       unsigned int flags;
>  };
>  
> +static inline unsigned long page_vma_walk_flags(const struct folio *folio,
> +                                             unsigned long flags)
> +{
> +     if (folio_is_device_private(folio))
> +             return flags | PVMW_DEVICE_PRIVATE;
> +     return flags;
> +}
> +
> +static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)
> +{
> +     return folio_pfn(folio);
> +}
> +
> +static inline struct folio *page_vma_walk_pfn_to_folio(struct page_vma_mapped_walk *pvmw)
> +{
> +     return pfn_folio(pvmw->pfn);
> +}
> +
>  #define DEFINE_FOLIO_VMA_WALK(name, _folio, _vma, _address, _flags)  \
>       struct page_vma_mapped_walk name = {                            \
> -             .pfn = folio_pfn(_folio),                               \
> +             .pfn = folio_page_vma_walk_pfn(_folio),                 \
>               .nr_pages = folio_nr_pages(_folio),                     \
>               .pgoff = folio_pgoff(_folio),                           \
>               .vma = _vma,                                            \
>               .address = _address,                                    \
> -             .flags = _flags,                                        \
> +             .flags = page_vma_walk_flags(_folio, _flags),           \
>       }

That's all rather horrible ...


I was asking myself recently why something that is called
"page_vma_mapped_walk" consumes a pfn. It's just a horrible interface.


* DEFINE_FOLIO_VMA_WALK() users obviously receive a folio.
* mm/migrate_device.c just abuses page_vma_mapped_walk() to make
  set_pmd_migration_entry() work. But we have a folio.
* page_mapped_in_vma() has a page/folio.

mapping_wrprotect_range_one() and pfn_mkclean_range() are the real
issues. They both end up calling page_vma_mkclean_one(), which does not
operate on pages/folios.

Ideally, the odd pfn case would use its own simplified infrastructure.


So, could we simply add a folio+page pointer in case we have one, and
use that one if set, leaving the pfn unset?

Then, the pfn would only be set for the
mapping_wrprotect_range_one/pfn_mkclean_range case. I don't think
device-private folios would ever have to mess with that.


Then, you just always have a folio+page and don't even have to worry
about the pfn?
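
Roughly something like this, just to illustrate the idea (completely
untested; the new "folio" field and the comments are made up, and I'm
only showing the folio part, not the page pointer):

	struct page_vma_mapped_walk {
		/* Only set for the pfn-only callers, e.g. pfn_mkclean_range(). */
		unsigned long pfn;
		/* Set whenever the caller actually has a folio; preferred if set. */
		struct folio *folio;
		unsigned long nr_pages;
		pgoff_t pgoff;
		struct vm_area_struct *vma;
		unsigned long address;
		/* ... pte/pmd/ptl and the remaining fields stay as they are ... */
		unsigned int flags;
	};

	#define DEFINE_FOLIO_VMA_WALK(name, _folio, _vma, _address, _flags)	\
		struct page_vma_mapped_walk name = {				\
			.folio = _folio,					\
			.nr_pages = folio_nr_pages(_folio),			\
			.pgoff = folio_pgoff(_folio),				\
			.vma = _vma,						\
			.address = _address,					\
			.flags = _flags,					\
		}

page_vma_mapped_walk() would then use pvmw->folio whenever it is set
and only fall back to pvmw->pfn for the two pfn-only callers, so
device-private folios would never have to encode anything in the pfn.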


-- 
Cheers,

David
