Hi,

On 9/3/25 04:18, Balbir Singh wrote:

> MIGRATE_VMA_SELECT_COMPOUND will be used to select THP pages during
> migrate_vma_setup(), and MIGRATE_PFN_COMPOUND will mark device pages
> that are migrated as compound pages during device pfn migration.
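
For illustration, a driver opting in to THP selection might fill
struct migrate_vma along these lines (a hedged sketch, not taken from
the patch; src_pfns, dst_pfns and owner are placeholder names):

	struct migrate_vma args = {
		.vma		= vma,
		.start		= addr,
		.end		= addr + HPAGE_PMD_SIZE,
		.src		= src_pfns,
		.dst		= dst_pfns,
		.pgmap_owner	= owner,
		.flags		= MIGRATE_VMA_SELECT_SYSTEM |
				  MIGRATE_VMA_SELECT_COMPOUND,
	};

	ret = migrate_vma_setup(&args);
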
>
> migrate_device code paths go through the collect, setup
> and finalize phases of migration.
>
> The entries in the src and dst arrays passed to these functions still
> remain at PAGE_SIZE granularity. When a compound page is passed, the
> first entry has the PFN along with MIGRATE_PFN_COMPOUND and other
> flags set (MIGRATE_PFN_MIGRATE, MIGRATE_PFN_VALID); the remaining
> (HPAGE_PMD_NR - 1) entries are filled with zeros. This representation
> allows the compound page to be split into smaller page sizes.
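
To make that layout concrete, a consumer walking the src array would
presumably step over entries like this (hypothetical loop, using only
the flags and HPAGE_PMD_NR as described above):

	unsigned long i = 0;

	while (i < args.npages) {
		unsigned long mpfn = args.src[i];

		if ((mpfn & MIGRATE_PFN_MIGRATE) &&
		    (mpfn & MIGRATE_PFN_COMPOUND))
			/* head entry covers HPAGE_PMD_NR pages; the
			 * next HPAGE_PMD_NR - 1 entries are zero */
			i += HPAGE_PMD_NR;
		else
			i++;
	}
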
>
> migrate_vma_collect_hole() and migrate_vma_collect_pmd() are now
> THP-aware. Two new helper functions, migrate_vma_collect_huge_pmd()
> and migrate_vma_insert_huge_pmd_page(), have been added.
>
> migrate_vma_collect_huge_pmd() can collect THP pages, but if this
> fails for any reason, there is fallback support to split the folio
> and migrate it.
>
> migrate_vma_insert_huge_pmd_page() closely follows the logic of
> migrate_vma_insert_page().
>
> Support for splitting pages as needed for migration will follow in
> later patches in this series.
>
> Cc: Andrew Morton <a...@linux-foundation.org>
> Cc: David Hildenbrand <da...@redhat.com>
> Cc: Zi Yan <z...@nvidia.com>
> Cc: Joshua Hahn <joshua.hah...@gmail.com>
> Cc: Rakie Kim <rakie....@sk.com>
> Cc: Byungchul Park <byungc...@sk.com>
> Cc: Gregory Price <gou...@gourry.net>
> Cc: Ying Huang <ying.hu...@linux.alibaba.com>
> Cc: Alistair Popple <apop...@nvidia.com>
> Cc: Oscar Salvador <osalva...@suse.de>
> Cc: Lorenzo Stoakes <lorenzo.stoa...@oracle.com>
> Cc: Baolin Wang <baolin.w...@linux.alibaba.com>
> Cc: "Liam R. Howlett" <liam.howl...@oracle.com>
> Cc: Nico Pache <npa...@redhat.com>
> Cc: Ryan Roberts <ryan.robe...@arm.com>
> Cc: Dev Jain <dev.j...@arm.com>
> Cc: Barry Song <bao...@kernel.org>
> Cc: Lyude Paul <ly...@redhat.com>
> Cc: Danilo Krummrich <d...@kernel.org>
> Cc: David Airlie <airl...@gmail.com>
> Cc: Simona Vetter <sim...@ffwll.ch>
> Cc: Ralph Campbell <rcampb...@nvidia.com>
> Cc: Mika Penttilä <mpent...@redhat.com>
> Cc: Matthew Brost <matthew.br...@intel.com>
> Cc: Francois Dugast <francois.dug...@intel.com>
>
> Signed-off-by: Balbir Singh <balb...@nvidia.com>
> ---
>  include/linux/migrate.h |   2 +
>  mm/migrate_device.c     | 456 ++++++++++++++++++++++++++++++++++------
>  2 files changed, 395 insertions(+), 63 deletions(-)
>
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 9009e27b5f44..40e1c792eb54 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -134,6 +134,7 @@ static inline int migrate_misplaced_folio(struct folio *folio, int node)
>  #define MIGRATE_PFN_VALID    (1UL << 0)
>  #define MIGRATE_PFN_MIGRATE  (1UL << 1)
>  #define MIGRATE_PFN_WRITE    (1UL << 3)
> +#define MIGRATE_PFN_COMPOUND (1UL << 4)
>  #define MIGRATE_PFN_SHIFT    6
>  
>  static inline struct page *migrate_pfn_to_page(unsigned long mpfn)
> @@ -152,6 +153,7 @@ enum migrate_vma_direction {
>       MIGRATE_VMA_SELECT_SYSTEM = 1 << 0,
>       MIGRATE_VMA_SELECT_DEVICE_PRIVATE = 1 << 1,
>       MIGRATE_VMA_SELECT_DEVICE_COHERENT = 1 << 2,
> +     MIGRATE_VMA_SELECT_COMPOUND = 1 << 3,
>  };
>  
>  struct migrate_vma {
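
As an aside on the encoding: with the new bit, a destination entry for
a driver-allocated huge device page could presumably be built as below
(a sketch; dpage is a hypothetical head page of a PMD-sized device
folio):

	/* migrate_pfn() shifts the pfn by MIGRATE_PFN_SHIFT and sets
	 * MIGRATE_PFN_VALID; MIGRATE_PFN_COMPOUND marks the entry as
	 * covering a whole PMD-sized folio. */
	args.dst[i] = migrate_pfn(page_to_pfn(dpage)) |
		      MIGRATE_PFN_COMPOUND;
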
> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> index e58c3f9d01c8..aba0cd7856da 100644
> --- a/mm/migrate_device.c
> +++ b/mm/migrate_device.c
> @@ -14,6 +14,7 @@
>  #include <linux/pagewalk.h>
>  #include <linux/rmap.h>
>  #include <linux/swapops.h>
> +#include <asm/pgalloc.h>
>  #include <asm/tlbflush.h>
>  #include "internal.h"
>  
> @@ -44,6 +45,23 @@ static int migrate_vma_collect_hole(unsigned long start,
>       if (!vma_is_anonymous(walk->vma))
>               return migrate_vma_collect_skip(start, end, walk);
>  
> +     if (thp_migration_supported() &&
> +             (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) &&
> +             (IS_ALIGNED(start, HPAGE_PMD_SIZE) &&
> +              IS_ALIGNED(end, HPAGE_PMD_SIZE))) {
> +             migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE |
> +                                             MIGRATE_PFN_COMPOUND;
> +             migrate->dst[migrate->npages] = 0;
> +             migrate->npages++;
> +             migrate->cpages++;
> +
> +             /*
> +              * Collect the remaining entries as holes, in case we
> +              * need to split later
> +              */
> +             return migrate_vma_collect_skip(start + PAGE_SIZE, end, walk);
> +     }
> +

Seems you have to split_huge_pmd() for the huge zero page here in case
of !thp_migration_supported(), afaics.
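
Something along these lines, perhaps (a rough sketch of that
suggestion, assuming it sits where pmdp and addr are in scope in the
pmd collection path):

	if (!thp_migration_supported() && is_huge_zero_pmd(*pmdp)) {
		/* cannot migrate the huge zero page as a compound
		 * page; split the PMD and collect at PTE level */
		split_huge_pmd(walk->vma, pmdp, addr);
	}
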

>       for (addr = start; addr < end; addr += PAGE_SIZE) {
>               migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
>               migrate->dst[migrate->npages] = 0;
> @@ -102,57 +120,150 @@ static int migrate_vma_split_folio(struct folio *folio,
>       return 0;
>  }

--Mika
