On 7 Jan 2026, at 15:20, Zi Yan wrote:

> +THP folks

+willy, since he commented in another thread.

>
> On 16 Dec 2025, at 15:10, Francois Dugast wrote:
>
>> From: Matthew Brost <[email protected]>
>>
>> Introduce migrate_device_split_page() to split a device page into
>> lower-order pages. Used when a folio allocated as higher-order is freed
>> and later reallocated at a smaller order by the driver memory manager.
>>
>> Cc: Andrew Morton <[email protected]>
>> Cc: Balbir Singh <[email protected]>
>> Cc: [email protected]
>> Cc: [email protected]
>> Signed-off-by: Matthew Brost <[email protected]>
>> Signed-off-by: Francois Dugast <[email protected]>
>> ---
>>  include/linux/huge_mm.h |  3 +++
>>  include/linux/migrate.h |  1 +
>>  mm/huge_memory.c        |  6 ++---
>>  mm/migrate_device.c     | 49 +++++++++++++++++++++++++++++++++++++++++
>>  4 files changed, 56 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index a4d9f964dfde..6ad8f359bc0d 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -374,6 +374,9 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
>>  int folio_split_unmapped(struct folio *folio, unsigned int new_order);
>>  unsigned int min_order_for_split(struct folio *folio);
>>  int split_folio_to_list(struct folio *folio, struct list_head *list);
>> +int __split_unmapped_folio(struct folio *folio, int new_order,
>> +                       struct page *split_at, struct xa_state *xas,
>> +                       struct address_space *mapping, enum split_type split_type);
>>  int folio_check_splittable(struct folio *folio, unsigned int new_order,
>>                         enum split_type split_type);
>>  int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
>> index 26ca00c325d9..ec65e4fd5f88 100644
>> --- a/include/linux/migrate.h
>> +++ b/include/linux/migrate.h
>> @@ -192,6 +192,7 @@ void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns,
>>                      unsigned long npages);
>>  void migrate_device_finalize(unsigned long *src_pfns,
>>                      unsigned long *dst_pfns, unsigned long npages);
>> +int migrate_device_split_page(struct page *page);
>>
>>  #endif /* CONFIG_MIGRATION */
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 40cf59301c21..7ded35a3ecec 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3621,9 +3621,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>>   * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
>>   * split but not to @new_order, the caller needs to check)
>>   */
>> -static int __split_unmapped_folio(struct folio *folio, int new_order,
>> -            struct page *split_at, struct xa_state *xas,
>> -            struct address_space *mapping, enum split_type split_type)
>> +int __split_unmapped_folio(struct folio *folio, int new_order,
>> +                       struct page *split_at, struct xa_state *xas,
>> +                       struct address_space *mapping, enum split_type split_type)
>>  {
>>      const bool is_anon = folio_test_anon(folio);
>>      int old_order = folio_order(folio);
>> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
>> index 23379663b1e1..eb0f0e938947 100644
>> --- a/mm/migrate_device.c
>> +++ b/mm/migrate_device.c
>> @@ -775,6 +775,49 @@ int migrate_vma_setup(struct migrate_vma *args)
>>  EXPORT_SYMBOL(migrate_vma_setup);
>>
>>  #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
>> +/**
>> + * migrate_device_split_page() - Split device page
>> + * @page: Device page to split
>> + *
>> + * Splits a device page into smaller pages. Typically called when
>> + * reallocating a folio to a smaller size. Inherently racy: only safe if
>> + * the caller ensures mutual exclusion within the page's folio (i.e., no
>> + * other threads are using pages within the folio). Expected to be called
>> + * on a free device page; restores all split-out pages to a free state.
>> + */

Do you mind explaining why __split_unmapped_folio() is needed for a free device
page? A free page is not supposed to be a large folio, at least from a core
MM point of view. __split_unmapped_folio() is intended to work on large folios
(or compound pages), even if the input folio has refcount == 0 (because it is
frozen).
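
For context, the existing split paths in mm/huge_memory.c operate on a large
folio that is locked, unmapped, and whose refcount has been frozen just before
the split. Roughly (a simplified sketch, not the exact upstream code; the
extra_pins accounting is elided):

```c
/* Sketch: the folio is a large (compound) folio, locked and fully
 * unmapped; its refcount is frozen to 0 only for the duration of the
 * split, then unfrozen. A free page never goes through this path. */
if (!folio_ref_freeze(folio, 1 + extra_pins))
	return -EAGAIN;	/* transient references, cannot split now */
ret = __split_unmapped_folio(folio, new_order, split_at, xas, mapping,
			     SPLIT_TYPE_UNIFORM);
```

That is why calling it on a refcount-0 *free* page looks surprising from the
core MM side: the zero refcount there means "frozen mid-split", not "free".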

>> +int migrate_device_split_page(struct page *page)
>> +{
>> +    struct folio *folio = page_folio(page);
>> +    struct dev_pagemap *pgmap = folio->pgmap;
>> +    struct page *unlock_page = folio_page(folio, 0);
>> +    unsigned int order = folio_order(folio), i;
>> +    int ret = 0;
>> +
>> +    VM_BUG_ON_FOLIO(!order, folio);
>> +    VM_BUG_ON_FOLIO(!folio_is_device_private(folio), folio);
>> +    VM_BUG_ON_FOLIO(folio_ref_count(folio), folio);

Please use VM_WARN_ON_FOLIO() instead to catch errors. There is no need to crash
the kernel.
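i.e., something like (untested, just to illustrate the warn-instead-of-panic
pattern):

```c
	/* Warn and dump the folio on misuse instead of BUG()ing the box. */
	VM_WARN_ON_FOLIO(!order, folio);
	VM_WARN_ON_FOLIO(!folio_is_device_private(folio), folio);
	VM_WARN_ON_FOLIO(folio_ref_count(folio), folio);
```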

>> +
>> +    folio_lock(folio);
>> +
>> +    ret = __split_unmapped_folio(folio, 0, page, NULL, NULL, SPLIT_TYPE_UNIFORM);
>> +    if (ret) {
>> +           /*
>> +            * We can't fail here unless the caller doesn't know what they
>> +            * are doing.
>> +            */
>> +            VM_BUG_ON_FOLIO(ret, folio);

Same here.

>> +
>> +            return ret;
>> +    }
>> +
>> +    for (i = 0; i < 0x1 << order; ++i, ++unlock_page) {
>> +            page_folio(unlock_page)->pgmap = pgmap;
>> +            folio_unlock(page_folio(unlock_page));
>> +    }
>> +
>> +    return 0;
>> +}
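
Also, to check my understanding of the intended usage, the driver-side flow
would be roughly the following (hypothetical sketch, function name made up;
relies on the caller-provided exclusion the kernel-doc mentions):

```c
/* Hypothetical driver allocator path (illustrative only): a free
 * high-order device folio is split so a smaller allocation can be
 * carved out of it; all resulting order-0 pages end up free. */
static struct page *drm_pagemap_alloc_small(struct page *free_huge_page)
{
	int err;

	/* Caller guarantees nothing else touches this free folio. */
	err = migrate_device_split_page(free_huge_page);
	if (err)
		return NULL;

	/* Hand out the first order-0 page; the remainder stay on the
	 * driver's free list. */
	return free_huge_page;
}
```

Is that the model, or is the split expected from a different context?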
>> +
>>  /**
>>   * migrate_vma_insert_huge_pmd_page: Insert a huge folio into @migrate->vma->vm_mm
>>   * at @addr. folio is already allocated as a part of the migration process with
>> @@ -927,6 +970,11 @@ static int migrate_vma_split_unmapped_folio(struct migrate_vma *migrate,
>>      return ret;
>>  }
>>  #else /* !CONFIG_ARCH_ENABLE_THP_MIGRATION */
>> +int migrate_device_split_page(struct page *page)
>> +{
>> +    return 0;
>> +}
>> +
>>  static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>>                                       unsigned long addr,
>>                                       struct page *page,
>> @@ -943,6 +991,7 @@ static int migrate_vma_split_unmapped_folio(struct migrate_vma *migrate,
>>      return 0;
>>  }
>>  #endif
>> +EXPORT_SYMBOL(migrate_device_split_page);
>>
>>  static unsigned long migrate_vma_nr_pages(unsigned long *src)
>>  {
>> -- 
>> 2.43.0
>
>
> Best Regards,
> Yan, Zi


Best Regards,
Yan, Zi
