On 4/29/26 9:29 AM, Zi Yan wrote:
collapse_file() requires filesystems that support large folios of at least
PMD_ORDER, so replace the READ_ONLY_THP_FOR_FS check with a check for that.
MADV_COLLAPSE ignores the shmem huge config, so skip the check for shmem.

While at it, replace VM_BUG_ON with VM_WARN_ON_ONCE.

Add a helper, mapping_pmd_folio_support(), which checks whether a filesystem
supports large folios of at least PMD_ORDER.

Signed-off-by: Zi Yan <[email protected]>
Reviewed-by: Lance Yang <[email protected]>
Reviewed-by: Baolin Wang <[email protected]>
---
  include/linux/pagemap.h | 26 ++++++++++++++++++++++++++
  mm/khugepaged.c         | 10 ++++++++--
  2 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 1f50991b43e3b..1fed3414fe9b8 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -513,6 +513,32 @@ static inline bool mapping_large_folio_support(const struct address_space *mappi
        return mapping_max_folio_order(mapping) > 0;
  }
+/**
+ * mapping_pmd_folio_support() - Check if a mapping supports PMD-sized folios
+ * @mapping: The address_space
+ *
+ * Some filesystems support large folios, but not folios as large as PMD order.
+ * Before attempting to create a PMD-sized pagecache folio on a filesystem,
+ * this check needs to be performed first.
+ *
+ * Return: true if PMD-sized folios are supported, false if they are not.
+ */
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static inline bool mapping_pmd_folio_support(const struct address_space *mapping)
+{
+       /* AS_FOLIO_ORDER is only reasonable for pagecache folios */
+       VM_WARN_ON_ONCE((unsigned long)mapping & FOLIO_MAPPING_ANON);
+
+       return mapping_max_folio_order(mapping) >= PMD_ORDER;

Probably a stupid question, but I don't know the FS side that well.

Here we are checking that the max allowed folio order is greater than (or equal to) PMD_ORDER, yet the function asks whether PMD specifically is supported. In the future, could we have some FS that does not support PMD order but does support larger orders (e.g. PUD)?

Other than that, LGTM.

Reviewed-by: Nico Pache <[email protected]>

+}
+#else
+static inline bool mapping_pmd_folio_support(const struct address_space *mapping)
+{
+       return false;
+}
+#endif
+
  /* Return the maximum folio size for this pagecache mapping, in bytes. */
  static inline size_t mapping_max_folio_size(const struct address_space *mapping)
  {
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index e112525c4aa9c..6808f2b48d864 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2235,8 +2235,14 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
        int nr_none = 0;
        bool is_shmem = shmem_file(file);
-       VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
-       VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
+       /*
+        * MADV_COLLAPSE ignores shmem huge config, so do not check shmem
+        *
+        * TODO: once shmem always calls mapping_set_large_folios() on its
+        * mapping, the shmem check can be removed.
+        */
+       VM_WARN_ON_ONCE(!is_shmem && !mapping_pmd_folio_support(mapping));
+       VM_WARN_ON_ONCE(start & (HPAGE_PMD_NR - 1));
        result = alloc_charge_folio(&new_folio, mm, cc, HPAGE_PMD_ORDER);
        if (result != SCAN_SUCCEED)

