Now that PAGE_MAPPING_MOVABLE is gone, we can simplify and rely on the
folio_test_anon() test only.

... but staring at the users, this function should never even have been
called on movable_ops pages. E.g.,
* __buffer_migrate_folio() does not make sense for them
* folio_migrate_mapping() does not make sense for them
* migrate_huge_page_move_mapping() does not make sense for them
* __migrate_folio() does not make sense for them
* ... and khugepaged should never stumble over them

Let's simply refuse typed pages (which includes slab) except hugetlb,
and WARN.
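For illustration only (not part of the patch), the resulting control flow can be sketched as a small userspace model. The struct and its fields below are hypothetical stand-ins for the kernel's folio/page flags, not the real API:

```c
/* Simplified model of folio_expected_ref_count() after this patch.
 * All names here are stand-ins; the real kernel code operates on
 * struct folio and the page-flag helpers. */
#include <assert.h>
#include <stdbool.h>

struct folio_model {
	bool has_type;     /* page_has_type() stand-in (slab, typed pages) */
	bool is_hugetlb;   /* folio_test_hugetlb() stand-in */
	bool is_anon;      /* folio_test_anon() stand-in */
	bool in_swapcache; /* folio_test_swapcache() stand-in */
	bool has_mapping;  /* folio->mapping != NULL stand-in */
	bool has_private;  /* PG_private stand-in */
	int  order;        /* folio_order() stand-in */
};

/* Refuse typed pages (which includes slab) except hugetlb, then
 * count one reference per page from the swapcache or pagecache. */
static int expected_ref_count(const struct folio_model *f)
{
	int ref_count = 0;

	if (f->has_type && !f->is_hugetlb)
		return 0; /* the kernel would also WARN_ON_ONCE() here */

	if (f->is_anon) {
		/* One reference per page from the swapcache. */
		ref_count += (f->in_swapcache ? 1 : 0) << f->order;
	} else {
		/* One reference per page from the pagecache. */
		ref_count += (f->has_mapping ? 1 : 0) << f->order;
		/* One reference from PG_private. */
		ref_count += f->has_private ? 1 : 0;
	}
	return ref_count;
}
```

Note how the old `PAGE_MAPPING_FLAGS` test on `folio->mapping` becomes unnecessary: once typed pages are refused up front, a non-anon folio's mapping is either NULL or a plain address_space pointer.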

Reviewed-by: Zi Yan <z...@nvidia.com>
Signed-off-by: David Hildenbrand <da...@redhat.com>
---
 include/linux/mm.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6a5447bd43fd8..f6ef4c4eb536b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2176,13 +2176,13 @@ static inline int folio_expected_ref_count(const struct folio *folio)
        const int order = folio_order(folio);
        int ref_count = 0;
 
-       if (WARN_ON_ONCE(folio_test_slab(folio)))
+       if (WARN_ON_ONCE(page_has_type(&folio->page) && 
!folio_test_hugetlb(folio)))
                return 0;
 
        if (folio_test_anon(folio)) {
                /* One reference per page from the swapcache. */
                ref_count += folio_test_swapcache(folio) << order;
-       } else if (!((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS)) {
+       } else {
                /* One reference per page from the pagecache. */
                ref_count += !!folio->mapping << order;
                /* One reference from PG_private. */
-- 
2.49.0

