The kernel now sets PG_dropbehind instead of PG_reclaim everywhere, so
check PG_dropbehind in lru_gen_folio_seq().

There is no need to check for dirty and writeback any more: unlike
PG_reclaim, the new flag does not share its bit with PG_readahead, so
there is no conflict to disambiguate.
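
For reference, here is a minimal userspace sketch of the generation pick in
lru_gen_folio_seq() after this change. The struct folio_model and pick_gen()
names are stand-ins invented for the illustration, the flag tests are modeled
as plain booleans, and only the gen selection is reproduced (not the final seq
clamping against lrugen->max_seq/min_seq); MIN_NR_GENS/MAX_NR_GENS use the
usual values of 2 and 4.

/*
 * Userspace-only model of the patched if/else ladder; not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

#define MIN_NR_GENS 2
#define MAX_NR_GENS 4

struct folio_model {
	bool unevictable;
	bool file_lru;
	bool swapcache;
	bool dropbehind;	/* replaces reclaim && (dirty || writeback) */
	bool workingset;
};

/* Mirror of the generation choice in lru_gen_folio_seq() after the patch. */
static int pick_gen(const struct folio_model *f, bool lrugen_enabled,
		    bool reclaiming)
{
	if (f->unevictable || !lrugen_enabled)
		return 0;
	if (reclaiming)
		return MAX_NR_GENS;
	if ((!f->file_lru && !f->swapcache) || f->dropbehind)
		return MIN_NR_GENS;
	return MAX_NR_GENS - f->workingset;
}

int main(void)
{
	/* A clean file folio marked dropbehind now goes straight to the
	 * oldest generation; no dirty/writeback qualification is needed. */
	struct folio_model f = { .file_lru = true, .dropbehind = true };

	printf("gen = %d\n", pick_gen(&f, true, false));
	return 0;
}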

Signed-off-by: Kirill A. Shutemov <kirill.shute...@linux.intel.com>
Acked-by: David Hildenbrand <da...@redhat.com>
---
 include/linux/mm_inline.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index f9157a0c42a5..f353d3c610ac 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -241,8 +241,7 @@ static inline unsigned long lru_gen_folio_seq(struct lruvec *lruvec, struct foli
        else if (reclaiming)
                gen = MAX_NR_GENS;
        else if ((!folio_is_file_lru(folio) && !folio_test_swapcache(folio)) ||
-                (folio_test_reclaim(folio) &&
-                 (folio_test_dirty(folio) || folio_test_writeback(folio))))
+                folio_test_dropbehind(folio))
                gen = MIN_NR_GENS;
        else
                gen = MAX_NR_GENS - folio_test_workingset(folio);
-- 
2.45.2
