Currently, when defrag is set to "madvise", thp allocations for regions
madvised with MADV_HUGEPAGE will enter direct reclaim.  However, when
defrag is set to "defer", no thp allocation attempts direct reclaim,
regardless of MADV_HUGEPAGE.

This patch always directly reclaims for MADV_HUGEPAGE regions when defrag
is not set to "never".  The idea is that MADV_HUGEPAGE regions really
want to be backed by hugepages and are willing to endure the latency at
fault time, as that was the default behavior prior to commit 444eb2a449ef
("mm: thp: set THP defrag by default to madvise and add a stall-free
defrag option").

In this form, "defer" is a stronger, more heavyweight version of
"madvise".

Signed-off-by: David Rientjes <rient...@google.com>
---
 Documentation/vm/transhuge.txt |  7 +++++--
 mm/huge_memory.c               | 10 ++++++----
 2 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/Documentation/vm/transhuge.txt b/Documentation/vm/transhuge.txt
--- a/Documentation/vm/transhuge.txt
+++ b/Documentation/vm/transhuge.txt
@@ -121,8 +121,11 @@ to utilise them.
 
 "defer" means that an application will wake kswapd in the background
 to reclaim pages and wake kcompactd to compact memory so that THP is
-available in the near future. It's the responsibility of khugepaged
-to then install the THP pages later.
+available in the near future, unless it is for a region where
+madvise(MADV_HUGEPAGE) has been used, in which case direct reclaim will be
+used. Kcompactd will attempt to make hugepages available for allocation in
+the near future and khugepaged will try to collapse existing memory into
+hugepages later.
 
 "madvise" will enter direct reclaim like "always" but only for regions
 that have used madvise(MADV_HUGEPAGE). This is the default behaviour.
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -619,15 +619,17 @@ static int __do_huge_pmd_anonymous_page(struct vm_fault *vmf, struct page *page,
  */
 static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
 {
-       bool vma_madvised = !!(vma->vm_flags & VM_HUGEPAGE);
+       const bool vma_madvised = !!(vma->vm_flags & VM_HUGEPAGE);
 
        if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG,
                                &transparent_hugepage_flags) && vma_madvised)
                return GFP_TRANSHUGE;
        else if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG,
-                                               &transparent_hugepage_flags))
-               return GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM;
-       else if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG,
+                                               &transparent_hugepage_flags)) {
+               return GFP_TRANSHUGE_LIGHT |
+                      (vma_madvised ? __GFP_DIRECT_RECLAIM :
+                                      __GFP_KSWAPD_RECLAIM);
+       } else if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG,
                                                &transparent_hugepage_flags))
                return GFP_TRANSHUGE | (vma_madvised ? 0 : __GFP_NORETRY);
        return GFP_TRANSHUGE_LIGHT;
 

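For reference, not part of the patch: a summary of what this function
returns after the change, derived from the branches above plus the gfp.h
definition GFP_TRANSHUGE == GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM,
and assuming the usual mapping of the defrag settings to the
TRANSPARENT_HUGEPAGE_DEFRAG_* flags:

  defrag     !MADV_HUGEPAGE                               MADV_HUGEPAGE
  "always"   GFP_TRANSHUGE | __GFP_NORETRY                GFP_TRANSHUGE
  "defer"    GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM   GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM
  "madvise"  GFP_TRANSHUGE_LIGHT                          GFP_TRANSHUGE
  "never"    GFP_TRANSHUGE_LIGHT                          GFP_TRANSHUGE_LIGHT

Since GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM is exactly
GFP_TRANSHUGE, a madvised fault under "defer" now allocates just as it
would under "madvise", which is what makes "defer" the stronger,
more heavyweight version of "madvise" described above.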