On Fri 24-08-18 12:03:14, Aneesh Kumar K.V wrote:
> When scanning for movable pages, filter out hugetlb pages if hugepage
> migration is not supported. Without this we hit an infinite loop in
> __offline_pages where we do
>       pfn = scan_movable_pages(start_pfn, end_pfn);
>       if (pfn) { /* We have movable pages */
>               ret = do_migrate_range(pfn, end_pfn);
>               goto repeat;
>       }
> 
> We do support hugetlb migration only if the hugetlb pages are at pmd
> level. Here we just check for the kernel config. The gigantic page
> size check is done in page_huge_active.

Well, this is a bit misleading. I would say that

Fix this by checking hugepage_migration_supported both in
has_unmovable_pages, which is the primary backoff mechanism for page
offlining, and for consistency reasons also in scan_movable_pages,
because it doesn't make any sense to return a pfn of a non-migratable
huge page.

> Acked-by: Michal Hocko <mho...@suse.com>
> Reported-by: Haren Myneni <ha...@linux.vnet.ibm.com>
> CC: Naoya Horiguchi <n-horigu...@ah.jp.nec.com>
> Signed-off-by: Aneesh Kumar K.V <aneesh.ku...@linux.ibm.com>

I would add
Fixes: 72b39cfc4d75 ("mm, memory_hotplug: do not fail offlining too early")

Not because the bug was introduced by that commit, but rather because
before it the issue was merely latent.

My Acked-by still holds.

> ---
>  mm/memory_hotplug.c | 3 ++-
>  mm/page_alloc.c     | 4 ++++
>  2 files changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 9eea6e809a4e..38d94b703e9d 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1333,7 +1333,8 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
>                       if (__PageMovable(page))
>                               return pfn;
>                       if (PageHuge(page)) {
> -                             if (page_huge_active(page))
> +                             if (hugepage_migration_supported(page_hstate(page)) &&
> +                                 page_huge_active(page))
>                                       return pfn;
>                               else
>                                       pfn = round_up(pfn + 1,
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c677c1506d73..b8d91f59b836 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7709,6 +7709,10 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>                * handle each tail page individually in migration.
>                */
>               if (PageHuge(page)) {
> +
> +                     if (!hugepage_migration_supported(page_hstate(page)))
> +                             goto unmovable;
> +
>                       iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
>                       continue;
>               }
> -- 
> 2.17.1

-- 
Michal Hocko
SUSE Labs
