On 09/04/2017 01:51 PM, Michal Hocko wrote:
> From: Michal Hocko <mho...@suse.com>
> 
> Memory offlining can fail too eagerly under heavy memory pressure.
> 
> [ 5410.336792] page:ffffea22a646bd00 count:255 mapcount:252 mapping:ffff88ff926c9f38 index:0x3
> [ 5410.336809] flags: 0x9855fe40010048(uptodate|active|mappedtodisk)
> [ 5410.336811] page dumped because: isolation failed
> [ 5410.336813] page->mem_cgroup:ffff8801cd662000
> [ 5420.655030] memory offlining [mem 0x18b580000000-0x18b5ffffffff] failed
> 
> Isolation has failed here because the page is not on the LRU. Most probably
> it was on a pcp LRU cache, or it has been removed from the LRU
> already but hasn't been freed yet. In both cases the page doesn't look
> non-migratable, so retrying makes sense.
> 
> __offline_pages seems rather cluttered when it comes to the retry
> logic. We have 5 retries at maximum and a timeout. We could argue
> whether the timeout makes sense, but failing just because of a race when
> somebody isolates a page from the LRU or puts it on a pcp LRU list is just
> wrong. It only takes a race with a process which unmaps some pages
> and removes them from the LRU list and we can fail the whole offline
> because of something that is a temporary condition and actually not
> harmful for the offline. Please note that unmovable pages should
> already be excluded during start_isolate_page_range.
> 
> Fix this by removing the max retry count and relying only on the timeout
> or an interruption by a signal from userspace. Also retry rather
> than fail when check_pages_isolated sees some !free pages, because those
> could be a result of the race as well.
> 
> Signed-off-by: Michal Hocko <mho...@suse.com>
> ---
>  mm/memory_hotplug.c | 40 ++++++++++------------------------------
>  1 file changed, 10 insertions(+), 30 deletions(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 459bbc182d10..c9dcbe6d2ac6 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1597,7 +1597,7 @@ static int __ref __offline_pages(unsigned long start_pfn,
>  {
>       unsigned long pfn, nr_pages, expire;
>       long offlined_pages;
> -     int ret, drain, retry_max, node;
> +     int ret, node;
>       unsigned long flags;
>       unsigned long valid_start, valid_end;
>       struct zone *zone;
> @@ -1634,43 +1634,25 @@ static int __ref __offline_pages(unsigned long start_pfn,
>  
>       pfn = start_pfn;
>       expire = jiffies + timeout;
> -     drain = 0;
> -     retry_max = 5;
>  repeat:
>       /* start memory hot removal */
> -     ret = -EAGAIN;
> +     ret = -EBUSY;
>       if (time_after(jiffies, expire))
>               goto failed_removal;
>       ret = -EINTR;
>       if (signal_pending(current))
>               goto failed_removal;
> -     ret = 0;
> -     if (drain) {
> -             lru_add_drain_all_cpuslocked();
> -             cond_resched();
> -             drain_all_pages(zone);
> -     }

Why did we have this condition before, where the LRU lists are drained
only on a retry after a failed migration in do_migrate_range, rather
than from the first attempt itself? Just being curious.
