On Wed 10-01-18 15:43:17, Andrey Ryabinin wrote:
[...]
> @@ -2506,15 +2480,13 @@ static int mem_cgroup_resize_limit(struct mem_cgroup 
> *memcg,
>               if (!ret)
>                       break;
>  
> -             try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL, !memsw);
> -
> -             curusage = page_counter_read(counter);
> -             /* Usage is reduced ? */
> -             if (curusage >= oldusage)
> -                     retry_count--;
> -             else
> -                     oldusage = curusage;
> -     } while (retry_count);
> +             usage = page_counter_read(counter);
> +             if (!try_to_free_mem_cgroup_pages(memcg, usage - limit,
> +                                             GFP_KERNEL, !memsw)) {

If the usage drops below the limit in the meantime then you get an
underflow and reclaim the whole memcg. I do not think this is a good
idea, and it can also lead to over-reclaim. Why don't you simply stick
with the original SWAP_CLUSTER_MAX (aka 1 for
try_to_free_mem_cgroup_pages)?

> +                     ret = -EBUSY;
> +                     break;
> +             }
> +     } while (true);

-- 
Michal Hocko
SUSE Labs