On Thu, 7 Apr 2016, Michal Hocko wrote:

> > +static void hugetlb_cgroup_init(struct hugetlb_cgroup *h_cgroup,
> > +                           struct hugetlb_cgroup *parent_h_cgroup)
> > +{
> > +   int idx;
> > +
> > +   for (idx = 0; idx < HUGE_MAX_HSTATE; idx++) {
> > +           struct page_counter *counter = &h_cgroup->hugepage[idx];
> > +           struct page_counter *parent = NULL;
> > +           unsigned long limit;
> > +           int ret;
> > +
> > +           if (parent_h_cgroup)
> > +                   parent = &parent_h_cgroup->hugepage[idx];
> > +           page_counter_init(counter, parent);
> > +
> > +           limit = round_down(PAGE_COUNTER_MAX,
> > +                              1 << huge_page_order(&hstates[idx]));
> > +           ret = page_counter_limit(counter, limit);
> > +           VM_BUG_ON(ret);
> > +   }
> > +}
> 
> I fail to see the point for this. Why would we want to round down
> PAGE_COUNTER_MAX? It will never make a real difference. Or am I missing
> something?

Did you try the patch?

If we're rounding the user-specified limit down to a multiple of the huge
page size, it makes sense to round the default upper bound the same way so
the reported limit is consistent and the intent is clear.
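
For illustration, here's a quick userspace sketch (not the kernel code) of
the effect, assuming 4KB base pages and 2MB huge pages (huge_page_order()
== 9), and using LONG_MAX as a stand-in for PAGE_COUNTER_MAX:

#include <limits.h>
#include <stdio.h>

/* Simplified round_down() with the same semantics as the kernel's
 * for a power-of-two 'y': clear the low bits. */
#define round_down(x, y)	((x) & ~((unsigned long)(y) - 1))

int main(void)
{
	unsigned long per_hpage = 1UL << 9;	/* base pages per 2MB huge page */

	/* A user-written limit of 1000 pages is rounded down to 512. */
	printf("user limit: %lu -> %lu\n",
	       1000UL, round_down(1000UL, per_hpage));

	/* The default limit gets the same treatment, so the limit read
	 * back from userspace is always a multiple of the huge page size. */
	printf("default:    %lu -> %lu\n",
	       (unsigned long)LONG_MAX,
	       round_down((unsigned long)LONG_MAX, per_hpage));

	return 0;
}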
