On Mon, Jan 26, 2015 at 12:24:49PM -0600, Christoph Lameter wrote:
> On Mon, 26 Jan 2015, Vladimir Davydov wrote:
> 
> > Anyways, I think that silently relying on the fact that the allocator
> > never fails small allocations is kind of unreliable. What if this
> 
> We are not doing that though. If the allocation fails we do nothing.

Yeah, that's correct, but memcg/kmem wants kmem_cache_shrink to always free
empty slabs (see patch 3 for details), so I'm trying to be thorough and
eliminate any possibility of failure, because a failure (if it ever
happened) would result in a permanent memory leak (a pinned mem_cgroup plus
its kmem_caches).

> 
> > > > +                       if (page->inuse < objects)
> > > > +                               list_move(&page->lru,
> > > > +                                         slabs_by_inuse + page->inuse);
> > > >                         if (!page->inuse)
> > > >                                 n->nr_partial--;
> > > >                 }
> > >
> > > The condition is always true. A page that has page->inuse == objects
> > > would not be on the partial list.
> > >
> >
> > This is in case we failed to allocate the slabs_by_inuse array. We only
> > have a list for empty slabs then (on stack).
> 
> Ok in that case objects == 1. If you want to do this maybe do it in a more
> general way?
> 
> You could allocate an array on the stack to deal with the common cases. I
> believe an array of 32 objects would be fine to allocate and cover most of
> the slab caches on the system? Would eliminate most of the allocations in
> kmem_cache_shrink.
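
Just to make sure I understand, something along these lines? (A rough
sketch only -- the array size, the name SHRINK_STACK_LISTS and the
degrade-to-empty-list part are mine, not from your mail; INIT_LIST_HEAD()
of the used buckets and the rest of the function would stay as before.)

#define SHRINK_STACK_LISTS	32	/* illustrative value */

	struct list_head stack_lists[SHRINK_STACK_LISTS];
	struct list_head *slabs_by_inuse = stack_lists;
	int objects = oo_objects(s->max);

	if (objects > SHRINK_STACK_LISTS) {
		slabs_by_inuse = kmalloc(sizeof(struct list_head) * objects,
					 GFP_KERNEL);
		if (!slabs_by_inuse) {
			/* degrade to collecting empty slabs only */
			slabs_by_inuse = stack_lists;
			objects = 1;
		}
	}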

We could do that, but IMO it would only complicate the code without
yielding any real benefit. This function is slow and called rarely anyway,
so I don't think there is any point in optimizing away a page allocation
here.
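
Just to spell out the fallback I'm talking about, here is a rough sketch of
the failure path (the loop body is from the hunk above; "empty_slabs" is
just an illustrative name, and locking and the final discard of the empty
slabs are elided):

	struct list_head empty_slabs;	/* single on-stack list */
	struct list_head *slabs_by_inuse = &empty_slabs;
	int objects = 1;		/* only the "empty" bucket exists */

	INIT_LIST_HEAD(&empty_slabs);

	list_for_each_entry_safe(page, t, &n->partial, lru) {
		/*
		 * With objects == 1 this only matches completely empty
		 * slabs (page->inuse == 0), which is why the check is
		 * not redundant on the failure path.
		 */
		if (page->inuse < objects)
			list_move(&page->lru, slabs_by_inuse + page->inuse);
		if (!page->inuse)
			n->nr_partial--;
	}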

Thanks,
Vladimir