On Tue, Jun 24, 2014 at 04:38:41PM +0900, Joonsoo Kim wrote:
> On Fri, Jun 13, 2014 at 12:38:22AM +0400, Vladimir Davydov wrote:
> > @@ -3462,6 +3474,17 @@ static inline void __cache_free(struct kmem_cache *cachep, void *objp,
> >  
> >     kmemcheck_slab_free(cachep, objp, cachep->object_size);
> >  
> > +#ifdef CONFIG_MEMCG_KMEM
> > +   if (unlikely(!ac)) {
> > +           int nodeid = page_to_nid(virt_to_page(objp));
> > +
> > +           spin_lock(&cachep->node[nodeid]->list_lock);
> > +           free_block(cachep, &objp, 1, nodeid);
> > +           spin_unlock(&cachep->node[nodeid]->list_lock);
> > +           return;
> > +   }
> > +#endif
> > +
> 
> And, please document intention of this code. :)

Sure.

> And, you said that this way of implementation would be slow because
> there could be many object in dead caches and this implementation
> needs node spin_lock on each object freeing. Is it no problem now?

It may be :(

> If you have any performance data about this implementation and
> alternative one, could you share it?

I haven't (shame on me!). I'll do some testing today and send you the
results.

Thanks.