On Thu, Jun 20, 2013 at 08:26:03AM +0800, Wanpeng Li wrote:
> On Wed, Jun 19, 2013 at 05:52:50PM +0900, Joonsoo Kim wrote:
> >On Wed, Jun 19, 2013 at 04:00:32PM +0800, Wanpeng Li wrote:
> >> On Wed, Jun 19, 2013 at 03:33:55PM +0900, Joonsoo Kim wrote:
> >> >On the free path we don't check s->cpu_partial, so a slab can be linked
> >> >into the cpu partial list even when cpu_partial is 0. To prevent this,
> >> >check s->cpu_partial in put_cpu_partial().
> >> >
> >> 
> >> How about skipping get_partial entirely? put_cpu_partial is called 
> >> on two paths: one when refilling the cpu partial list on the alloc 
> >> slow path, the other on the free slow path. And cpu_partial is 0 
> >> only in debug mode. 
> >> 
> >> - alloc slow path: it is unnecessary to call get_partial, since 
> >>   cpu partial lists won't be used in debug mode. 
> >> - free slow path: new.inuse won't be true in debug mode, which 
> >>   means put_cpu_partial won't be called.
> >> 
> >
> >In debug mode, put_cpu_partial() already can't be called on either path.
> >But if we assign 0 to cpu_partial via sysfs, put_cpu_partial() will be
> >called on the free slow path. On the alloc slow path it can't be called,
> >because the following test in get_partial_node() always succeeds when
> >cpu_partial is 0, so the loop breaks before a second slab is taken:
> >
> >available > s->cpu_partial / 2
> 
> Is that always true? We can freeze a slab from the partial list even 
> when s->cpu_partial is 0. 

Do you mean the node partial list?

At first, acquire_slab() is called for the cpu slab (not the cpu partial
list) in get_partial_node(), and only then is the test above checked. At
that point, available is always greater than 0, so if we assign 0 to
s->cpu_partial, we break out of the loop and never try to take a slab
for the cpu partial list.
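To make that concrete, here is a toy model of the loop-exit logic I am
describing (my own simplified sketch for illustration, not the actual
mm/slub.c code; the function and parameter names are hypothetical):

```c
#include <assert.h>

/* Toy model of get_partial_node()'s loop: the first acquired slab
 * becomes the cpu slab; later iterations would feed the cpu partial
 * list via put_cpu_partial().  After each acquisition the loop breaks
 * once `available > cpu_partial / 2`.  With cpu_partial == 0 that
 * test already holds after the first slab (available >= 1 > 0), so
 * nothing is ever queued on the cpu partial list. */
static int slabs_queued_on_cpu_partial(unsigned int cpu_partial,
                                       unsigned int free_per_slab,
                                       unsigned int node_partial_slabs)
{
    unsigned int available = 0;
    int queued = 0;

    for (unsigned int i = 0; i < node_partial_slabs; i++) {
        available += free_per_slab;   /* objects gained by acquire_slab() */
        if (i > 0)
            queued++;                 /* would reach put_cpu_partial() */
        if (available > cpu_partial / 2)
            break;                    /* always taken when cpu_partial == 0 */
    }
    return queued;
}
```

With cpu_partial set to 0 the model returns 0 queued slabs regardless of
how many node partial slabs are offered, matching the alloc-path argument
above; only the free slow path still needs the explicit check the patch
adds.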

Thanks.

> 
> Regards,
> Wanpeng Li 
> 
> >
> >Thanks.
> >
> >> Regards,
> >> Wanpeng Li 
> >> 
> >> >Signed-off-by: Joonsoo Kim <iamjoonsoo....@lge.com>
> >> >
> >> >diff --git a/mm/slub.c b/mm/slub.c
> >> >index 57707f0..7033b4f 100644
> >> >--- a/mm/slub.c
> >> >+++ b/mm/slub.c
> >> >@@ -1955,6 +1955,9 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
> >> >  int pages;
> >> >  int pobjects;
> >> >
> >> >+ if (!s->cpu_partial)
> >> >+         return;
> >> >+
> >> >  do {
> >> >          pages = 0;
> >> >          pobjects = 0;
> >> >-- 
> >> >1.7.9.5
> >> >
> >> >--
> >> >To unsubscribe, send a message with 'unsubscribe linux-mm' in
> >> >the body to majord...@kvack.org.  For more info on Linux MM,
> >> >see: http://www.linux-mm.org/ .
> >> >Don't email: em...@kvack.org
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/