On Sat, 24 February 2007 16:14:48 -0800, Christoph Lameter wrote:
>
> It eliminates 50% of the slab caches. Thus it reduces the management
> overhead by half.
How much management overhead is there left with SLUB? Is it just the
one per-node slab? Is there runtime overhead as well?
In a slight
From: Christoph Lameter <[EMAIL PROTECTED]>
Date: Sat, 24 Feb 2007 09:32:49 -0800 (PST)
> On Fri, 23 Feb 2007, David Miller wrote:
>
> > I also agree with Andi in that merging could mess up how object type
> > local lifetimes help reduce fragmentation in object pools.
>
> If that is a problem for particular object pools then we may be able to
> except those from the merging.
On Sat, 24 Feb 2007, Jörn Engel wrote:
> How much of a gain is the merging anyway? Once you start having
> explicit whitelists or blacklists of pools that can be merged, one can
> start to wonder if the result is worth the effort.
It eliminates 50% of the slab caches. Thus it reduces the management
overhead by half.
On Sat, 24 February 2007 09:32:49 -0800, Christoph Lameter wrote:
>
> If that is a problem for particular object pools then we may be able to
> except those from the merging.
How much of a gain is the merging anyway? Once you start having
explicit whitelists or blacklists of pools that can be merged, one can
start to wonder if the result is worth the effort.
On Fri, 23 Feb 2007, David Miller wrote:
> > The general caches already merge lots of users depending on their sizes.
> > So we already have the situation and we have tools to deal with it.
>
> But this doesn't happen for things like biovecs, and that will
> make debugging painful.
>
> If a cra
From: Christoph Lameter <[EMAIL PROTECTED]>
Date: Fri, 23 Feb 2007 21:47:36 -0800 (PST)
> On Sat, 24 Feb 2007, KAMEZAWA Hiroyuki wrote:
>
> > From a viewpoint of a crash dump user, this merging will make crash dump
> > investigation very very very difficult.
>
> The general caches already merge lots of users depending on their sizes.
> So we already have the situation and we have tools to deal with it.
On Sat, 24 Feb 2007, KAMEZAWA Hiroyuki wrote:
> From a viewpoint of a crash dump user, this merging will make crash dump
> investigation very very very difficult.
The general caches already merge lots of users depending on their sizes.
So we already have the situation and we have tools to deal with it.
On Thu, 22 Feb 2007 10:42:23 -0800 (PST)
Christoph Lameter <[EMAIL PROTECTED]> wrote:
> > > G. Slab merging
> > >
> > >We often have slab caches with similar parameters. SLUB detects those
> > >on bootup and merges them into the corresponding general caches. This
> > >leads to more ef
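For context, slab merging means a newly created cache whose size, alignment, and flags are compatible with an existing general cache simply reuses it instead of getting its own. A minimal user-space sketch of such parameter matching; the names and the exact merge criteria here are illustrative assumptions, not SLUB's actual internals:

```c
#include <stddef.h>

/* Illustrative model of a slab cache descriptor. */
struct kmem_cache {
    const char *name;
    size_t object_size;     /* bytes per object */
    size_t align;           /* power-of-two alignment */
    unsigned long flags;    /* debug/DMA flags */
    void (*ctor)(void *);   /* constructor, if any */
};

#define SLAB_NEVER_MERGE 0x1UL  /* stand-in for poisoning/red-zoning/DMA */

static struct kmem_cache *registered[16];
static int nregistered;

static void register_cache(struct kmem_cache *s)
{
    registered[nregistered++] = s;
}

/* Return an already-registered cache that a new (size, align, flags,
 * ctor) request can share, or NULL if it needs a cache of its own. */
static struct kmem_cache *find_mergeable(size_t size, size_t align,
                                         unsigned long flags,
                                         void (*ctor)(void *))
{
    if ((flags & SLAB_NEVER_MERGE) || ctor)
        return NULL;        /* debug flags and constructors block merging */
    for (int i = 0; i < nregistered; i++) {
        struct kmem_cache *s = registered[i];
        if (s->ctor || (s->flags & SLAB_NEVER_MERGE))
            continue;
        /* Objects must fit and the alignment must be satisfied
         * (power-of-two alignments assumed). */
        if (s->object_size >= size && s->align >= align)
            return s;
    }
    return NULL;
}
```

This also shows why the whitelist/blacklist question above arises: any cache carrying a constructor or a debug flag must be excepted from merging.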
On Fri, 23 Feb 2007, Andi Kleen wrote:
> If you don't cache constructed but free objects then there is no cache
> advantage of constructors/destructors and they would be useless.
SLUB caches those objects as long as they are part of a partially
allocated slab. If all objects in the slab are free
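The behaviour Christoph describes, constructed objects staying cached only while their slab page is partially allocated, and the page going back to the page allocator once every object on it is free, can be modelled in a few lines. This is a toy model under stated assumptions, not SLUB code:

```c
#include <stdbool.h>

#define OBJS_PER_SLAB 4

/* Toy slab page: names are illustrative. */
struct slab {
    int inuse;                        /* allocated objects on this page */
    bool allocated[OBJS_PER_SLAB];
    bool constructed[OBJS_PER_SLAB];  /* ctor already run on this slot? */
    bool page_freed;                  /* handed back to the page allocator */
};

static int ctor_calls;               /* counts constructor invocations */

static void ctor(void) { ctor_calls++; }

/* Allocate a slot; the constructor runs only the first time a slot is
 * handed out during the lifetime of the slab page. */
static int slab_alloc(struct slab *s)
{
    for (int i = 0; i < OBJS_PER_SLAB; i++) {
        if (!s->allocated[i]) {
            s->allocated[i] = true;
            if (!s->constructed[i]) {
                ctor();
                s->constructed[i] = true;
            }
            s->inuse++;
            return i;
        }
    }
    return -1; /* slab full */
}

/* Free a slot.  When the last object goes, the page and with it the
 * constructed state is released (destructors would run here). */
static void slab_free(struct slab *s, int i)
{
    s->allocated[i] = false;
    if (--s->inuse == 0)
        s->page_freed = true;
}
```

The point of the exchange: as long as the page lives, a freed-and-reallocated slot skips the constructor, so ctors still save work even without a per-CPU cache of constructed objects.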
On Thu, Feb 22, 2007 at 10:42:23AM -0800, Christoph Lameter wrote:
> On Thu, 22 Feb 2007, Andi Kleen wrote:
>
> > >SLUB does not need a cache reaper for UP systems.
> >
> > This means constructors/destructors are becoming worthless?
> > Can you describe your rationale why you think they don't make
> > sense on UP?
On Thu, 22 Feb 2007, Andi Kleen wrote:
> >SLUB does not need a cache reaper for UP systems.
>
> This means constructors/destructors are becoming worthless?
> Can you describe your rationale why you think they don't make
> sense on UP?
Cache reaping has nothing to do with constructors and destructors.
Christoph Lameter <[EMAIL PROTECTED]> writes:
> This is a new slab allocator which was motivated by the complexity of the
> existing code in mm/slab.c. It attempts to address a variety of concerns
> with the existing implementation.
Thanks for doing that work. It certainly was long overdue.
> D. SLAB has a complex cache reaper
>
>SLUB does not need a cache reaper for UP systems.
On Thu, 22 Feb 2007, David Miller wrote:
> All of that logic needs to be protected by CONFIG_ZONE_DMA too.
Right. Will fix that in the next release.
On Thu, 22 Feb 2007, Peter Zijlstra wrote:
> On Wed, 2007-02-21 at 23:00 -0800, Christoph Lameter wrote:
>
> > +/*
> > + * Lock order:
> > + * 1. slab_lock(page)
> > + * 2. slab->list_lock
> > + *
>
> That seems to contradict this:
This is a trylock. If it fails then we can compensate by allocating a new slab.
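The pattern being defended here: while holding the list lock, take slab_lock(page) with a trylock so the documented lock order can never deadlock; on failure the caller simply backs off that partial slab rather than spinning. A hedged sketch using C11 atomics, with illustrative names:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Toy model of a slab page. */
struct page {
    atomic_flag slab_lock;   /* per-page lock, taken with trylock */
    bool on_partial_list;
};

/* Lock the page and remove it from the partial list.
 * Caller already holds the list lock.  Returns false when the trylock
 * fails; the caller then compensates (e.g. picks a different partial
 * slab or allocates a fresh one) instead of blocking, so the nominal
 * slab_lock -> list_lock ordering is never actually violated. */
static bool lock_and_del_partial(struct page *p)
{
    if (atomic_flag_test_and_set(&p->slab_lock))
        return false;               /* page busy: back off */
    p->on_partial_list = false;     /* safe: both locks held */
    return true;
}
```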
On Thu, 22 Feb 2007, Pekka Enberg wrote:
> On 2/22/07, Christoph Lameter <[EMAIL PROTECTED]> wrote:
> > This is a new slab allocator which was motivated by the complexity of the
> > existing code in mm/slab.c. It attempts to address a variety of concerns
> > with the existing implementation.
>
>
Hi Christoph,
On 2/22/07, Christoph Lameter <[EMAIL PROTECTED]> wrote:
This is a new slab allocator which was motivated by the complexity of the
existing code in mm/slab.c. It attempts to address a variety of concerns
with the existing implementation.
So do you want to add a new allocator or replace the existing one?
From: Christoph Lameter <[EMAIL PROTECTED]>
Date: Wed, 21 Feb 2007 23:00:30 -0800 (PST)
> +#ifdef CONFIG_ZONE_DMA
> +static struct kmem_cache *kmalloc_caches_dma[KMALLOC_NR_CACHES];
> +#endif
Therefore.
> +static struct kmem_cache *get_slab(size_t size, gfp_t flags)
> +{
...
> + s = kmalloc
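David's point is that `kmalloc_caches_dma` only exists under CONFIG_ZONE_DMA, so any lookup path that can reach it must sit under the same guard. A compilable sketch of such a guarded lookup; the index math and names are simplified assumptions, not the actual patch:

```c
#include <stddef.h>

/* CONFIG_ZONE_DMA would come from the kernel config; left undefined
 * here so the fallback branch is the one that compiles. */

#define KMALLOC_NR_CACHES 13
#define __GFP_DMA 0x01u

struct kmem_cache { int unused; };

static struct kmem_cache kmalloc_caches[KMALLOC_NR_CACHES];
#ifdef CONFIG_ZONE_DMA
static struct kmem_cache kmalloc_caches_dma[KMALLOC_NR_CACHES];
#endif

/* Map a request size to a power-of-two cache index (8 bytes and up). */
static int size_index(size_t size)
{
    int i = 0;
    size_t n = 8;
    while (n < size && i < KMALLOC_NR_CACHES - 1) {
        n <<= 1;
        i++;
    }
    return i;
}

static struct kmem_cache *get_slab(size_t size, unsigned int flags)
{
    int i = size_index(size);

    if (flags & __GFP_DMA) {
#ifdef CONFIG_ZONE_DMA
        return &kmalloc_caches_dma[i];
#else
        return NULL;    /* no DMA zone configured: caller must cope */
#endif
    }
    return &kmalloc_caches[i];
}
```

Without the `#else` arm (or an equivalent guard around the whole DMA branch), a !CONFIG_ZONE_DMA build would fail to compile, which is exactly the fix Christoph promises below.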
On Wed, 2007-02-21 at 23:00 -0800, Christoph Lameter wrote:
> +/*
> + * Lock order:
> + * 1. slab_lock(page)
> + * 2. slab->list_lock
> + *
That seems to contradict this:
> +/*
> + * Lock page and remove it from the partial list
> + *
> + * Must hold list_lock
> + */
> +static __always_inline