Hello,
On Mon, Sep 24, 2012 at 12:25:00PM +0400, Glauber Costa wrote:
> > This is kinda nasty. Do we really need to do this? How long would a
> > dead cache stick around?
>
> Without targeted shrinking, until all objects are manually freed, which
> may have to wait for global reclaim to kick in.
>
On 09/22/2012 12:40 AM, Tejun Heo wrote:
> Hello, Glauber.
>
> On Tue, Sep 18, 2012 at 06:12:09PM +0400, Glauber Costa wrote:
>> @@ -764,10 +777,21 @@ static struct kmem_cache *memcg_create_kmem_cache(struct mem_cgroup *memcg,
>> goto out;
>> }
>>
>> +/*
>> + * Because the cache is expected to duplicate the string,
Hello, Glauber.
On Tue, Sep 18, 2012 at 06:12:09PM +0400, Glauber Costa wrote:
> @@ -764,10 +777,21 @@ static struct kmem_cache *memcg_create_kmem_cache(struct mem_cgroup *memcg,
> goto out;
> }
>
> + /*
> + * Because the cache is expected to duplicate the string,
On 09/21/2012 01:28 PM, JoonSoo Kim wrote:
> Hi, Glauber.
>
>>> 2012/9/18 Glauber Costa :
diff --git a/mm/slub.c b/mm/slub.c
index 0b68d15..9d79216 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2602,6 +2602,7 @@ redo:
} else
__slab_free(s, page, x, addr);
Hi, Glauber.
>> 2012/9/18 Glauber Costa :
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index 0b68d15..9d79216 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -2602,6 +2602,7 @@ redo:
>>> } else
>>> __slab_free(s, page, x, addr);
>>>
>>> + kmem_cache_verify_dead(s);
On 09/21/2012 08:48 AM, JoonSoo Kim wrote:
> Hi Glauber.
>
Hi
> 2012/9/18 Glauber Costa :
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 0b68d15..9d79216 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -2602,6 +2602,7 @@ redo:
>> } else
>> __slab_free(s, page, x, addr);
Hi Glauber.
2012/9/18 Glauber Costa :
> diff --git a/mm/slub.c b/mm/slub.c
> index 0b68d15..9d79216 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2602,6 +2602,7 @@ redo:
> } else
> __slab_free(s, page, x, addr);
>
> + kmem_cache_verify_dead(s);
> }
As far as u kn
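
The diff above adds a kmem_cache_verify_dead() call at the end of the slub
free path. A minimal sketch of what such a hook could look like, assuming a
per-cache "dead" flag and a deferred shrink work item (both names are
illustrative here, not fields from the posted patch):

/*
 * Sketch only: once the owning memcg is gone the cache is marked dead,
 * and every subsequent free checks the flag and defers a shrink so the
 * now-empty slabs can be handed back to the page allocator.
 */
static inline void kmem_cache_verify_dead(struct kmem_cache *s)
{
	if (unlikely(s->dead))			/* assumed per-cache flag */
		schedule_work(&s->shrink_work);	/* assumed work item */
}

Deferring the actual shrink to a work item keeps the hot free path down to a
single flag test and avoids walking partial lists from whatever context the
free was issued in.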
On 09/18/2012 09:02 PM, Christoph Lameter wrote:
> Why doesn't slab need that too? It keeps a number of free pages on the per
> node lists until shrink is called.
>
You have already given me this feedback, and I forgot to include it
here. I am sorry for this slip. It was my intention to do this for th
Why doesn't slab need that too? It keeps a number of free pages on the per
node lists until shrink is called.
In the slub allocator, when the last object of a page goes away, we
don't necessarily free the page: not every slab_free path tests for an
empty page.
This means that when we destroy a memcg cache that happens to be empty,
it may take a long time to actually go away: removing th
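
Continuing the sketch above (same assumed fields, not the posted
implementation), the deferred work would simply invoke the existing shrink
path, which discards slabs that hold no objects and returns their pages to
the page allocator:

static void kmem_cache_shrink_dead(struct work_struct *work)
{
	struct kmem_cache *s = container_of(work, struct kmem_cache,
					    shrink_work); /* assumed member */

	kmem_cache_shrink(s);	/* releases empty slabs */
}

The work item would be wired up with something like
INIT_WORK(&s->shrink_work, kmem_cache_shrink_dead) when the cache is marked
dead. Since kmem_cache_shrink() also drains the free pages that slab keeps
on its per-node lists, the same approach speaks to Christoph's point that
slab needs equivalent treatment.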