Christoph,

While going through the slab code, I noticed that the alien caches are
not resized when the user changes the slab tunables. The appended patch
attempts to fix this. Please review and let me know if I missed anything.
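
For reference, the tunables in question are the per-cache limit,
batchcount and shared values written through /proc/slabinfo as
"name limit batchcount shared", e.g. (the cache name here is just an
example):

	# echo "size-4096 120 60 8" > /proc/slabinfo

That write goes through slabinfo_write() -> do_tune_cpucache() ->
alloc_kmemlist(), which is where the alien caches were previously left
untouched.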

thanks,
suresh
---

Resize the alien caches too based on the slab tunables.

Until now, alloc_kmemlist() installed the freshly allocated alien caches
only on nodes that did not have any yet, and simply freed them
otherwise. So changing the tunables never resized the alien caches of an
already initialized node.

Signed-off-by: Suresh Siddha <[EMAIL PROTECTED]>
---

diff --git a/mm/slab.c b/mm/slab.c
index 4cbac24..e0dd9af 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3823,6 +3823,7 @@ static int alloc_kmemlist(struct kmem_cache *cachep)
                l3 = cachep->nodelists[node];
                if (l3) {
                        struct array_cache *shared = l3->shared;
+                       struct array_cache **alien = l3->alien;
 
                        spin_lock_irq(&l3->list_lock);
 
@@ -3831,15 +3832,15 @@ static int alloc_kmemlist(struct kmem_cache *cachep)
                                                shared->avail, node);
 
                        l3->shared = new_shared;
-                       if (!l3->alien) {
-                               l3->alien = new_alien;
-                               new_alien = NULL;
-                       }
+                       l3->alien = new_alien;
                        l3->free_limit = (1 + nr_cpus_node(node)) *
                                        cachep->batchcount + cachep->num;
                        spin_unlock_irq(&l3->list_lock);
                        kfree(shared);
-                       free_alien_cache(new_alien);
+                       if (alien) {
+                               drain_alien_cache(cachep, alien);
+                               free_alien_cache(alien);
+                       }
                        continue;
                }
                l3 = kmalloc_node(sizeof(struct kmem_list3), GFP_KERNEL, node);
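
For context, earlier in the same per-node loop (not visible in the hunk
above) alloc_kmemlist() already sizes the replacement caches from the
current tunables; roughly, abbreviated and with error handling omitted:

	for_each_online_node(node) {
		/* alien caches sized from the current cachep->limit tunable */
		new_alien = alloc_alien_cache(node, cachep->limit);

		/* shared cache sized from the shared/batchcount tunables */
		new_shared = alloc_arraycache(node,
				cachep->shared * cachep->batchcount,
				0xbaadf00d);
		...
	}

With this patch the newly sized alien caches are always installed, and
the old ones are drained (flushing any objects cached for remote nodes
back to their home node's free lists) before being freed.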