On Tue, Oct 09, 2007 at 11:46:14AM -0700, Christoph Lameter wrote:
> On Wed, 10 Oct 2007, Akinobu Mita wrote:
>
> > This patch removes init_alloc_cpu_cpu() from the cpu hotplug notifier, and
> > instead calls it for each possible CPU, not only online CPUs, at
> > initialization time.
>
> Could you check if a per cpu structure has already been allocated and then
> simply skip the call to init? Otherwise we end up with lots of per cpu
> structures for cpus that will never show up.
>
> 	if (!get_cpu_slab(s, cpu))
> 		init_alloc_cpu_cpu( ...)
>
I couldn't use get_cpu_slab() for that check, but I revised the patch to do
what you suggested.

Subject: slub: fix cpu hotplug offline/online path

This patch fixes the problem introduced by:

http://kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.23-rc8/2.6.23-rc8-mm2/broken-out/slub-place-kmem_cache_cpu-structures-in-a-numa-aware-way.patch

I got a slub BUG report when I exercised cpu hotplug/unplug:

$ while true; do
	echo 0 > /sys/devices/system/cpu/cpu1/online
	echo 1 > /sys/devices/system/cpu/cpu1/online
  done

This happens because init_alloc_cpu_cpu() is called every time the CPU is
onlined, but init_alloc_cpu_cpu() is not intended to be called twice or
more for the same CPU. Calling it again corrupts the kmem_cache_cpu_free
list for that CPU.

This patch checks whether the per cpu structures for the CPU have already
been put on the free list, and in that case init_alloc_cpu_cpu() simply
returns without touching the list again.

Cc: Christoph Lameter <[EMAIL PROTECTED]>
Signed-off-by: Akinobu Mita <[EMAIL PROTECTED]>

---
 mm/slub.c | 5 +++++
 1 file changed, 5 insertions(+)

Index: 2.6-mm/mm/slub.c
===================================================================
--- 2.6-mm.orig/mm/slub.c
+++ 2.6-mm/mm/slub.c
@@ -2033,6 +2033,11 @@ static void init_alloc_cpu_cpu(int cpu)
 {
 	int i;
 
+	if (per_cpu(kmem_cache_cpu_free, cpu)) {
+		/* Already initialized once */
+		return;
+	}
+
 	for (i = NR_KMEM_CACHE_CPU - 1; i >= 0; i--)
 		free_kmem_cache_cpu(&per_cpu(kmem_cache_cpu, cpu)[i], cpu);
 }
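For reference, here is a minimal user-space sketch of why running the init
path twice corrupts the singly linked free list, and how an early-return
guard like the one above avoids it. This is not the kernel code: the names
cpu_slot, cpu_array, cpu_free_head, push_free() and init_cpu() are
hypothetical stand-ins for the per-cpu kmem_cache_cpu array,
per_cpu(kmem_cache_cpu_free, cpu), free_kmem_cache_cpu() and
init_alloc_cpu_cpu().

#include <stdio.h>

#define NR_SLOTS 4

struct cpu_slot {
	struct cpu_slot *next;	/* free-list link (c->freelist in slub) */
};

static struct cpu_slot cpu_array[NR_SLOTS];	/* static "per-cpu" storage */
static struct cpu_slot *cpu_free_head;		/* "per-cpu" free list head */

/* Push a slot onto the free list, like free_kmem_cache_cpu(). */
static void push_free(struct cpu_slot *c)
{
	c->next = cpu_free_head;
	cpu_free_head = c;
}

/* The init path: put every static slot on the free list. */
static void init_cpu(void)
{
	int i;

	/* The fix: skip if this CPU's slots were already listed once. */
	if (cpu_free_head)
		return;

	for (i = NR_SLOTS - 1; i >= 0; i--)
		push_free(&cpu_array[i]);
}

int main(void)
{
	struct cpu_slot *c;
	int n = 0;

	init_cpu();
	init_cpu();	/* second "online" event; harmless with the guard */

	/*
	 * Without the guard, the second init_cpu() would re-link slots
	 * that are already on the list: the last slot pushed in round one
	 * ends up pointed back at by round two, making the list cyclic,
	 * so this walk would never terminate (slub trips a BUG instead).
	 */
	for (c = cpu_free_head; c; c = c->next)
		n++;
	printf("free list holds %d slots\n", n);	/* prints 4 */
	return 0;
}

The sketch models the check in the patch the same way the patch does: a
non-NULL free-list head is taken to mean the list was already populated
for that CPU, so the population loop must not run a second time.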