On Sat, 2007-07-14 at 21:39 +0400, Oleg Nesterov wrote:
> On 07/14, Peter Zijlstra wrote:
> >
> > +static void flush_all(struct kmem_cache *s)
> > +{
> > +   int cpu;
> > +   struct workqueue_struct *wq = flush_slab_workqueue;
> > +
> > +   mutex_lock(&flush_slab_mutex);
> > +   for_each_online_cpu(cpu) {
> > +           struct slab_work_struct *sw = &per_cpu(slab_works, cpu);
> > +
> > +           INIT_WORK(&sw->work, flush_cpu_slab_wq);
> > +           sw->s = s;
> > +           queue_work_cpu(wq, &sw->work, cpu);
> > +   }
> > +   flush_workqueue(wq);
> > +   mutex_unlock(&flush_slab_mutex);
> > +}
> 
> I suspect this is not cpu-hotplug safe. flush_slab_mutex doesn't protect
> against cpu_down(), so a slab_work_struct could end up queued on an
> already-dead CPU. flush_workqueue(wq) will hang in that case.

Yeah, the function I copied this from, schedule_on_each_cpu(), has a
comment to that effect.

Any ideas on how to solve this?
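
For illustration only, here is a minimal sketch of one possible direction:
hold the cpu-hotplug lock around the queue/flush so no CPU with queued work
can go down in between. The get_online_cpus()/put_online_cpus() helpers and
queue_work_on() used below are assumptions about the available API (they may
postdate this thread), not part of the original patch, which used
queue_work_cpu() and no hotplug protection.

/*
 * Hypothetical sketch: pin the set of online CPUs for the duration of
 * the flush so cpu_down() cannot remove a CPU that already has work
 * queued on it.
 */
static void flush_all(struct kmem_cache *s)
{
	int cpu;
	struct workqueue_struct *wq = flush_slab_workqueue;

	mutex_lock(&flush_slab_mutex);
	get_online_cpus();		/* block cpu_down() until we are done */
	for_each_online_cpu(cpu) {
		struct slab_work_struct *sw = &per_cpu(slab_works, cpu);

		INIT_WORK(&sw->work, flush_cpu_slab_wq);
		sw->s = s;
		queue_work_on(cpu, wq, &sw->work);
	}
	flush_workqueue(wq);		/* safe: no queued-on CPU can go away */
	put_online_cpus();
	mutex_unlock(&flush_slab_mutex);
}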
