On Sun, Mar 3, 2013 at 2:51 PM, Al Viro <v...@zeniv.linux.org.uk> wrote:
>
> +struct vfree_deferred {
> +	spinlock_t lock;
> +	void *list;
> +	struct work_struct wq;
> +};
Looking more at this, just get rid of the spinlock entirely, and use
<linux/llist.h> for the list. IRQ-safety without the locking. Because
you got the locking wrong again, and made free_work() use
spin_lock_irq(), but

> +static inline void deferred_vfree(void *addr)
> +{
> +	struct vfree_deferred *p = &get_cpu_var(vfree_deferred);
> +	spin_lock(&p->lock);

this needs to be a spin_lock_irqsave() too.

> +	*(void **)addr = p->list;
> +	p->list = addr;
> +	schedule_work(&p->wq);
> +	spin_unlock(&p->lock);
> +	put_cpu_var(vfree_deferred);

And there is no reason to hold the lock - or even stay on the CPU -
over the work-scheduling (which had better be irq- and smp-safe on
its own anyway), so you're actually best off just using

	struct vfree_deferred *p = &__get_cpu_var(vfree_deferred);
	struct llist_node *new = (void *)addr;

	llist_add(new, &p->list);
	schedule_work(&p->wq);

and you're done.

I'm not even sure it's worth it making it per-cpu, but I guess it
won't hurt either.

                Linus
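[Editor's note: a minimal sketch of what the llist-based version Linus
describes could look like, assuming the struct and function names from the
patch under discussion and that __vunmap() is the underlying vmalloc free
path being deferred to; this is an illustration of the suggestion, not the
code that ultimately landed in mm/vmalloc.c.]

	/*
	 * Lock-free deferred vfree: producers push onto a per-cpu llist
	 * (safe from IRQ context, cmpxchg-based, no irqsave needed) and
	 * kick a workqueue item that drains the list in process context.
	 */
	struct vfree_deferred {
		struct llist_head list;
		struct work_struct wq;
	};
	static DEFINE_PER_CPU(struct vfree_deferred, vfree_deferred);

	static void free_work(struct work_struct *w)
	{
		struct vfree_deferred *p = container_of(w, struct vfree_deferred, wq);
		struct llist_node *llnode = llist_del_all(&p->list);

		while (llnode) {
			void *addr = llnode;

			llnode = llist_next(llnode);
			__vunmap(addr, 1);	/* the real free, in process context */
		}
	}

	static inline void deferred_vfree(void *addr)
	{
		struct vfree_deferred *p = &__get_cpu_var(vfree_deferred);

		llist_add((struct llist_node *)addr, &p->list);
		schedule_work(&p->wq);
	}

Each per-cpu instance would still need init_llist_head(&p->list) and
INIT_WORK(&p->wq, free_work) at init time. Since llist_add() is a lockless
cmpxchg loop, it is safe to call with interrupts disabled or from IRQ
context without any spin_lock_irqsave(), which is the whole point of the
suggestion.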