On Wed, Jan 30, 2008 at 04:37:49PM +0100, Andrea Arcangeli wrote:
> On Tue, Jan 29, 2008 at 06:29:10PM -0800, Christoph Lameter wrote:
> > +void mmu_notifier_release(struct mm_struct *mm)
> > +{
> > +	struct mmu_notifier *mn;
> > +	struct hlist_node *n, *t;
> > +
> > +	if (unlikely(!hlist_empty(&mm->mmu_notifier.head))) {
> > +		rcu_read_lock();
> > +		hlist_for_each_entry_safe_rcu(mn, n, t,
> > +					 &mm->mmu_notifier.head, hlist) {
> > +			hlist_del_rcu(&mn->hlist);
>
> This will race and kernel crash against mmu_notifier_register in
> SMP. You should resurrect the per-mmu_notifier_head lock in my last
> patch (except it can be converted from a rwlock_t to a regular
> spinlock_t) and drop the mmap_sem from
> mmu_notifier_register/unregister.
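For illustration only, a rough sketch of what release/register could
look like when serialized by such a per-head spinlock. The field name
"lock" in mm->mmu_notifier and the mn->ops->release(mn, mm) callback
layout are assumptions here, not taken from the posted patch:

	/*
	 * Sketch, not the posted patch: assumes mm->mmu_notifier gains a
	 * spinlock_t "lock" next to its hlist_head "head" (initialized
	 * wherever the notifier head itself is set up).
	 */
	#include <linux/spinlock.h>
	#include <linux/rculist.h>	/* hlist_*_rcu(); in <linux/list.h> on 2008-era trees */

	void mmu_notifier_release(struct mm_struct *mm)
	{
		struct mmu_notifier *mn;

		spin_lock(&mm->mmu_notifier.lock);
		while (!hlist_empty(&mm->mmu_notifier.head)) {
			mn = hlist_entry(mm->mmu_notifier.head.first,
					 struct mmu_notifier, hlist);
			/*
			 * Unlink under the lock, so a concurrent register
			 * cannot corrupt the list the way the RCU-only
			 * walk could.
			 */
			hlist_del_rcu(&mn->hlist);
			/* Drop the lock around the callback; the entry is
			 * already off the list, so register cannot see it. */
			spin_unlock(&mm->mmu_notifier.lock);
			if (mn->ops->release)
				mn->ops->release(mn, mm);
			spin_lock(&mm->mmu_notifier.lock);
		}
		spin_unlock(&mm->mmu_notifier.lock);
	}

	void mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
	{
		/* Same lock as release/unregister; no mmap_sem needed. */
		spin_lock(&mm->mmu_notifier.lock);
		hlist_add_head_rcu(&mn->hlist, &mm->mmu_notifier.head);
		spin_unlock(&mm->mmu_notifier.lock);
	}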
Agree. That will also resolve the problem we discussed yesterday.

I want to unregister my mmu_notifier when a GRU segment is unmapped.
This would not necessarily be at task termination. However, the
mmap_sem is already held for write by the core VM at the point I would
call the unregister function. Currently, there is no
__mmu_notifier_unregister() defined.

Moving to a different lock solves the problem.

-- jack
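For illustration, a sketch of the unregister variant such a spinlock
would enable. The argument order and the "lock" field name are
assumptions; only the __mmu_notifier_unregister() name comes from the
discussion above:

	/*
	 * Takes only the per-head spinlock, so it is safe to call from a
	 * path that already holds mmap_sem for write (e.g. when a GRU
	 * segment is unmapped).
	 */
	void __mmu_notifier_unregister(struct mmu_notifier *mn,
				       struct mm_struct *mm)
	{
		spin_lock(&mm->mmu_notifier.lock);
		hlist_del_rcu(&mn->hlist);
		spin_unlock(&mm->mmu_notifier.lock);
		/*
		 * The caller must still wait for an RCU grace period
		 * (synchronize_rcu() or call_rcu()) before freeing @mn,
		 * since other CPUs may be walking the list under
		 * rcu_read_lock().
		 */
	}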