On Tue, Nov 20, 2012 at 4:24 AM, Tetsuo Handa
<penguin-ker...@i-love.sakura.ne.jp> wrote:
> Kees Cook wrote:
>> Instead of locking the list during a delete, mark entries as invalid
>> and trigger a workqueue to clean them up. This lets us easily handle
>> task_free from interrupt context.
>>
>> @@ -57,9 +80,12 @@ static int yama_ptracer_add(struct task_struct *tracer,
>>
>>  	added->tracee = tracee;
>>  	added->tracer = tracer;
>> +	added->invalid = false;
>>
>> -	spin_lock_bh(&ptracer_relations_lock);
>> +	spin_lock(&ptracer_relations_lock);
>
> Can't you use
>
> 	spin_lock_irqsave(&ptracer_relations_lock, flags);
> 	spin_unlock_irqrestore(&ptracer_relations_lock, flags);
>
> instead of adding ->invalid ?
The _bh was sufficient originally, but looking at Sasha's deadlock, it
seems like I should get rid of locking entirely on this path. What do
you think of this report?
https://lkml.org/lkml/2012/10/17/600

I'm concerned that blocking interrupts would be an even more expensive
solution, since every task_free() would be forced to block interrupts
briefly. Most systems will have either an empty relations list or a
very short one, so it seemed better to avoid any locking at all on the
task_free() path. Now the locking contention moves to being between the
workqueue and any add calls.

-Kees

--
Kees Cook
Chrome OS Security
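For reference, the shape of the change under discussion is roughly the
following. This is a simplified sketch, not the patch itself: the
struct fields and the ->invalid flag come straight from the quoted
hunk, while the helper names (yama_relation_cleanup, yama_relation_work,
yama_ptracer_del) and the RCU details are reconstructed guesses at what
the surrounding code would look like.

#include <linux/list.h>
#include <linux/rculist.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct ptrace_relation {
	struct task_struct *tracer;
	struct task_struct *tracee;
	bool invalid;		/* flagged lockless; reaped by workqueue */
	struct list_head node;
	struct rcu_head rcu;
};

static LIST_HEAD(ptracer_relations);
static DEFINE_SPINLOCK(ptracer_relations_lock);

static void yama_relation_cleanup(struct work_struct *work);
static DECLARE_WORK(yama_relation_work, yama_relation_cleanup);

/*
 * Runs in process context, so taking the spinlock here is safe:
 * unlink and free everything that was flagged as invalid.
 */
static void yama_relation_cleanup(struct work_struct *work)
{
	struct ptrace_relation *relation;

	spin_lock(&ptracer_relations_lock);
	rcu_read_lock();
	list_for_each_entry_rcu(relation, &ptracer_relations, node) {
		if (relation->invalid) {
			list_del_rcu(&relation->node);
			kfree_rcu(relation, rcu);
		}
	}
	rcu_read_unlock();
	spin_unlock(&ptracer_relations_lock);
}

/*
 * Called from the task_free() path, possibly in interrupt context:
 * no spinlock is taken and no interrupts are blocked; matching
 * entries are only flagged, and the actual removal is deferred to
 * the workqueue above.
 */
static void yama_ptracer_del(struct task_struct *tracer,
			     struct task_struct *tracee)
{
	struct ptrace_relation *relation;
	bool marked = false;

	rcu_read_lock();
	list_for_each_entry_rcu(relation, &ptracer_relations, node) {
		if (relation->invalid)
			continue;
		if (relation->tracee == tracee ||
		    (tracer && relation->tracer == tracer)) {
			relation->invalid = true;
			marked = true;
		}
	}
	rcu_read_unlock();

	if (marked)
		schedule_work(&yama_relation_work);
}

Compared with the spin_lock_irqsave() alternative quoted above, this
keeps the hot task_free() path free of both locks and interrupt
disabling (an RCU read-side section is nearly free), at the cost of
deferring the actual unlink and kfree() to process context.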