On 01/12/2015 10:37 AM, Peter Zijlstra wrote:
> On Mon, Jan 12, 2015 at 10:12:38AM -0500, Sasha Levin wrote:
>> The reason for my patch is simple:
>
> That might have maybe been good changelog material?
>
>> I'm fuzzing with hundreds of worker threads which at some point
>> trigger a complete system lockup for some reason.
>>
>> When lockdep dumps the list of held locks it shows that pretty much
>> every one of those threads is holding the lock which caused the
>> lockup, which is incorrect because it considers locks in the process
>> of getting acquired as "held".
>>
>> This is my solution to that issue. I wanted to know which one of the
>> threads is really holding the lock rather than just waiting on it.
>>
>> Is there a better way to solve that problem?
>
> Sure, think moar, if the accompanying stack trace is in the middle
> of the blocking primitive, ignore the top held lock ;-)
Tried that, it's a pain. Consider this scenario:

Process A       | Process B       | Process C-[...]
----------------|-----------------|----------------
mutex_lock(x)   |                 |
[busy working]  |                 |
                | mutex_lock(z)   |
                | mutex_lock(x)   |
                | [waiting on x]  |
                |                 | mutex_lock(z)
                |                 | [waiting on z]

So at the end of all that I have 1000 processes waiting on 'z', while
the process that holds 'z' is itself waiting on 'x'. If I only look at
processes that are not stuck inside a blocking primitive, I'll miss
process B, and with it the link between process A and the processes
waiting on 'z'.

> Alternatively, make better/more use of lock_acquired() and track the
> acquire vs acquired information in the held_lock (1 bit) and look at it
> when printing.

We could do that, but then we'd lose the ability to get information out
of locks. What's the benefit of doing that?

Thanks,
Sasha
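P.S. To make the one-bit scheme concrete, here's a rough userspace
sketch of it (this is not actual lockdep code; tracked_mutex,
held_entry, dump_held_locks and the other names are invented for
illustration). The lock is recorded as soon as acquisition starts, and
the bit is flipped only once the lock is actually taken, so a dump can
tell a real holder from a waiter:

/*
 * Hypothetical userspace sketch of the "1 bit in held_lock" idea:
 * record a lock in the per-thread table when acquisition *starts*,
 * and set an "acquired" bit once it actually succeeds.  None of
 * these names come from lockdep; they're invented for the example.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_HELD 48			/* arbitrary; lockdep has its own depth limit */

struct held_entry {
	const char *name;		/* which lock */
	bool acquired;			/* false = acquisition in progress */
};

static __thread struct held_entry held[MAX_HELD];
static __thread int held_depth;

struct tracked_mutex {
	pthread_mutex_t m;
	const char *name;
};

static void tracked_lock(struct tracked_mutex *tm)
{
	/* record intent first, analogous to lock_acquire() */
	struct held_entry *e = &held[held_depth++];

	e->name = tm->name;
	e->acquired = false;

	pthread_mutex_lock(&tm->m);

	/* ...and mark ownership once we have it, analogous to lock_acquired() */
	e->acquired = true;
}

static void tracked_unlock(struct tracked_mutex *tm)
{
	pthread_mutex_unlock(&tm->m);
	held_depth--;			/* simplified: assumes LIFO lock ordering */
}

static void dump_held_locks(void)
{
	for (int i = 0; i < held_depth; i++)
		printf("  %s: %s\n", held[i].name,
		       held[i].acquired ? "held" : "waiting");
}

Note that the wait-chain information is still in the table either way;
the dump just has to say which entries are really owned.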