In a graph search, more than one match may be needed, so an already-matched lock must be able to rejoin the search to look for further matches. Introduce mark_lock_unaccessed() to clear a lock's accessed mark for that purpose.
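
For illustration only, below is a minimal user-space model of the generation-id marking scheme this helper operates on; the names dependency_gen_id, mark_accessed(), mark_unaccessed() and accessed() are invented for the sketch and are not kernel APIs. The point it shows is that a node counts as "accessed" only while its stored generation equals the current one, so decrementing the stored generation lets the same search visit the node again:

/*
 * Stand-alone sketch (not kernel code) of generation-id marking.
 * A node is "accessed" when its dep_gen_id equals the global
 * generation; mark_unaccessed() decrements it so the node no
 * longer matches and can be visited again by the same search.
 */
#include <stdio.h>

static unsigned long dependency_gen_id;

struct node {
	unsigned long dep_gen_id;
	const char *name;
};

static void mark_accessed(struct node *n)
{
	n->dep_gen_id = dependency_gen_id;
}

static void mark_unaccessed(struct node *n)
{
	n->dep_gen_id--;	/* no longer equals the current generation */
}

static int accessed(const struct node *n)
{
	return n->dep_gen_id == dependency_gen_id;
}

int main(void)
{
	struct node a = { .name = "A" };

	dependency_gen_id++;			/* start a new search */

	mark_accessed(&a);
	printf("%s accessed: %d\n", a.name, accessed(&a));	/* prints 1 */

	mark_unaccessed(&a);			/* let the node rejoin the search */
	printf("%s accessed: %d\n", a.name, accessed(&a));	/* prints 0 */

	return 0;
}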
Signed-off-by: Yuyang Du <duyuy...@gmail.com>
---
 kernel/locking/lockdep.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 54ddf85..617c0f4 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1338,6 +1338,15 @@ static inline void mark_lock_accessed(struct lock_list *lock,
 	lock->class->dep_gen_id = lockdep_dependency_gen_id;
 }
 
+static inline void mark_lock_unaccessed(struct lock_list *lock)
+{
+	unsigned long nr;
+
+	nr = lock - list_entries;
+	WARN_ON(nr >= ARRAY_SIZE(list_entries)); /* Out-of-bounds, input fail */
+	lock->class->dep_gen_id--;
+}
+
 static inline unsigned long lock_accessed(struct lock_list *lock)
 {
 	unsigned long nr;
-- 
1.8.3.1