On Fri, 2019-02-08 at 11:43 +0000, Will Deacon wrote:
> I've also been trying to understand why it's necessary to check both of the
> pending_free entries, and I'm still struggling somewhat. It's true that the
> wakeup in get_pending_free_lock() could lead to both entries being used
> without the RCU callback running in between; however, in this scenario any
> list entries marked for freeing in the first pf will have been unhashed
> and therefore made unreachable to look_up_lock_class().
> 
> So I think the concern remains that entries somehow stay visible after
> being zapped.
> 
> You mentioned earlier in the thread that people actually complained about
> list corruption if you only checked the current pf:
> 
>   | The list_del_rcu() call must only happen once. I ran into complaints
>   | reporting that the list_del_rcu() call triggered list corruption. This
>   | change made these complaints disappear.
> 
> Do you have any more details about these complaints (e.g. kernel logs etc)?
> Failing that, any idea how to reproduce them?

Hi Will,

Since elements of the list_entries[] array are always accessed with the graph
lock held, how about removing the list_entries_being_freed bitmap and making
zap_class() clear the appropriate bits in the list_entries_in_use bitmap?
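To make the idea concrete, the loop over the list entries in zap_class() could
then look roughly like the sketch below. This is only meant to illustrate the
direction; I have elided the pending_free argument and the unhashing of the
class itself, and the field and helper names are the ones from the current
patches as I remember them, so treat this as a sketch rather than a tested
diff:

static void zap_class(struct lock_class *class)
{
	struct lock_list *entry;
	int i;

	/*
	 * Remove all dependencies this lock is involved in. The graph
	 * lock is held, so nothing else can touch list_entries[] or the
	 * list_entries_in_use bitmap concurrently.
	 */
	for_each_set_bit(i, list_entries_in_use, ARRAY_SIZE(list_entries)) {
		entry = list_entries + i;
		if (entry->class != class && entry->links_to != class)
			continue;
		/* Unlink from the dependency lists. */
		list_del_rcu(&entry->entry);
		/*
		 * Return the slot to the free pool directly instead of
		 * setting a bit in list_entries_being_freed.
		 */
		__clear_bit(i, list_entries_in_use);
		nr_list_entries--;
	}

	/* Unhashing of the class itself would stay as it is today. */
}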

Thanks,

Bart.
