On Thu, 2018-11-29 at 13:01 +0100, Peter Zijlstra wrote:
> On Thu, Nov 29, 2018 at 11:49:02AM +0100, Peter Zijlstra wrote:
> > On Wed, Nov 28, 2018 at 03:43:20PM -0800, Bart Van Assche wrote:
> > >   /*
> > >    * Remove all dependencies this lock is
> > >    * involved in:
> > >    */
> > > + list_for_each_entry_safe(entry, tmp, &all_list_entries, alloc_entry) {
> > >           if (entry->class != class && entry->links_to != class)
> > >                   continue;
> > >           links_to = entry->links_to;
> > >           WARN_ON_ONCE(entry->class == links_to);
> > >           list_del_rcu(&entry->lock_order_entry);
> > > +         list_move(&entry->alloc_entry, &free_list_entries);
> > >           entry->class = NULL;
> > >           entry->links_to = NULL;
> > >           check_free_class(zapped_classes, class);
> > 
> > Hurm.. I'm confused here.
> > 
> > The reason you cannot re-use lock_order_entry for the free list is
> > because of list_del_rcu(), right? But if so, then what ensures the
> > list_entry is not re-used before its grace period has elapsed?
> 
> Also: if you have to grow lock_list by 16 bytes just to be able to
> free it, a bitmap allocator is much cheaper space-wise.
> 
> Some people seem to really care about the static image size, and
> lockdep's .data section does matter to them.

How about addressing this by moving removed list entries onto a
"zapped_entries" list first, and only moving them from that list to the
free_list_entries list once an RCU grace period has elapsed? Something
like the sketch below. I don't see how to implement that approach
without introducing a new list_head in struct lock_list, though.
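
This is only a sketch; it reuses the alloc_entry member from the patch
above (which is exactly the extra list_head in question) and the
existing lock_order_entry member:

static LIST_HEAD(zapped_entries);
static LIST_HEAD(free_list_entries);

/*
 * Unlink an entry from the dependency lists. RCU readers may still be
 * traversing it, so park it on zapped_entries instead of freeing it.
 */
static void zap_list_entry(struct lock_list *entry)
{
	list_del_rcu(&entry->lock_order_entry);
	list_move(&entry->alloc_entry, &zapped_entries);
}

/*
 * Run from an RCU callback or after synchronize_rcu(), i.e. once no
 * reader can still hold a reference to a zapped entry.
 */
static void reclaim_zapped_entries(void)
{
	list_splice_init(&zapped_entries, &free_list_entries);
}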

Thanks,

Bart.

