On Mon, Dec 03, 2018 at 09:07:00AM -0800, Bart Van Assche wrote:
> How about adding this as an additional patch before patch 25/27?
Excellent, thanks!

> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index 9a7cca6dc3d4..ce05b9b419f4 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -725,6 +725,15 @@ static bool assign_lock_key(struct lockdep_map *lock)
>  {
>  	unsigned long can_addr, addr = (unsigned long)lock;
>  
> +	/*
> +	 * lockdep_free_key_range() assumes that struct lock_class_key
> +	 * objects do not overlap. Since we use the address of lock
> +	 * objects as class key for static objects, check whether the
> +	 * size of lock_class_key objects does not exceed the size of
> +	 * the smallest lock object.
> +	 */
> +	BUILD_BUG_ON(sizeof(struct lock_class_key) > sizeof(raw_spinlock_t));
> +
>  	if (__is_kernel_percpu_address(addr, &can_addr))
>  		lock->key = (void *)can_addr;
>  	else if (__is_module_percpu_address(addr, &can_addr))
> 
> Thanks,
> 
> Bart.
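
As an aside for anyone reading along: BUILD_BUG_ON() makes the size
constraint a compile-time failure rather than a runtime check. Below
is a minimal, self-contained userspace sketch of the mechanism; the
macro body mirrors the kernel's classic negative-array-size trick
(the real macro nowadays lives in include/linux/build_bug.h), and the
two type definitions are purely hypothetical stand-ins with
illustrative sizes, not the kernel's actual layouts.

/*
 * Simplified stand-in for the kernel's BUILD_BUG_ON(): when the
 * condition is true, the array size becomes negative and the
 * compiler rejects the translation unit.
 */
#define BUILD_BUG_ON(cond)  ((void)sizeof(char[1 - 2 * !!(cond)]))

/* Hypothetical types with illustrative sizes; not the real ones. */
struct lock_class_key { char subkeys[8]; };                 /* 8 bytes */
typedef struct { unsigned long raw_lock; } raw_spinlock_t;  /* 8 bytes */

int main(void)
{
	/*
	 * Passes here (8 <= 8). Grow lock_class_key past
	 * raw_spinlock_t and this line becomes a hard build error,
	 * which is exactly the guarantee assign_lock_key() wants.
	 */
	BUILD_BUG_ON(sizeof(struct lock_class_key) > sizeof(raw_spinlock_t));
	return 0;
}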