On Thu, 2018-11-29 at 11:10 +0100, Peter Zijlstra wrote:
> On Wed, Nov 28, 2018 at 03:43:23PM -0800, Bart Van Assche wrote:
> > +/* hash_entry is used to keep track of dynamically allocated keys. */
> >  struct lock_class_key {
> > +   struct hlist_node               hash_entry;
> >     struct lockdep_subclass_key     subkeys[MAX_LOCKDEP_SUBCLASSES];
> >  };
> 
> One consideration, and maybe we should have a BUILD_BUG for that, is
> that this object should be no larger than the smallest lock primitive.
> 
> That is typically raw_spinlock_t, which is normally 4 bytes, but with
> lockdep enabled it also includes at least a struct lockdep_map.
> 
> So what we want is:
> 
>       sizeof(lock_class_key) <= sizeof(raw_spinlock_t)
> 
> Otherwise, two consecutive spinlocks could end up with key overlap in
> their subclass range.
> 
> Now, I think that is still valid after this patch, but it is something
> that gave me pause.
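
To make the overlap scenario concrete, here is a userspace sketch with
made-up sizes (a 4-byte lock and an 8-byte key; the real sizes depend on
the config). Since the address of a static lock is used as its class
key, the subclass key range of one lock in an array can reach into the
storage of the next lock whenever the key is larger than the lock:

/*
 * Sketch only: fake_raw_spinlock and KEY_SIZE are hypothetical
 * stand-ins for raw_spinlock_t and sizeof(struct lock_class_key).
 */
#include <stdio.h>

struct fake_raw_spinlock { unsigned int val; };	/* 4 bytes */

#define KEY_SIZE 8	/* pretend sizeof(struct lock_class_key) */

int main(void)
{
	static struct fake_raw_spinlock locks[2];
	unsigned long k0 = (unsigned long)&locks[0];
	unsigned long k1 = (unsigned long)&locks[1];

	printf("subclass key range of locks[0]: [%#lx, %#lx)\n",
	       k0, k0 + KEY_SIZE);
	printf("key of locks[1]: %#lx (%s)\n", k1,
	       k0 + KEY_SIZE > k1 ? "overlaps!" : "no overlap");
	return 0;
}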

How about adding this as an additional patch before patch 25/27?

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 9a7cca6dc3d4..ce05b9b419f4 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -725,6 +725,15 @@ static bool assign_lock_key(struct lockdep_map *lock)
 {
        unsigned long can_addr, addr = (unsigned long)lock;
 
+       /*
+        * lockdep_free_key_range() assumes that struct lock_class_key
+        * objects do not overlap. Since the address of a static lock
+        * object is used as its class key, check that the size of a
+        * lock_class_key object does not exceed the size of the
+        * smallest lock object.
+        */
+       BUILD_BUG_ON(sizeof(struct lock_class_key) > sizeof(raw_spinlock_t));
+
        if (__is_kernel_percpu_address(addr, &can_addr))
                lock->key = (void *)can_addr;
        else if (__is_module_percpu_address(addr, &can_addr))
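
For reference, BUILD_BUG_ON() turns the check into a build failure, so
a too-large lock_class_key can never reach a running kernel. A minimal
standalone equivalent uses C11 _Static_assert(); the struct layouts
below are simplified stand-ins for the real kernel definitions:

/* Sketch only: stub types with hypothetical layouts. */
struct hlist_node { struct hlist_node *next, **pprev; };
struct lockdep_subclass_key { char __one_byte; };
#define MAX_LOCKDEP_SUBCLASSES 8

struct lock_class_key {
	struct hlist_node		hash_entry;
	struct lockdep_subclass_key	subkeys[MAX_LOCKDEP_SUBCLASSES];
};

/* Stand-in for raw_spinlock_t with lockdep on; real layout differs. */
struct fake_raw_spinlock {
	unsigned int raw_lock;
	void *dep_map[4];	/* hypothetical lockdep_map footprint */
};

_Static_assert(sizeof(struct lock_class_key) <=
	       sizeof(struct fake_raw_spinlock),
	       "lock_class_key larger than the smallest lock");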

Thanks,

Bart.
