From: Jason Gunthorpe <j...@nvidia.com>

[ Upstream commit 0cb42c0265837fafa2b4f302c8a7fed2631d7869 ]

ib_unregister_device_queued() can only be used by drivers using the new
dealloc_driver callback flow, and it has a safety WARN_ON to ensure
drivers are using it properly.

However, if unregister and register race, there is a special destruction
path that maintains the uniform error handling semantic of 'caller does
ib_dealloc_device() on failure'. This requires disabling the
dealloc_driver callback, which triggers the WARN_ON.

Instead of using NULL to disable the callback, use a special function
pointer so the WARN_ON does not trigger.
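
As context, here is a minimal userspace sketch of the sentinel-callback
pattern this patch applies: clearing the pointer to NULL would make a
temporarily disabled callback indistinguishable from "driver never set
one", so a no-op sentinel function is installed instead. Everything below
(fake_device, fake_ops, prevent_dealloc, unregister_fake_device) is an
illustrative stand-in, not kernel code.

        #include <assert.h>
        #include <stdio.h>

        struct fake_device;

        struct fake_ops {
                void (*dealloc_driver)(struct fake_device *dev);
        };

        struct fake_device {
                struct fake_ops ops;
                int allocated;
        };

        /* Real driver callback: frees driver-private state. */
        static void driver_dealloc(struct fake_device *dev)
        {
                dev->allocated = 0;
                printf("driver_dealloc ran\n");
        }

        /* No-op sentinel: "callback disabled, the caller will clean up". */
        static void prevent_dealloc(struct fake_device *dev)
        {
                (void)dev;
        }

        static void unregister_fake_device(struct fake_device *dev)
        {
                /*
                 * Comparing against the sentinel keeps "callback disabled"
                 * distinct from "driver does not use the callback flow",
                 * which a plain NULL check cannot do.
                 */
                if (dev->ops.dealloc_driver &&
                    dev->ops.dealloc_driver != prevent_dealloc)
                        dev->ops.dealloc_driver(dev);
        }

        int main(void)
        {
                struct fake_device dev = {
                        .ops = { .dealloc_driver = driver_dealloc },
                        .allocated = 1,
                };

                /* Error-unwind path: disable callback, unregister, restore. */
                dev.ops.dealloc_driver = prevent_dealloc;
                unregister_fake_device(&dev);   /* must NOT free driver state */
                assert(dev.allocated == 1);

                dev.ops.dealloc_driver = driver_dealloc;
                unregister_fake_device(&dev);   /* normal path frees it */
                assert(dev.allocated == 0);
                return 0;
        }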

Fixes: d0899892edd0 ("RDMA/device: Provide APIs from the core code to help unregistration")
Link: https://lore.kernel.org/r/0-v1-a36d512e0a99+762-syz_dealloc_driver_...@nvidia.com
Reported-by: syzbot+4088ed905e4ae2b0e...@syzkaller.appspotmail.com
Suggested-by: Hillf Danton <hdan...@sina.com>
Reviewed-by: Leon Romanovsky <leo...@mellanox.com>
Signed-off-by: Jason Gunthorpe <j...@nvidia.com>
Signed-off-by: Sasha Levin <sas...@kernel.org>
---
 drivers/infiniband/core/device.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index 10ae6c6eab0ad..59dc9f3cfb376 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -1330,6 +1330,10 @@ static int enable_device_and_get(struct ib_device *device)
        return ret;
 }
 
+static void prevent_dealloc_device(struct ib_device *ib_dev)
+{
+}
+
 /**
  * ib_register_device - Register an IB device with IB core
  * @device:Device to register
@@ -1397,11 +1401,11 @@ int ib_register_device(struct ib_device *device, const char *name)
                 * possibility for a parallel unregistration along with this
                 * error flow. Since we have a refcount here we know any
                 * parallel flow is stopped in disable_device and will see the
-                * NULL pointers, causing the responsibility to
+                * special dealloc_driver pointer, causing the responsibility to
                 * ib_dealloc_device() to revert back to this thread.
                 */
                dealloc_fn = device->ops.dealloc_driver;
-               device->ops.dealloc_driver = NULL;
+               device->ops.dealloc_driver = prevent_dealloc_device;
                ib_device_put(device);
                __ib_unregister_device(device);
                device->ops.dealloc_driver = dealloc_fn;
@@ -1449,7 +1453,8 @@ static void __ib_unregister_device(struct ib_device *ib_dev)
         * Drivers using the new flow may not call ib_dealloc_device except
         * in error unwind prior to registration success.
         */
-       if (ib_dev->ops.dealloc_driver) {
+       if (ib_dev->ops.dealloc_driver &&
+           ib_dev->ops.dealloc_driver != prevent_dealloc_device) {
                WARN_ON(kref_read(&ib_dev->dev.kobj.kref) <= 1);
                ib_dealloc_device(ib_dev);
        }
-- 
2.25.1