On 4/20/2018 11:16 PM, Burakov, Anatoly wrote:
On 20-Apr-18 3:19 PM, Tan, Jianfeng wrote:
On 4/20/2018 4:26 PM, Burakov, Anatoly wrote:
On 19-Apr-18 5:50 PM, Jianfeng Tan wrote:
As virtual devices can now be added from different threads, we
add a spinlock to protect the vdev device list.
Suggested-by: Anatoly Burakov <anatoly.bura...@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng....@intel.com>
Reviewed-by: Qi Zhang <qi.z.zh...@intel.com>
---
<...>
+/* The caller shall be responsible for thread safety */
static struct rte_vdev_device *
find_vdev(const char *name)
{
@@ -203,10 +206,6 @@ rte_vdev_init(const char *name, const char *args)
if (name == NULL)
return -EINVAL;
- dev = find_vdev(name);
- if (dev)
- return -EEXIST;
-
devargs = alloc_devargs(name, args);
if (!devargs)
return -ENOMEM;
@@ -221,16 +220,28 @@ rte_vdev_init(const char *name, const char *args)
dev->device.numa_node = SOCKET_ID_ANY;
dev->device.name = devargs->name;
+ rte_spinlock_lock(&vdev_device_list_lock);
+ if (find_vdev(name)) {
+ rte_spinlock_unlock(&vdev_device_list_lock);
+ ret = -EEXIST;
+ goto fail;
+ }
+ TAILQ_INSERT_TAIL(&vdev_device_list, dev, next);
+ rte_spinlock_unlock(&vdev_device_list_lock);
+
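[Editor's note: for context, here is a rough sketch, not the actual patch, of how the whole rte_vdev_init() flow looks with this hunk applied. It assumes the existing definitions in the vdev bus driver (vdev_device_list, find_vdev, alloc_devargs, vdev_probe_all_drivers); the handling of a failed probe and the cleanup under fail: are my assumptions, since they are outside the quoted hunk.]

/* Sketch only; assumes the includes and static helpers already
 * present in the vdev bus source file. */
static rte_spinlock_t vdev_device_list_lock = RTE_SPINLOCK_INITIALIZER;

int
rte_vdev_init(const char *name, const char *args)
{
	struct rte_vdev_device *dev;
	struct rte_devargs *devargs;
	int ret;

	if (name == NULL)
		return -EINVAL;

	devargs = alloc_devargs(name, args);
	if (!devargs)
		return -ENOMEM;

	dev = calloc(1, sizeof(*dev));
	if (!dev) {
		ret = -ENOMEM;
		goto fail;
	}
	dev->device.devargs = devargs;
	dev->device.numa_node = SOCKET_ID_ANY;
	dev->device.name = devargs->name;

	/* First critical section: duplicate check plus insertion. */
	rte_spinlock_lock(&vdev_device_list_lock);
	if (find_vdev(name)) {
		rte_spinlock_unlock(&vdev_device_list_lock);
		ret = -EEXIST;
		goto fail;
	}
	TAILQ_INSERT_TAIL(&vdev_device_list, dev, next);
	rte_spinlock_unlock(&vdev_device_list_lock);

	/* Probe runs with the list unlocked ... */
	ret = vdev_probe_all_drivers(dev);
	if (ret == 0)
		return 0;

	/* ... so a second critical section is needed to undo the
	 * insertion if probe fails (assumed; not in the quoted hunk). */
	rte_spinlock_lock(&vdev_device_list_lock);
	TAILQ_REMOVE(&vdev_device_list, dev, next);
	rte_spinlock_unlock(&vdev_device_list_lock);

fail:
	/* Cleanup assumed; the real fail: label is not quoted above. */
	free(devargs->args);
	free(devargs);
	free(dev);
	return ret;
}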
I wonder if it is possible to just leave the tailq locked until you
either insert the device into the tailq, or figure out that it's not
possible? It seems like taking the lock twice here is unnecessary,
unless vdev_probe_all_drivers needs this tailq unlocked...
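[Editor's note: a sketch of the single-critical-section variant suggested here might look like the following, replacing the middle of rte_vdev_init() shown above. It only works if vdev_probe_all_drivers() never touches the vdev list itself, which is exactly the caveat raised.]

	/* Sketch of the suggested alternative: hold the lock until the
	 * device is either on the list (and probed) or rejected. */
	rte_spinlock_lock(&vdev_device_list_lock);
	if (find_vdev(name)) {
		rte_spinlock_unlock(&vdev_device_list_lock);
		ret = -EEXIST;
		goto fail;
	}
	TAILQ_INSERT_TAIL(&vdev_device_list, dev, next);

	/* Probe under the lock: safe only if no probe() callback ever
	 * re-enters this list (e.g. via rte_vdev_init()). */
	ret = vdev_probe_all_drivers(dev);
	if (ret)
		TAILQ_REMOVE(&vdev_device_list, dev, next);

	rte_spinlock_unlock(&vdev_device_list_lock);
	if (ret)
		goto fail;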
My opinion is that we don't know what a driver's probe() might do. It
could possibly insert a new vdev (that does not happen now, but could
happen in the future?). So here, we call probe with the tailq unlocked.
Or should we keep it as simple as possible, as you suggest?
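[Editor's note: a purely hypothetical illustration of this concern; the callback and device name below are made up. If some future driver's probe callback registered another vdev while probe ran under the lock, it would deadlock, because rte_spinlock_t is not recursive.]

/* Hypothetical probe callback, only to illustrate the concern. */
static int
nested_vdev_probe(struct rte_vdev_device *dev)
{
	(void)dev; /* unused in this illustration */

	/* If this runs while rte_vdev_init() still holds
	 * vdev_device_list_lock, the nested call below would spin
	 * forever on the same (non-recursive) spinlock. */
	return rte_vdev_init("net_null_nested", NULL);
}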
I thought this code was responsible for inserting vdevs? I think it
would generally be bad design to insert a vdev while inserting a vdev :)
I might have mixed this up with another case. I think it's a fair point.
That said, it's a fair point, and I don't have a strong opinion on
this, so you can leave it as is if you want.
I'll change the implementation.
Thanks,
Jianfeng