On 27 Mar 2023, at 18:35, Maxime Coquelin wrote:

> On 3/27/23 18:04, Eelco Chaudron wrote:
>>
>>
>> On 27 Mar 2023, at 17:16, Gowrishankar Muthukrishnan wrote:
>>
>>> Hi Eelco,
>>>
>>>> +void
>>>> +rte_vhost_notify_guest(int vid, uint16_t queue_id)
>>>> +{
>>>> +  struct virtio_net *dev = get_device(vid);
>>>> +  struct vhost_virtqueue *vq;
>>>> +
>>>> +  if (!dev || queue_id >= VHOST_MAX_VRING)
>>>> +          return;
>>>> +
>>>> +  vq = dev->virtqueue[queue_id];
>>>> +  if (!vq)
>>>> +          return;
>>>> +
>>>> +  rte_spinlock_lock(&vq->access_lock);
>>>> +
>>>
>>> Is the spin lock needed here before the system call?
>>
>> I assumed access_lock protects all the following fields in this
>> structure, so I need the lock to read vq->callfd. However, I can/should
>> move the eventfd_write() call outside of the lock.
>
> The FD might be closed between the check and the call to eventfd_write(),
> though. But I agree it is not optimal to call eventfd_write() under the
> spinlock in your case, as it will block the PMD thread if it tries to
> enqueue/dequeue packets on this queue, defeating the purpose of this
> patch.
>
> Maybe the solution is to change the access_lock spinlock to a read-write
> lock. The datapath (rte_vhost_enqueue_burst/rte_vhost_dequeue_burst) and
> this API would take the read lock, meaning they won't block each other,
> while the control path (lib/vhost/vhost_user.c) would take the write
> lock.
>
> Does that make sense?

Yes, this makes sense; let me investigate this and get back to you.
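
As a rough sketch of what I have in mind (assuming vq->access_lock is
converted from rte_spinlock_t to rte_rwlock_t and initialized with
rte_rwlock_init() where the spinlock was before; get_device(),
struct vhost_virtqueue and VHOST_MAX_VRING are the existing lib/vhost
internals), the notify path would only take the read lock, so it no
longer serializes against the datapath:

#include <unistd.h>
#include <sys/eventfd.h>
#include <rte_rwlock.h>
#include "vhost.h"

void
rte_vhost_notify_guest(int vid, uint16_t queue_id)
{
	struct virtio_net *dev = get_device(vid);
	struct vhost_virtqueue *vq;

	if (!dev || queue_id >= VHOST_MAX_VRING)
		return;

	vq = dev->virtqueue[queue_id];
	if (!vq)
		return;

	/* Readers (the datapath and this API) can run concurrently; only
	 * the control path in vhost_user.c takes the write lock, so callfd
	 * cannot be closed while we use it here. */
	rte_rwlock_read_lock(&vq->access_lock);

	if (vq->callfd >= 0)
		eventfd_write(vq->callfd, (eventfd_t)1);

	rte_rwlock_read_unlock(&vq->access_lock);
}

The control-path side would then look something like this (hypothetical
helper name, just to illustrate the write side):

static void
vhost_user_update_callfd(struct vhost_virtqueue *vq, int new_callfd)
{
	/* Exclusive access: waits for in-flight readers before the old FD
	 * is closed and replaced. */
	rte_rwlock_write_lock(&vq->access_lock);
	if (vq->callfd >= 0)
		close(vq->callfd);
	vq->callfd = new_callfd;
	rte_rwlock_write_unlock(&vq->access_lock);
}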

>>>> +  if (vq->callfd >= 0)
>>>> +          eventfd_write(vq->callfd, (eventfd_t)1);
>>>> +
>>>> +  rte_spinlock_unlock(&vq->access_lock);
>>>> +}
>>>> +
>>>
>>> Thanks.
>>
