On 6/17/20 1:04 PM, Matan Azrad wrote:

>>> Don’t you think that only enabled queues must be fully initialized when
>> their status is changed from disabled to enabled?
>>> So, you can assume that disabled queues can stay "not fully initialized"...
>>
>> That may work but might not be following the Virtio spec as with 1.0 we
>> shouldn't process the rings before DRIVER_OK is set (but we cannot be sure
>> we follow it anyway without SET_STATUS support).
>>
>> I propose to cook a patch doing the following:
>> 1. virtio_is_ready() will only ensure the first queue pair is ready
>> (i.e. enabled and configured). Meaning that the app's new_device
>> callback and the vDPA driver's dev_conf callback will be called with
>> only the first queue pair configured and enabled.
>>
>> 2. Before handling a new vhost-user request, it saves the ready status for
>> every queue pair.
>>
>> 3. Same handling of the requests, except that we won't notify the vdpa
>> driver and the application of vring state changes in the
>> VHOST_USER_SET_VRING_ENABLE handler.
>>
>> 4. Once the Vhost-user request is handled, it compares the new ready
>> status for every queue with the old one and sends queue state change
>> events accordingly.
> 
> Looks very nice to me.

Cool!
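To make steps 2 and 4 concrete, here is a rough sketch of the save/compare logic I have in mind. All names (save_ready_status, sync_ready_status, notify_vring_state) are made up for illustration, not the actual vhost-user code:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_VRINGS 8

/* Illustrative per-queue ready flags; the real code would keep this
 * state per virtqueue inside the vhost device structure. */
static bool old_ready[MAX_VRINGS];

/* Placeholder for notifying the vDPA driver/application (step 4). */
static void notify_vring_state(int qid, bool enable)
{
	printf("queue %d is now %s\n", qid, enable ? "ready" : "not ready");
}

/* Step 2: snapshot the ready status before handling a request. */
static void save_ready_status(const bool *cur, int nr_vrings)
{
	memcpy(old_ready, cur, nr_vrings * sizeof(*cur));
}

/* Step 4: once the request is handled, compare the new status with
 * the saved one and send events only for queues whose state changed. */
static void sync_ready_status(const bool *cur, int nr_vrings)
{
	for (int i = 0; i < nr_vrings; i++) {
		if (old_ready[i] != cur[i])
			notify_vring_state(i, cur[i]);
	}
}
```

The point of the diff is that VHOST_USER_SET_VRING_ENABLE itself no longer notifies anyone; notifications fall out of the before/after comparison, whatever request caused the change.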

> More points:
> By this method some queues may be configured by the set_vring_state
> operation, so the driver is expected to make the following calls for
> each queue from the set_vring_state callback:
> 1. rte_vhost_enable_guest_notification
>       This one takes the datapath lock, so we need to be sure that the
> datapath lock is not held on this queue by the same caller thread
> (maybe don't take datapath locks at all when vdpa is configured).

Good point, I agree we shouldn't need to take the access lock when vdpa
is configured. We may want to document that the whole control path is
assumed to be single-threaded though.
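Something like the sketch below is what I mean. The structs are trimmed stand-ins, not the real virtio_net/vhost_virtqueue definitions, and the helper names are invented; it just shows the "skip the lock when vDPA is attached" idea:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct vdpa_device;			/* opaque stand-in */

/* Trimmed stand-ins for the vhost structures; field names are
 * assumptions for illustration only. */
struct virtio_net {
	struct vdpa_device *vdpa_dev;	/* non-NULL once vDPA is configured */
};

struct vhost_virtqueue {
	pthread_mutex_t access_lock;
};

/* Only take the datapath (access) lock when no vDPA device is
 * attached: with vDPA configured, the control path is assumed to be
 * single-threaded, so the lock can be skipped. Returns whether the
 * lock was actually taken. */
static bool vring_lock(struct virtio_net *dev, struct vhost_virtqueue *vq)
{
	if (dev->vdpa_dev == NULL) {
		pthread_mutex_lock(&vq->access_lock);
		return true;
	}
	return false;
}

static void vring_unlock(struct virtio_net *dev, struct vhost_virtqueue *vq)
{
	if (dev->vdpa_dev == NULL)
		pthread_mutex_unlock(&vq->access_lock);
}
```

This way rte_vhost_enable_guest_notification stays safe to call from the set_vring_state callback without self-deadlocking.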


> 2. rte_vhost_host_notifier_ctrl
>       This function's API is per device and not per queue; maybe we
> need to change this function to be per queue (add a new one for now
> and deprecate the old one in 20.11).

This one is still experimental, so no issue in reworking the API to make
it per queue without deprecation notice.
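As a sketch, the per-queue variant could look like the below, with a wildcard queue id to keep the old whole-device behaviour. The wildcard name/value and the helper are assumptions, not the final API:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical wildcard so the per-queue API can still cover the
 * whole device in one call. */
#define VHOST_QUEUE_ALL UINT16_MAX

#define NR_VRINGS 4

/* Stub standing in for the real per-queue notifier setup. */
static int setup_notifier(int vid, uint16_t qid, bool enable)
{
	printf("vid %d: queue %u notifier %s\n", vid, qid,
	       enable ? "on" : "off");
	return 0;
}

/* Sketch of a per-queue rte_vhost_host_notifier_ctrl(); the extra
 * qid argument is the proposed change, not the current signature. */
static int host_notifier_ctrl(int vid, uint16_t qid, bool enable)
{
	if (qid == VHOST_QUEUE_ALL) {
		for (uint16_t i = 0; i < NR_VRINGS; i++) {
			if (setup_notifier(vid, i, enable) < 0)
				return -1;
		}
		return 0;
	}
	return setup_notifier(vid, qid, enable);
}
```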

> 3. Need to be sure that if a ready queue's configuration is changed
> after dev_conf, we notify the driver (maybe by set_vring_state(disable)
> followed by set_vring_state(enable)).

Agree, I'm not sure yet whether we should just toggle set_vring_state as
you propose, or add a new callback for this.
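For reference, the toggle option would boil down to something like this. The ops struct is a trimmed stand-in for rte_vdpa_dev_ops, and the helper name is made up:

```c
#include <stdio.h>

/* Trimmed stand-in for the vDPA driver ops, reduced to the one
 * callback this sketch needs. */
struct vdpa_ops {
	int (*set_vring_state)(int vid, int vring, int state);
};

static int dummy_set_vring_state(int vid, int vring, int state)
{
	printf("vid %d vring %d -> %s\n", vid, vring,
	       state ? "enabled" : "disabled");
	return 0;
}

/* If a ready queue's configuration changes after dev_conf, toggle the
 * state callback so the driver tears the queue down and picks up the
 * new configuration on re-enable. */
static int vdpa_requeue_config(const struct vdpa_ops *ops, int vid,
			       int vring)
{
	int ret = ops->set_vring_state(vid, vring, 0);
	if (ret != 0)
		return ret;
	return ops->set_vring_state(vid, vring, 1);
}
```

A dedicated callback would make the intent clearer to drivers, but the toggle has the advantage of requiring no API change at all.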

>> It is likely to need changes in the .dev_conf and .set_vring_state
>> implementations by the drivers.
> 
> Yes, for Mellanox it is very easy change.
> Intel?
> 
>  
>>>
>>>> With VHOST_USER_SET_STATUS, we will be able to handle this properly,
>>>> as the backend can be sure the guest won't initialize more queues
>>>> once the DRIVER_OK Virtio status bit is set. In my v2, I can add one
>>>> patch to handle this case properly, by "destroying" queues metadata
>>>> as soon as DRIVER_OK is received.
>>>>
>>>> Note that it was the exact reason why I first tried to add support
>>>> for VHOST_USER_SET_STATUS more than two years ago...:
>>>> https://lists.gnu.org/archive/html/qemu-devel/2018-02/msg04560.html
>>>>
>>>> What do you think?
>>>
>>> Yes, I agree it may be solved by VHOST_USER_SET_STATUS (and probably
>>> a lot of other issues), but I think we also need to support legacy
>>> QEMU versions if we can...
>>
>> I think SET_STATUS support is important to be compliant with the
>> Virtio specification.
>>
>>> Don't you think so?
> 
> Yes, I agree.
> 
>>
>> We can try that.
>> I will try to cook something this week, but it will require validation
>> with OVS to be sure we don't break multiqueue. I will send it as an
>> RFC, and count on you to try it with your mlx5 vDPA driver.
>>
>> Does it work for you? (note I'll be on vacation from July 1st to 17th)
> 
> Sure,
> Do you have capacity to do it this week?
> I can help...

That would be welcome, as I initially planned to spend time on reviewing
& merging patches this week.

Thanks,
Maxime
> Matan
> 
> 
>>
>> Thanks,
>> Maxime
>>
>>>> Regards,
>>>> Maxime
>>>
> 
