>>>>>> 
>>>>>> I think the end result we're hoping for is something like the pseudo
>>>>>> code below (keep in mind that the event/sw has a service-core thread
>>>>>> running it, so no application code there):
>>>>>> 
>>>>>> int worker_poll = 1;
>>>>>> 
>>>>>> worker() {
>>>>>>         while (worker_poll) {
>>>>>>                 /* eventdev_dequeue_burst() etc. */
>>>>>>         }
>>>>>>         go_to_sleep(1);
>>>>>> }
>>>>>> 
>>>>>> control_plane_scale_down() {
>>>>>>         unlink(evdev, worker, queue_id);
>>>>>>         while (unlinks_in_progress(evdev) > 0)
>>>>>>                 usleep(100);
>>>>>> 
>>>>>>         /* Here we know that the unlink is complete,
>>>>>>          * so we can now stop the worker from polling. */
>>>>>>         worker_poll = 0;
>>>>>> }
>>>>> 
>>>>> 
>>>>> Makes sense. Instead of rte_event_is_unlink_in_progress(), how about
>>>>> adding a callback to rte_event_port_unlink() which would be called on
>>>>> unlink completion? It would reduce the need for ONE more API.
>>>>> 
>>>>> Anyway, it is RC2 now, so we cannot accept a new feature. So we will
>>>>> have time for a deprecation notice.
>>>>> 
>>>> 
>>>> Both solutions should work, but I would perhaps favor Harry's approach as
>>>> it requires less code on the application side and doesn't break backward
>>>> compatibility.
>>> 
>>> OK.
>>> 
>>> Would rte_event_port_unlink() returning -EBUSY help?
>> 
>> It could perhaps work, though the return value becomes a bit ambiguous.
>> E.g. how do we differentiate a delayed unlink completion from a scenario
>> where the port & queues have never been linked?
> 
> Based on the return code?

Yes, that works. I was thinking about the complexity of the implementation,
as it would also have to track the pending unlink requests. But anyway, Harry
is better placed to answer these questions since I guess he would be
implementing this.

