On Thu, Feb 1, 2024 at 3:05 PM Bruce Richardson
<bruce.richard...@intel.com> wrote:
>
> On Fri, Jan 19, 2024 at 05:43:46PM +0000, Bruce Richardson wrote:
> > Clarify the meaning of the NEW, FORWARD and RELEASE event types.
> > For the fields in "rte_event" struct, enhance the comments on each to
> > clarify the field's use, and whether it is preserved between enqueue and
> > dequeue, and its role, if any, in scheduling.
> >
> > Signed-off-by: Bruce Richardson <bruce.richard...@intel.com>
> > ---
> >
> > As with the previous patch, please review this patch to ensure that the
> > expected semantics of the various event types and event fields have not
> > changed in an unexpected way.
> > ---
> >  lib/eventdev/rte_eventdev.h | 105 ++++++++++++++++++++++++++----------
> >  1 file changed, 77 insertions(+), 28 deletions(-)
> >
> <snip>
>
> >  #define RTE_EVENT_OP_RELEASE            2
> >  /**< Release the flow context associated with the schedule type.
> >   *
> > - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
> > + * If current flow's scheduler type method is @ref RTE_SCHED_TYPE_ATOMIC
> >   * then this function hints the scheduler that the user has completed critical
> >   * section processing in the current atomic context.
> >   * The scheduler is now allowed to schedule events from the same flow from
> > @@ -1442,21 +1446,19 @@ struct rte_event_vector {
> >   * performance, but the user needs to design carefully the split into critical
> >   * vs non-critical sections.
> >   *
> > - * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
> > - * then this function hints the scheduler that the user has done all that 
> > need
> > - * to maintain event order in the current ordered context.
> > - * The scheduler is allowed to release the ordered context of this port and
> > - * avoid reordering any following enqueues.
> > - *
> > - * Early ordered context release may increase parallelism and thus system
> > - * performance.
>
> Before I do up a V3 of this patchset, I'd like to try and understand a bit
> better what was meant by the original text on reordering here. The use of
> "context" is very ambiguous, since it could refer to a number of different
> things.
>
> For me, RELEASE for ordered queues should mean much the same as for atomic
> queues - it means that the event being released is to be "dropped" from the
> point of view of the eventdev scheduler - i.e. any atomic locks held for
> that event should be released, and any reordering slots for it should be
> skipped. However, the text above seems to imply that when we release one
> event it means that we should stop reordering all subsequent events for
> that port - which seems wrong to me. Especially in the case where
> reordering may be done per flow, does one release mean that we need to go
> through all flows and mark as skipped all reordered slots awaiting returned
> events from that port? If this is what is intended, how is it better than
> just doing another dequeue call from the port, which releases everything
> automatically anyway?
>
> Jerin, I believe you were the author of the original text, can you perhaps
> clarify? Other PMD maintainers, can any of you chime in with current
> supported behaviour when enqueuing a release of an ordered event?

If N cores call rte_event_dequeue_burst() and receive events from the same
flow, and that flow is scheduled as RTE_SCHED_TYPE_ORDERED, then irrespective
of the timing of the downstream rte_event_enqueue_burst() invocation on any
core, upon rte_event_enqueue_burst() completion the events will be enqueued to
the downstream queue in their ingress order.

Assume one of the cores issues RTE_EVENT_OP_RELEASE in between the dequeue and
the enqueue; that event is then no longer eligible for ingress-order
maintenance.
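
To make that concrete, below is a minimal worker-loop sketch for a single
ordered stage. It only illustrates the semantics as described above (which this
thread is still clarifying); dev_id, port_id, next_queue_id and the
should_drop()/process() helpers are hypothetical application-level names, not
part of the eventdev API.

#include <stdbool.h>
#include <stdint.h>
#include <rte_common.h>
#include <rte_eventdev.h>

/* Application-defined helpers (hypothetical, for this sketch only). */
extern bool should_drop(const struct rte_event *ev);
extern void process(struct rte_event *ev);

static void
ordered_worker(uint8_t dev_id, uint8_t port_id, uint8_t next_queue_id)
{
    struct rte_event ev[32];

    for (;;) {
        uint16_t n = rte_event_dequeue_burst(dev_id, port_id, ev,
                RTE_DIM(ev), 0);

        for (uint16_t i = 0; i < n; i++) {
            if (should_drop(&ev[i])) {
                /* Give up this event's slot in the ingress-order
                 * restoration; the other events in the burst are
                 * unaffected.
                 */
                ev[i].op = RTE_EVENT_OP_RELEASE;
            } else {
                process(&ev[i]);
                ev[i].op = RTE_EVENT_OP_FORWARD;
                ev[i].queue_id = next_queue_id;
            }
        }

        /* Per the description above, once the enqueue completes the
         * forwarded events appear on the downstream queue in their
         * original ingress order; released events simply drop out.
         */
        uint16_t sent = 0;
        while (sent < n)
            sent += rte_event_enqueue_burst(dev_id, port_id,
                    ev + sent, n - sent);
    }
}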


> Ideally, I'd like to see this simplified whereby release for ordered
> behaves like that for atomic and refers to the current event only (and
> drops any mention of contexts).
>
> Thanks,
> /Bruce
