On Tue, Jun 30, 2020 at 9:12 PM McDaniel, Timothy
<timothy.mcdan...@intel.com> wrote:
>
> >-----Original Message-----
> >From: Jerin Jacob <jerinjac...@gmail.com>
> >Sent: Monday, June 29, 2020 11:21 PM
> >To: McDaniel, Timothy <timothy.mcdan...@intel.com>
> >Cc: Ray Kinsella <m...@ashroe.eu>; Neil Horman <nhor...@tuxdriver.com>;
> >Jerin Jacob <jer...@marvell.com>; Mattias Rönnblom
> ><mattias.ronnb...@ericsson.com>; dpdk-dev <dev@dpdk.org>; Eads, Gage
> ><gage.e...@intel.com>; Van Haaren, Harry <harry.van.haa...@intel.com>
> >Subject: Re: [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites
> >
> >On Tue, Jun 30, 2020 at 1:01 AM McDaniel, Timothy
> ><timothy.mcdan...@intel.com> wrote:
> >>
> >> -----Original Message-----
> >> From: Jerin Jacob <jerinjac...@gmail.com>
> >> Sent: Saturday, June 27, 2020 2:45 AM
> >> To: McDaniel, Timothy <timothy.mcdan...@intel.com>; Ray Kinsella
> >> <m...@ashroe.eu>; Neil Horman <nhor...@tuxdriver.com>
> >> Cc: Jerin Jacob <jer...@marvell.com>; Mattias Rönnblom
> >> <mattias.ronnb...@ericsson.com>; dpdk-dev <dev@dpdk.org>; Eads, Gage
> >> <gage.e...@intel.com>; Van Haaren, Harry <harry.van.haa...@intel.com>
> >> Subject: Re: [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites
> >>
> >> > +
> >> > +/** Event port configuration structure */
> >> > +struct rte_event_port_conf_v20 {
> >> > +       int32_t new_event_threshold;
> >> > +       /**< A backpressure threshold for new event enqueues on this port.
> >> > +        * Use for *closed system* event dev where event capacity is limited,
> >> > +        * and cannot exceed the capacity of the event dev.
> >> > +        * Configuring ports with different thresholds can make higher priority
> >> > +        * traffic less likely to be backpressured.
> >> > +        * For example, a port used to inject NIC Rx packets into the event dev
> >> > +        * can have a lower threshold so as not to overwhelm the device,
> >> > +        * while ports used for worker pools can have a higher threshold.
> >> > +        * This value cannot exceed the *nb_events_limit*
> >> > +        * which was previously supplied to rte_event_dev_configure().
> >> > +        * This should be set to '-1' for *open system*.
> >> > +        */
> >> > +       uint16_t dequeue_depth;
> >> > +       /**< Configure number of bulk dequeues for this event port.
> >> > +        * This value cannot exceed the *nb_event_port_dequeue_depth*
> >> > +        * which previously supplied to rte_event_dev_configure().
> >> > +        * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> >> > +        */
> >> > +       uint16_t enqueue_depth;
> >> > +       /**< Configure number of bulk enqueues for this event port.
> >> > +        * This value cannot exceed the *nb_event_port_enqueue_depth*
> >> > +        * which previously supplied to rte_event_dev_configure().
> >> > +        * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> >> > +        */
> >> >         uint8_t disable_implicit_release;
> >> >         /**< Configure the port not to release outstanding events in
> >> >          * rte_event_dev_dequeue_burst(). If true, all events received through
> >> > @@ -733,6 +911,14 @@ struct rte_event_port_conf {
> >> >  rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
> >> >                                 struct rte_event_port_conf *port_conf);
> >> >
> >> > +int
> >> > +rte_event_port_default_conf_get_v20(uint8_t dev_id, uint8_t port_id,
> >> > +                               struct rte_event_port_conf_v20 *port_conf);
> >> > +
> >> > +int
> >> > +rte_event_port_default_conf_get_v21(uint8_t dev_id, uint8_t port_id,
> >> > +                                     struct rte_event_port_conf *port_conf);
> >>
> >> Hi Timothy,
> >>
> >> + ABI Maintainers (Ray, Neil)
> >>
> >> # As per my understanding, structures cannot be versioned, only
> >> functions can be versioned,
> >> i.e. we cannot make any change to "struct rte_event_port_conf".
> >>
> >> # We have a similar case with ethdev, and it was deferred to the next
> >> release, v20.11:
> >> http://patches.dpdk.org/patch/69113/
> >>
> >> Regarding the API changes:
> >> # The slow path changes generally look good to me. I will review them at the
> >> next level in the coming days.
> >> # The following fast path change bothers me. Could you share more
> >> details on the change below?
> >>
> >> diff --git a/app/test-eventdev/test_order_atq.c
> >> b/app/test-eventdev/test_order_atq.c
> >> index 3366cfc..8246b96 100644
> >> --- a/app/test-eventdev/test_order_atq.c
> >> +++ b/app/test-eventdev/test_order_atq.c
> >> @@ -34,6 +34,8 @@
> >>                         continue;
> >>                 }
> >>
> >> +               ev.flow_id = ev.mbuf->udata64;
> >> +
> >> # Since RC1 is near, I am not sure how to accommodate the API changes
> >> now and sort out the ABI issues.
> >> # The other concern is that the eventdev spec gets bloated with versioning
> >> files just for ONE release, as 20.11 will be OK to change the ABI.
> >> # While we discuss the API change, please send a deprecation notice for
> >> the ABI change for 20.11,
> >> so that there is no ambiguity about this patch for the 20.11 release.
> >>
> >> Hello Jerin,
> >>
> >> Thank you for the review comments.
> >>
> >> With regard to your comments on the fast path flow_id change: the Intel DLB
> >> hardware is not capable of transferring the flow_id as part of the event
> >> itself, so we require a mechanism to accomplish this. What we have done to
> >> work around this is to require the application to embed the flow_id within
> >> the data payload. The new flag, #define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
> >> (1ULL << 9), can be used by applications to determine whether they need to
> >> embed the flow_id, or whether it is automatically propagated and present in
> >> the received event.
> >>
> >> What we should have done is to wrap the assignment with a conditional.
> >>
> >> if (!(device_capability_flags & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID))
> >>         ev.flow_id = ev.mbuf->udata64;
> >
> >Two problems with this approach:
> >1) we are assuming the mbuf udata64 field is available for the DLB driver
> >2) it won't work with other adapters; eventdev has no dependency on mbuf
> >
>
> This snippet is not intended to suggest that udata64 always be used to store 
> the flow ID, but as an example of how an application could do it. Some 
> applications won’t need to carry the flow ID through; others can select an 
> unused field in the event data (e.g. hash.rss or udata64 if using mbufs), or 
> (worst-case) re-generate the flow ID in pipeline stages that require it.

OK.
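
For the archive, here is a minimal sketch of that application-side pattern.
It assumes the application picked mbuf->udata64 as the carrier field (purely
an example, per the above), and only shows the dequeue side:

#include <rte_eventdev.h>
#include <rte_mbuf.h>

/* Sketch only: restore flow_id from the payload when the device does not
 * advertise RTE_EVENT_DEV_CAP_CARRY_FLOW_ID.
 */
static void
poll_one(uint8_t dev_id, uint8_t port_id)
{
        struct rte_event_dev_info info;
        struct rte_event ev;
        int carries_flow_id;

        rte_event_dev_info_get(dev_id, &info);
        carries_flow_id =
                !!(info.event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID);

        if (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0)) {
                if (!carries_flow_id)
                        ev.flow_id = ev.mbuf->udata64;
                /* ... process the event; flow_id is valid either way ... */
        }
}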
>
> >Question:
> >1) In the case of DLB hardware, on dequeue(), what does the HW return? Is it
> >only the event pointer, without any other metadata such as schedule_type,
> >etc.?
> >
>
> The DLB device provides a 16B “queue entry” that consists of:
>
> *       8B event data
> *       Queue ID
> *       Priority
> *       Scheduling type
> *       19 bits of carried-through data
> *       Assorted error/debug/reserved bits that are set by the device (not carried-through)
>
>  For the carried-through 19b, we use 12b for event_type and sub_event_type.
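
(Restating the QE above as a rough struct, just so we are looking at the same
bit budget; everything except the 8B data and the 19b carried-through field is
my assumption, not the real DLB layout:)

#include <stdint.h>

/* Illustrative 16B queue entry sketch -- NOT the actual DLB definition. */
struct dlb_qe_sketch {
        uint64_t event_data;      /* 8B of event data */
        uint32_t carried : 19;    /* carried-through; 12b of these hold
                                   * event_type + sub_event_type, leaving 7b
                                   */
        uint32_t qid : 8;         /* queue ID (width assumed) */
        uint32_t sched_type : 2;  /* scheduling type (width assumed) */
        uint32_t priority : 3;    /* priority (width assumed) */
        uint32_t hw_bits;         /* error/debug/reserved, set by the device */
};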

I can only think of TWO options to help:
1) Since the event pointer is always cache aligned, you could grab its 6
LSBs (2^6 = 64B) and 7 bits from the (19b - 12b) carried-through
structure.
2) Have a separate mempool driver layered on the existing drivers, i.e.
"event pointer" plus or minus some offset can hold any amount of custom
data.


>
> >
> >>
> >> This would minimize/eliminate any performance impact due to the processor's
> >> branch prediction logic.

I think, if we need to change the common fastpath, we had better make it a
template, so the code is generated at compile time with absolutely zero
overhead and the variant is selected at runtime.
See app/test-eventdev/test_order_atq.c, function worker_wrapper(): it
_creates_ the worker at compile time and picks one based on runtime
capability.
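
Something along these lines (a sketch of the template pattern only, not the
actual test-eventdev code; the names are illustrative):

#include <rte_eventdev.h>
#include <rte_mbuf.h>

/* FIXUP is a compile-time constant per instantiation, so the variant for
 * devices with RTE_EVENT_DEV_CAP_CARRY_FLOW_ID compiles to no extra branch.
 */
#define ORDER_WORKER_TEMPLATE(name, FIXUP)                              \
static __rte_always_inline void                                         \
name(uint8_t dev_id, uint8_t port_id)                                   \
{                                                                       \
        struct rte_event ev;                                            \
                                                                        \
        if (!rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0))       \
                return;                                                 \
        if (FIXUP)                                                      \
                ev.flow_id = ev.mbuf->udata64;                          \
        /* ... ordering checks / forward, as in the real worker ... */  \
}

ORDER_WORKER_TEMPLATE(worker_flow_in_event, 0)
ORDER_WORKER_TEMPLATE(worker_flow_in_udata64, 1)

worker_wrapper() can then launch the matching variant once, after checking
the capability via rte_event_dev_info_get().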



> >> The assignment then becomes in essence a NOOP for all event devices that are
> >> capable of carrying the flow_id as part of the event payload itself.
> >>
> >> Thanks,
> >> Tim
