> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.ja...@caviumnetworks.com]
> Sent: Thursday, January 26, 2017 3:39 AM
> To: Eads, Gage <gage.e...@intel.com>
> Cc: Richardson, Bruce <bruce.richard...@intel.com>; 'dev@dpdk.org'
> <dev@dpdk.org>; 'thomas.monja...@6wind.com' <thomas.monja...@6wind.com>;
> 'hemant.agra...@nxp.com' <hemant.agra...@nxp.com>; Van Haaren, Harry
> <harry.van.haa...@intel.com>; McDaniel, Timothy <timothy.mcdan...@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
> programming model
>
> On Wed, Jan 25, 2017 at 10:36:21PM +0000, Eads, Gage wrote:
> > > > > <jerin.ja...@caviumnetworks.com>
> > > > > Subject: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
> > > > > programming model
> > > > >
> > > > > <message truncated for brevity>
> > > > >
> > > > > +/**
> > > > > + * Enqueue a burst of events objects or an event object supplied in
> > > > > + * *rte_event* structure on an event device designated by its
> > > > > + * *dev_id* through the event port specified by *port_id*. Each
> > > > > + * event object specifies the event queue on which it will be
> > > > > + * enqueued.
> > > > > + *
> > > > > + * The *nb_events* parameter is the number of event objects to
> > > > > + * enqueue which are supplied in the *ev* array of *rte_event*
> > > > > + * structure.
> > > > > + *
> > > > > + * The rte_event_enqueue_burst() function returns the number of
> > > > > + * events objects it actually enqueued. A return value equal to
> > > > > + * *nb_events* means that all event objects have been enqueued.
> > > > > + *
> > > > > + * @param dev_id
> > > > > + *   The identifier of the device.
> > > > > + * @param port_id
> > > > > + *   The identifier of the event port.
> > > > > + * @param ev
> > > > > + *   Points to an array of *nb_events* objects of type *rte_event*
> > > > > + *   structure which contain the event object enqueue operations to
> > > > > + *   be processed.
> > > > > + * @param nb_events
> > > > > + *   The number of event objects to enqueue, typically number of
> > > > > + *   rte_event_port_enqueue_depth() available for this port.
> > > > > + *
> > > > > + * @return
> > > > > + *   The number of event objects actually enqueued on the event
> > > > > + *   device. The return value can be less than the value of the
> > > > > + *   *nb_events* parameter when the event devices queue is full or
> > > > > + *   if invalid parameters are specified in a *rte_event*. If the
> > > > > + *   return value is less than *nb_events*, the remaining events at
> > > > > + *   the end of ev[] are not consumed, and the caller has to take
> > > > > + *   care of them
> > > > > + *
> > > > > + * @see rte_event_port_enqueue_depth()
> > > > > + */
> > > > > +uint16_t
> > > > > +rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
> > > > > +			const struct rte_event ev[], uint16_t nb_events);
> > > >
> > > > There are a number of reasons this operation could fail to enqueue all
> > > > the events, including:
> > > > - Backpressure
> > > > - Invalid port ID
> > > > - Invalid queue ID
> > > > - Invalid sched type when a queue is configured for ATOMIC_ONLY,
> > > >   ORDERED_ONLY, or PARALLEL_ONLY
> > > > - ...
> > > >
> > > > The current API doesn't provide a straightforward way to determine the
> > > > cause of a failure. This is a particular issue on event PMDs that can
> > > > backpressure, where the app may want to treat that case differently
> > > > than the other failure cases.
> > > >
> > > > Could we change the return type to int16_t, and define a set of error
> > > > cases (e.g. -ENOSPC for backpressure, -EINVAL for an invalid argument)?
> > > > (With corresponding changes needed in the PMD API) Similarly we could
> > > > change rte_event_dequeue_burst() to return an int16_t, with -EINVAL as
> > > > a possible error case.
> > >
> > > Use rte_errno instead, I suggest. That's what it's there for.
> > >
> > > /Bruce
> >
> > That makes sense. In that case, the API comment could be tweaked like so:
> >
> >  * If the return value is less than *nb_events*, the remaining events at
> >  * the end of ev[] are not consumed and the caller has to take care of
> >  * them, and rte_errno is set accordingly. Possible errno values include:
> >  * - EINVAL - The port ID is invalid, an event's queue ID is invalid, or
> >  *            an event's sched type doesn't match the capabilities of the
> >  *            destination queue.
> >  * - ENOSPC - The event port was backpressured and unable to enqueue one
> >  *            or more events.
> >
> > However it seems better to use a signed integer for the dequeue burst
> > return value, if it is to use rte_errno. Application code could be
> > simplified:
> >
> > (signed return value)
> > ret = rte_event_dequeue_burst(...);
> > if (ret < 0)
> >         rte_panic("Dequeued returned errno %d\n", rte_errno);
> >
> > vs.
> >
> > (unsigned return value)
> > ret = rte_event_dequeue_burst(...);
> > if (ret == 0 && rte_errno != 0)
> >         rte_panic("Dequeued returned errno %d\n", rte_errno);
> >
> > And with an unsigned return value, all dequeue implementations would have
> > to clear rte_errno when no events are dequeued.
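That "clear rte_errno when nothing is dequeued" requirement would put a small
burden on every driver. On the PMD side it would look roughly like the sketch
below (illustrative only -- pmd_dequeue_burst and fake_hw_dequeue are made-up
names, not part of any driver):

#include <rte_errno.h>
#include <rte_eventdev.h>

/* Stand-in for a driver's real "fetch events from hardware" step. */
static uint16_t
fake_hw_dequeue(struct rte_event ev[], uint16_t nb_events)
{
        (void)ev;
        (void)nb_events;
        return 0; /* pretend the hardware queue is empty */
}

/* With an unsigned return value, the application can only distinguish
 * "nothing to dequeue" from "error" via rte_errno, so the PMD must clear
 * rte_errno whenever it legitimately returns zero.
 */
static uint16_t
pmd_dequeue_burst(struct rte_event ev[], uint16_t nb_events)
{
        uint16_t n = fake_hw_dequeue(ev, nb_events);

        if (n == 0)
                rte_errno = 0;
        return n;
}
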
After some internal discussion, I don't think the signed return value is
necessary for burst dequeue. Burst enqueue is the more interesting case...

>
> Gage,
>
> Just to understand, what is the expected application behavior if the
> implementation returns -ENOSPC

It's application-dependent -- depending on the importance of the event, the
application could decide to retry the enqueue some number of times or decide
to drop the event.

> Apart from the above SW driver behavior, I think, HW implementations have
> two more different behaviors:
> a) Implementation makes sure that it never returns -ENOSPC by allocating
> more space on the fly or any other scheme
> b) Tail drop

By "tail drop," do you mean the hardware drops the event (and presumably
frees any memory it points to)? Or the enqueue is unsuccessful and the
application drops the event?

> Considering different implementations have different behaviors, how about
> enumerating the overflow policy at port configuration time and letting the
> implementation act accordingly, to avoid fast-path changes in the
> application (which affect all implementations irrespective of the
> capability)?
>
> Possible enumerated values at port configuration time:
> - PANIC or similar scheme to denote it cannot proceed
> - TAIL DROP
> or any expected application behavior you want to add

I wonder if that's necessary? Hardware behavior a) means the function will
always return nb_events. If hardware drops the event(s), I assume
enqueue_burst would still return nb_events and the app behaves as if all
events were sent. If the enqueue fails (ret < nb_events), app software could
check rte_errno and take the action it deems necessary. So all fast-path
enqueue code could look like:

ret = rte_event_enqueue_burst(..., nb_events);
if (ret < nb_events) {
        ....
}
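For example, the "...." above could be filled in along these lines. This is
just a sketch under the rte_errno proposal discussed earlier (EINVAL/ENOSPC);
enqueue_events, MAX_RETRIES, and the choice to free an mbuf per dropped event
are illustrative placeholders, not part of the API:

#include <errno.h>
#include <rte_errno.h>
#include <rte_eventdev.h>
#include <rte_mbuf.h>

/* Application-chosen retry policy, not part of the API. */
#define MAX_RETRIES 3

/* Worker enqueue path: retry on backpressure (ENOSPC) a bounded number of
 * times, otherwise drop whatever could not be enqueued. Assumes each event
 * carries an mbuf that must be freed when the event is dropped.
 */
static void
enqueue_events(uint8_t dev_id, uint8_t port_id,
               struct rte_event ev[], uint16_t nb_events)
{
        uint16_t sent = 0;
        uint16_t i;
        int retries = 0;

        while (sent < nb_events) {
                sent += rte_event_enqueue_burst(dev_id, port_id,
                                                &ev[sent], nb_events - sent);
                if (sent == nb_events)
                        break;

                if (rte_errno == ENOSPC && retries++ < MAX_RETRIES)
                        continue; /* backpressure: retry the remainder */

                /* EINVAL (bad queue ID/sched type) or retries exhausted:
                 * drop the remaining events.
                 */
                for (i = sent; i < nb_events; i++)
                        rte_pktmbuf_free(ev[i].mbuf);
                break;
        }
}

Whether retrying or dropping is right depends on the importance of the event,
which is why leaving the policy to the application (rather than fixing it at
port configuration time) seems sufficient to me.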