On Fri, Feb 03, 2017 at 04:28:15PM +0530, Hemant Agrawal wrote:
> On 2/3/2017 12:08 PM, Nipun Gupta wrote:
> > > > > -----Original Message-----
> > > > > From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Jerin Jacob
> > > > > Sent: Wednesday, December 21, 2016 14:55
> > > > > To: dev@dpdk.org
> > > > > Cc: thomas.monja...@6wind.com; bruce.richard...@intel.com; Hemant
> > > > > Agrawal <hemant.agra...@nxp.com>; gage.e...@intel.com;
> > > > > harry.van.haa...@intel.com; Jerin Jacob
> > > <jerin.ja...@caviumnetworks.com>
> > > > > Subject: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
> > > > > programming model
> > > > > 
> > > > > In a polling model, lcores poll ethdev ports and associated
> > > > > rx queues directly to look for packets. In an event driven model,
> > > > > by contrast, lcores call a scheduler that selects packets for
> > > > > them based on programmer-specified criteria. The eventdev library
> > > > > adds support for an event driven programming model, which offers
> > > > > applications automatic multicore scaling, dynamic load balancing,
> > > > > pipelining, packet ingress order maintenance and
> > > > > synchronization services to simplify application packet processing.
> > > > > 
> > > > > By introducing an event driven programming model, DPDK can support
> > > > > both polling and event driven programming models for packet
> > > > > processing, and applications are free to choose whichever model
> > > > > (or combination of the two) best suits their needs.
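
To make the contrast concrete, a minimal sketch of a worker lcore under
the event driven model (force_quit and app_process_event are
hypothetical application names; only rte_event_dequeue_burst() is from
the proposed header):

#include <rte_eventdev.h>

#define BURST_SIZE 32

static volatile int force_quit; /* hypothetical application quit flag */

static void
app_process_event(struct rte_event *ev)
{
	(void)ev; /* application-defined work goes here */
}

static int
worker(void *arg)
{
	uint8_t dev_id = 0, port_id = *(uint8_t *)arg;
	struct rte_event ev[BURST_SIZE];
	uint16_t i, nb;

	while (!force_quit) {
		/* The scheduler selects events for this lcore; there is
		 * no direct polling of ethdev rx queues. */
		nb = rte_event_dequeue_burst(dev_id, port_id, ev,
					     BURST_SIZE, 0 /* no-wait */);
		for (i = 0; i < nb; i++)
			app_process_event(&ev[i]);
	}
	return 0;
}
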
> > > > > 
> > > > > This patch adds the eventdev specification header file.
> > > > > 
> > > > > Signed-off-by: Jerin Jacob <jerin.ja...@caviumnetworks.com>
> > > > > Acked-by: Bruce Richardson <bruce.richard...@intel.com>
> > > > > ---
> > > > >  MAINTAINERS                        |    3 +
> > > > >  doc/api/doxy-api-index.md          |    1 +
> > > > >  doc/api/doxy-api.conf              |    1 +
> > > > >  lib/librte_eventdev/rte_eventdev.h | 1275
> > > > > ++++++++++++++++++++++++++++++++++++
> > > > >  4 files changed, 1280 insertions(+)
> > > > >  create mode 100644 lib/librte_eventdev/rte_eventdev.h
> > > > 
> > > > <snip>
> > > > 
> > > > > +
> > > > > +/**
> > > > > + * Event device information
> > > > > + */
> > > > > +struct rte_event_dev_info {
> > > > > +     const char *driver_name;        /**< Event driver name */
> > > > > +     struct rte_pci_device *pci_dev; /**< PCI information */
> > > > 
> > > > With 'rte_device' in place (rte_dev.h), should we not have
> > > > 'rte_device' instead of 'rte_pci_device' here?
> > > 
> > > Yes. Please post a patch to fix this. At the time of merging to the
> > > next-eventdev tree this was not the case.
> > 
> > Sure. I'll send a patch regarding this.
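
For reference, I would expect the fix to look along these lines (a
sketch of the direction only, not the actual patch):

#include <rte_dev.h>

/**
 * Event device information
 */
struct rte_event_dev_info {
	const char *driver_name; /**< Event driver name */
	struct rte_device *dev;  /**< Device information */
	/* remaining fields unchanged */
};
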
> > 
> > > 
> > > > 
> > > > > + * The number of events dequeued is the number of scheduler
> > > > > + * contexts held by this port. These contexts are automatically
> > > > > + * released on the next rte_event_dequeue_burst() invocation;
> > > > > + * alternatively, invoking rte_event_enqueue_burst() with the
> > > > > + * RTE_EVENT_OP_RELEASE operation releases the contexts early.
> > > > > + *
> > > > > + * @param dev_id
> > > > > + *   The identifier of the device.
> > > > > + * @param port_id
> > > > > + *   The identifier of the event port.
> > > > > + * @param[out] ev
> > > > > + *   Points to an array of *nb_events* *rte_event* structures to
> > > > > + *   be populated with the dequeued event objects.
> > > > > + * @param nb_events
> > > > > + *   The maximum number of event objects to dequeue, typically the
> > > > > + *   value returned by rte_event_port_dequeue_depth() for this port.
> > > > > + *
> > > > > + * @param timeout_ticks
> > > > > + *   - 0 no-wait, returns immediately if there is no event.
> > > > > + *   - >0 wait for the event. If the device is configured with
> > > > > + *   RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT then this function will
> > > > > + *   wait until an event is available or *timeout_ticks* time
> > > > > + *   has elapsed.
> > > > 
> > > > Just for understanding - is the expectation that
> > > > rte_event_dequeue_burst() will wait till the timeout unless the
> > > > requested number of events (nb_events) has been received on the
> > > > event port?
> > > 
> > > Yes. If you need any change then send an RFC patch for the header
> > > file change.
> 
> "at least one event available"

Looks good to me. If there are no objections then you can send a patch to
update the header file.

> 
> The API should not wait if at least one event is available; in that
> case the timeout value is discarded.
> 
> The *timeout* is valid only until the first event is received (even
> when multiple events are requested); after that, the driver only
> checks for further event availability and returns as many events as
> it can get in its processing loop.
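
Restating that semantic from the caller's side (a sketch; assumes
dev_id/port_id are set up and the device was configured with
RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT):

struct rte_event ev[32];
uint16_t nb;

/* Block for at most timeout_ticks only while zero events are
 * available. Once the first event arrives the timeout no longer
 * applies: the driver returns whatever is immediately available, so
 * nb can be anywhere in 1..32, and is 0 only on timeout. */
nb = rte_event_dequeue_burst(dev_id, port_id, ev, 32, timeout_ticks);
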
> 
> 
> > > 
> > > > 
> > > > > + *   If the device is not configured with
> > > > > + *   RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT then this function
> > > > > + *   will wait until an event is available or for the
> > > > > + *   *dequeue_timeout_ns* ns which was previously supplied to
> > > > > + *   rte_event_dev_configure().
> > > > > + *
> > > > > + * @return
> > > > > + * The number of event objects actually dequeued from the port.
> > > > > + * The return value can be less than the value of the *nb_events*
> > > > > + * parameter when the event port's queue is not full.
> > > > > + *
> > > > > + * @see rte_event_port_dequeue_depth()
> > > > > + */
> > > > > +uint16_t
> > > > > +rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id,
> > > > > +                     struct rte_event ev[],
> > > > > +                     uint16_t nb_events, uint64_t timeout_ticks);
> > > > > +
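
On the scheduler context release point documented above, the early
release path would look roughly like this (assumes the nb_held events
from the last dequeue are still in ev[], and that struct rte_event
carries an op field; a sketch, not normative):

uint16_t i;

/* Release the held contexts now instead of waiting for the next
 * rte_event_dequeue_burst() call. */
for (i = 0; i < nb_held; i++)
	ev[i].op = RTE_EVENT_OP_RELEASE;
rte_event_enqueue_burst(dev_id, port_id, ev, nb_held);
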
> > > > 
> > > > <Snip>
> > > > 
> > > > Regards,
> > > > Nipun
> > 
> 
> 
