> From: Jerin Jacob [mailto:[email protected]]
> Sent: Monday, March 27, 2017 8:45 AM
> To: Van Haaren, Harry <[email protected]>
> Cc: [email protected]; Richardson, Bruce <[email protected]>
> Subject: Re: [PATCH v5 06/20] event/sw: add support for event queues
>
> On Fri, Mar 24, 2017 at 04:53:01PM +0000, Harry van Haaren wrote:
> > From: Bruce Richardson <[email protected]>
> >
> > Add in the data structures for the event queues, and the eventdev
> > functions to create and destroy those queues.
> >
> > Signed-off-by: Bruce Richardson <[email protected]>
> > Signed-off-by: Harry van Haaren <[email protected]>
> > ---
<snip>
> > +static int32_t
> > +qid_init(struct sw_evdev *sw, unsigned int idx, int type,
> > +		const struct rte_event_queue_conf *queue_conf)
> > +{
> > +	unsigned int i;
> > +	int dev_id = sw->data->dev_id;
> > +	int socket_id = sw->data->socket_id;
> > +	char buf[IQ_RING_NAMESIZE];
> > +	struct sw_qid *qid = &sw->qids[idx];
> > +
> > +	for (i = 0; i < SW_IQS_MAX; i++) {
>
> Just for my understanding: are 4 (SW_IQS_MAX) IQ rings created to address
> a different priority for each enqueue operation? What is the significance
> of 4 (SW_IQS_MAX) here?
Yes, each IQ represents a priority level. There is a compile-time define
(SW_IQS_MAX) which sets the number of internal queues, and hence priority
levels, at each queue stage. The default number of priorities is currently 4.
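To illustrate, the loop above creates one IQ ring per priority level. A rough
sketch of the (snipped) loop body is below; iq_ring_create() and the ring
name format are from memory, so treat them as assumptions rather than the
literal patch code:

	for (i = 0; i < SW_IQS_MAX; i++) {
		/* one internal queue (IQ) ring per priority level */
		snprintf(buf, sizeof(buf), "q_%u_iq_%d", idx, i);
		qid->iq[i] = iq_ring_create(buf, socket_id);
		if (qid->iq[i] == NULL) {
			SW_LOG_ERR("IQ ring create failed\n");
			/* real code would also free the IQs created so far */
			return -1;
		}
	}
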
> > +static int
> > +sw_queue_setup(struct rte_eventdev *dev, uint8_t queue_id,
> > +		const struct rte_event_queue_conf *conf)
> > +{
> > +	int type;
> > +
> > +	switch (conf->event_queue_cfg) {
> > +	case RTE_EVENT_QUEUE_CFG_SINGLE_LINK:
> > +		type = SW_SCHED_TYPE_DIRECT;
> > +		break;
>
> event_queue_cfg is a bitmap. It is valid to have
> RTE_EVENT_QUEUE_CFG_SINGLE_LINK | RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
> i.e. an atomic schedule type queue that has only one port linked to
> dequeue the events.
> So in the above context, the switch case is not correct, i.e. such a
> config falls through to the default condition. Right?
> Is this intentional?
>
> If I understand it correctly, based on the use case (group-based event
> pipelining) you have shared in the documentation patch,
> RTE_EVENT_QUEUE_CFG_SINGLE_LINK is used for the last stage (last queue).
> One option: if the SW PMD cannot support the
> RTE_EVENT_QUEUE_CFG_SINGLE_LINK | RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY mode,
> then even though the application sets RTE_EVENT_QUEUE_CFG_SINGLE_LINK |
> RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY, the driver can ignore
> RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY. But I am not sure about the case where
> the application sets RTE_EVENT_QUEUE_CFG_SINGLE_LINK in the middle of
> the pipeline.
>
> Thoughts?
I don't like the idea of the SW PMD ignoring flags for queues: the PMD has no
idea whether a queue is the final or a middle stage of the pipeline, as it is
the application's usage which defines that.

Does anybody have a need for a queue to be both Atomic *and* Single-link? I
understand the current API doesn't prohibit it, but I don't see the actual
use-case in which that combination would be useful. Atomic implies
load-balancing is occurring, while single-link implies there is only one
consuming core. Those seem like opposites to me?

Unless anybody sees value in queues having both, I suggest we update the
documentation to specify that a queue is either load-balanced or single-link,
and that setting both flags will result in -ENOTSUP being returned. (This
check could be added at the eventdev layer if it is consistent for all PMDs.)
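
To make the suggestion concrete, here is a sketch of what the common-layer
check in rte_event_queue_setup() could look like. This assumes the
TYPE_MASK / SINGLE_LINK bit layout in rte_eventdev.h and the semantics
proposed above; it is not tested code:

	/* reject queues that combine single-link with a scheduling type */
	const uint32_t cfg = queue_conf->event_queue_cfg;
	if ((cfg & RTE_EVENT_QUEUE_CFG_SINGLE_LINK) &&
			(cfg & RTE_EVENT_QUEUE_CFG_TYPE_MASK)) {
		RTE_EDEV_LOG_ERR("queue %d: single-link cannot be combined "
				"with a scheduling type", queue_id);
		return -ENOTSUP;
	}
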
Counter-thoughts?
> > +	case RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY:
> > +		type = RTE_SCHED_TYPE_ATOMIC;
> > +		break;
> > +	case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
> > +		type = RTE_SCHED_TYPE_ORDERED;
> > +		break;
> > +	case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
> > +		type = RTE_SCHED_TYPE_PARALLEL;
> > +		break;
> > +	case RTE_EVENT_QUEUE_CFG_ALL_TYPES:
> > +		SW_LOG_ERR("QUEUE_CFG_ALL_TYPES not supported\n");
> > +		return -ENOTSUP;
> > +	default:
> > +		SW_LOG_ERR("Unknown queue type %d requested\n",
> > +				conf->event_queue_cfg);
> > +		return -EINVAL;
> > +	}
> > +
> > +	struct sw_evdev *sw = sw_pmd_priv(dev);
> > +	return qid_init(sw, queue_id, type, conf);
> > +}
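
For what it's worth, and independent of the single-link question above, the
switch could be made bitmap-safe by handling SINGLE_LINK as a flag and masking
the type bits before switching. A rough sketch, again assuming the
TYPE_MASK / SINGLE_LINK layout in rte_eventdev.h rather than the merged code:

	int type;

	if (conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_SINGLE_LINK) {
		type = SW_SCHED_TYPE_DIRECT;
	} else {
		/* switch on the masked scheduling-type bits, not the
		 * raw bitmap */
		switch (conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_TYPE_MASK) {
		case RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY:
			type = RTE_SCHED_TYPE_ATOMIC;
			break;
		case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
			type = RTE_SCHED_TYPE_ORDERED;
			break;
		case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
			type = RTE_SCHED_TYPE_PARALLEL;
			break;
		default:
			/* covers ALL_TYPES (masked value 0) too */
			SW_LOG_ERR("Unsupported queue type %d requested\n",
					conf->event_queue_cfg);
			return -ENOTSUP;
		}
	}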