Hi Jerin, my responses below:

>+# Is this feature or limitation?

This is a new feature: it enables enqueuing to the PMD in any order when the
underlying hardware device requires enqueues in strict dequeue order.

>+# What is the use case for this feature?

This is needed by applications that process events in batches based on their
flow ids. The received burst is sorted based on flow ids.

>+# If application don't care about ORDER, they can use
>RTE_SCHED_TYPE_PARALLEL. Right?

This concerns the ordering between enqueue and dequeue, not ordering across
cores.

>+# Can you share the feature in the context of the below text in
>specification?

Since the feature is not across cores, the context below does not apply.
>+----------------
>+/* Scheduler type definitions */
>+#define RTE_SCHED_TYPE_ORDERED 0
>+/**< Ordered scheduling
>+ *
>+ * Events from an ordered flow of an event queue can be scheduled to multiple
>+ * ports for concurrent processing while maintaining the original event order,
>+ * i.e. the order in which they were first enqueued to that queue.
>+ * This scheme allows events pertaining to the same, potentially large, flow to
>+ * be processed in parallel on multiple cores without incurring any
>+ * application-level order restoration logic overhead.
>+ *
>+ * After events are dequeued from a set of ports, as those events are re-enqueued
>+ * to another queue (with the op field set to @ref RTE_EVENT_OP_FORWARD), the event
>+ * device restores the original event order - including events returned from all
>+ * ports in the set - before the events are placed on the destination queue,
>+ * for subsequent scheduling to ports
>+ *
>+ * @see rte_event_port_setup()
>+ */
>+
> /** Event port configuration structure */
> struct rte_event_port_conf
> {
>	int32_t new_event_threshold;
> --
> 2.25.1