On 2024-02-09 09:43, Jerin Jacob wrote:
On Thu, Feb 8, 2024 at 3:20 PM Mattias Rönnblom <hof...@lysator.liu.se> wrote:

On 2024-02-07 11:14, Jerin Jacob wrote:
On Fri, Feb 2, 2024 at 7:29 PM Bruce Richardson
<bruce.richard...@intel.com> wrote:

Make some textual improvements to the introduction to eventdev and event
devices in the eventdev header file. This text appears in the doxygen
output for the header file, and introduces the key concepts, for
example: events, event devices, queues, ports and scheduling.

This patch makes the following improvements:
* small textual fixups, e.g. correcting use of singular/plural
* rewrites of some sentences to improve clarity
* using doxygen markdown to split the whole large block up into
    sections, thereby making it easier to read.

No large-scale changes are made, and blocks are not reordered.

Signed-off-by: Bruce Richardson <bruce.richard...@intel.com>

Thanks Bruce. While you are cleaning up, please add the following or a
similar change to fix Doxygen not properly parsing struct
rte_event_vector, i.e. its members are currently coming out as global
variables in the HTML files.

l[dpdk.org] $ git diff
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index e31c927905..ce4a195a8f 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1309,9 +1309,9 @@ struct rte_event_vector {
                   */
                  struct {
                          uint16_t port;
-                       /* Ethernet device port id. */
+                       /**< Ethernet device port id. */
                          uint16_t queue;
-                       /* Ethernet device queue id. */
+                       /**< Ethernet device queue id. */
                  };
          };
          /**< Union to hold common attributes of the vector array. */
@@ -1340,7 +1340,11 @@ struct rte_event_vector {
           * vector array can be an array of mbufs or pointers or opaque u64
           * values.
           */
+#ifndef __DOXYGEN__
   } __rte_aligned(16);
+#else
+};
+#endif

   /* Scheduler type definitions */
   #define RTE_SCHED_TYPE_ORDERED          0


---
V3: reworked following feedback from Mattias
---
   lib/eventdev/rte_eventdev.h | 132 ++++++++++++++++++++++--------------
   1 file changed, 81 insertions(+), 51 deletions(-)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index ec9b02455d..a741832e8e 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -12,25 +12,33 @@
    * @file
    *
    * RTE Event Device API
+ * ====================
    *
- * In a polling model, lcores poll ethdev ports and associated rx queues
- * directly to look for packet. In an event driven model, by contrast, lcores
- * call the scheduler that selects packets for them based on programmer
- * specified criteria. Eventdev library adds support for event driven
- * programming model, which offer applications automatic multicore scaling,
- * dynamic load balancing, pipelining, packet ingress order maintenance and
- * synchronization services to simplify application packet processing.
+ * In a traditional run-to-completion application model, lcores pick up packets

Can we keep it as poll mode instead of run-to-completion, as event mode
also supports run-to-completion by having dequeue() and then Tx.


A "traditional" DPDK app is both polling and run-to-completion. You
could always add "polling" somewhere, but "run-to-completion" in that
context serves a purpose, imo.

Yeah. Some event devices can actually sleep to save power if a packet is
not present (using WFE in the arm64 world).
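
For example (just a sketch, not from the patch; the device/port ids and
the 100 us budget are made-up values), a worker can ask the PMD to wait
for events instead of spinning:

#include <rte_eventdev.h>

/* Sketch: dequeue with a wait budget instead of busy-polling.
 * Only meaningful on devices that support dequeue timeouts. */
static uint16_t
wait_for_events(uint8_t dev_id, uint8_t port_id,
                struct rte_event *ev, uint16_t nb)
{
        uint64_t ticks = 0;

        /* Convert a 100 us budget (made-up value) to device ticks. */
        rte_event_dequeue_timeout_ticks(dev_id, 100 * 1000, &ticks);

        /* With a non-zero timeout, a supporting PMD may sleep (e.g.
         * WFE on arm64) until events arrive, rather than spin. */
        return rte_event_dequeue_burst(dev_id, port_id, ev, nb, ticks);
}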


Sure, and I believe you can do that with certain Ethdevs as well. You can also use interrupts. So polling/energy-efficient polling (wfe/umwait)/interrupts aren't really a differentiator between Eventdev and "raw" Ethdev.

I think we can be more specific then, like:

In a traditional run-to-completion application model where packets are
dequeued from NIC RX queues, .......


"In a traditional DPDK application model, the application polls Ethdev port RX queues to look for work, and processing is done in a run-to-completion manner, after which the packets are transmitted on a Ethdev TX queue. Load is distributed by statically assigning ports and queues to lcores, and NIC receive-side scaling (RSS, or similar) is employed to distribute network flows (and thus work) on the same port across multiple RX queues."

I don't know if that's too much.
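
To make that concrete, the "traditional" model above is roughly the
following loop (a minimal sketch; the port/queue ids and
process_packet() are placeholders, and partial-Tx retry and mbuf
freeing are omitted):

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Placeholder for whatever work the application does per packet. */
static void
process_packet(struct rte_mbuf *m)
{
        RTE_SET_USED(m);
}

/* Sketch of the "traditional" model: this lcore statically owns
 * (port, queue), busy-polls it, and runs each packet to completion. */
static void
poll_mode_loop(uint16_t port, uint16_t queue)
{
        struct rte_mbuf *pkts[32];

        for (;;) {
                uint16_t nb = rte_eth_rx_burst(port, queue, pkts,
                                               RTE_DIM(pkts));
                for (uint16_t i = 0; i < nb; i++)
                        process_packet(pkts[i]);
                /* Run-to-completion: the same lcore transmits the burst. */
                if (nb > 0)
                        rte_eth_tx_burst(port, queue, pkts, nb);
        }
}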



A single-stage eventdev-based pipeline will also process packets in a
run-to-completion fashion. In such a scenario, the difference between
eventdev and the "traditional" model lies in the (ingress-only) load
balancing mechanism used (which the above note on the "traditional" use
of RSS indicates).
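
Something like this, for illustration (again just a sketch; the ids and
process_packet() are placeholders as above); note that only how work
arrives differs from the traditional loop:

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_eventdev.h>

/* Same per-packet placeholder as in the previous sketch. */
static void
process_packet(struct rte_mbuf *m)
{
        RTE_SET_USED(m);
}

/* Sketch of a single-stage eventdev pipeline: the event scheduler,
 * not RSS plus static queue assignment, decides which worker gets
 * which events; processing is still run-to-completion. */
static void
single_stage_worker(uint8_t evdev, uint8_t evport,
                    uint16_t ethport, uint16_t txq)
{
        struct rte_event ev[32];

        for (;;) {
                uint16_t nb = rte_event_dequeue_burst(evdev, evport, ev,
                                                      RTE_DIM(ev), 0);
                for (uint16_t i = 0; i < nb; i++) {
                        process_packet(ev[i].mbuf);
                        /* Partial-Tx retry omitted for brevity. */
                        rte_eth_tx_burst(ethport, txq, &ev[i].mbuf, 1);
                }
        }
}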
