[PATCH] doc: add new driver guidelines

2024-08-13 Thread Nandini Persad
This document was created to assist contributors in
creating DPDK drivers, providing suggestions and
guidelines on how to upstream effectively.

Signed-off-by: Ferruh Yigit ,
Thomas Monjalon ,
Nandini Persad 
---
 0001-doc-add-new-driver-guidelines.patch | 206 +++
 doc/guides/contributing/index.rst|   1 +
 2 files changed, 207 insertions(+)
 create mode 100644 0001-doc-add-new-driver-guidelines.patch

diff --git a/0001-doc-add-new-driver-guidelines.patch 
b/0001-doc-add-new-driver-guidelines.patch
new file mode 100644
index 00..cba94a3789
--- /dev/null
+++ b/0001-doc-add-new-driver-guidelines.patch
@@ -0,0 +1,206 @@
++.. SPDX-License-Identifier: BSD-3-Clause
++   Copyright 2024 The DPDK contributors
++
++
++Upstreaming New DPDK Drivers Guide
++==================================
++
++The DPDK project continuously grows its ecosystem by adding support for new
++devices.
++This document is designed to assist contributors in creating DPDK
++drivers, also known as Poll Mode Drivers (PMDs).
++
++By having public support for a device, we can ensure accessibility across 
various
++operating systems and guarantee community maintenance in future releases.
++If a new device is similar to a device already supported by an existing 
driver,
++it is more efficient to update the existing driver.
++
++Here are our best practice recommendations for creating a new driver.
++
++
++Early Engagement with the Community
++-----------------------------------
++
++When creating a new driver, we highly recommend engaging with the DPDK
++community early instead of waiting for the work to mature.
++
++These public discussions help align development of your driver with DPDK 
expectations.
++You may submit a roadmap before the release to inform the community of
++your plans. Additionally, sending a Request for Comments (RFC) early in
++the release cycle, or even during the prior release, is advisable.
++
++DPDK is mainly consumed via Long Term Support (LTS) releases.
++It is common to target a new PMD to an LTS release. For this, it is
++suggested to start upstreaming at least one release before an LTS release.
++
++
++Progressive Work
++----------------
++
++To continually progress your work, we recommend planning for incremental
++upstreaming across multiple patch series or releases.
++
++It's important to prioritize the quality of the driver over upstreaming it
++in a single release or a single patch series.
++
++
++Finalizing
++----------
++
++Once the driver has been upstreamed, the author has
++a responsibility to the community to maintain it.
++
++This includes the public test report. Authors must send a public
++test report after the first upstreaming of the PMD. The same
++public test procedure may be reproduced regularly per release.
++
++After the PMD is upstreamed, the author should send a patch
++to update the website with the name of the new PMD and supported devices
++via the DPDK mailing list.
++
++For more information about the role of maintainers, see :doc:`patches`.
++
++
++
++Splitting into Patches
++----------------------
++
++We recommend that drivers are split into patches, so that each patch 
represents
++a single feature. If the driver code is already developed, it may be 
challenging
++to split. However, there are many benefits to doing so.
++
++Splitting patches makes it easier to understand a feature and clarifies the
++list of components/files that compose that specific feature.
++
++It also makes it possible to trace from the source code to the feature
++it implements, and helps users understand the reasoning and intention
++behind the implementation. This kind of tracing is regularly required
++for defect resolution and refactoring.
++
++Another benefit of splitting the codebase per feature is that it highlights
++unnecessary or irrelevant code, as any code not belonging to any specific
++feature becomes obvious.
++
++Git bisect is also more useful if patches are split per feature.
++
++The split should focus on logical features
++rather than file-based divisions.
++
++Each patch in the series must compile without errors
++and should maintain functionality.
++
++Enable the build as early as possible within the series
++to facilitate continuous integration and testing.
++This approach ensures a clear and manageable development process.
++
++We suggest splitting patches following this approach:
++
++* Each patch should be organized logically as a new feature.
++* Run test tools per patch (see :ref:`tool_list`).
++* Update relevant documentation and .ini file with each patch.
++
++
++The suggested order for the patch series is as follows.
++
++The first patch should contain the driver's skeleton, which should include:
++
++* Maintainer's file update
++* Driver documentation
++* The document must link to the official product documentation web page
++* The new document should be added into the index (`doc/guides/index.rst`)
++* Initial .ini file
++* Release no

[PATCH 0/9] reword in prog guide

2024-05-13 Thread Nandini Persad
I reviewed and made small syntax and grammatical edits
to sections in the programmer's guide and the design section of
the contributor's guidelines.

Nandini Persad (9):
  doc: reword design section in contributors guidelines
  doc: reword pmd section in prog guide
  doc: reword argparse section in prog guide
  doc: reword service cores section in prog guide
  doc: reword trace library section in prog guide
  doc: reword log library section in prog guide
  doc: reword cmdline section in prog guide
  doc: reword stack library section in prog guide
  doc: reword rcu library section in prog guide

 .mailmap|   1 +
 doc/guides/contributing/design.rst  |  79 ++---
 doc/guides/linux_gsg/sys_reqs.rst   |   2 +-
 doc/guides/prog_guide/argparse_lib.rst  |  72 ++-
 doc/guides/prog_guide/cmdline.rst   |  56 -
 doc/guides/prog_guide/log_lib.rst   |  32 ++---
 doc/guides/prog_guide/poll_mode_drv.rst | 151 
 doc/guides/prog_guide/rcu_lib.rst   |  77 ++--
 doc/guides/prog_guide/service_cores.rst |  32 ++---
 doc/guides/prog_guide/stack_lib.rst |   4 +-
 doc/guides/prog_guide/trace_lib.rst |  72 +--
 11 files changed, 284 insertions(+), 294 deletions(-)

-- 
2.34.1



[PATCH 1/9] doc: reword design section in contributors guidelines

2024-05-13 Thread Nandini Persad
minor editing for grammar and syntax of design section

Signed-off-by: Nandini Persad 
---
 .mailmap   |  1 +
 doc/guides/contributing/design.rst | 79 ++
 doc/guides/linux_gsg/sys_reqs.rst  |  2 +-
 3 files changed, 38 insertions(+), 44 deletions(-)

diff --git a/.mailmap b/.mailmap
index 66ebc20666..7d4929c5d1 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1002,6 +1002,7 @@ Naga Suresh Somarowthu 
 Nalla Pradeep 
 Na Na 
 Nan Chen 
+Nandini Persad 
 Nannan Lu 
 Nan Zhou 
 Narcisa Vasile   

diff --git a/doc/guides/contributing/design.rst 
b/doc/guides/contributing/design.rst
index b724177ba1..921578aec5 100644
--- a/doc/guides/contributing/design.rst
+++ b/doc/guides/contributing/design.rst
@@ -8,22 +8,26 @@ Design
 Environment or Architecture-specific Sources
 
 
-In DPDK and DPDK applications, some code is specific to an architecture (i686, 
x86_64) or to an executive environment (freebsd or linux) and so on.
-As far as is possible, all such instances of architecture or env-specific code 
should be provided via standard APIs in the EAL.
+In DPDK and DPDK applications, some code is architecture-specific (i686, 
x86_64) or environment-specific (FreeBSD or Linux, etc.).
+When feasible, such instances of architecture or env-specific code should be 
provided via standard APIs in the EAL.
 
-By convention, a file is common if it is not located in a directory indicating 
that it is specific.
-For instance, a file located in a subdir of "x86_64" directory is specific to 
this architecture.
+By convention, a file is specific if it is located in a directory named for an 
architecture or environment. Otherwise, it is common.
+
+For example:
+
+A file located in a subdir of "x86_64" directory is specific to this 
architecture.
 A file located in a subdir of "linux" is specific to this execution 
environment.
 
 .. note::
 
Code in DPDK libraries and applications should be generic.
-   The correct location for architecture or executive environment specific 
code is in the EAL.
+   The correct location for architecture or executive environment-specific 
code is in the EAL.
+
+When necessary, there are several ways to handle specific code:
 
-When absolutely necessary, there are several ways to handle specific code:
 
-* Use a ``#ifdef`` with a build definition macro in the C code.
-  This can be done when the differences are small and they can be embedded in 
the same C file:
+* When the differences are small and they can be embedded in the same C file, 
use a ``#ifdef`` with a build definition macro in the C code.
+
 
   .. code-block:: c
 
@@ -33,9 +37,9 @@ When absolutely necessary, there are several ways to handle 
specific code:
  titi();
  #endif
 
-* Use build definition macros and conditions in the Meson build file. This is 
done when the differences are more significant.
-  In this case, the code is split into two separate files that are 
architecture or environment specific.
-  This should only apply inside the EAL library.
+* When the differences are more significant, use build definition macros and 
conditions in the Meson build file.
+  In this case, the code is split into two separate files that are architecture 
or environment specific.
+  This should only apply inside the EAL library.
 
 Per Architecture Sources
 
@@ -43,7 +47,7 @@ Per Architecture Sources
 The following macro options can be used:
 
 * ``RTE_ARCH`` is a string that contains the name of the architecture.
-* ``RTE_ARCH_I686``, ``RTE_ARCH_X86_64``, ``RTE_ARCH_X86_X32``, 
``RTE_ARCH_PPC_64``, ``RTE_ARCH_RISCV``, ``RTE_ARCH_LOONGARCH``, 
``RTE_ARCH_ARM``, ``RTE_ARCH_ARMv7`` or ``RTE_ARCH_ARM64`` are defined only if 
we are building for those architectures.
+* ``RTE_ARCH_I686``, ``RTE_ARCH_X86_64``, ``RTE_ARCH_X86_X32``, 
``RTE_ARCH_PPC_64``, ``RTE_ARCH_RISCV``, ``RTE_ARCH_LOONGARCH``, 
``RTE_ARCH_ARM``, ``RTE_ARCH_ARMv7`` or ``RTE_ARCH_ARM64`` are defined when 
building for these architectures.
 
 Per Execution Environment Sources
 ~
@@ -51,30 +55,21 @@ Per Execution Environment Sources
 The following macro options can be used:
 
 * ``RTE_EXEC_ENV`` is a string that contains the name of the executive 
environment.
-* ``RTE_EXEC_ENV_FREEBSD``, ``RTE_EXEC_ENV_LINUX`` or ``RTE_EXEC_ENV_WINDOWS`` 
are defined only if we are building for this execution environment.
+* ``RTE_EXEC_ENV_FREEBSD``, ``RTE_EXEC_ENV_LINUX`` or ``RTE_EXEC_ENV_WINDOWS`` 
are defined only when building for this execution environment.
 
 Mbuf features
 -
 
-The ``rte_mbuf`` structure must be kept small (128 bytes).
-
-In order to add new features without wasting buffer space for unused features,
-some fields and flags can be registered dynamically in a shared area.
-The "dynamic" mbuf area is the default choice for the new features.
-
-The "dynamic" area is eating the remaining space in mbuf,
-an

[PATCH 6/9] doc: reword log library section in prog guide

2024-05-13 Thread Nandini Persad
minor changes made for syntax in the log library section and 7.1
section of the programmer's guide. A couple sentences at the end of the
trace library section were also edited.

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/cmdline.rst   | 24 +++---
 doc/guides/prog_guide/log_lib.rst   | 32 ++---
 doc/guides/prog_guide/trace_lib.rst | 22 ++--
 3 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/doc/guides/prog_guide/cmdline.rst 
b/doc/guides/prog_guide/cmdline.rst
index e20281ceb5..6b10ab6c99 100644
--- a/doc/guides/prog_guide/cmdline.rst
+++ b/doc/guides/prog_guide/cmdline.rst
@@ -5,8 +5,8 @@ Command-line Library
 
 
 Since its earliest versions, DPDK has included a command-line library -
-primarily for internal use by, for example, ``dpdk-testpmd`` and the 
``dpdk-test`` binaries,
-but the library is also exported on install and can be used by any end 
application.
+primarily for internal use by, for example, ``dpdk-testpmd`` and the 
``dpdk-test`` binaries.
+However, the library is also exported on install and can be used by any end 
application.
 This chapter covers the basics of the command-line library and how to use it 
in an application.
 
 Library Features
@@ -18,7 +18,7 @@ The DPDK command-line library supports the following features:
 
 * Ability to read and process commands taken from an input file, e.g. startup 
script
 
-* Parameterized commands able to take multiple parameters with different 
datatypes:
+* Parameterized commands that can take multiple parameters with different 
datatypes:
 
* Strings
* Signed/unsigned 16/32/64-bit integers
@@ -56,7 +56,7 @@ Creating a Command List File
 The ``dpdk-cmdline-gen.py`` script takes as input a list of commands to be 
used by the application.
 While these can be piped to it via standard input, using a list file is 
probably best.
 
-The format of the list file must be:
+The format of the list file must follow these requirements:
 
 * Comment lines start with '#' as first non-whitespace character
 
@@ -75,7 +75,7 @@ The format of the list file must be:
   * ``dst_ip6``
 
 * Variable fields, which take their values from a list of options,
-  have the comma-separated option list placed in braces, rather than a the 
type name.
+  have the comma-separated option list placed in braces, rather than the 
type name.
   For example,
 
   * ``<(rx,tx,rxtx)>mode``
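Putting these rules together, a hypothetical list file (a sketch, not taken from the DPDK tree; the command names are invented for illustration) could look like:

```
# commands.list -- hypothetical input for dpdk-cmdline-gen.py
quit
show port stats <UINT16>port_id
set <(rx,tx,rxtx)>mode
```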
@@ -127,13 +127,13 @@ and the callback stubs will be written to an equivalent 
".c" file.
 Providing the Function Callbacks
 
 
-As discussed above, the script output is a header file, containing structure 
definitions,
-but the callback functions themselves obviously have to be provided by the 
user.
-These callback functions must be provided as non-static functions in a C file,
+As discussed above, the script output is a header file containing structure 
definitions,
+but the callback functions must be provided by the user.
+These callback functions must be provided as non-static functions in a C file
 and named ``cmd__parsed``.
 The function prototypes can be seen in the generated output header.
 
-The "cmdname" part of the function name is built up by combining the 
non-variable initial tokens in the command.
+The "cmdname" part of the function name is built by combining the non-variable 
initial tokens in the command.
 So, given the commands in our worked example below: ``quit`` and ``show port 
stats ``,
 the callback functions would be:
 
@@ -151,11 +151,11 @@ the callback functions would be:
 ...
}
 
-These functions must be provided by the developer, but, as stated above,
+These functions must be provided by the developer. However, as stated above,
 stub functions may be generated by the script automatically using the 
``--stubs`` parameter.
 
 The same "cmdname" stem is used in the naming of the generated structures too.
-To get at the results structure for each command above,
+To get to the results structure for each command above,
 the ``parsed_result`` parameter should be cast to ``struct cmd_quit_result``
 or ``struct cmd_show_port_stats_result`` respectively.
 
@@ -179,7 +179,7 @@ To integrate the script output with the application,
 we must ``#include`` the generated header into our applications C file,
 and then have the command-line created via either ``cmdline_new`` or 
``cmdline_stdin_new``.
 The first parameter to the function call should be the context array in the 
generated header file,
-``ctx`` by default. (Modifiable via script parameter).
+``ctx`` by default (modifiable via a script parameter).
 
 The callback functions may be in this same file, or in a separate one -
 they just need to be available to the linker at build-time.
diff --git a/doc/guides/prog_guide/log_lib.rst 
b/doc/guides/prog_guide/log_lib.rst
index ff9d1b54a2..05f032dfad 100644
--- a/

[PATCH 3/9] doc: reword argparse section in prog guide

2024-05-13 Thread Nandini Persad
made small edits to sections 6.1 and 6.2 intro

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/argparse_lib.rst | 72 +-
 1 file changed, 35 insertions(+), 37 deletions(-)

diff --git a/doc/guides/prog_guide/argparse_lib.rst 
b/doc/guides/prog_guide/argparse_lib.rst
index a6ac11b1c0..a2af7d49e9 100644
--- a/doc/guides/prog_guide/argparse_lib.rst
+++ b/doc/guides/prog_guide/argparse_lib.rst
@@ -4,22 +4,21 @@
 Argparse Library
 
 
-The argparse library provides argument parsing functionality,
-this library makes it easy to write user-friendly command-line program.
+The argparse library provides argument parsing functionality and makes it easy 
to write user-friendly command-line programs.
 
 Features and Capabilities
 -
 
-- Support parsing optional argument (which could take with no-value,
-  required-value and optional-value).
+- Supports parsing of optional arguments (which can contain no-value,
+  required-value and optional-value).
 
-- Support parsing positional argument (which must take with required-value).
+- Supports parsing of positional arguments (which must contain 
required-value).
 
-- Support automatic generate usage information.
+- Supports automatic generation of usage information.
 
-- Support issue errors when provide with invalid arguments.
+- Reports errors when an invalid argument is provided.
 
-- Support parsing argument by two ways:
+- Supports parsing arguments in two ways:
 
   #. autosave: used for parsing known value types;
   #. callback: will invoke user callback to parse.
@@ -27,7 +26,7 @@ Features and Capabilities
 Usage Guide
 ---
 
-The following code demonstrates how to use:
+The following code demonstrates how to use the library:
 
 .. code-block:: C
 
@@ -89,12 +88,12 @@ The following code demonstrates how to use:
   ...
}
 
-In this example, the arguments which start with a hyphen (-) are optional
-arguments (they're "--aaa"/"--bbb"/"--ccc"/"--ddd"/"--eee"/"--fff"); and the
-arguments which don't start with a hyphen (-) are positional arguments
-(they're "ooo"/"ppp").
+In this example, the arguments that start with a hyphen (-) are optional
+arguments ("--aaa"/"--bbb"/"--ccc"/"--ddd"/"--eee"/"--fff").
+The arguments that do not start with a hyphen (-) are positional arguments
+("ooo"/"ppp").
 
-Every argument must be set whether to carry a value (one of
+Every argument must set whether it carries a value (one of
 ``RTE_ARGPARSE_ARG_NO_VALUE``, ``RTE_ARGPARSE_ARG_REQUIRED_VALUE`` and
 ``RTE_ARGPARSE_ARG_OPTIONAL_VALUE``).
 
@@ -105,26 +104,26 @@ Every argument must be set whether to carry a value (one 
of
 User Input Requirements
 ~~~
 
-For optional arguments which take no-value,
+For optional arguments which have no-value,
 the following mode is supported (take above "--aaa" as an example):
 
 - The single mode: "--aaa" or "-a".
 
-For optional arguments which take required-value,
+For optional arguments which have required-value,
 the following two modes are supported (take above "--bbb" as an example):
 
 - The kv mode: "--bbb=1234" or "-b=1234".
 
 - The split mode: "--bbb 1234" or "-b 1234".
 
-For optional arguments which take optional-value,
+For optional arguments which have optional-value,
 the following two modes are supported (take above "--ccc" as an example):
 
 - The single mode: "--ccc" or "-c".
 
 - The kv mode: "--ccc=123" or "-c=123".
 
-For positional arguments which must take required-value,
+For positional arguments which must have required-value,
 their values are parsed in the order defined.
 
 .. note::
@@ -132,15 +131,15 @@ their values are parsing in the order defined.
The compact mode is not supported.
Take above "-a" and "-d" as an example, don't support "-ad" input.
 
-Parsing by autosave way
+Parsing by the Autosave Method
 ~~~
 
-Argument of known value type (e.g. ``RTE_ARGPARSE_ARG_VALUE_INT``)
-could be parsed using this autosave way,
-and its result will save in the ``val_saver`` field.
+Arguments of a known value type (e.g. ``RTE_ARGPARSE_ARG_VALUE_INT``)
+can be parsed using the autosave method.
+The result will be saved in the ``val_saver`` field.
 
 In the above example, the arguments "--aaa"/"--bbb"/"--ccc" and "ooo"
-both use this way, the parsing is as follows:
+all use this method. The parsing is as follows:
 
 - For argument "--aaa", it is configured as no-value,
   so the ``aaa_val`` will be set to ``val_set`` field
@@ -150,28 +149,27 @@ both use this way, the parsing is as follows:
   so the ``bbb_val`` wi

[PATCH 2/9] doc: reword pmd section in prog guide

2024-05-13 Thread Nandini Persad
made small edits to section 15.1 and 15.5

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/poll_mode_drv.rst | 151 
 1 file changed, 73 insertions(+), 78 deletions(-)

diff --git a/doc/guides/prog_guide/poll_mode_drv.rst 
b/doc/guides/prog_guide/poll_mode_drv.rst
index 5008b41c60..360af20900 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -6,25 +6,24 @@
 Poll Mode Driver
 
 
-The DPDK includes 1 Gigabit, 10 Gigabit and 40 Gigabit and para virtualized 
virtio Poll Mode Drivers.
+The DPDK includes 1 Gigabit, 10 Gigabit, 40 Gigabit and para-virtualized 
virtio Poll Mode Drivers.
 
-A Poll Mode Driver (PMD) consists of APIs, provided through the BSD driver 
running in user space,
-to configure the devices and their respective queues.
+A Poll Mode Driver (PMD) consists of APIs (provided through the BSD driver 
running in user space) to configure the devices and their respective queues.
 In addition, a PMD accesses the RX and TX descriptors directly without any 
interrupts
 (with the exception of Link Status Change interrupts) to quickly receive,
 process and deliver packets in the user's application.
-This section describes the requirements of the PMDs,
-their global design principles and proposes a high-level architecture and a 
generic external API for the Ethernet PMDs.
+This section describes the requirements of the PMDs and
+their global design principles. It also proposes a high-level architecture and 
a generic external API for the Ethernet PMDs.
 
 Requirements and Assumptions
 
 
 The DPDK environment for packet processing applications allows for two models, 
run-to-completion and pipe-line:
 
-*   In the *run-to-completion*  model, a specific port's RX descriptor ring is 
polled for packets through an API.
-Packets are then processed on the same core and placed on a port's TX 
descriptor ring through an API for transmission.
+*   In the *run-to-completion*  model, a specific port's Rx descriptor ring is 
polled for packets through an API.
+Packets are then processed on the same core and placed on a port's Tx 
descriptor ring through an API for transmission.
 
-*   In the *pipe-line*  model, one core polls one or more port's RX descriptor 
ring through an API.
+*   In the *pipe-line*  model, one core polls one or more port's Rx descriptor 
ring through an API.
 Packets are received and passed to another core via a ring.
 The other core continues to process the packet which then may be placed on 
a port's TX descriptor ring through an API for transmission.
 
@@ -50,14 +49,14 @@ The loop for packet processing includes the following steps:
 
 *   Retrieve the received packet from the packet queue
 
-*   Process the received packet, up to its retransmission if forwarded
+*   Process the received packet up to its retransmission if forwarded
 
 To avoid any unnecessary interrupt processing overhead, the execution 
environment must not use any asynchronous notification mechanisms.
 Whenever needed and appropriate, asynchronous communication should be 
introduced as much as possible through the use of rings.
 
 Avoiding lock contention is a key issue in a multi-core environment.
-To address this issue, PMDs are designed to work with per-core private 
resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if 
the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
+To address this issue, PMDs are designed to work with per-core private 
resources as much as possible.
+For example, a PMD maintains a separate transmit queue per core, per port, if 
the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
 In the same way, every receive queue of a port is assigned to and polled by a 
single logical core (lcore).
 
 To comply with Non-Uniform Memory Access (NUMA), memory management is designed 
to assign to each logical core
@@ -101,9 +100,9 @@ However, an rte_eth_tx_burst function is effectively 
implemented by the PMD to m
 
 *   Apply burst-oriented software optimization techniques to remove operations 
that would otherwise be unavoidable, such as ring index wrap back management.
 
-Burst-oriented functions are also introduced via the API for services that are 
intensively used by the PMD.
+Burst-oriented functions are also introduced via the API for services that are 
extensively used by the PMD.
 This applies in particular to buffer allocators used to populate NIC rings, 
which provide functions to allocate/free several buffers at a time.
-For example, an mbuf_multiple_alloc function returning an array of pointers to 
rte_mbuf buffers which speeds up the receive poll function of the PMD when
+An example of this would be an mbuf_multiple_alloc function returning an array 
of pointers to rte_mbuf buffers which speeds up the receive poll function of 
the PMD when
 replenis

[PATCH 8/9] doc: reword stack library section in prog guide

2024-05-13 Thread Nandini Persad
minor change made to wording

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/stack_lib.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/guides/prog_guide/stack_lib.rst 
b/doc/guides/prog_guide/stack_lib.rst
index 975d3ad796..a51df60d13 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -44,8 +44,8 @@ Lock-free Stack
 
 The lock-free stack consists of a linked list of elements, each containing a
 data pointer and a next pointer, and an atomic stack depth counter. The
-lock-free property means that multiple threads can push and pop simultaneously,
-and one thread being preempted/delayed in a push or pop operation will not
+lock-free property means that multiple threads can push and pop simultaneously.
+One thread being preempted/delayed in a push or pop operation will not
 impede the forward progress of any other thread.
 
 The lock-free push operation enqueues a linked list of pointers by pointing the
-- 
2.34.1



[PATCH 5/9] doc: reword trace library section in prog guide

2024-05-13 Thread Nandini Persad
made minor syntax edits to sect 9.1-9.7 of prog guide

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/trace_lib.rst | 50 ++---
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/doc/guides/prog_guide/trace_lib.rst 
b/doc/guides/prog_guide/trace_lib.rst
index d9b17abe90..e2983017d8 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -14,29 +14,29 @@ When recording, specific instrumentation points placed in 
the software source
 code generate events that are saved on a giant tape: a trace file.
 The trace file then later can be opened in *trace viewers* to visualize and
 analyze the trace events with timestamps and multi-core views.
-Such a mechanism will be useful for resolving a wide range of problems such as
-multi-core synchronization issues, latency measurements, finding out the
-post analysis information like CPU idle time, etc that would otherwise be
-extremely challenging to get.
+This mechanism will be useful for resolving a wide range of problems such as
+multi-core synchronization issues, latency measurements, and finding
+post analysis information like CPU idle time, etc., that would otherwise be
+extremely challenging to gather.
 
 Tracing is often compared to *logging*. However, tracers and loggers are two
-different tools, serving two different purposes.
-Tracers are designed to record much lower-level events that occur much more
+different tools serving two different purposes.
+Tracers are designed to record much lower-level events that occur more
 frequently than log messages, often in the range of thousands per second, with
 very little execution overhead.
 Logging is more appropriate for a very high-level analysis of less frequent
 events: user accesses, exceptional conditions (errors and warnings, for
-example), database transactions, instant messaging communications, and such.
+example), database transactions, instant messaging communications, etc.
 Simply put, logging is one of the many use cases that can be satisfied with
 tracing.
 
 DPDK tracing library features
 -
 
-- A framework to add tracepoints in control and fast path APIs with minimum
+- Provides a framework to add tracepoints in control and fast path APIs with 
minimum
   impact on performance.
   Typical trace overhead is ~20 cycles and instrumentation overhead is 1 cycle.
-- Enable and disable the tracepoints at runtime.
+- Enable and disable tracepoints at runtime.
 - Save the trace buffer to the filesystem at any point in time.
 - Support ``overwrite`` and ``discard`` trace mode operations.
 - String-based tracepoint object lookup.
@@ -47,7 +47,7 @@ DPDK tracing library features
   For detailed information, refer to
   `Common Trace Format <https://diamon.org/ctf/>`_.
 
-How to add a tracepoint?
+How to Add a Tracepoint
 
 
 This section steps you through the details of adding a simple tracepoint.
@@ -67,14 +67,14 @@ Create the tracepoint header file
 rte_trace_point_emit_string(str);
  )
 
-The above macro creates ``app_trace_string`` tracepoint.
+The above macro creates the ``app_trace_string`` tracepoint.
 The user can choose any name for the tracepoint.
 However, when adding a tracepoint in the DPDK library, the
 ``rte__trace_[_]`` naming convention must be
 followed.
 The examples are ``rte_eal_trace_generic_str``, ``rte_mempool_trace_create``.
 
-The ``RTE_TRACE_POINT`` macro expands from above definition as the following
+The ``RTE_TRACE_POINT`` macro expands from the above definition as the 
following
 function template:
 
 .. code-block:: c
@@ -91,7 +91,7 @@ The consumer of this tracepoint can invoke
 ``app_trace_string(const char *str)`` to emit the trace event to the trace
 buffer.
 
-Register the tracepoint
+Register the Tracepoint
 ~~~
 
 .. code-block:: c
@@ -122,40 +122,40 @@ convention.
 
The ``RTE_TRACE_POINT_REGISTER`` defines the placeholder for the
``rte_trace_point_t`` tracepoint object.
-   For generic tracepoint or for tracepoint used in public header files,
+   For a generic tracepoint or for the tracepoint used in public header files,
the user must export a ``__`` symbol
in the library ``.map`` file for this tracepoint
-   to be used out of the library, in shared builds.
+   to be used out of the library in shared builds.
For example, ``__app_trace_string`` will be the exported symbol in the
above example.
 
-Fast path tracepoint
+Fast Path Tracepoint
 
 
 In order to avoid performance impact in fast path code, the library introduced
 ``RTE_TRACE_POINT_FP``. When adding the tracepoint in fast path code,
 the user must use ``RTE_TRACE_POINT_FP`` instead of ``RTE_TRACE_POINT``.
 
-``RTE_TRACE_POINT_FP`` is compiled out by default and it can be enabled using
+``RTE_TRACE_POINT_FP`` is compiled out by default and can be enabled using
 the ``enable_trace_fp`` option for meson build.
 

[PATCH 4/9] doc: reword service cores section in prog guide

2024-05-13 Thread Nandini Persad
made minor syntax changes to section 8 of programmer's guide, service cores

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/service_cores.rst | 32 -
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/doc/guides/prog_guide/service_cores.rst 
b/doc/guides/prog_guide/service_cores.rst
index d4e6c3d6e6..59da3964bf 100644
--- a/doc/guides/prog_guide/service_cores.rst
+++ b/doc/guides/prog_guide/service_cores.rst
@@ -4,38 +4,38 @@
 Service Cores
 =
 
-DPDK has a concept known as service cores, which enables a dynamic way of
-performing work on DPDK lcores. Service core support is built into the EAL, and
-an API is provided to optionally allow applications to control how the service
+DPDK has a concept known as service cores. Service cores enable a dynamic way
+of performing work on DPDK lcores. Service core support is built into the EAL.
+An API is provided so that applications can optionally control how the service
 cores are used at runtime.
 
-The service cores concept is built up out of services (components of DPDK that
+The service cores concept is built out of services (components of DPDK that
 require CPU cycles to operate) and service cores (DPDK lcores, tasked with
 running services). The power of the service core concept is that the mapping
-between service cores and services can be configured to abstract away the
+between service cores and services can be configured to abstract away the
 difference between platforms and environments.
 
-For example, the Eventdev has hardware and software PMDs. Of these the software
+For example, the Eventdev has hardware and software PMDs. Of these, the software
 PMD requires an lcore to perform the scheduling operations, while the hardware
 PMD does not. With service cores, the application would not directly notice
-that the scheduling is done in software.
+that the scheduling is done in the software.
 
 For detailed information about the service core API, please refer to the docs.
 
 Service Core Initialization
 ~~~
 
-There are two methods to having service cores in a DPDK application, either by
+There are two ways to have service cores in a DPDK application: by
 using the service coremask, or by dynamically adding cores using the API.
-The simpler of the two is to pass the `-s` coremask argument to EAL, which will
-take any cores available in the main DPDK coremask, and if the bits are also 
set
-in the service coremask the cores become service-cores instead of DPDK
+The simpler of the two is to pass the `-s` coremask argument to the EAL, which 
will
+take any cores available in the main DPDK coremask. If the bits are also set
+in the service coremask, the cores become service-cores instead of DPDK
 application lcores.
 
 Enabling Services on Cores
 ~~
 
-Each registered service can be individually mapped to a service core, or set of
+Each registered service can be individually mapped to a service core, or a set 
of
 service cores. Enabling a service on a particular core means that the lcore in
 question will run the service. Disabling that core on the service stops the
 lcore in question from running the service.
@@ -48,8 +48,8 @@ function to run the service.
 Service Core Statistics
 ~~~
 
-The service core library is capable of collecting runtime statistics like 
number
-of calls to a specific service, and number of cycles used by the service. The
+The service core library is capable of collecting runtime statistics like the 
number
+of calls to a specific service, and the number of cycles used by the service. 
The
 cycle count collection is dynamically configurable, allowing any application to
 profile the services running on the system at any time.
 
@@ -58,9 +58,9 @@ Service Core Tracing
 
 The service core library is instrumented with tracepoints using the DPDK Trace
 Library. These tracepoints allow you to track the service and logical cores
-state. To activate tracing when launching a DPDK program it is necessary to 
use the
+state. To activate tracing when launching a DPDK program, it is necessary to 
use the
 ``--trace`` option to specify a regular expression to select which tracepoints
-to enable. Here is an example if you want to only specify service core 
tracing::
+to enable. Here is an example if you want to specify only service core 
tracing::
 
   ./dpdk/examples/service_cores/build/service_cores --trace="lib.eal.thread*" 
--trace="lib.eal.service*"
 
-- 
2.34.1



[PATCH 7/9] doc: reword cmdline section in prog guide

2024-05-13 Thread Nandini Persad
Minor syntax edits.

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/cmdline.rst | 34 +++
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/doc/guides/prog_guide/cmdline.rst 
b/doc/guides/prog_guide/cmdline.rst
index 6b10ab6c99..8aa1ef180b 100644
--- a/doc/guides/prog_guide/cmdline.rst
+++ b/doc/guides/prog_guide/cmdline.rst
@@ -62,7 +62,7 @@ The format of the list file must follow these requirements:
 
 * One command per line
 
-* Variable fields are prefixed by the type-name in angle-brackets, for example:
+* Variable fields are prefixed by the type-name in angle-brackets. For example:
 
   * ``message``
 
@@ -75,7 +75,7 @@ The format of the list file must follow these requirements:
   * ``dst_ip6``
 
 * Variable fields, which take their values from a list of options,
-  have the comma-separated option list placed in braces, rather than by the 
type name.
+  have the comma-separated option list placed in braces rather than the type name.
   For example,
 
   * ``<(rx,tx,rxtx)>mode``
@@ -112,7 +112,7 @@ The generated content includes:
 
 * A command-line context array definition, suitable for passing to 
``cmdline_new``
 
-If so desired, the script can also output function stubs for the callback 
functions for each command.
+If needed, the script can also output function stubs for the callback 
functions for each command.
 This behaviour is triggered by passing the ``--stubs`` flag to the script.
 In this case, an output file must be provided with a filename ending in ".h",
 and the callback stubs will be written to an equivalent ".c" file.
@@ -120,7 +120,7 @@ and the callback stubs will be written to an equivalent 
".c" file.
 .. note::
 
The stubs are written to a separate file,
-   to allow continuous use of the script to regenerate the command-line header,
+   to allow continuous use of the script to regenerate the command-line header
without overwriting any code the user has added to the callback functions.
This makes it easy to incrementally add new commands to an existing 
application.
 
@@ -154,7 +154,7 @@ the callback functions would be:
 These functions must be provided by the developer. However, as stated above,
 stub functions may be generated by the script automatically using the 
``--stubs`` parameter.
 
-The same "cmdname" stem is used in the naming of the generated structures too.
+The same "cmdname" stem is used in the naming of the generated structures as 
well.
 To get to the results structure for each command above,
 the ``parsed_result`` parameter should be cast to ``struct cmd_quit_result``
 or ``struct cmd_show_port_stats_result`` respectively.
@@ -176,13 +176,12 @@ Integrating with the Application
 
 
 To integrate the script output with the application,
-we must ``#include`` the generated header into our applications C file,
+we must ``#include`` the generated header into our application's C file,
 and then have the command-line created via either ``cmdline_new`` or 
``cmdline_stdin_new``.
 The first parameter to the function call should be the context array in the 
generated header file,
 ``ctx`` by default (Modifiable via script parameter).
 
-The callback functions may be in this same file, or in a separate one -
-they just need to be available to the linker at build-time.
+The callback functions may be in the same file or a separate one,
+as long as they are available to the linker at build time.
 
 Limitations of the Script Approach
 ~~
@@ -242,19 +241,19 @@ The resulting struct looks like:
 
 As before, we choose names to match the tokens in the command.
 Since our numeric parameter is a 16-bit value, we use ``uint16_t`` type for it.
-Any of the standard sized integer types can be used as parameters, depending 
on the desired result.
+Any of the standard-sized integer types can be used as parameters depending on 
the desired result.
 
 Beyond the standard integer types,
-the library also allows variable parameters to be of a number of other types,
+the library also allows variable parameters to be of a number of other types
 as called out in the feature list above.
 
 * For variable string parameters,
   the type should be ``cmdline_fixed_string_t`` - the same as for fixed tokens,
   but these will be initialized differently (as described below).
 
-* For ethernet addresses use type ``struct rte_ether_addr``
+* For ethernet addresses, use type ``struct rte_ether_addr``
 
-* For IP addresses use type ``cmdline_ipaddr_t``
+* For IP addresses, use type ``cmdline_ipaddr_t``
 
 Providing Field Initializers
 
@@ -267,6 +266,7 @@ For fixed string tokens, like "quit", "show" and "port", 
the initializer will be
static cmdline_parse_token_string_t cmd_quit_quit_tok =
   TOKEN_STRING_INITIALIZER(struct cmd_quit_result, quit, "qui

[PATCH 9/9] doc: reword rcu library section in prog guide

2024-05-13 Thread Nandini Persad
Simple syntax changes made to the RCU library section in the programmer's guide.

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/rcu_lib.rst | 77 ---
 1 file changed, 40 insertions(+), 37 deletions(-)

diff --git a/doc/guides/prog_guide/rcu_lib.rst 
b/doc/guides/prog_guide/rcu_lib.rst
index d0aef3bc16..c7ae349184 100644
--- a/doc/guides/prog_guide/rcu_lib.rst
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -8,17 +8,17 @@ RCU Library
 
 Lockless data structures provide scalability and determinism.
 They enable use cases where locking may not be allowed
-(for example real-time applications).
+(for example, real-time applications).
 
 In the following sections, the term "memory" refers to memory allocated
 by typical APIs like malloc() or anything that is representative of
-memory, for example an index of a free element array.
+memory. An example of this is an index of a free element array.
 
 Since these data structures are lockless, the writers and readers
 are accessing the data structures concurrently. Hence, while removing
 an element from a data structure, the writers cannot return the memory
-to the allocator, without knowing that the readers are not
-referencing that element/memory anymore. Hence, it is required to
+to the allocator without knowing that the readers are not
+referencing that element/memory anymore. Therefore, it is required to
 separate the operation of removing an element into two steps:
 
 #. Delete: in this step, the writer removes the reference to the element from
@@ -64,19 +64,19 @@ quiescent state. Reader thread 3 was not accessing D1 when 
the delete
 operation happened. So, reader thread 3 will not have a reference to the
 deleted entry.
 
-It can be noted that, the critical sections for D2 is a quiescent state
-for D1. i.e. for a given data structure Dx, any point in the thread execution
-that does not reference Dx is a quiescent state.
+Note that the critical section for D2 is a quiescent state
+for D1 (i.e. for a given data structure Dx, any point in the thread execution
+that does not reference Dx is a quiescent state).
 
 Since memory is not freed immediately, there might be a need for
-provisioning of additional memory, depending on the application requirements.
+provisioning additional memory depending on the application requirements.
 
 Factors affecting the RCU mechanism
 ---
 
 It is important to make sure that this library keeps the overhead of
-identifying the end of grace period and subsequent freeing of memory,
-to a minimum. The following paras explain how grace period and critical
+identifying the end of grace period and subsequent freeing of memory
+to a minimum. The following paragraphs explain how grace period and critical
 section affect this overhead.
 
 The writer has to poll the readers to identify the end of grace period.
@@ -119,14 +119,14 @@ How to use this library
 The application must allocate memory and initialize a QS variable.
 
 Applications can call ``rte_rcu_qsbr_get_memsize()`` to calculate the size
-of memory to allocate. This API takes a maximum number of reader threads,
-using this variable, as a parameter.
+of memory to allocate. This API takes a maximum number of reader threads
+using this variable as a parameter.
 
 Further, the application can initialize a QS variable using the API
 ``rte_rcu_qsbr_init()``.
 
 Each reader thread is assumed to have a unique thread ID. Currently, the
-management of the thread ID (for example allocation/free) is left to the
+management of the thread ID (for example, allocation/free) is left to the
 application. The thread ID should be in the range of 0 to
 maximum number of threads provided while creating the QS variable.
 The application could also use ``lcore_id`` as the thread ID where applicable.
@@ -134,13 +134,13 @@ The application could also use ``lcore_id`` as the thread 
ID where applicable.
 The ``rte_rcu_qsbr_thread_register()`` API will register a reader thread
 to report its quiescent state. This can be called from a reader thread.
 A control plane thread can also call this on behalf of a reader thread.
-The reader thread must call ``rte_rcu_qsbr_thread_online()`` API to start
+The reader thread must call the ``rte_rcu_qsbr_thread_online()`` API to start
 reporting its quiescent state.
 
 Some of the use cases might require the reader threads to make blocking API
-calls (for example while using eventdev APIs). The writer thread should not
+calls (for example, while using eventdev APIs). The writer thread should not
 wait for such reader threads to enter quiescent state.  The reader thread must
-call ``rte_rcu_qsbr_thread_offline()`` API, before calling blocking APIs. It
+call the ``rte_rcu_qsbr_thread_offline()`` API before calling blocking APIs. It
 can call ``rte_rcu_qsbr_thread_online()`` API once the blocking API call
 returns.
 
@@ -149,13 +149,13 @@ state by calling the API ``rte_rcu_qsbr_start()``. It is 
possib

[PATCH v2 1/9] doc: reword pmd section in prog guide

2024-06-20 Thread Nandini Persad
I made edits for syntax/grammar in the PMD section of the prog guide.

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/poll_mode_drv.rst | 151 
 1 file changed, 73 insertions(+), 78 deletions(-)

diff --git a/doc/guides/prog_guide/poll_mode_drv.rst 
b/doc/guides/prog_guide/poll_mode_drv.rst
index 5008b41c60..360af20900 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -6,25 +6,24 @@
 Poll Mode Driver
 
 
-The DPDK includes 1 Gigabit, 10 Gigabit and 40 Gigabit and para virtualized 
virtio Poll Mode Drivers.
+The DPDK includes 1 Gigabit, 10 Gigabit, 40 Gigabit and paravirtualized virtio Poll Mode Drivers.
 
-A Poll Mode Driver (PMD) consists of APIs, provided through the BSD driver 
running in user space,
-to configure the devices and their respective queues.
+A Poll Mode Driver (PMD) consists of APIs (provided through the BSD driver 
running in user space) to configure the devices and their respective queues.
 In addition, a PMD accesses the RX and TX descriptors directly without any 
interrupts
 (with the exception of Link Status Change interrupts) to quickly receive,
 process and deliver packets in the user's application.
-This section describes the requirements of the PMDs,
-their global design principles and proposes a high-level architecture and a 
generic external API for the Ethernet PMDs.
+This section describes the requirements of the PMDs and
+their global design principles. It also proposes a high-level architecture and 
a generic external API for the Ethernet PMDs.
 
 Requirements and Assumptions
 
 
 The DPDK environment for packet processing applications allows for two models, 
run-to-completion and pipe-line:
 
-*   In the *run-to-completion*  model, a specific port's RX descriptor ring is 
polled for packets through an API.
-Packets are then processed on the same core and placed on a port's TX 
descriptor ring through an API for transmission.
+*   In the *run-to-completion*  model, a specific port's Rx descriptor ring is 
polled for packets through an API.
+Packets are then processed on the same core and placed on a port's Tx 
descriptor ring through an API for transmission.
 
-*   In the *pipe-line*  model, one core polls one or more port's RX descriptor 
ring through an API.
+*   In the *pipe-line*  model, one core polls one or more port's Rx descriptor 
ring through an API.
 Packets are received and passed to another core via a ring.
 The other core continues to process the packet which then may be placed on 
a port's TX descriptor ring through an API for transmission.
 
@@ -50,14 +49,14 @@ The loop for packet processing includes the following steps:
 
 *   Retrieve the received packet from the packet queue
 
-*   Process the received packet, up to its retransmission if forwarded
+*   Process the received packet up to its retransmission if forwarded
 
 To avoid any unnecessary interrupt processing overhead, the execution 
environment must not use any asynchronous notification mechanisms.
 Whenever needed and appropriate, asynchronous communication should be 
introduced as much as possible through the use of rings.
 
 Avoiding lock contention is a key issue in a multi-core environment.
-To address this issue, PMDs are designed to work with per-core private 
resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if 
the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
+To address this issue, PMDs are designed to work with per core private 
resources as much as possible.
+For example, a PMD maintains a separate transmit queue per core, per port, if 
the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
 In the same way, every receive queue of a port is assigned to and polled by a 
single logical core (lcore).
 
 To comply with Non-Uniform Memory Access (NUMA), memory management is designed 
to assign to each logical core
@@ -101,9 +100,9 @@ However, an rte_eth_tx_burst function is effectively 
implemented by the PMD to m
 
 *   Apply burst-oriented software optimization techniques to remove operations 
that would otherwise be unavoidable, such as ring index wrap back management.
 
-Burst-oriented functions are also introduced via the API for services that are 
intensively used by the PMD.
+Burst-oriented functions are also introduced via the API for services that are 
extensively used by the PMD.
 This applies in particular to buffer allocators used to populate NIC rings, 
which provide functions to allocate/free several buffers at a time.
-For example, an mbuf_multiple_alloc function returning an array of pointers to 
rte_mbuf buffers which speeds up the receive poll function of the PMD when
+An example of this would be an mbuf_multiple_alloc function returning an array 
of pointers to rte_mbuf buffers which speeds up the receive poll function

[PATCH v2 2/9] doc: reword argparse section in prog guide

2024-06-20 Thread Nandini Persad
I have made small edits for syntax in this section.

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/argparse_lib.rst | 75 +-
 1 file changed, 38 insertions(+), 37 deletions(-)

diff --git a/doc/guides/prog_guide/argparse_lib.rst 
b/doc/guides/prog_guide/argparse_lib.rst
index a6ac11b1c0..1acde60861 100644
--- a/doc/guides/prog_guide/argparse_lib.rst
+++ b/doc/guides/prog_guide/argparse_lib.rst
@@ -4,30 +4,31 @@
 Argparse Library
 
 
-The argparse library provides argument parsing functionality,
-this library makes it easy to write user-friendly command-line program.
+The argparse library provides argument parsing functionality and makes it easy
+to write user-friendly command-line programs.
 
 Features and Capabilities
 -
 
-- Support parsing optional argument (which could take with no-value,
-  required-value and optional-value).
+- Supports parsing of optional arguments (which can take no-value,
+  required-value or optional-value).
 
-- Support parsing positional argument (which must take with required-value).
+- Supports parsing of positional arguments (which must take required-value).
 
-- Support automatic generate usage information.
+- Supports automatic generation of usage information.
 
-- Support issue errors when provide with invalid arguments.
+- Issues errors when an argument is invalid.
+
+- Supports parsing arguments in two ways:
 
-- Support parsing argument by two ways:
 
   #. autosave: used for parsing known value types;
   #. callback: will invoke user callback to parse.
 
+
 Usage Guide
 ---
 
-The following code demonstrates how to use:
+The following code demonstrates how to use the library:
 
 .. code-block:: C
 
@@ -89,12 +90,12 @@ The following code demonstrates how to use:
   ...
}
 
-In this example, the arguments which start with a hyphen (-) are optional
-arguments (they're "--aaa"/"--bbb"/"--ccc"/"--ddd"/"--eee"/"--fff"); and the
-arguments which don't start with a hyphen (-) are positional arguments
-(they're "ooo"/"ppp").
+In this example, the arguments that start with a hyphen (-) are optional
+arguments ("--aaa"/"--bbb"/"--ccc"/"--ddd"/"--eee"/"--fff").
+The arguments that do not start with a hyphen (-) are positional arguments
+("ooo"/"ppp").
 
-Every argument must be set whether to carry a value (one of
+Every argument must specify whether it carries a value (one of
 ``RTE_ARGPARSE_ARG_NO_VALUE``, ``RTE_ARGPARSE_ARG_REQUIRED_VALUE`` and
 ``RTE_ARGPARSE_ARG_OPTIONAL_VALUE``).
 
@@ -105,26 +106,26 @@ Every argument must be set whether to carry a value (one 
of
 User Input Requirements
 ~~~
 
-For optional arguments which take no-value,
+For optional arguments which have no-value,
 the following mode is supported (take above "--aaa" as an example):
 
 - The single mode: "--aaa" or "-a".
 
-For optional arguments which take required-value,
+For optional arguments which have required-value,
 the following two modes are supported (take above "--bbb" as an example):
 
 - The kv mode: "--bbb=1234" or "-b=1234".
 
 - The split mode: "--bbb 1234" or "-b 1234".
 
-For optional arguments which take optional-value,
+For optional arguments which have optional-value,
 the following two modes are supported (take above "--ccc" as an example):
 
 - The single mode: "--ccc" or "-c".
 
 - The kv mode: "--ccc=123" or "-c=123".
 
-For positional arguments which must take required-value,
+For positional arguments which must have required-value,
 their values are parsed in the order defined.
 
 .. note::
@@ -132,15 +133,15 @@ their values are parsing in the order defined.
The compact mode is not supported.
Take above "-a" and "-d" as an example, don't support "-ad" input.
 
-Parsing by autosave way
-~~~
+Parsing with the Autosave Method
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Argument of known value type (e.g. ``RTE_ARGPARSE_ARG_VALUE_INT``)
-could be parsed using this autosave way,
-and its result will save in the ``val_saver`` field.
+Arguments of a known value type (e.g. ``RTE_ARGPARSE_ARG_VALUE_INT``)
+can be parsed using the autosave method.
+The result will be saved in the ``val_saver`` field.
 
 In the above example, the arguments "--aaa"/"--bbb"/"--ccc" and "ooo"
-both use this way, the parsing is as follows:
+all use this method. The parsing is as follows:
 
 - For argument "--aaa", it is configured as no-value,
   so the ``aaa_val`` will be set to ``val_set`` field
@@ -150,28 +151,28 @@ both use this way, the parsing is as follows:
   so the ``bbb_val`` wi

[PATCH v2 3/9] doc: reword design section in contributors guidelines

2024-06-20 Thread Nandini Persad
Minor edits were made for grammar and syntax in the design section.

Signed-off-by: Nandini Persad 
---
 .mailmap   |  1 +
 doc/guides/contributing/design.rst | 86 +++---
 doc/guides/linux_gsg/sys_reqs.rst  |  2 +-
 3 files changed, 45 insertions(+), 44 deletions(-)

diff --git a/.mailmap b/.mailmap
index 66ebc20666..7d4929c5d1 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1002,6 +1002,7 @@ Naga Suresh Somarowthu 
 Nalla Pradeep 
 Na Na 
 Nan Chen 
+Nandini Persad 
 Nannan Lu 
 Nan Zhou 
 Narcisa Vasile   

diff --git a/doc/guides/contributing/design.rst 
b/doc/guides/contributing/design.rst
index b724177ba1..3d1f5aeb91 100644
--- a/doc/guides/contributing/design.rst
+++ b/doc/guides/contributing/design.rst
@@ -1,6 +1,7 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
 Copyright 2018 The DPDK contributors
 
+
 Design
 ==
 
@@ -8,22 +9,26 @@ Design
 Environment or Architecture-specific Sources
 
 
-In DPDK and DPDK applications, some code is specific to an architecture (i686, 
x86_64) or to an executive environment (freebsd or linux) and so on.
-As far as is possible, all such instances of architecture or env-specific code 
should be provided via standard APIs in the EAL.
+In DPDK and DPDK applications, some code is architecture-specific (i686, x86_64)
+or environment-specific (FreeBSD or Linux, etc.).
+When feasible, such instances of architecture or env-specific code should be 
provided via standard APIs in the EAL.
+
+By convention, a file is specific if it is located in a directory that indicates
+its architecture or environment. Otherwise, it is common.
 
-By convention, a file is common if it is not located in a directory indicating 
that it is specific.
-For instance, a file located in a subdir of "x86_64" directory is specific to 
this architecture.
+For example:
+
+A file located in a subdir of "x86_64" directory is specific to this 
architecture.
 A file located in a subdir of "linux" is specific to this execution 
environment.
 
 .. note::
 
Code in DPDK libraries and applications should be generic.
-   The correct location for architecture or executive environment specific 
code is in the EAL.
+   The correct location for architecture or executive environment-specific 
code is in the EAL.
+
+When necessary, there are several ways to handle specific code:
 
-When absolutely necessary, there are several ways to handle specific code:
 
-* Use a ``#ifdef`` with a build definition macro in the C code.
-  This can be done when the differences are small and they can be embedded in 
the same C file:
+* When the differences are small and they can be embedded in the same C file, 
use a ``#ifdef`` with a build definition macro in the C code.
+
 
   .. code-block:: c
 
@@ -33,9 +38,9 @@ When absolutely necessary, there are several ways to handle 
specific code:
  titi();
  #endif
 
-* Use build definition macros and conditions in the Meson build file. This is 
done when the differences are more significant.
-  In this case, the code is split into two separate files that are 
architecture or environment specific.
-  This should only apply inside the EAL library.
+
+* When the differences are more significant, use build definition macros and 
conditions in the Meson build file. In this case, the code is split into two 
separate files that are architecture or environment specific. This should only 
apply inside the EAL library.
+
 
 Per Architecture Sources
 
@@ -43,7 +48,8 @@ Per Architecture Sources
 The following macro options can be used:
 
 * ``RTE_ARCH`` is a string that contains the name of the architecture.
-* ``RTE_ARCH_I686``, ``RTE_ARCH_X86_64``, ``RTE_ARCH_X86_X32``, 
``RTE_ARCH_PPC_64``, ``RTE_ARCH_RISCV``, ``RTE_ARCH_LOONGARCH``, 
``RTE_ARCH_ARM``, ``RTE_ARCH_ARMv7`` or ``RTE_ARCH_ARM64`` are defined only if 
we are building for those architectures.
+* ``RTE_ARCH_I686``, ``RTE_ARCH_X86_64``, ``RTE_ARCH_X86_X32``, 
``RTE_ARCH_PPC_64``, ``RTE_ARCH_RISCV``, ``RTE_ARCH_LOONGARCH``, 
``RTE_ARCH_ARM``, ``RTE_ARCH_ARMv7`` or ``RTE_ARCH_ARM64`` are defined when 
building for these architectures.
+
 
 Per Execution Environment Sources
 ~
@@ -51,30 +57,22 @@ Per Execution Environment Sources
 The following macro options can be used:
 
 * ``RTE_EXEC_ENV`` is a string that contains the name of the executive 
environment.
-* ``RTE_EXEC_ENV_FREEBSD``, ``RTE_EXEC_ENV_LINUX`` or ``RTE_EXEC_ENV_WINDOWS`` 
are defined only if we are building for this execution environment.
+* ``RTE_EXEC_ENV_FREEBSD``, ``RTE_EXEC_ENV_LINUX`` or ``RTE_EXEC_ENV_WINDOWS`` 
are defined only when building for this execution environment.
+
 
 Mbuf features
 -
 
-The ``rte_mbuf`` structure must be kept small (128 bytes).
-
-In order to add new features without wasting buffer space for unused features,
-some fields and flags can be registered dynamically in a shared area.
-The "dynam

[PATCH v2 4/9] doc: reword service cores section in prog guide

2024-06-20 Thread Nandini Persad
I've made minor syntax changes to section 8 of the programmer's guide (service cores).
cores.

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/service_cores.rst | 32 -
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/doc/guides/prog_guide/service_cores.rst 
b/doc/guides/prog_guide/service_cores.rst
index d4e6c3d6e6..59da3964bf 100644
--- a/doc/guides/prog_guide/service_cores.rst
+++ b/doc/guides/prog_guide/service_cores.rst
@@ -4,38 +4,38 @@
 Service Cores
 =
 
-DPDK has a concept known as service cores, which enables a dynamic way of
-performing work on DPDK lcores. Service core support is built into the EAL, and
-an API is provided to optionally allow applications to control how the service
+DPDK has a concept known as service cores. Service cores enable a dynamic way
+of performing work on DPDK lcores. Service core support is built into the EAL.
+An API is provided so that applications can optionally control how the service
 cores are used at runtime.
 
-The service cores concept is built up out of services (components of DPDK that
+The service cores concept is built out of services (components of DPDK that
 require CPU cycles to operate) and service cores (DPDK lcores, tasked with
 running services). The power of the service core concept is that the mapping
-between service cores and services can be configured to abstract away the
+between service cores and services can be configured to abstract away the
 difference between platforms and environments.
 
-For example, the Eventdev has hardware and software PMDs. Of these the software
+For example, the Eventdev has hardware and software PMDs. Of these, the software
 PMD requires an lcore to perform the scheduling operations, while the hardware
 PMD does not. With service cores, the application would not directly notice
-that the scheduling is done in software.
+that the scheduling is done in the software.
 
 For detailed information about the service core API, please refer to the docs.
 
 Service Core Initialization
 ~~~
 
-There are two methods to having service cores in a DPDK application, either by
+There are two ways to have service cores in a DPDK application: by
 using the service coremask, or by dynamically adding cores using the API.
-The simpler of the two is to pass the `-s` coremask argument to EAL, which will
-take any cores available in the main DPDK coremask, and if the bits are also 
set
-in the service coremask the cores become service-cores instead of DPDK
+The simpler of the two is to pass the `-s` coremask argument to the EAL, which 
will
+take any cores available in the main DPDK coremask. If the bits are also set
+in the service coremask, the cores become service-cores instead of DPDK
 application lcores.
 
 Enabling Services on Cores
 ~~
 
-Each registered service can be individually mapped to a service core, or set of
+Each registered service can be individually mapped to a service core, or a set 
of
 service cores. Enabling a service on a particular core means that the lcore in
 question will run the service. Disabling that core on the service stops the
 lcore in question from running the service.
@@ -48,8 +48,8 @@ function to run the service.
 Service Core Statistics
 ~~~
 
-The service core library is capable of collecting runtime statistics like 
number
-of calls to a specific service, and number of cycles used by the service. The
+The service core library is capable of collecting runtime statistics like the 
number
+of calls to a specific service, and the number of cycles used by the service. 
The
 cycle count collection is dynamically configurable, allowing any application to
 profile the services running on the system at any time.
 
@@ -58,9 +58,9 @@ Service Core Tracing
 
 The service core library is instrumented with tracepoints using the DPDK Trace
 Library. These tracepoints allow you to track the service and logical cores
-state. To activate tracing when launching a DPDK program it is necessary to use the
+state. To activate tracing when launching a DPDK program, it is necessary to use the
 ``--trace`` option to specify a regular expression to select which tracepoints
-to enable. Here is an example if you want to only specify service core tracing::
+to enable. Here is an example if you want to specify only service core tracing::
 
  ./dpdk/examples/service_cores/build/service_cores --trace="lib.eal.thread*" --trace="lib.eal.service*"
 
-- 
2.34.1



[PATCH v2 5/9] doc: reword trace library section in prog guide

2024-06-20 Thread Nandini Persad
Minor syntax edits were made to the trace library section of the prog guide.

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/trace_lib.rst | 50 ++---
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/doc/guides/prog_guide/trace_lib.rst b/doc/guides/prog_guide/trace_lib.rst
index d9b17abe90..e2983017d8 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -14,29 +14,29 @@ When recording, specific instrumentation points placed in the software source
 code generate events that are saved on a giant tape: a trace file.
 The trace file then later can be opened in *trace viewers* to visualize and
 analyze the trace events with timestamps and multi-core views.
-Such a mechanism will be useful for resolving a wide range of problems such as
-multi-core synchronization issues, latency measurements, finding out the
-post analysis information like CPU idle time, etc that would otherwise be
-extremely challenging to get.
+This mechanism will be useful for resolving a wide range of problems such as
+multi-core synchronization issues, latency measurements, and finding
+post analysis information like CPU idle time, etc., that would otherwise be
+extremely challenging to gather.
 
 Tracing is often compared to *logging*. However, tracers and loggers are two
-different tools, serving two different purposes.
-Tracers are designed to record much lower-level events that occur much more
+different tools serving two different purposes.
+Tracers are designed to record much lower-level events that occur more
 frequently than log messages, often in the range of thousands per second, with
 very little execution overhead.
 Logging is more appropriate for a very high-level analysis of less frequent
 events: user accesses, exceptional conditions (errors and warnings, for
-example), database transactions, instant messaging communications, and such.
+example), database transactions, instant messaging communications, etc.
 Simply put, logging is one of the many use cases that can be satisfied with
 tracing.
 
 DPDK tracing library features
 -
 
-- A framework to add tracepoints in control and fast path APIs with minimum
+- Provides a framework to add tracepoints in control and fast path APIs with minimum
   impact on performance.
   Typical trace overhead is ~20 cycles and instrumentation overhead is 1 cycle.
-- Enable and disable the tracepoints at runtime.
+- Enable and disable tracepoints at runtime.
 - Save the trace buffer to the filesystem at any point in time.
 - Support ``overwrite`` and ``discard`` trace mode operations.
 - String-based tracepoint object lookup.
@@ -47,7 +47,7 @@ DPDK tracing library features
   For detailed information, refer to
   `Common Trace Format <https://diamon.org/ctf/>`_.
 
-How to add a tracepoint?
+How to Add a Tracepoint
 
 
 This section steps you through the details of adding a simple tracepoint.
@@ -67,14 +67,14 @@ Create the tracepoint header file
 rte_trace_point_emit_string(str);
  )
 
-The above macro creates ``app_trace_string`` tracepoint.
+The above macro creates the ``app_trace_string`` tracepoint.
 The user can choose any name for the tracepoint.
 However, when adding a tracepoint in the DPDK library, the
 ``rte__trace_[_]`` naming convention must be
 followed.
 The examples are ``rte_eal_trace_generic_str``, ``rte_mempool_trace_create``.
 
-The ``RTE_TRACE_POINT`` macro expands from above definition as the following
+The ``RTE_TRACE_POINT`` macro expands from the above definition as the following
 function template:
 
 .. code-block:: c
@@ -91,7 +91,7 @@ The consumer of this tracepoint can invoke
 ``app_trace_string(const char *str)`` to emit the trace event to the trace
 buffer.
 
-Register the tracepoint
+Register the Tracepoint
 ~~~
 
 .. code-block:: c
@@ -122,40 +122,40 @@ convention.
 
The ``RTE_TRACE_POINT_REGISTER`` defines the placeholder for the
``rte_trace_point_t`` tracepoint object.
-   For generic tracepoint or for tracepoint used in public header files,
+   For a generic tracepoint or for a tracepoint used in public header files,
the user must export a ``__`` symbol
in the library ``.map`` file for this tracepoint
-   to be used out of the library, in shared builds.
+   to be used out of the library in shared builds.
For example, ``__app_trace_string`` will be the exported symbol in the
above example.
 
-Fast path tracepoint
+Fast Path Tracepoint
 
 
 In order to avoid performance impact in fast path code, the library introduced
 ``RTE_TRACE_POINT_FP``. When adding the tracepoint in fast path code,
 the user must use ``RTE_TRACE_POINT_FP`` instead of ``RTE_TRACE_POINT``.
 
-``RTE_TRACE_POINT_FP`` is compiled out by default and it can be enabled using
+``RTE_TRACE_POINT_FP`` is compiled out by default and can be enabled using
 the ``enable_trace_fp`` option.

[PATCH v2 6/9] doc: reword log library section in prog guide

2024-06-20 Thread Nandini Persad
Minor changes were made for syntax in the log library section and section 7.1
of the programmer's guide. A couple of sentences at the end of the
trace library section were also edited.

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/cmdline.rst   | 24 +++---
 doc/guides/prog_guide/log_lib.rst   | 32 ++---
 doc/guides/prog_guide/trace_lib.rst | 22 ++--
 3 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/doc/guides/prog_guide/cmdline.rst b/doc/guides/prog_guide/cmdline.rst
index e20281ceb5..6b10ab6c99 100644
--- a/doc/guides/prog_guide/cmdline.rst
+++ b/doc/guides/prog_guide/cmdline.rst
@@ -5,8 +5,8 @@ Command-line Library
 
 
 Since its earliest versions, DPDK has included a command-line library -
-primarily for internal use by, for example, ``dpdk-testpmd`` and the ``dpdk-test`` binaries,
-but the library is also exported on install and can be used by any end application.
+primarily for internal use by, for example, ``dpdk-testpmd`` and the ``dpdk-test`` binaries.
+However, the library is also exported on install and can be used by any end application.
 This chapter covers the basics of the command-line library and how to use it in an application.
 
 Library Features
@@ -18,7 +18,7 @@ The DPDK command-line library supports the following features:
 
 * Ability to read and process commands taken from an input file, e.g. startup script
 
-* Parameterized commands able to take multiple parameters with different datatypes:
+* Parameterized commands that can take multiple parameters with different datatypes:
 
* Strings
* Signed/unsigned 16/32/64-bit integers
@@ -56,7 +56,7 @@ Creating a Command List File
 The ``dpdk-cmdline-gen.py`` script takes as input a list of commands to be used by the application.
 While these can be piped to it via standard input, using a list file is probably best.
 
-The format of the list file must be:
+The format of the list file must follow these requirements:
 
 * Comment lines start with '#' as first non-whitespace character
 
@@ -75,7 +75,7 @@ The format of the list file must be:
   * ``dst_ip6``
 
 * Variable fields, which take their values from a list of options,
-  have the comma-separated option list placed in braces, rather than a the type name.
+  have the comma-separated option list placed in braces, rather than the type name.
   For example,
 
   * ``<(rx,tx,rxtx)>mode``
@@ -127,13 +127,13 @@ and the callback stubs will be written to an equivalent ".c" file.
 Providing the Function Callbacks
 
 
-As discussed above, the script output is a header file, containing structure definitions,
-but the callback functions themselves obviously have to be provided by the user.
-These callback functions must be provided as non-static functions in a C file,
+As discussed above, the script output is a header file containing structure definitions,
+but the callback functions must be provided by the user.
+These callback functions must be provided as non-static functions in a C file
 and named ``cmd__parsed``.
 The function prototypes can be seen in the generated output header.
 
-The "cmdname" part of the function name is built up by combining the non-variable initial tokens in the command.
+The "cmdname" part of the function name is built by combining the non-variable initial tokens in the command.
 So, given the commands in our worked example below: ``quit`` and ``show port stats ``,
 the callback functions would be:
 
@@ -151,11 +151,11 @@ the callback functions would be:
 ...
}
 
-These functions must be provided by the developer, but, as stated above,
+These functions must be provided by the developer. However, as stated above,
 stub functions may be generated by the script automatically using the ``--stubs`` parameter.
 
 The same "cmdname" stem is used in the naming of the generated structures too.
-To get at the results structure for each command above,
+To get to the results structure for each command above,
 the ``parsed_result`` parameter should be cast to ``struct cmd_quit_result``
 or ``struct cmd_show_port_stats_result`` respectively.
 
@@ -179,7 +179,7 @@ To integrate the script output with the application,
 we must ``#include`` the generated header into our applications C file,
 and then have the command-line created via either ``cmdline_new`` or ``cmdline_stdin_new``.
 The first parameter to the function call should be the context array in the generated header file,
-``ctx`` by default. (Modifiable via script parameter).
+``ctx`` by default (Modifiable via script parameter).
 
 The callback functions may be in this same file, or in a separate one -
 they just need to be available to the linker at build-time.
diff --git a/doc/guides/prog_guide/log_lib.rst b/doc/guides/prog_guide/log_lib.rst
index ff9d1b54a2..05f032dfad 100644
--- a/
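For readers following the cmdline patches above, a small example of the list-file format being described may help. This is a sketch: the ``<STRING>``/``<UINT16>`` variable-field type names follow the conventions of the command-line library guide, and the file name is our choice.

```text
# commands.list -- input for dpdk-cmdline-gen.py
# Comment lines start with '#'; one command per line.
# Variable fields are <type>name; option lists go in braces.
quit
show port stats <UINT16>n
set <(rx,tx,rxtx)>mode
```

Feeding this through the script with ``--stubs`` would generate a header containing the token initializers and context array, plus a matching ``.c`` stub file with ``cmd_quit_parsed``, ``cmd_show_port_stats_parsed`` and ``cmd_set_parsed`` callbacks, as the patch describes.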

[PATCH v2 7/9] doc: reword cmdline section in prog guide

2024-06-20 Thread Nandini Persad
Minor syntax edits made to the cmdline section.

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/cmdline.rst | 34 +++
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/doc/guides/prog_guide/cmdline.rst b/doc/guides/prog_guide/cmdline.rst
index 6b10ab6c99..8aa1ef180b 100644
--- a/doc/guides/prog_guide/cmdline.rst
+++ b/doc/guides/prog_guide/cmdline.rst
@@ -62,7 +62,7 @@ The format of the list file must follow these requirements:
 
 * One command per line
 
-* Variable fields are prefixed by the type-name in angle-brackets, for example:
+* Variable fields are prefixed by the type-name in angle-brackets. For example:
 
   * ``message``
 
@@ -75,7 +75,7 @@ The format of the list file must follow these requirements:
   * ``dst_ip6``
 
 * Variable fields, which take their values from a list of options,
-  have the comma-separated option list placed in braces, rather than by the type name.
+  have the comma-separated option list placed in braces rather than the type name.
   For example,
 
   * ``<(rx,tx,rxtx)>mode``
@@ -112,7 +112,7 @@ The generated content includes:
 
 * A command-line context array definition, suitable for passing to ``cmdline_new``
 
-If so desired, the script can also output function stubs for the callback functions for each command.
+If needed, the script can also output function stubs for the callback functions for each command.
 This behaviour is triggered by passing the ``--stubs`` flag to the script.
 In this case, an output file must be provided with a filename ending in ".h",
 and the callback stubs will be written to an equivalent ".c" file.
@@ -120,7 +120,7 @@ and the callback stubs will be written to an equivalent ".c" file.
 .. note::
 
The stubs are written to a separate file,
-   to allow continuous use of the script to regenerate the command-line header,
+   to allow continuous use of the script to regenerate the command-line header
without overwriting any code the user has added to the callback functions.
This makes it easy to incrementally add new commands to an existing 
application.
 
@@ -154,7 +154,7 @@ the callback functions would be:
 These functions must be provided by the developer. However, as stated above,
 stub functions may be generated by the script automatically using the ``--stubs`` parameter.
 
-The same "cmdname" stem is used in the naming of the generated structures too.
+The same "cmdname" stem is used in the naming of the generated structures as well.
 To get to the results structure for each command above,
 the ``parsed_result`` parameter should be cast to ``struct cmd_quit_result``
 or ``struct cmd_show_port_stats_result`` respectively.
@@ -176,13 +176,12 @@ Integrating with the Application
 
 
 To integrate the script output with the application,
-we must ``#include`` the generated header into our applications C file,
+we must ``#include`` the generated header into our application's C file,
 and then have the command-line created via either ``cmdline_new`` or ``cmdline_stdin_new``.
 The first parameter to the function call should be the context array in the generated header file,
 ``ctx`` by default (Modifiable via script parameter).
 
-The callback functions may be in this same file, or in a separate one -
-they just need to be available to the linker at build-time.
+The callback functions may be in the same or a separate file, as long as they are available to the linker at build-time.
 
 Limitations of the Script Approach
 ~~
@@ -242,19 +241,19 @@ The resulting struct looks like:
 
 As before, we choose names to match the tokens in the command.
 Since our numeric parameter is a 16-bit value, we use ``uint16_t`` type for it.
-Any of the standard sized integer types can be used as parameters, depending on the desired result.
+Any of the standard-sized integer types can be used as parameters depending on the desired result.
 
 Beyond the standard integer types,
-the library also allows variable parameters to be of a number of other types,
+the library also allows variable parameters to be of a number of other types
 as called out in the feature list above.
 
 * For variable string parameters,
   the type should be ``cmdline_fixed_string_t`` - the same as for fixed tokens,
   but these will be initialized differently (as described below).
 
-* For ethernet addresses use type ``struct rte_ether_addr``
+* For ethernet addresses, use type ``struct rte_ether_addr``
 
-* For IP addresses use type ``cmdline_ipaddr_t``
+* For IP addresses, use type ``cmdline_ipaddr_t``
 
 Providing Field Initializers
 
@@ -267,6 +266,7 @@ For fixed string tokens, like "quit", "show" and "port", the initializer will be
static cmdline_parse_token_string_t cmd_quit_quit_tok =
   TOKEN_STRING_INITIALIZER(struct cmd_

[PATCH v2 8/9] doc: reword stack library section in prog guide

2024-06-20 Thread Nandini Persad
Minor changes made to wording of the stack library section in prog guide.

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/stack_lib.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 975d3ad796..a51df60d13 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -44,8 +44,8 @@ Lock-free Stack
 
 The lock-free stack consists of a linked list of elements, each containing a
 data pointer and a next pointer, and an atomic stack depth counter. The
-lock-free property means that multiple threads can push and pop simultaneously,
-and one thread being preempted/delayed in a push or pop operation will not
+lock-free property means that multiple threads can push and pop simultaneously.
+One thread being preempted/delayed in a push or pop operation will not
 impede the forward progress of any other thread.
 
 The lock-free push operation enqueues a linked list of pointers by pointing the
-- 
2.34.1



[PATCH v2 9/9] doc: reword rcu library section in prog guide

2024-06-20 Thread Nandini Persad
Simple syntax changes made to the rcu library section in programmer's guide.

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/rcu_lib.rst | 77 ---
 1 file changed, 40 insertions(+), 37 deletions(-)

diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
index d0aef3bc16..c7ae349184 100644
--- a/doc/guides/prog_guide/rcu_lib.rst
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -8,17 +8,17 @@ RCU Library
 
 Lockless data structures provide scalability and determinism.
 They enable use cases where locking may not be allowed
-(for example real-time applications).
+(for example, real-time applications).
 
 In the following sections, the term "memory" refers to memory allocated
 by typical APIs like malloc() or anything that is representative of
-memory, for example an index of a free element array.
+memory. An example of this is an index of a free element array.
 
 Since these data structures are lockless, the writers and readers
 are accessing the data structures concurrently. Hence, while removing
 an element from a data structure, the writers cannot return the memory
-to the allocator, without knowing that the readers are not
-referencing that element/memory anymore. Hence, it is required to
+to the allocator without knowing that the readers are not
+referencing that element/memory anymore. Therefore, it is required to
 separate the operation of removing an element into two steps:
 
 #. Delete: in this step, the writer removes the reference to the element from
@@ -64,19 +64,19 @@ quiescent state. Reader thread 3 was not accessing D1 when the delete
 operation happened. So, reader thread 3 will not have a reference to the
 deleted entry.
 
-It can be noted that, the critical sections for D2 is a quiescent state
-for D1. i.e. for a given data structure Dx, any point in the thread execution
-that does not reference Dx is a quiescent state.
+Note that the critical sections for D2 are quiescent states
+for D1 (i.e. for a given data structure Dx, any point in the thread execution
+that does not reference Dx is a quiescent state).
 
 Since memory is not freed immediately, there might be a need for
-provisioning of additional memory, depending on the application requirements.
+provisioning additional memory depending on the application requirements.
 
 Factors affecting the RCU mechanism
 ---
 
 It is important to make sure that this library keeps the overhead of
-identifying the end of grace period and subsequent freeing of memory,
-to a minimum. The following paras explain how grace period and critical
+identifying the end of grace period and subsequent freeing of memory
+to a minimum. The following paragraphs explain how grace period and critical
 section affect this overhead.
 
 The writer has to poll the readers to identify the end of grace period.
@@ -119,14 +119,14 @@ How to use this library
 The application must allocate memory and initialize a QS variable.
 
 Applications can call ``rte_rcu_qsbr_get_memsize()`` to calculate the size
-of memory to allocate. This API takes a maximum number of reader threads,
-using this variable, as a parameter.
+of memory to allocate. This API takes a maximum number of reader threads
+using this variable as a parameter.
 
 Further, the application can initialize a QS variable using the API
 ``rte_rcu_qsbr_init()``.
 
 Each reader thread is assumed to have a unique thread ID. Currently, the
-management of the thread ID (for example allocation/free) is left to the
+management of the thread ID (for example, allocation/free) is left to the
 application. The thread ID should be in the range of 0 to
 maximum number of threads provided while creating the QS variable.
 The application could also use ``lcore_id`` as the thread ID where applicable.
@@ -134,13 +134,13 @@ The application could also use ``lcore_id`` as the thread ID where applicable.
 The ``rte_rcu_qsbr_thread_register()`` API will register a reader thread
 to report its quiescent state. This can be called from a reader thread.
 A control plane thread can also call this on behalf of a reader thread.
-The reader thread must call ``rte_rcu_qsbr_thread_online()`` API to start
+The reader thread must call the ``rte_rcu_qsbr_thread_online()`` API to start
 reporting its quiescent state.
 
 Some of the use cases might require the reader threads to make blocking API
-calls (for example while using eventdev APIs). The writer thread should not
+calls (for example, while using eventdev APIs). The writer thread should not
 wait for such reader threads to enter quiescent state.  The reader thread must
-call ``rte_rcu_qsbr_thread_offline()`` API, before calling blocking APIs. It
+call ``rte_rcu_qsbr_thread_offline()`` API before calling blocking APIs. It
 can call ``rte_rcu_qsbr_thread_online()`` API once the blocking API call
 returns.
 
@@ -149,13 +149,13 @@ state by calling the API ``rte_rcu_qsbr_start()``. It is 
possib

[PATCH v3] doc: add new driver guidelines

2024-09-10 Thread Nandini Persad
This document was created to assist contributors
in creating DPDK drivers, providing suggestions
and guidelines for how to upstream effectively.

Co-authored-by: Ferruh Yigit 
Co-authored-by: Thomas Monjalon 
Signed-off-by: Nandini Persad 
Acked-by: Chengwen Feng 
---
 doc/guides/contributing/new_driver.rst | 200 +
 1 file changed, 200 insertions(+)
 create mode 100644 doc/guides/contributing/new_driver.rst

diff --git a/doc/guides/contributing/new_driver.rst b/doc/guides/contributing/new_driver.rst
new file mode 100644
index 00..2a4781e673
--- /dev/null
+++ b/doc/guides/contributing/new_driver.rst
@@ -0,0 +1,200 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+Copyright 2024 The DPDK contributors
+
+
+Upstreaming New DPDK Drivers Guide
+==================================
+
+The DPDK project continuously grows its ecosystem by adding support for new
+devices. This document is designed to assist contributors in creating DPDK 
+drivers, also known as Poll Mode Drivers (PMDs).
+
+Public support for a device ensures accessibility across various operating
+systems and guarantees community maintenance in future releases. 
+If a new device is similar to an existing one,
+it is more efficient to update the existing driver.
+
+Here are our best practice recommendations for creating a new driver.
+
+
+Early Engagement with the Community
+---
+
+When creating a new driver, we highly recommend engaging with the DPDK
+community early instead of waiting for the work to mature.
+
+Public discussions help align development with DPDK expectations.
+You may submit a roadmap before the release to inform the community of
+your plans. Additionally, sending a Request for Comments (RFC) early in
+the release cycle, or even during the prior release, is advisable. 
+
+DPDK is mainly consumed via Long Term Support (LTS) releases.
+It is common to target a new PMD to an LTS release. For this, it is
+suggested to start upstreaming at least one release before an LTS release.
+
+
+Progressive Work
+
+
+To keep your work progressing, we recommend planning for incremental
+upstreaming across multiple patch series or releases.
+
+It's important to prioritize the quality of the driver over upstreaming
+it in a single release or a single patch series.
+
+
+Finalizing
+--
+
+Once your driver has been added, you, as the author,
+have a responsibility to the community to maintain it.
+
+This includes the public test report. Authors must send a public
+test report after the first upstreaming of the PMD. The same
+public test procedure may be reproduced regularly per release.
+
+After the PMD is upstreamed, the author should send a patch
+to update the website with the name of the new PMD and supported devices.
+
+For more information about the role of maintainers, see the
+:doc:`patches` page.
+
+
+Splitting into Patches
+--
+
+We recommend that drivers are split into patches, so that each patch represents
+a single feature. If the driver code is already developed, it may be challenging
+to split. However, there are many benefits to doing so.
+
+Splitting patches makes it easier to understand a feature and clarifies the
+list of components/files that compose that specific feature.
+
+It also makes it possible to trace from the source code to the feature
+it implements, and helps users to understand the reasoning and intention
+of the implementation. This kind of tracing is regularly required
+for defect resolution and refactoring.
+
+Another benefit of splitting the codebase per feature is that it highlights
+unnecessary or irrelevant code, as any code not belonging to any specific
+feature becomes obvious.
+
+Git bisect is also more useful if patches are split per feature.
+
+The split should focus on logical features
+rather than file-based divisions.
+
+Each patch in the series must compile without errors
+and should maintain functionality.
+
+Enable the build as early as possible within the series
+to facilitate continuous integration and testing.
+This approach ensures a clear and manageable development process.
+
+We suggest splitting patches following this approach:
+
+* Each patch should be organized logically as a new feature.
+* Run test tools per patch (more on this below).
+* Update relevant documentation and .ini file with each patch.
+
+
+We suggest ordering the patch series as follows.
+
+The first patch should contain the driver's skeleton, which should include:
+
+* Maintainer's file update
+* Driver documentation
+* The document must link to the official product documentation web page
+* The new document should be added into the index (doc/guides/index.rst)
+* Initial .ini file
+* Release notes announcement for the new driver
+
+
+The next patches should include basic device features.
+The following is a suggested sample list to

Re: [PATCH v3] doc: add new driver guidelines

2024-09-11 Thread Nandini Persad
Hi Ferruh,

I will work with Stephen on this. For the tone of the list, I believe we
can try different ways to make the tone more friendly, while still being
concise.

What about something like this:
- Avoid including unused headers (process-iwyu.py).
- Keep compiler warnings enabled in the build file.
- Instead of using #ifdef with driver-specific macros, opt for runtime
configuration.
- Document device parameters in the driver guide.
- Make device operations structs 'const'.
- Use dynamic logging.
- Skip DPDK version checks (RTE_VERSION_NUM) in the upstream code.
- Add SPDX license tags and copyright notices to all files.
- Don’t introduce public APIs directly from the driver.

It's slightly more friendly.

Let me know what you think, I'm open to trying another way.

On Tue, Sep 10, 2024 at 5:16 PM Ferruh Yigit  wrote:

> On 9/10/2024 3:58 PM, Nandini Persad wrote:
> > This document was created to assist contributors
> > in creating DPDK drivers, providing suggestions
> > and guidelines for how to upstream effectively.
> >
>
> There are minor differences in this v3 and v2, isn't this version on top
> of v2, can those changes be from Stephen?
>
> <...>
>
> > +
> > +Additional Suggestions
> > +--
> > +
> > +* We recommend using DPDK macros instead of inventing new ones in the
> PMD.
> > +* Do not include unused headers (process-iwyu.py).
> > +* Do not disable compiler warning in the build file.
> > +* Do not use #ifdef with driver-defined macros, instead prefer runtime
> configuration.
> > +* Document device parameters in the driver guide.
> > +* Make device operations struct 'const'.
> > +* Use dynamic logging.
> > +* Do not use DPDK version checks (via RTE_VERSION_NUM) in the upstream
> code.
> > +* Be sure to have SPDX license tags and copyright notice on each side.
> > +* Do not introduce public Apis directly from the driver.
> >
>
> API (Application Programming Interface) is an acronym and should be all
> uppercase, like 'APIs'.
>
> Overall the language in this list is imperative, I think it helps to
> make it simple, but I am not sure about the tone, I wonder if we can do
> better, do you have any suggestions?
>
>
> > +
> > +
> > +Dependencies
> > +
> > +
> > +At times, drivers may have dependencies to external software.
> > +For driver dependencies, same DPDK rules for dependencies applies.
> > +Dependencies should be publicly and freely available to
> > +upstream the driver.
> > +
> > +
> > +Test Tools
> > +--
> > +
> > +Per patch in a patch series, be sure to use the proper test tools.
> > +
> > +* checkpatches.sh
> > +* check-git-log.sh
> > +* check-meson.py
> > +* check-doc-vs-code.sh
> > +* check-spdx-tag.sh
> >
>
> `check-spdx-tag.sh` seems moved in v2 to "additional suggestions", I am
> for keeping it here, as "additional suggestions" are more things to take
> into consideration during design/development, above are actual scripts
> that we can use to verify code.
>
> And long term intention was to move this "tools to run list" to a more
> generic documentation, as these are not really specific to new PMD
> guide, but "additional suggestions" will stay in this document.
>
>


Re: [PATCH v3] doc: add new driver guidelines

2024-09-12 Thread Nandini Persad
I like the separation. I can include it in V4 and see, it would be helpful to 
know if it’s more or less confusing that way.

For the prompt before each list, can we say something like “Avoid doing the 
following” and “Suggested actions” or something a little better grammatically. 
We could also just say “Avoid”.

From: Ferruh Yigit 
Sent: Thursday, September 12, 2024 1:13:33 AM
To: Nandini Persad 
Cc: dev@dpdk.org ; Thomas Monjalon ; Stephen 
Hemminger 
Subject: Re: [PATCH v3] doc: add new driver guidelines

On 9/11/2024 5:04 PM, Nandini Persad wrote:
> Hi Ferruh,
>
> I will work with Stephen on this. For the tone of the list, I believe we
> can try different ways to make the tone more friendly, while still being
> concise.
>
> What about something like this:
> # Avoid including unused headers (process-iwyu.py).
> # Keep compiler warnings enabled in the build file.
> # Instead of using #ifdef with driver-specific macros, opt for runtime
> configuration.
> # Document device parameters in the driver guide.
> # Make device operations structs 'const'.
> # Use dynamic logging.
> # Skip DPDK version checks (RTE_VERSION_NUM) in the upstream code.
> # Add SPDX license tags and copyright notices to all files.
> # Don’t introduce public APIs directly from the driver.
>
> It's slightly more friendly.
>
> Let me know what you think, I'm open to trying another way.
>

I think above is better.

Another option can be separating it as "Do" and "Do Not" list, as
following, do you think does it help, or makes it harder to understand?

Avoid doing:
- Using PMD specific macros when DPDK ones exist
- Including unused headers
- Disable compiler warnings for driver
- #ifdef with driver-defined macros
- DPDK version checks (via RTE_VERSION_NUM) in the upstream code
- Public APIs directly from the driver

Suggested to do:
- Runtime configuration when applicable
- Document device parameters in the driver guide
- Make device operations struct 'const'
- Dynamic logging
- SPDX license tags and copyright notice on each file


> On Tue, Sep 10, 2024 at 5:16 PM Ferruh Yigit  <mailto:ferruh.yi...@amd.com>> wrote:
>
> On 9/10/2024 3:58 PM, Nandini Persad wrote:
> > This document was created to assist contributors
> > in creating DPDK drivers, providing suggestions
> > and guidelines for how to upstream effectively.
> >
>
> There are minor differences in this v3 and v2, isn't this version on top
> of v2, can those changes be from Stephen?
>
> <...>
>
> > +
> > +Additional Suggestions
> > +--
> > +
> > +* We recommend using DPDK macros instead of inventing new ones in
> the PMD.
> > +* Do not include unused headers (process-iwyu.py).
> > +* Do not disable compiler warning in the build file.
> > +* Do not use #ifdef with driver-defined macros, instead prefer
> runtime configuration.
> > +* Document device parameters in the driver guide.
> > +* Make device operations struct 'const'.
> > +* Use dynamic logging.
> > +* Do not use DPDK version checks (via RTE_VERSION_NUM) in the
> upstream code.
> > +* Be sure to have SPDX license tags and copyright notice on each
> side.
> > +* Do not introduce public Apis directly from the driver.
> >
>
> API (Application Programming Interface) is an acronym and should be all
> uppercase, like 'APIs'.
>
> Overall the language in this list is imperative, I think it helps to
> make it simple, but I am not sure about the tone, I wonder if we can do
> better, do you have any suggestions?
>
>
> > +
> > +
> > +Dependencies
> > +
> > +
> > +At times, drivers may have dependencies to external software.
> > +For driver dependencies, same DPDK rules for dependencies applies.
> > +Dependencies should be publicly and freely available to
> > +upstream the driver.
> > +
> > +
> > +Test Tools
> > +--
> > +
> > +Per patch in a patch series, be sure to use the proper test tools.
> > +
> > +* checkpatches.sh
> > +* check-git-log.sh
> > +* check-meson.py
> > +* check-doc-vs-code.sh
> > +* check-spdx-tag.sh
> >
>
> `check-spdx-tag.sh` seems moved in v2 to "additional suggestions", I am
> for keeping it here, as "additional suggestions" are more things to take
> into consideration during design/development, above are actual scripts
> that we can use to verify code.
>
> And long term intention was to move this "tools to run list" to a more
> generic documentation, as these are not really specific to new PMD
> guide, but "additional suggestions" will stay in this document.
>
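For readers wanting to try the check scripts discussed above, a per-series run might look like the sketch below. It assumes a DPDK source checkout with the standard devtools/ directory; the loop deliberately skips any script that is not present, and flag spellings can vary between releases (check each script's -h output).

```shell
#!/bin/sh
# Sketch: run the usual DPDK patch-check scripts over the last 5
# commits of a series, skipping tools that are absent so the loop
# is safe to run outside a DPDK checkout.
for tool in checkpatches.sh check-git-log.sh check-spdx-tag.sh; do
	if [ -x "devtools/$tool" ]; then
		"./devtools/$tool" -n5
	else
		echo "skipping $tool (no devtools/ here)"
	fi
done
# check-meson.py and check-doc-vs-code.sh take no commit range and
# are typically run once per series rather than per patch.
```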



Re: [PATCH v3] doc: add new driver guidelines

2024-09-12 Thread Nandini Persad
Excellent. Will send the update accordingly.

From: Ferruh Yigit 
Sent: Thursday, September 12, 2024 6:37:35 AM
To: Nandini Persad 
Cc: dev@dpdk.org ; Thomas Monjalon ; Stephen 
Hemminger 
Subject: Re: [PATCH v3] doc: add new driver guidelines

On 9/12/2024 2:18 PM, Nandini Persad wrote:
> I like the separation. I can include it in v4 and see; it would be
> helpful to know if it’s more or less confusing that way.
>
> For the prompt before each list, can we say something like “Avoid doing
> the following” and “Suggested actions” or something a little better
> grammatically. We could also just say “Avoid”.
>

Agree to have better header for the lists, what about:
"Avoid doing the following" and
"Remember to do the following"

Or we can go with whatever you think more convenient.


[PATCH] doc: reword sample app guides

2024-10-09 Thread Nandini Persad
I have reviewed these sections for grammar/clarity
and made small modifications to the formatting of sections
to adhere to a template which will create uniformity
in the sample application user guides overall.

Signed-off-by: Nandini Persad 
Acked-by: Chengwen Feng 
---
 doc/guides/sample_app_ug/cmd_line.rst | 24 
 doc/guides/sample_app_ug/dma.rst  | 38 ++---
 doc/guides/sample_app_ug/ethtool.rst  | 13 +++--
 doc/guides/sample_app_ug/flow_filtering.rst   | 50 +
 doc/guides/sample_app_ug/hello_world.rst  |  6 +-
 doc/guides/sample_app_ug/intro.rst| 20 +++
 doc/guides/sample_app_ug/ip_frag.rst  | 11 ++--
 doc/guides/sample_app_ug/ip_reassembly.rst| 38 +++--
 doc/guides/sample_app_ug/ipv4_multicast.rst   | 39 ++---
 doc/guides/sample_app_ug/keep_alive.rst   | 10 ++--
 .../sample_app_ug/l2_forward_crypto.rst   | 29 +-
 .../sample_app_ug/l2_forward_job_stats.rst| 56 +++
 doc/guides/sample_app_ug/rxtx_callbacks.rst   | 21 ---
 doc/guides/sample_app_ug/skeleton.rst | 30 +-
 14 files changed, 212 insertions(+), 173 deletions(-)

diff --git a/doc/guides/sample_app_ug/cmd_line.rst 
b/doc/guides/sample_app_ug/cmd_line.rst
index 6655b1e5a7..72ce5fc27f 100644
--- a/doc/guides/sample_app_ug/cmd_line.rst
+++ b/doc/guides/sample_app_ug/cmd_line.rst
@@ -13,7 +13,7 @@ Overview
 The Command Line sample application is a simple application that
 demonstrates the use of the command line interface in the DPDK.
 This application is a readline-like interface that can be used
-to debug a DPDK application, in a Linux* application environment.
+to debug a DPDK application in a Linux* application environment.
 
 .. note::
 
@@ -22,10 +22,13 @@ to debug a DPDK application, in a Linux* application 
environment.
 See also the "rte_cmdline library should not be used in production code 
due to limited testing" item
 in the "Known Issues" section of the Release Notes.
 
-The Command Line sample application supports some of the features of the GNU 
readline library such as, completion,
-cut/paste and some other special bindings that make configuration and debug 
faster and easier.
+The Command Line sample application supports some of the features of
+the GNU readline library such as completion, cut/paste and other
+special bindings that make configuration and debug faster and easier.
+
+The application shows how the rte_cmdline application can be extended
+to handle a list of objects.
 
-The application shows how the rte_cmdline application can be extended to 
handle a list of objects.
 There are three simple commands:
 
 *   add obj_name IP: Add a new object with an IP/IPv6 address associated to it.
@@ -48,7 +51,7 @@ The application is located in the ``cmd_line`` sub-directory.
 Running the Application
 ---
 
-To run the application in linux environment, issue the following command:
+To run the application in a linux environment, issue the following command:
 
 .. code-block:: console
 
@@ -60,7 +63,7 @@ and the Environment Abstraction Layer (EAL) options.
 Explanation
 ---
 
-The following sections provide some explanation of the code.
+The following sections provide explanation of the code.
 
 EAL Initialization and cmdline Start
 
@@ -73,7 +76,7 @@ This is achieved as follows:
 :start-after: Initialization of the Environment Abstraction Layer (EAL). 8<
 :end-before: >8 End of initialization of Environment Abstraction Layer 
(EAL).
 
-Then, a new command line object is created and started to interact with the 
user through the console:
+Then, a new command line object is created and starts to interact with the 
user through the console:
 
 .. literalinclude:: ../../../examples/cmdline/main.c
 :language: c
@@ -81,13 +84,14 @@ Then, a new command line object is created and started to 
interact with the user
 :end-before: >8 End of creating a new command line object.
 :dedent: 1
 
-The cmd line_interact() function returns when the user types **Ctrl-d** and in 
this case,
-the application exits.
+The cmd line_interact() function returns when the user types **Ctrl-d** and,
+in this case, the application exits.
 
 Defining a cmdline Context
 ~~
 
-A cmdline context is a list of commands that are listed in a NULL-terminated 
table, for example:
+A cmdline context is a list of commands that are listed in a NULL-terminated 
table.
+For example:
 
 .. literalinclude:: ../../../examples/cmdline/commands.c
 :language: c
diff --git a/doc/guides/sample_app_ug/dma.rst b/doc/guides/sample_app_ug/dma.rst
index 2765895564..701d09d1b3 100644
--- a/doc/guides/sample_app_ug/dma.rst
+++ b/doc/guides/sample_app_ug/dma.rst
@@ -10,10 +10,10 @@ Overview
 
 
 This sample is intended as a demonstration of the basic components of a DPDK
-forwardi

[PATCH v5] doc: add new driver guidelines

2024-10-04 Thread Nandini Persad
This document was created to assist contributors in creating DPDK drivers
and provides suggestions and guidelines on how to upstream effectively.

Signed-off-by: Ferruh Yigit 
Signed-off-by: Nandini Persad 
Reviewed-by: Stephen Hemminger 
Acked-by: Chengwen Feng 
---
 doc/guides/contributing/index.rst  |   1 +
 doc/guides/contributing/new_driver.rst | 213 +
 2 files changed, 214 insertions(+)
 create mode 100644 doc/guides/contributing/new_driver.rst

diff --git a/doc/guides/contributing/index.rst 
b/doc/guides/contributing/index.rst
index dcb9b1fbf0..7fc6511361 100644
--- a/doc/guides/contributing/index.rst
+++ b/doc/guides/contributing/index.rst
@@ -15,6 +15,7 @@ Contributor's Guidelines
 documentation
 unit_test
 new_library
+new_driver
 patches
 vulnerability
 stable
diff --git a/doc/guides/contributing/new_driver.rst 
b/doc/guides/contributing/new_driver.rst
new file mode 100644
index 00..9e88de5445
--- /dev/null
+++ b/doc/guides/contributing/new_driver.rst
@@ -0,0 +1,213 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright 2024 The DPDK contributors
+
+
+Adding a New Driver
+===
+
+The DPDK project continuously grows its ecosystem by adding support for new 
devices.
+This document is designed to assist contributors in creating DPDK
+drivers, also known as Poll Mode Drivers (PMD).
+
+By having public support for a device, we can ensure accessibility across 
various
+operating systems and guarantee community maintenance in future releases.
+If a new device is similar to a device already supported by an existing driver,
+it is more efficient to update the existing driver.
+
+Here are our best practice recommendations for creating a new driver.
+
+
+Early Engagement with the Community
+---
+
+When creating a new driver, we highly recommend engaging with the DPDK
+community early instead of waiting for the work to mature.
+
+These public discussions help align development of your driver with DPDK 
expectations.
+You may submit a roadmap before the release to inform the community of
+your plans. Additionally, sending a Request for Comments (RFC) early in
+the release cycle, or even during the prior release, is advisable.
+
+DPDK is mainly consumed via Long Term Support (LTS) releases.
+It is common to target a new PMD to a LTS release. For this, it is
+suggested to start upstreaming at least one release before a LTS release.
+
+
+Progressive Work
+
+
+To continually progress your work, we recommend planning for incremental
+upstreaming across multiple patch series or releases.
+
+It's important to prioritize quality of the driver over upstreaming
+in a single release or single patch series.
+
+
+Finalizing
+--
+
+Once the driver has been upstreamed, the author has
+a responsibility to the community to maintain it.
+
+This includes the public test report. Authors must send a public
+test report after the first upstreaming of the PMD. The same
+public test procedure may be reproduced regularly per release.
+
+After the PMD is upstreamed, the author should send a patch
+to update the website with the name of the new PMD and supported devices
+via the DPDK mailing list.
+
+For more information about the role of maintainers, see :doc:`patches`.
+
+
+Splitting into Patches
+--
+
+We recommend that drivers are split into patches, so that each patch represents
+a single feature. If the driver code is already developed, it may be 
challenging
+to split. However, there are many benefits to doing so.
+
+Splitting patches makes it easier to understand a feature and clarifies the
+list of components/files that compose that specific feature.
+
+It also enables the ability to track from the source code to the feature
+it is enabled for and helps users to understand the reasoning and intention
+of implementation. This kind of tracing is regularly required
+for defect resolution and refactoring.
+
+Another benefit of splitting the codebase per feature is that it highlights
+unnecessary or irrelevant code, as any code not belonging to any specific
+feature becomes obvious.
+
+Git bisect is also more useful if patches are split per feature.
+
+The split should focus on logical features
+rather than file-based divisions.
+
+Each patch in the series must compile without errors
+and should maintain functionality.
+
+Enable the build as early as possible within the series
+to facilitate continuous integration and testing.
+This approach ensures a clear and manageable development process.
+
+We suggest splitting patches following this approach:
+
+* Each patch should be organized logically as a new feature.
+* Run test tools per patch (See :ref:`tool_list`).
+* Update relevant documentation and .ini file with each patch.
+
+
+The suggested order of patches within the series is as follows.
+
+The first patch should have the driver's skeleton which

[PATCH v6] doc: add new driver guidelines

2024-10-06 Thread Nandini Persad
This document was created to assist contributors in creating DPDK drivers
and provides suggestions and guidelines on how to upstream effectively.

Signed-off-by: Ferruh Yigit 
Signed-off-by: Nandini Persad 
Reviewed-by: Stephen Hemminger 
Acked-by: Chengwen Feng 
---
 doc/guides/contributing/index.rst  |   1 +
 doc/guides/contributing/new_driver.rst | 213 +
 2 files changed, 214 insertions(+)
 create mode 100644 doc/guides/contributing/new_driver.rst

diff --git a/doc/guides/contributing/index.rst 
b/doc/guides/contributing/index.rst
index dcb9b1fbf0..7fc6511361 100644
--- a/doc/guides/contributing/index.rst
+++ b/doc/guides/contributing/index.rst
@@ -15,6 +15,7 @@ Contributor's Guidelines
 documentation
 unit_test
 new_library
+new_driver
 patches
 vulnerability
 stable
diff --git a/doc/guides/contributing/new_driver.rst 
b/doc/guides/contributing/new_driver.rst
new file mode 100644
index 00..a4edbc05d6
--- /dev/null
+++ b/doc/guides/contributing/new_driver.rst
@@ -0,0 +1,213 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright 2024 The DPDK contributors
+
+
+Adding a New Driver
+===
+
+The DPDK project continuously grows its ecosystem by adding support for new 
devices.
+This document is designed to assist contributors in creating DPDK
+drivers, also known as Poll Mode Drivers (PMD).
+
+By having public support for a device, we can ensure accessibility across 
various
+operating systems and guarantee community maintenance in future releases.
+If a new device is similar to a device already supported by an existing driver,
+it is more efficient to update the existing driver.
+
+Here are our best practice recommendations for creating a new driver.
+
+
+Early Engagement with the Community
+---
+
+When creating a new driver, we highly recommend engaging with the DPDK
+community early instead of waiting for the work to mature.
+
+These public discussions help align development of your driver with DPDK 
expectations.
+You may submit a roadmap before the release to inform the community of
+your plans. Additionally, sending a Request for Comments (RFC) early in
+the release cycle, or even during the prior release, is advisable.
+
+DPDK is mainly consumed via Long Term Support (LTS) releases.
+It is common to target a new PMD to a LTS release. For this, it is
+suggested to start upstreaming at least one release before a LTS release.
+
+
+Progressive Work
+
+
+To continually progress your work, we recommend planning for incremental
+upstreaming across multiple patch series or releases.
+
+It's important to prioritize quality of the driver over upstreaming
+in a single release or single patch series.
+
+
+Finalizing
+--
+
+Once the driver has been upstreamed, the author has
+a responsibility to the community to maintain it.
+
+This includes the public test report. Authors must send a public
+test report after the first upstreaming of the PMD. The same
+public test procedure may be reproduced regularly per release.
+
+After the PMD is upstreamed, the author should send a patch
+to update the website with the name of the new PMD and supported devices
+via the DPDK mailing list.
+
+For more information about the role of maintainers, see :doc:`patches`.
+
+
+Splitting into Patches
+--
+
+We recommend that drivers are split into patches, so that each patch represents
+a single feature. If the driver code is already developed, it may be 
challenging
+to split. However, there are many benefits to doing so.
+
+Splitting patches makes it easier to understand a feature and clarifies the
+list of components/files that compose that specific feature.
+
+It also enables the ability to track from the source code to the feature
+it is enabled for and helps users to understand the reasoning and intention
+of implementation. This kind of tracing is regularly required
+for defect resolution and refactoring.
+
+Another benefit of splitting the codebase per feature is that it highlights
+unnecessary or irrelevant code, as any code not belonging to any specific
+feature becomes obvious.
+
+Git bisect is also more useful if patches are split per feature.
+
+The split should focus on logical features
+rather than file-based divisions.
+
+Each patch in the series must compile without errors
+and should maintain functionality.
+
+Enable the build as early as possible within the series
+to facilitate continuous integration and testing.
+This approach ensures a clear and manageable development process.
+
+We suggest splitting patches following this approach:
+
+* Each patch should be organized logically as a new feature.
+* Run test tools per patch (See :ref:`tool_list`).
+* Update relevant documentation and .ini file with each patch.
+
+
+The suggested order of patches within the series is as follows.
+
+The first patch should have the driver's skeleton which

[PATCH v2] doc: reword sample app guides

2024-10-06 Thread Nandini Persad
I have reviewed these sections for grammar/clarity
and made small modifications to the formatting of sections
to adhere to a template which will create uniformity
in the sample application user guides overall.

Signed-off-by: Nandini Persad 
---
 .../prog_guide/switch_representation.rst  | 18 +++---
 .../traffic_metering_and_policing.rst |  4 +-
 doc/guides/sample_app_ug/cmd_line.rst | 24 
 doc/guides/sample_app_ug/dma.rst  | 38 ++---
 doc/guides/sample_app_ug/ethtool.rst  | 13 +++--
 doc/guides/sample_app_ug/flow_filtering.rst   | 50 +
 doc/guides/sample_app_ug/hello_world.rst  |  6 +-
 doc/guides/sample_app_ug/intro.rst| 20 +++
 doc/guides/sample_app_ug/ip_frag.rst  | 11 ++--
 doc/guides/sample_app_ug/ip_reassembly.rst| 38 +++--
 doc/guides/sample_app_ug/ipv4_multicast.rst   | 39 ++---
 doc/guides/sample_app_ug/keep_alive.rst   | 10 ++--
 .../sample_app_ug/l2_forward_crypto.rst   | 29 +-
 .../sample_app_ug/l2_forward_job_stats.rst| 56 +++
 doc/guides/sample_app_ug/rxtx_callbacks.rst   | 21 ---
 doc/guides/sample_app_ug/skeleton.rst | 30 +-
 16 files changed, 223 insertions(+), 184 deletions(-)

diff --git a/doc/guides/prog_guide/switch_representation.rst 
b/doc/guides/prog_guide/switch_representation.rst
index 46e0ca85a5..98c2830b03 100644
--- a/doc/guides/prog_guide/switch_representation.rst
+++ b/doc/guides/prog_guide/switch_representation.rst
@@ -15,8 +15,8 @@ Network adapters with multiple physical ports and/or SR-IOV 
capabilities
 usually support the offload of traffic steering rules between their virtual
 functions (VFs), sub functions (SFs), physical functions (PFs) and ports.
 
-Like for standard Ethernet switches, this involves a combination of
-automatic MAC learning and manual configuration. For most purposes it is
+For standard Ethernet switches, this involves a combination of
+automatic MAC learning and manual configuration. For most purposes, it is
 managed by the host system and fully transparent to users and applications.
 
 On the other hand, applications typically found on hypervisors that process
@@ -26,23 +26,23 @@ according on their own criteria.
 Without a standard software interface to manage traffic steering rules
 between VFs, SFs, PFs and the various physical ports of a given device,
 applications cannot take advantage of these offloads; software processing is
-mandatory even for traffic which ends up re-injected into the device it
+mandatory, even for traffic which ends up re-injected into the device it
 originates from.
 
 This document describes how such steering rules can be configured through
 the DPDK flow API (**rte_flow**), with emphasis on the SR-IOV use case
-(PF/VF steering) using a single physical port for clarity, however the same
+(PF/VF steering) using a single physical port for clarity. However, the same
 logic applies to any number of ports without necessarily involving SR-IOV.
 
 Sub Function
 
-Besides SR-IOV, Sub function is a portion of the PCI device, a SF netdev
-has its own dedicated queues(txq, rxq). A SF netdev supports E-Switch
+Besides SR-IOV, Sub function is a portion of the PCI device and a SF netdev
+has its own dedicated queues (txq, rxq). A SF netdev supports E-Switch
 representation offload similar to existing PF and VF representors.
 A SF shares PCI level resources with other SFs and/or with its parent PCI
 function.
 
-Sub function is created on-demand, coexists with VFs. Number of SFs is
+Sub function is created on-demand and coexists with VFs. The number of SFs is
 limited by hardware resources.
 
 Port Representors
@@ -80,7 +80,7 @@ thought as a software "patch panel" front-end for 
applications.
-a pci:dbdf,representor=[pf[0-1],pf2vf[0-2],pf3[3,5]]
 
 - As virtual devices, they may be more limited than their physical
-  counterparts, for instance by exposing only a subset of device
+  counterparts. For instance, by exposing only a subset of device
   configuration callbacks and/or by not necessarily having Rx/Tx capability.
 
 - Among other things, they can be used to assign MAC addresses to the
@@ -100,7 +100,7 @@ thought as a software "patch panel" front-end for 
applications.
 
 - The device or group relationship of ports can be discovered using the
   switch ``domain_id`` field within the devices switch information structure. 
By
-  default the switch ``domain_id`` of a port will be
+  default, the switch ``domain_id`` of a port will be
   ``RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID`` to indicate that the port doesn't
   support the concept of a switch domain, but ports which do support the 
concept
   will be allocated a unique switch ``domain_id``, ports within the same switch
diff --git a/doc/guides/prog_guide/traffic_metering_and_policing.rst 
b/doc/guides/prog_guide/traffic_metering_and_policing.rst
index 2e8f419f97

[PATCH] doc: reword sample app guides

2024-10-06 Thread Nandini Persad
I have reviewed these sections for grammar/clarity
and made small modifications to the formatting of sections
to adhere to a template which will create uniformity
in the sample application user guides overall.

Signed-off-by: Nandini Persad 
---
 .../prog_guide/switch_representation.rst  | 18 +++---
 .../traffic_metering_and_policing.rst |  4 +-
 doc/guides/sample_app_ug/cmd_line.rst | 24 
 doc/guides/sample_app_ug/dma.rst  | 38 ++---
 doc/guides/sample_app_ug/ethtool.rst  | 13 +++--
 doc/guides/sample_app_ug/flow_filtering.rst   | 50 +
 doc/guides/sample_app_ug/hello_world.rst  |  6 +-
 doc/guides/sample_app_ug/intro.rst| 20 +++
 doc/guides/sample_app_ug/ip_frag.rst  | 11 ++--
 doc/guides/sample_app_ug/ip_reassembly.rst| 38 +++--
 doc/guides/sample_app_ug/ipv4_multicast.rst   | 39 ++---
 doc/guides/sample_app_ug/keep_alive.rst   | 10 ++--
 .../sample_app_ug/l2_forward_crypto.rst   | 29 +-
 .../sample_app_ug/l2_forward_job_stats.rst| 56 +++
 doc/guides/sample_app_ug/rxtx_callbacks.rst   | 21 ---
 doc/guides/sample_app_ug/skeleton.rst | 30 +-
 16 files changed, 223 insertions(+), 184 deletions(-)

diff --git a/doc/guides/prog_guide/switch_representation.rst 
b/doc/guides/prog_guide/switch_representation.rst
index 46e0ca85a5..98c2830b03 100644
--- a/doc/guides/prog_guide/switch_representation.rst
+++ b/doc/guides/prog_guide/switch_representation.rst
@@ -15,8 +15,8 @@ Network adapters with multiple physical ports and/or SR-IOV 
capabilities
 usually support the offload of traffic steering rules between their virtual
 functions (VFs), sub functions (SFs), physical functions (PFs) and ports.
 
-Like for standard Ethernet switches, this involves a combination of
-automatic MAC learning and manual configuration. For most purposes it is
+For standard Ethernet switches, this involves a combination of
+automatic MAC learning and manual configuration. For most purposes, it is
 managed by the host system and fully transparent to users and applications.
 
 On the other hand, applications typically found on hypervisors that process
@@ -26,23 +26,23 @@ according on their own criteria.
 Without a standard software interface to manage traffic steering rules
 between VFs, SFs, PFs and the various physical ports of a given device,
 applications cannot take advantage of these offloads; software processing is
-mandatory even for traffic which ends up re-injected into the device it
+mandatory, even for traffic which ends up re-injected into the device it
 originates from.
 
 This document describes how such steering rules can be configured through
 the DPDK flow API (**rte_flow**), with emphasis on the SR-IOV use case
-(PF/VF steering) using a single physical port for clarity, however the same
+(PF/VF steering) using a single physical port for clarity. However, the same
 logic applies to any number of ports without necessarily involving SR-IOV.
 
 Sub Function
 
-Besides SR-IOV, Sub function is a portion of the PCI device, a SF netdev
-has its own dedicated queues(txq, rxq). A SF netdev supports E-Switch
+Besides SR-IOV, Sub function is a portion of the PCI device and a SF netdev
+has its own dedicated queues (txq, rxq). A SF netdev supports E-Switch
 representation offload similar to existing PF and VF representors.
 A SF shares PCI level resources with other SFs and/or with its parent PCI
 function.
 
-Sub function is created on-demand, coexists with VFs. Number of SFs is
+Sub function is created on-demand and coexists with VFs. The number of SFs is
 limited by hardware resources.
 
 Port Representors
@@ -80,7 +80,7 @@ thought as a software "patch panel" front-end for 
applications.
-a pci:dbdf,representor=[pf[0-1],pf2vf[0-2],pf3[3,5]]
 
 - As virtual devices, they may be more limited than their physical
-  counterparts, for instance by exposing only a subset of device
+  counterparts. For instance, by exposing only a subset of device
   configuration callbacks and/or by not necessarily having Rx/Tx capability.
 
 - Among other things, they can be used to assign MAC addresses to the
@@ -100,7 +100,7 @@ thought as a software "patch panel" front-end for 
applications.
 
 - The device or group relationship of ports can be discovered using the
   switch ``domain_id`` field within the devices switch information structure. 
By
-  default the switch ``domain_id`` of a port will be
+  default, the switch ``domain_id`` of a port will be
   ``RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID`` to indicate that the port doesn't
   support the concept of a switch domain, but ports which do support the 
concept
   will be allocated a unique switch ``domain_id``, ports within the same switch
diff --git a/doc/guides/prog_guide/traffic_metering_and_policing.rst 
b/doc/guides/prog_guide/traffic_metering_and_policing.rst
index 2e8f419f97

[PATCH] doc: reword sample app guides

2024-10-21 Thread Nandini Persad
I revised these sections mostly for grammar and clarity.

Signed-off-by: Nandini Persad 
---
 doc/guides/sample_app_ug/l2_forward_cat.rst   | 24 +++--
 doc/guides/sample_app_ug/l2_forward_event.rst | 96 +--
 .../sample_app_ug/l2_forward_macsec.rst   |  8 +-
 .../sample_app_ug/l2_forward_real_virtual.rst | 49 +-
 doc/guides/sample_app_ug/l3_forward.rst   | 29 +++---
 doc/guides/sample_app_ug/l3_forward_graph.rst | 12 +--
 .../sample_app_ug/l3_forward_power_man.rst| 10 +-
 doc/guides/sample_app_ug/link_status_intr.rst |  9 +-
 doc/guides/sample_app_ug/server_node_efd.rst  | 12 +--
 9 files changed, 128 insertions(+), 121 deletions(-)

diff --git a/doc/guides/sample_app_ug/l2_forward_cat.rst 
b/doc/guides/sample_app_ug/l2_forward_cat.rst
index 51621b692f..ab38d99821 100644
--- a/doc/guides/sample_app_ug/l2_forward_cat.rst
+++ b/doc/guides/sample_app_ug/l2_forward_cat.rst
@@ -4,19 +4,23 @@
 L2 Forwarding Sample Application with Cache Allocation Technology (CAT)
 ===
 
-Basic Forwarding sample application is a simple *skeleton* example of
+The Basic Forwarding sample application is a *skeleton* example of
 a forwarding application. It has been extended to make use of CAT via extended
 command line options and linking against the libpqos library.
 
-It is intended as a demonstration of the basic components of a DPDK forwarding
-application and use of the libpqos library to program CAT.
-For more detailed implementations see the L2 and L3 forwarding
+Overview
+
+
+This app is intended as a demonstration of the basic components of a DPDK 
forwarding
+application and use of the libpqos library to program CAT.
+For more detailed implementations, see the L2 and L3 forwarding
 sample applications.
 
 CAT and Code Data Prioritization (CDP) features allow management of the CPU's
 last level cache. CAT introduces classes of service (COS) that are essentially
 bitmasks. In current CAT implementations, a bit in a COS bitmask corresponds to
 one cache way in last level cache.
+
 A CPU core is always assigned to one of the CAT classes.
 By programming CPU core assignment and COS bitmasks, applications can be given
 exclusive, shared, or mixed access to the CPU's last level cache.
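The way COS bitmasks partition the last level cache can be shown with a short, self-contained sketch. This is purely illustrative: the class masks and the 12-way cache size below are hypothetical, and real configuration is done in C through libpqos, not Python.

```python
# Illustrative sketch of CAT classes of service (COS) as cache-way bitmasks.
# A set bit grants that class fill access to the corresponding cache way.

LLC_WAYS = 12  # assume a hypothetical 12-way last level cache

cos_masks = {
    0: 0xFFF,  # COS 0: default after reset -- may fill into all ways
    1: 0x00F,  # COS 1: ways 0-3 only
    2: 0xFF0,  # COS 2: ways 4-11 (disjoint from COS 1 -> exclusive slices)
}

def ways(mask):
    """Return the list of cache ways a COS bitmask grants access to."""
    return [w for w in range(LLC_WAYS) if mask & (1 << w)]

# COS 1 and COS 2 do not overlap, so cores assigned to them get
# exclusive portions of the cache; overlapping masks would be shared.
assert set(ways(cos_masks[1])).isdisjoint(ways(cos_masks[2]))
assert ways(cos_masks[1]) == [0, 1, 2, 3]
```

A core assigned to COS 1 here could only fill into ways 0-3, which is the "exclusive access" case described above.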
@@ -29,7 +33,7 @@ By default, after reset, all CPU cores are assigned to COS 0 
and all classes
 are programmed to allow fill into all cache ways.
 CDP is off by default.
 
-For more information about CAT please see:
+For more information about CAT, please see:
 
 * https://github.com/01org/intel-cmt-cat
 
@@ -57,7 +61,7 @@ To compile the application, export the path to PQoS lib:
export CFLAGS=-I/path/to/intel-cmt-cat/include
export LDFLAGS=-L/path/to/intel-cmt-cat/lib
 
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
 
 The application is located in the ``l2fwd-cat`` sub-directory.
 
@@ -71,13 +75,13 @@ To run the example in a ``linux`` environment and enable 
CAT on cpus 0-2:
 
 .//examples/dpdk-l2fwd-cat -l 1 -n 4 -- --l3ca="0x3@(0-2)"
 
-or to enable CAT and CDP on cpus 1,3:
+Or to enable CAT and CDP on cpus 1,3:
 
 .. code-block:: console
 
 .//examples/dpdk-l2fwd-cat -l 1 -n 4 -- 
--l3ca="(0x00C00,0x00300)@(1,3)"
 
-If CDP is not supported it will fail with following error message:
+If CDP is not supported, it will fail with the following error message:
 
 .. code-block:: console
 
@@ -191,8 +195,8 @@ function. The value returned is the number of parsed 
arguments:
 ``cat_init()`` is a wrapper function which parses the command, validates
 the requested parameters and configures CAT accordingly.
 
-Parsing of command line arguments is done in ``parse_args(...)``.
-libpqos is then initialized with the ``pqos_init(...)`` call. Next, libpqos is
+The parsing of command line arguments is done in ``parse_args(...)``.
+Libpqos is then initialized with the ``pqos_init(...)`` call. Next, libpqos is
 queried for system CPU information and L3CA capabilities via
 ``pqos_cap_get(...)`` and ``pqos_cap_get_type(..., PQOS_CAP_TYPE_L3CA, ...)``
 calls. When all capability and topology information is collected, the requested
diff --git a/doc/guides/sample_app_ug/l2_forward_event.rst 
b/doc/guides/sample_app_ug/l2_forward_event.rst
index 904f6f1a4a..4c9cac871e 100644
--- a/doc/guides/sample_app_ug/l2_forward_event.rst
+++ b/doc/guides/sample_app_ug/l2_forward_event.rst
@@ -6,24 +6,24 @@
 L2 Forwarding Eventdev Sample Application
 =
 
-The L2 Forwarding eventdev sample application is a simple example of packet
-processing using the Data Plane Development Kit (DPDK) to demonstrate usage of
-poll and event mode packet I/O mechanism.
+The L2 Forwarding eventdev sample application is an example of packet
+processing using the Data Plane Development Kit (DPDK) to demonstrate the 
usage of
+the poll and e

[PATCH] doc: reword sample app guides

2024-10-06 Thread Nandini Persad
I have reviewed these sections for grammar/clarity
and made small modifications to the formatting of sections
to adhere to a template which will create uniformity
in the sample application user guides overall.

Signed-off-by: Nandini Persad 
---
 .../prog_guide/switch_representation.rst  | 18 +++---
 .../traffic_metering_and_policing.rst |  4 +-
 doc/guides/sample_app_ug/cmd_line.rst | 24 
 doc/guides/sample_app_ug/dma.rst  | 38 ++---
 doc/guides/sample_app_ug/ethtool.rst  | 13 +++--
 doc/guides/sample_app_ug/flow_filtering.rst   | 50 +
 doc/guides/sample_app_ug/hello_world.rst  |  6 +-
 doc/guides/sample_app_ug/intro.rst| 20 +++
 doc/guides/sample_app_ug/ip_frag.rst  | 11 ++--
 doc/guides/sample_app_ug/ip_reassembly.rst| 38 +++--
 doc/guides/sample_app_ug/ipv4_multicast.rst   | 39 ++---
 doc/guides/sample_app_ug/keep_alive.rst   | 10 ++--
 .../sample_app_ug/l2_forward_crypto.rst   | 29 +-
 .../sample_app_ug/l2_forward_job_stats.rst| 56 +++
 doc/guides/sample_app_ug/rxtx_callbacks.rst   | 21 ---
 doc/guides/sample_app_ug/skeleton.rst | 30 +-
 16 files changed, 223 insertions(+), 184 deletions(-)

diff --git a/doc/guides/prog_guide/switch_representation.rst 
b/doc/guides/prog_guide/switch_representation.rst
index 46e0ca85a5..98c2830b03 100644
--- a/doc/guides/prog_guide/switch_representation.rst
+++ b/doc/guides/prog_guide/switch_representation.rst
@@ -15,8 +15,8 @@ Network adapters with multiple physical ports and/or SR-IOV 
capabilities
 usually support the offload of traffic steering rules between their virtual
 functions (VFs), sub functions (SFs), physical functions (PFs) and ports.
 
-Like for standard Ethernet switches, this involves a combination of
-automatic MAC learning and manual configuration. For most purposes it is
+For standard Ethernet switches, this involves a combination of
+automatic MAC learning and manual configuration. For most purposes, it is
 managed by the host system and fully transparent to users and applications.
 
 On the other hand, applications typically found on hypervisors that process
@@ -26,23 +26,23 @@ according on their own criteria.
 Without a standard software interface to manage traffic steering rules
 between VFs, SFs, PFs and the various physical ports of a given device,
 applications cannot take advantage of these offloads; software processing is
-mandatory even for traffic which ends up re-injected into the device it
+mandatory, even for traffic that ends up re-injected into the device it
 originates from.
 
 This document describes how such steering rules can be configured through
 the DPDK flow API (**rte_flow**), with emphasis on the SR-IOV use case
-(PF/VF steering) using a single physical port for clarity, however the same
+(PF/VF steering) using a single physical port for clarity. However, the same
 logic applies to any number of ports without necessarily involving SR-IOV.
 
 Sub Function
 
-Besides SR-IOV, Sub function is a portion of the PCI device, a SF netdev
-has its own dedicated queues(txq, rxq). A SF netdev supports E-Switch
+Besides SR-IOV, Sub function is a portion of the PCI device and a SF netdev
+has its own dedicated queues (txq, rxq). A SF netdev supports E-Switch
 representation offload similar to existing PF and VF representors.
 A SF shares PCI level resources with other SFs and/or with its parent PCI
 function.
 
-Sub function is created on-demand, coexists with VFs. Number of SFs is
+Sub function is created on-demand and coexists with VFs. The number of SFs is
 limited by hardware resources.
 
 Port Representors
@@ -80,7 +80,7 @@ thought as a software "patch panel" front-end for 
applications.
-a pci:dbdf,representor=[pf[0-1],pf2vf[0-2],pf3[3,5]]
 
 - As virtual devices, they may be more limited than their physical
-  counterparts, for instance by exposing only a subset of device
+  counterparts. For instance, by exposing only a subset of device
   configuration callbacks and/or by not necessarily having Rx/Tx capability.
 
 - Among other things, they can be used to assign MAC addresses to the
@@ -100,7 +100,7 @@ thought as a software "patch panel" front-end for 
applications.
 
 - The device or group relationship of ports can be discovered using the
   switch ``domain_id`` field within the devices switch information structure. 
By
-  default the switch ``domain_id`` of a port will be
+  default, the switch ``domain_id`` of a port will be
   ``RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID`` to indicate that the port doesn't
   support the concept of a switch domain, but ports which do support the 
concept
   will be allocated a unique switch ``domain_id``, ports within the same switch
diff --git a/doc/guides/prog_guide/traffic_metering_and_policing.rst 
b/doc/guides/prog_guide/traffic_metering_and_policing.rst
index 2e8f419f97

[PATCH v2] doc: reword sample app guides

2024-10-10 Thread Nandini Persad
I have reviewed these sections for grammar/clarity
and made small modifications to the formatting of sections
to adhere to a template which will create uniformity
in the sample application user guides overall.

Signed-off-by: Nandini Persad 
Acked-by: Chengwen Feng 
---
 doc/guides/sample_app_ug/cmd_line.rst | 24 
 doc/guides/sample_app_ug/dma.rst  | 38 ++---
 doc/guides/sample_app_ug/ethtool.rst  | 15 +++--
 doc/guides/sample_app_ug/flow_filtering.rst   | 50 +
 doc/guides/sample_app_ug/hello_world.rst  |  6 +-
 doc/guides/sample_app_ug/intro.rst| 20 +++
 doc/guides/sample_app_ug/ip_frag.rst  | 11 ++--
 doc/guides/sample_app_ug/ip_reassembly.rst| 38 +++--
 doc/guides/sample_app_ug/ipv4_multicast.rst   | 39 ++---
 doc/guides/sample_app_ug/keep_alive.rst   | 10 ++--
 .../sample_app_ug/l2_forward_crypto.rst   | 29 +-
 .../sample_app_ug/l2_forward_job_stats.rst| 56 +++
 doc/guides/sample_app_ug/rxtx_callbacks.rst   | 21 ---
 doc/guides/sample_app_ug/skeleton.rst | 30 +-
 14 files changed, 213 insertions(+), 174 deletions(-)

diff --git a/doc/guides/sample_app_ug/cmd_line.rst 
b/doc/guides/sample_app_ug/cmd_line.rst
index 6655b1e5a7..72ce5fc27f 100644
--- a/doc/guides/sample_app_ug/cmd_line.rst
+++ b/doc/guides/sample_app_ug/cmd_line.rst
@@ -13,7 +13,7 @@ Overview
 The Command Line sample application is a simple application that
 demonstrates the use of the command line interface in the DPDK.
 This application is a readline-like interface that can be used
-to debug a DPDK application, in a Linux* application environment.
+to debug a DPDK application in a Linux* application environment.
 
 .. note::
 
@@ -22,10 +22,13 @@ to debug a DPDK application, in a Linux* application 
environment.
 See also the "rte_cmdline library should not be used in production code 
due to limited testing" item
 in the "Known Issues" section of the Release Notes.
 
-The Command Line sample application supports some of the features of the GNU 
readline library such as, completion,
-cut/paste and some other special bindings that make configuration and debug 
faster and easier.
+The Command Line sample application supports some of the features of
+the GNU readline library such as completion, cut/paste and other
+special bindings that make configuration and debug faster and easier.
+
+The application shows how the rte_cmdline application can be extended
+to handle a list of objects.
 
-The application shows how the rte_cmdline application can be extended to 
handle a list of objects.
 There are three simple commands:
 
 *   add obj_name IP: Add a new object with an IP/IPv6 address associated to it.
@@ -48,7 +51,7 @@ The application is located in the ``cmd_line`` sub-directory.
 Running the Application
 ---
 
-To run the application in linux environment, issue the following command:
+To run the application in a linux environment, issue the following command:
 
 .. code-block:: console
 
@@ -60,7 +63,7 @@ and the Environment Abstraction Layer (EAL) options.
 Explanation
 ---
 
-The following sections provide some explanation of the code.
+The following sections provide an explanation of the code.
 
 EAL Initialization and cmdline Start
 
@@ -73,7 +76,7 @@ This is achieved as follows:
 :start-after: Initialization of the Environment Abstraction Layer (EAL). 8<
 :end-before: >8 End of initialization of Environment Abstraction Layer 
(EAL).
 
-Then, a new command line object is created and started to interact with the 
user through the console:
+Then, a new command line object is created and starts to interact with the 
user through the console:
 
 .. literalinclude:: ../../../examples/cmdline/main.c
 :language: c
@@ -81,13 +84,14 @@ Then, a new command line object is created and started to 
interact with the user
 :end-before: >8 End of creating a new command line object.
 :dedent: 1
 
-The cmd line_interact() function returns when the user types **Ctrl-d** and in 
this case,
-the application exits.
+The cmd line_interact() function returns when the user types **Ctrl-d** and,
+in this case, the application exits.
 
 Defining a cmdline Context
 ~~
 
-A cmdline context is a list of commands that are listed in a NULL-terminated 
table, for example:
+A cmdline context is a list of commands that are listed in a NULL-terminated 
table.
+For example:
 
 .. literalinclude:: ../../../examples/cmdline/commands.c
 :language: c
diff --git a/doc/guides/sample_app_ug/dma.rst b/doc/guides/sample_app_ug/dma.rst
index 2765895564..c98646593b 100644
--- a/doc/guides/sample_app_ug/dma.rst
+++ b/doc/guides/sample_app_ug/dma.rst
@@ -10,10 +10,10 @@ Overview
 
 
 This sample is intended as a demonstration of the basic components of a DPDK
-forwardi

[PATCH v3] doc: reword sample app guides

2024-10-14 Thread Nandini Persad
I have reviewed these sections for grammar/clarity
and made small modifications to the formatting of sections
to adhere to a template which will create uniformity
in the sample application user guides overall.

Signed-off-by: Nandini Persad 
Acked-by: Chengwen Feng 
---
 doc/guides/sample_app_ug/cmd_line.rst | 24 
 doc/guides/sample_app_ug/dma.rst  | 38 ++---
 doc/guides/sample_app_ug/ethtool.rst  | 15 +++--
 doc/guides/sample_app_ug/flow_filtering.rst   | 50 +
 doc/guides/sample_app_ug/hello_world.rst  |  6 +-
 doc/guides/sample_app_ug/intro.rst| 20 +++
 doc/guides/sample_app_ug/ip_frag.rst  | 11 ++--
 doc/guides/sample_app_ug/ip_reassembly.rst| 38 +++--
 doc/guides/sample_app_ug/ipv4_multicast.rst   | 39 ++---
 doc/guides/sample_app_ug/keep_alive.rst   | 10 ++--
 .../sample_app_ug/l2_forward_crypto.rst   | 30 +-
 .../sample_app_ug/l2_forward_job_stats.rst| 56 +++
 doc/guides/sample_app_ug/rxtx_callbacks.rst   | 21 ---
 doc/guides/sample_app_ug/skeleton.rst | 30 +-
 14 files changed, 214 insertions(+), 174 deletions(-)

diff --git a/doc/guides/sample_app_ug/cmd_line.rst 
b/doc/guides/sample_app_ug/cmd_line.rst
index 6655b1e5a7..72ce5fc27f 100644
--- a/doc/guides/sample_app_ug/cmd_line.rst
+++ b/doc/guides/sample_app_ug/cmd_line.rst
@@ -13,7 +13,7 @@ Overview
 The Command Line sample application is a simple application that
 demonstrates the use of the command line interface in the DPDK.
 This application is a readline-like interface that can be used
-to debug a DPDK application, in a Linux* application environment.
+to debug a DPDK application in a Linux* application environment.
 
 .. note::
 
@@ -22,10 +22,13 @@ to debug a DPDK application, in a Linux* application 
environment.
 See also the "rte_cmdline library should not be used in production code 
due to limited testing" item
 in the "Known Issues" section of the Release Notes.
 
-The Command Line sample application supports some of the features of the GNU 
readline library such as, completion,
-cut/paste and some other special bindings that make configuration and debug 
faster and easier.
+The Command Line sample application supports some of the features of
+the GNU readline library such as completion, cut/paste and other
+special bindings that make configuration and debug faster and easier.
+
+The application shows how the rte_cmdline application can be extended
+to handle a list of objects.
 
-The application shows how the rte_cmdline application can be extended to 
handle a list of objects.
 There are three simple commands:
 
 *   add obj_name IP: Add a new object with an IP/IPv6 address associated to it.
@@ -48,7 +51,7 @@ The application is located in the ``cmd_line`` sub-directory.
 Running the Application
 ---
 
-To run the application in linux environment, issue the following command:
+To run the application in a linux environment, issue the following command:
 
 .. code-block:: console
 
@@ -60,7 +63,7 @@ and the Environment Abstraction Layer (EAL) options.
 Explanation
 ---
 
-The following sections provide some explanation of the code.
+The following sections provide an explanation of the code.
 
 EAL Initialization and cmdline Start
 
@@ -73,7 +76,7 @@ This is achieved as follows:
 :start-after: Initialization of the Environment Abstraction Layer (EAL). 8<
 :end-before: >8 End of initialization of Environment Abstraction Layer 
(EAL).
 
-Then, a new command line object is created and started to interact with the 
user through the console:
+Then, a new command line object is created and starts to interact with the 
user through the console:
 
 .. literalinclude:: ../../../examples/cmdline/main.c
 :language: c
@@ -81,13 +84,14 @@ Then, a new command line object is created and started to 
interact with the user
 :end-before: >8 End of creating a new command line object.
 :dedent: 1
 
-The cmd line_interact() function returns when the user types **Ctrl-d** and in 
this case,
-the application exits.
+The cmd line_interact() function returns when the user types **Ctrl-d** and,
+in this case, the application exits.
 
 Defining a cmdline Context
 ~~
 
-A cmdline context is a list of commands that are listed in a NULL-terminated 
table, for example:
+A cmdline context is a list of commands that are listed in a NULL-terminated 
table.
+For example:
 
 .. literalinclude:: ../../../examples/cmdline/commands.c
 :language: c
diff --git a/doc/guides/sample_app_ug/dma.rst b/doc/guides/sample_app_ug/dma.rst
index 2765895564..c98646593b 100644
--- a/doc/guides/sample_app_ug/dma.rst
+++ b/doc/guides/sample_app_ug/dma.rst
@@ -10,10 +10,10 @@ Overview
 
 
 This sample is intended as a demonstration of the basic components of a DPDK
-forwardi

[PATCH] doc/guides: add security document

2024-11-12 Thread Nandini Persad
This is a new document covering security protocols implemented
in DPDK.

Signed-off-by: Nandini Persad 
Signed-off-by: Thomas Monjalon 
Reviewed-by: Stephen Hemminger 
---
 doc/guides/index.rst  |   1 +
 doc/guides/security/index.rst | 333 ++
 2 files changed, 334 insertions(+)
 create mode 100644 doc/guides/security/index.rst

diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 244b99624c..b8fddc56ae 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -13,6 +13,7 @@ DPDK documentation
sample_app_ug/index
prog_guide/index
howto/index
+   security/index
tools/index
testpmd_app_ug/index
nics/index
diff --git a/doc/guides/security/index.rst b/doc/guides/security/index.rst
new file mode 100644
index 00..5547a93aec
--- /dev/null
+++ b/doc/guides/security/index.rst
@@ -0,0 +1,333 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+
+Security Support Guide
+==
+
+This document describes the security features available in the DPDK.
+This guide provides information on each protocol,
+including supported algorithms, practical implementation details, and 
references.
+
+By detailing the supported algorithms and providing insights into each
+security protocol, this document serves as a resource for anyone looking
+to implement or enhance security measures within their DPDK-based environments.
+
+
+
+Related Documentation
+-
+
+Here is a list of related documents that provide details of each library,
+its capabilities and what level of support it currently has within DPDK.
+
+* :doc:`Crypto Device Drivers <../cryptodevs/index>`
+  This section contains information about all the crypto drivers in DPDK,
+  such as feature support availability, cipher algorithms and authentication
+  algorithms.
+
+* :doc:`Security Library <../prog_guide/rte_security>`
+  This library is the glue between ethdev and crypto dev. It includes 
low-level supported protocols such as MACsec, TLS, IPSec, and PDCP.
+
+* Protocols: These include two supported protocols in DPDK.
+  * :doc:`IPSec Library <../prog_guide/ipsec_lib>`
+  * :doc:`PDCP Library <../prog_guide/pdcp_lib>`
+
+
+Protocols
+-
+
+QUIC
+
+
+QUIC (Quick UDP Internet Connections) is a transport layer network
+protocol designed by Google to improve the speed and reliability of web 
connections.
+QUIC is built on top of the User Datagram Protocol (UDP) and uses a 
combination of
+encryption and multiplexing to achieve its goals. The protocol's main goal is 
to
+reduce latency compared to Transmission Control Protocol (TCP). QUIC also
+aims to make HTTP traffic more secure and eventually replace TCP and TLS on
+the web.
+
+Media over QUIC (MoQ) is a new live media protocol powered by QUIC. It is
+a TCP/UDP replacement designed for HTTP/3.
+
+
+**Wikipedia Link**
+* https://en.wikipedia.org/wiki/QUIC
+
+**Standard Link**
+* https://quic.video/
+
+**Level of Support in DPDK**
+* Not supported in DPDK.
+
+**Pros**
+* Useful for time-sensitive applications like online gaming or video 
streaming.
+* Can send multiple streams of data over a single channel.
+* Automatically limits the packet transmission rate to counteract load 
peaks and avoid overload, even with low bandwidth connections.
+* Uses TLS 1.3, which offers better security than earlier TLS versions.
+* Fast data transfer.
+* Combines features of TCP, such as reliability and congestion 
control, with the speed and flexibility of UDP.
+
+**Cons**
+* Has more complex protocol logic, which can result in higher CPU and 
memory usage compared to TCP.
+* May result in poorer transmission rates.
+* Requires changes to client and server, making it more challenging to 
deploy than TCP.
+* Not yet as widely deployed as TCP.
+
+
+MACSec
+~~
+
+MACsec (accelerated by Marvell) is a network security standard that operates
+at the medium access control layer and defines connectionless data 
confidentiality
+and integrity for media access independent protocols. It is standardized by the
+IEEE 802.1 working group.
+
+
+**Wikipedia Link**
+* https://en.wikipedia.org/wiki/IEEE_802.1AE
+
+**Standard Link**
+* https://1.ieee802.org/security/802-1ae/
+
+**Level of Support in DPDK**
+* Supported in DPDK + 'Sample Application 
<https://doc.dpdk.org/guides/sample_app_ug/l2_forward_macsec.html>'
+
+**Supported Algorithms**
+* As specified by MACsec specification: AES-128-GCM, AES-256-GCM
+
+**Drivers**
+* Marvell cnxk Ethernet PMD which supports inline MACsec
+
+**Facts**
+* Uses the AES-GCM cryptography algorithm
+* Works on layer 2 and protects all DHCP and ARP traffic
+* Each MAC frame has a separate integrity verification code
+* Prevents attackers from resending copied MAC frames into the networ

[PATCH v3] doc: add security document

2024-11-19 Thread Nandini Persad
This is a new document covering security protocols
implemented in DPDK.

Signed-off-by: Nandini Persad 
Signed-off-by: Thomas Monjalon 
Reviewed-by: Stephen Hemminger 
---
 doc/guides/index.rst  |   1 +
 doc/guides/security/index.rst | 336 ++
 2 files changed, 337 insertions(+)
 create mode 100644 doc/guides/security/index.rst

diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 244b99624c..b8fddc56ae 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -13,6 +13,7 @@ DPDK documentation
sample_app_ug/index
prog_guide/index
howto/index
+   security/index
tools/index
testpmd_app_ug/index
nics/index
diff --git a/doc/guides/security/index.rst b/doc/guides/security/index.rst
new file mode 100644
index 00..efe04066b8
--- /dev/null
+++ b/doc/guides/security/index.rst
@@ -0,0 +1,336 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+
+Security Support Guide
+==
+
+This document describes the security features available in the DPDK.
+This guide provides information on each protocol,
+including supported algorithms, practical implementation details, and 
references.
+
+By detailing the supported algorithms and providing insights into each
+security protocol, this document serves as a resource for anyone looking
+to implement or enhance security measures within their DPDK-based environments.
+
+
+
+Related Documentation
+-
+
+Here is a list of related documents that provide details of each library,
+its capabilities and what level of support it currently has within DPDK.
+
+* :doc:`Crypto Device Drivers <../cryptodevs/index>`
+  This section contains information about all the crypto drivers in DPDK,
+  such as feature support availability, cipher algorithms and authentication
+  algorithms.
+
+* :doc:`Security Library <../prog_guide/rte_security>`
+  This library is the glue between ethdev and crypto dev. It includes 
low-level supported protocols such as MACsec, TLS, IPSec, and PDCP.
+
+* Protocols: These include two supported protocols in DPDK.
+  * :doc:`IPSec Library <../prog_guide/ipsec_lib>`
+  * :doc:`PDCP Library <../prog_guide/pdcp_lib>`
+
+
+Protocols
+-
+
+
+MACSec
+~~
+
+MACsec (accelerated by Marvell) is a network security standard that operates
+at the medium access control layer and defines connectionless data 
confidentiality
+and integrity for media access independent protocols. It is standardized by the
+IEEE 802.1 working group.
+
+
+**Wikipedia Link**
+* https://en.wikipedia.org/wiki/IEEE_802.1AE
+
+**Standard Link**
+* https://1.ieee802.org/security/802-1ae/
+
+**Level of Support in DPDK**
+* Supported in DPDK + Sample Application :doc:`MACSec Sample 
Application <../sample_app_ug/l2_forward_macsec>`
+
+**Supported Algorithms**
+* As specified by MACsec specification: AES-128-GCM, AES-256-GCM
+
+**Drivers**
+* Marvell cnxk Ethernet PMD which supports inline MACsec
+
+**Facts**
+* Uses the AES-GCM cryptography algorithm
+* Works on layer 2 and protects all DHCP and ARP traffic
+* Each MAC frame has a separate integrity verification code
+* Prevents attackers from resending copied MAC frames into the network 
without being detected
+* Commonly used in environments where securing Ethernet traffic 
between devices is critical, such as in enterprise networks, data centers and 
service provider networks
+* Applications do not need modification to work with MACsec
+
+**Cons**
+* Only operates at Layer 2, so it doesn't protect traffic beyond the 
local Ethernet segment or over Layer 3 networks or the internet
+* Data is decrypted and re-encrypted at each network device,
+which could expose data at each point
+* Can't detect rogue devices that operate on Layer 1
+* Relies on hardware for encryption and decryption, so not all network 
devices can use it
+
+
+IPSec
+~
+
+IPsec (accelerated by Intel, Marvell, Netronome, NXP) allows secure 
communication
+over the internet by encrypting data traffic between two or more devices or 
networks.
+IPsec works on a different layer than MACsec, at layer 3.
+
+**Wikipedia Link**
+* https://en.wikipedia.org/wiki/IPsec
+
+**Standard Link**
+* https://datatracker.ietf.org/wg/ipsec/about/
+
+**Level of Support in DPDK**
+* Supported
+* High-level library and sample application
+* :doc:`IPSec Library <../prog_guide/ipsec_lib>`
+* :doc:`IPSec Sample Application <../sample_app_ug/ipsec_secgw>`
+
+**Supported Algorithms**
+* AES-GCM and ChaCha20-Poly1305
+* AES CBC and AES-CTR
+* HMAC-SHA1/SHA2 for integrity protection and authenticity
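The HMAC-based integrity protection listed above can be illustrated with a minimal sketch using Python's standard ``hmac`` module standing in for the C implementations DPDK actually uses; the key and payload are made up.

```python
import hashlib
import hmac

key = b"hypothetical-ipsec-sa-key"  # an SA's authentication key (made up)
packet = b"payload to protect"

# Sender computes an HMAC-SHA256 integrity check value (ICV) over the payload.
icv = hmac.new(key, packet, hashlib.sha256).digest()

# Receiver recomputes the ICV and compares in constant time.
assert hmac.compare_digest(icv, hmac.new(key, packet, hashlib.sha256).digest())

# Any tampering with the payload makes verification fail.
tampered = b"payload to Protect"
assert not hmac.compare_digest(
    icv, hmac.new(key, tampered, hashlib.sha256).digest())
```

In real IPsec the ICV is carried in the AH or ESP trailer and is usually truncated per the negotiated algorithm; the sketch only shows the compute-and-verify idea.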
+
+**Pros**
+* Uses public keys to create an encrypted, authenticated tunnel to 
resources
+* Offers strong security, scalability, and inter

[PATCH v2] doc: add security document

2024-11-18 Thread Nandini Persad
This is a new document covering security protocols
implemented in DPDK.

Signed-off-by: Nandini Persad 
Signed-off-by: Thomas Monjalon 
Reviewed-by: Stephen Hemminger 
---
V2 - incorporate review feedback
 doc/guides/index.rst |   1 +
 doc/guides/security/security.rst | 337 +++
 2 files changed, 338 insertions(+)
 create mode 100644 doc/guides/security/security.rst

diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 244b99624c..b8fddc56ae 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -13,6 +13,7 @@ DPDK documentation
sample_app_ug/index
prog_guide/index
howto/index
+   security/index
tools/index
testpmd_app_ug/index
nics/index
diff --git a/doc/guides/security/security.rst b/doc/guides/security/security.rst
new file mode 100644
index 00..ab2dfa4a4a
--- /dev/null
+++ b/doc/guides/security/security.rst
@@ -0,0 +1,337 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+
+Security Support Guide
+==
+
+This document describes the security features available in the DPDK.
+This guide provides information on each protocol,
+including supported algorithms, practical implementation details, and 
references.
+
+By detailing the supported algorithms and providing insights into each
+security protocol, this document serves as a resource for anyone looking
+to implement or enhance security measures within their DPDK-based environments.
+
+
+
+Related Documentation
+-
+
+Here is a list of related documents that provide details of each library,
+its capabilities and what level of support it currently has within DPDK.
+
+* :doc:`Crypto Device Drivers <../cryptodevs/index>`
+  This section contains information about all the crypto drivers in DPDK,
+  such as feature support availability, cipher algorithms and authentication
+  algorithms.
+
+* :doc:`Security Library <../prog_guide/rte_security>`
+  This library is the glue between ethdev and crypto dev. It includes 
low-level supported protocols such as MACsec, TLS, IPSec, and PDCP.
+
+* Protocols: These include two supported protocols in DPDK.
+  * :doc:`IPSec Library <../prog_guide/ipsec_lib>`
+  * :doc:`PDCP Library <../prog_guide/pdcp_lib>`
+
+
+Protocols
+-
+
+
+MACSec
+~~
+
+MACsec (accelerated by Marvell) is a network security standard that operates
+at the medium access control layer and defines connectionless data 
confidentiality
+and integrity for media access independent protocols. It is standardized by the
+IEEE 802.1 working group.
+
+
+**Wikipedia Link**
+* https://en.wikipedia.org/wiki/IEEE_802.1AE
+
+**Standard Link**
+* https://1.ieee802.org/security/802-1ae/
+
+**Level of Support in DPDK**
+* Supported in DPDK + Sample Application :doc:`MACSec Sample 
Application <../sample_app_ug/l2_forward_macsec>`
+
+**Supported Algorithms**
+* As specified by MACsec specification: AES-128-GCM, AES-256-GCM
+
+**Drivers**
+* Marvell cnxk Ethernet PMD which supports inline MACsec
+
+**Facts**
+* Uses the AES-GCM cryptography algorithm
+* Works on layer 2 and protects all DHCP and ARP traffic
+* Each MAC frame has a separate integrity verification code
+* Prevents attackers from resending copied MAC frames into the network 
without being detected
+* Commonly used in environments where securing Ethernet traffic 
between devices is critical, such as in enterprise networks, data centers and 
service provider networks
+* Applications do not need modification to work with MACsec
+
+**Cons**
+* Only operates at Layer 2, so it doesn't protect traffic beyond the 
local Ethernet segment or over Layer 3 networks or the internet
+* Data is decrypted and re-encrypted at each network device,
+which could expose data at each point
+* Can't detect rogue devices that operate on Layer 1
+* Relies on hardware for encryption and decryption, so not all network 
devices can use it
+
+
+IPSec
+~
+
+IPsec (accelerated by Intel, Marvell, Netronome, NXP) allows secure 
communication
+over the internet by encrypting data traffic between two or more devices or 
networks.
+IPsec works on a different layer than MACsec, at layer 3.
+
+**Wikipedia Link**
+* https://en.wikipedia.org/wiki/IPsec
+
+**Standard Link**
+* https://datatracker.ietf.org/wg/ipsec/about/
+
+**Level of Support in DPDK**
+* Supported
+* High-level library and sample application
+* :doc:`IPSec Library <../prog_guide/ipsec_lib>`
+* :doc:`IPSec Sample Application <../sample_app_ug/ipsec_secgw>`
+
+**Supported Algorithms**
+* AES-GCM and ChaCha20-Poly1305
+* AES CBC and AES-CTR
+* HMAC-SHA1/SHA2 for integrity protection and authenticity
+
+**Pros**
+* Uses public keys to create an encrypted, authenticated tunnel to 
resour

[PATCH v3] doc: reword glossary

2024-12-01 Thread Nandini Persad
I added additional reference links and definitions to many
of the terms in the glossary. Please feel free to provide
feedback to ensure my definitions suit the proper context
in the DPDK community.

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/glossary.rst | 120 -
 1 file changed, 84 insertions(+), 36 deletions(-)

diff --git a/doc/guides/prog_guide/glossary.rst 
b/doc/guides/prog_guide/glossary.rst
index 8d6349701e..d832d4c0be 100644
--- a/doc/guides/prog_guide/glossary.rst
+++ b/doc/guides/prog_guide/glossary.rst
@@ -6,70 +6,90 @@ Glossary
 
 
 ACL
-   Access Control List
+   An `access control list (ACL) 
<https://en.wikipedia.org/wiki/Access-control_list>`_
+   is a set of rules that define who can access a resource and what actions 
they can perform.
 
 API
Application Programming Interface
 
 ASLR
-   Linux* kernel Address-Space Layout Randomization
+   `Address-Space Layout Randomization (ASLR) 
<https://en.wikipedia.org/wiki/Address_space_layout_randomization>`_
+   is a computer security technique that protects against buffer overflow 
attacks by randomizing the location of
+   executables in memory.
 
 BSD
-   Berkeley Software Distribution
+   `Berkeley Software Distribution (BSD) 
<https://en.wikipedia.org/wiki/Berkeley_Software_Distribution>`_
+   is a version of the Unix™ operating system.
 
 Clr
Clear
 
 CIDR
-   Classless Inter-Domain Routing
+   `Classless Inter-Domain Routing (CIDR) 
<https://datatracker.ietf.org/doc/html/rfc1918>`_
+   is a method of assigning IP addresses that improves data routing efficiency 
on the internet and is used in IPv4 and IPv6.
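+
+   CIDR semantics are easy to demonstrate with Python's standard
+   ``ipaddress`` module; this brief sketch shows a single prefix length
+   describing a whole block of addresses, for IPv4 and IPv6 alike:

```python
import ipaddress

# One /24 prefix covers 256 IPv4 addresses.
net = ipaddress.ip_network("192.0.2.0/24")
assert net.num_addresses == 256
assert ipaddress.ip_address("192.0.2.42") in net
assert ipaddress.ip_address("192.0.3.1") not in net

# The same notation works for IPv6.
net6 = ipaddress.ip_network("2001:db8::/32")
assert ipaddress.ip_address("2001:db8::1") in net6
```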
 
 Control Plane
-   The control plane is concerned with the routing of packets and with
-   providing a start or end point.
+   A `Control Plane <https://en.wikipedia.org/wiki/Control_plane>`_ is a 
concept in networking that refers to the part of the system
+   responsible for managing and making decisions about where and how data 
packets are forwarded within a network.
 
 Core
A core may include several lcores or threads if the processor supports
-   hyperthreading.
+   `simultaneous multithreading (SMT) 
<https://en.wikipedia.org/wiki/Simultaneous_multithreading>`_
 
 Core Components
-   A set of libraries provided by the DPDK, including eal, ring, mempool,
-   mbuf, timers, and so on.
+   A set of libraries provided by DPDK which are used by nearly all 
applications and
+   upon which other DPDK libraries and drivers depend. For example, eal, ring, 
mempool and mbuf.
 
 CPU
Central Processing Unit
 
 CRC
Cyclic Redundancy Check
+   An algorithm that detects errors in data transmission and storage.
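+
+   The error-detection property can be sketched with the CRC-32 routine in
+   Python's standard ``zlib`` module: flipping a single bit of the data
+   changes the checksum, which is how corruption is detected.

```python
import zlib

# CRC-32 over a byte string: any single-bit corruption changes the checksum.
data = bytearray(b"dpdk packet payload")
good = zlib.crc32(data)

data[0] ^= 0x01                      # flip one bit
assert zlib.crc32(data) != good      # corruption detected

data[0] ^= 0x01                      # restore the bit
assert zlib.crc32(data) == good      # intact data matches again
```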
 
 Data Plane
-   In contrast to the control plane, the data plane in a network architecture
-   are the layers involved when forwarding packets.  These layers must be
-   highly optimized to achieve good performance.
+   In contrast to the control plane, which is responsible for setting up and 
managing data connections,
+   the `data plane <https://en.wikipedia.org/wiki/Data_plane>`_ in a network 
architecture includes the
+   layers involved when processing and forwarding data packets between 
communicating endpoints.
+   These layers must be highly optimized to achieve good performance.
 
 DIMM
Dual In-line Memory Module
+   A module containing one or several Random Access Memory (RAM) or Dynamic 
RAM (DRAM) chips on a printed
+   circuit board that connects directly to the computer motherboard.
 
 Doxygen
-   A documentation generator used in the DPDK to generate the API reference.
+   `Doxygen <https://www.doxygen.nl/>`_ is a
+   documentation generator used in the DPDK to generate the API reference.
 
 DPDK
Data Plane Development Kit
 
 DRAM
-   Dynamic Random Access Memory
+   `Dynamic Random Access Memory 
<https://en.wikipedia.org/wiki/Dynamic_random-access_memory>`_
+   is a type of random access memory (RAM) that is used in computers to 
temporarily store information.
 
 EAL
-   The Environment Abstraction Layer (EAL) provides a generic interface that
-   hides the environment specifics from the applications and libraries.  The
-   services expected from the EAL are: development kit loading and launching,
-   core affinity/ assignment procedures, system memory allocation/description,
-   PCI bus access, inter-partition communication.
+   :doc:`Environment Abstraction Layer (EAL) `
+   is the core DPDK library that provides a generic interface
+   that hides the environment specifics from the applications and libraries.
+   The services expected from the EAL are: loading and launching, core 
management,
+   memory allocation, bus management, and inter-partition communication.
+
+EAL Thread
+   An EAL thread is typically a thread that runs packet processing tasks. 
These threads are often
+   pinned to logical cores (lcores), which helps to ensure that packet 
processing tasks are executed with
+   min

[PATCH v2] doc: reword glossary

2024-11-21 Thread Nandini Persad
I added additional reference links and definitions to many
of the terms in the glossary. Please feel free to provide
feedback to ensure my definitions suit the proper context
in the DPDK community.

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/glossary.rst | 101 ++---
 1 file changed, 78 insertions(+), 23 deletions(-)

diff --git a/doc/guides/prog_guide/glossary.rst 
b/doc/guides/prog_guide/glossary.rst
index 8d6349701e..9f85e46437 100644
--- a/doc/guides/prog_guide/glossary.rst
+++ b/doc/guides/prog_guide/glossary.rst
@@ -6,70 +6,92 @@ Glossary
 
 
 ACL
-   Access Control List
+   An access control list (ACL) is a set of rules that define who can access a 
resource and what actions they can perform.
+   `ACL Link 
<https://www.fortinet.com/resources/cyberglossary/network-access-control-list#:~:text=A%20network%20access%20control%20list%20(ACL)%20is%20made%20up%20of,device%2C%20it%20cannot%20gain%20access.>`_
 
 API
Application Programming Interface
 
 ASLR
Linux* kernel Address-Space Layout Randomization
+   A computer security technique that protects against buffer overflow attacks 
by randomizing the location of executables in memory in Linux.
+   `ASLR Link 
<https://en.wikipedia.org/wiki/Address_space_layout_randomization>`_
 
 BSD
-   Berkeley Software Distribution
+   Berkeley Software Distribution is a Unix-like operating system.
 
 Clr
Clear
 
 CIDR
Classless Inter-Domain Routing
+   A method of assigning IP addresses that improves data routing efficiency on 
the internet and is used in IPv4 and IPv6.
+   `RFC Link <https://datatracker.ietf.org/doc/html/rfc1918>`_
 
 Control Plane
-   The control plane is concerned with the routing of packets and with
-   providing a start or end point.
+   A Control Plane is a key concept in networking that refers to the part of a 
network system
+   responsible for managing and making decisions about where and how data 
packets are forwarded within a network.
 
 Core
-   A core may include several lcores or threads if the processor supports
-   hyperthreading.
+   A core may include several lcores or threads if the processor supports 
simultaneous multithreading (SMT).
+   `Simultaneous Multithreading 
<https://en.wikipedia.org/wiki/Simultaneous_multithreading>`_
 
 Core Components
-   A set of libraries provided by the DPDK, including eal, ring, mempool,
-   mbuf, timers, and so on.
+   A set of libraries provided by DPDK which are used by nearly all 
applications and
+   upon which other DPDK libraries and drivers depend. For example, eal, ring, 
mempool and mbuf.
 
 CPU
Central Processing Unit
 
 CRC
Cyclic Redundancy Check
+   An algorithm that detects errors in data transmission and storage.
 
 Data Plane
-   In contrast to the control plane, the data plane in a network architecture
-   are the layers involved when forwarding packets.  These layers must be
-   highly optimized to achieve good performance.
+   In contrast to the control plane, which is responsible for setting up and 
managing data connections,
+   the data plane in a network architecture includes the layers involved when 
processing and forwarding
+   data packets between communicating endpoints. These layers must be highly 
optimized to achieve good performance.
 
 DIMM
Dual In-line Memory Module
+   A module containing one or several Random Access Memory (RAM) or Dynamic 
RAM (DRAM) chips on a printed
+   circuit board that connect it directly to the computer motherboard.
 
 Doxygen
A documentation generator used in the DPDK to generate the API reference.
+   `Doxygen Link <https://www.doxygen.nl/>`_
 
 DPDK
Data Plane Development Kit
 
 DRAM
Dynamic Random Access Memory
+   A type of random access memory (RAM) that is used in computers to 
temporarily store information.
+   `Link <https://en.wikipedia.org/wiki/Dynamic_random-access_memory>`_
 
 EAL
-   The Environment Abstraction Layer (EAL) provides a generic interface that
-   hides the environment specifics from the applications and libraries.  The
-   services expected from the EAL are: development kit loading and launching,
-   core affinity/ assignment procedures, system memory allocation/description,
-   PCI bus access, inter-partition communication.
+   The Environment Abstraction Layer (EAL) is a DPDK core library that 
provides a generic interface
+   that hides the environment specifics from the applications and libraries. 
The services expected
+   from the EAL are: development kit loading and launching, core affinity/ 
assignment procedures, system
+   memory allocation/description, PCI bus access, inter-partition 
communication.
+   `Link 
<https://github.com/emmericp/dpdk-github-inofficial/blob/master/doc/guides/prog_guide/env_abstraction_layer.rst>`_
+
+EAL Thread
+   An EAL thread is typically a thread that runs packet processing tasks. 
These threads are often
+   pinned to logical cores (lcores), whi

[PATCH] doc: reword glossary

2024-11-21 Thread Nandini Persad
I added additional reference links and definitions to many
of the terms in the glossary. Please feel free to provide
feedback to ensure my definitions suit the proper context
in the DPDK community.

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/glossary.rst | 107 ++---
 1 file changed, 81 insertions(+), 26 deletions(-)

diff --git a/doc/guides/prog_guide/glossary.rst 
b/doc/guides/prog_guide/glossary.rst
index 8d6349701e..fc79e9656f 100644
--- a/doc/guides/prog_guide/glossary.rst
+++ b/doc/guides/prog_guide/glossary.rst
@@ -6,70 +6,92 @@ Glossary
 
 
 ACL
-   Access Control List
+   An access control list (ACL) is a set of rules that define who can access a 
resource and what actions they can perform. 
+   `ACL Link 
<https://www.fortinet.com/resources/cyberglossary/network-access-control-list#:~:text=A%20network%20access%20control%20list%20(ACL)%20is%20made%20up%20of,device%2C%20it%20cannot%20gain%20access.>`_
 
 API
Application Programming Interface
 
 ASLR
Linux* kernel Address-Space Layout Randomization
+   A computer security technique that protects against buffer overflow attacks 
by randomizing the location of executables in memory in Linux. 
+   `ASLR Link 
<https://en.wikipedia.org/wiki/Address_space_layout_randomization>`_
 
 BSD
-   Berkeley Software Distribution
+   Berkeley Software Distribution is a Unix-like operating system.
 
 Clr
Clear
 
 CIDR
Classless Inter-Domain Routing
+   A method of assigning IP addresses that improves data routing efficiency on 
the internet and is used in IPv4 and IPv6.
+   `RFC Link <https://datatracker.ietf.org/doc/html/rfc1918>`_
 
 Control Plane
-   The control plane is concerned with the routing of packets and with
-   providing a start or end point.
+   A Control Plane is a key concept in networking that refers to the part of a 
network system
+   responsible for managing and making decisions about where and how data 
packets are forwarded within a network.
 
 Core
-   A core may include several lcores or threads if the processor supports
-   hyperthreading.
+   A core may include several lcores or threads if the processor supports 
simultaneous multithreading (SMT).
+   `Simultaneous Multithreading 
<https://en.wikipedia.org/wiki/Simultaneous_multithreading>`_
 
 Core Components
-   A set of libraries provided by the DPDK, including eal, ring, mempool,
-   mbuf, timers, and so on.
+   A set of libraries provided by DPDK which are used by nearly all 
applications and
+   upon which other DPDK libraries and drivers depend. For example, eal, ring, 
mempool and mbuf.
 
 CPU
Central Processing Unit
 
 CRC
Cyclic Redundancy Check
+   An algorithm that detects errors in data transmission and storage.
 
 Data Plane
-   In contrast to the control plane, the data plane in a network architecture
-   are the layers involved when forwarding packets.  These layers must be
-   highly optimized to achieve good performance.
+   In contrast to the control plane, which is responsible for setting up and 
managing data connections,
+   the data plane in a network architecture includes the layers involved when 
processing and forwarding
+   data packets between communicating endpoints. These layers must be highly 
optimized to achieve good performance.
 
 DIMM
Dual In-line Memory Module
-
+   A module containing one or several Random Access Memory (RAM) or Dynamic 
RAM (DRAM) chips on a printed
+   circuit board that connect it directly to the computer motherboard.
+   
 Doxygen
A documentation generator used in the DPDK to generate the API reference.
+   `Doxygen Link <https://www.doxygen.nl/>`_
 
 DPDK
Data Plane Development Kit
 
 DRAM
Dynamic Random Access Memory
+   A type of random access memory (RAM) that is used in computers to 
temporarily store information.
+   `Link <https://en.wikipedia.org/wiki/Dynamic_random-access_memory>`_
 
 EAL
-   The Environment Abstraction Layer (EAL) provides a generic interface that
-   hides the environment specifics from the applications and libraries.  The
-   services expected from the EAL are: development kit loading and launching,
-   core affinity/ assignment procedures, system memory allocation/description,
-   PCI bus access, inter-partition communication.
-
+   The Environment Abstraction Layer (EAL) is a DPDK core library that 
provides a generic interface
+   that hides the environment specifics from the applications and libraries. 
The services expected
+   from the EAL are: development kit loading and launching, core affinity/ 
assignment procedures, system
+   memory allocation/description, PCI bus access, inter-partition 
communication.
+   `Link 
<https://github.com/emmericp/dpdk-github-inofficial/blob/master/doc/guides/prog_guide/env_abstraction_layer.rst>`_
+
+EAL Thread
+   An EAL thread is typically a thread that runs packet processing tasks. 
These threads are often
+   pinned to logical cores (

[PATCH] doc: reword sample application guides

2025-01-27 Thread Nandini Persad
I have revised these sections to suit the template, but also,
for punctuation, clarity, and removing repetition when necessary.

Signed-off-by: Nandini Persad 
---
 doc/guides/sample_app_ug/dist_app.rst |  24 +--
 .../sample_app_ug/eventdev_pipeline.rst   |  20 +--
 doc/guides/sample_app_ug/fips_validation.rst  | 139 +-
 doc/guides/sample_app_ug/ip_pipeline.rst  |  12 +-
 doc/guides/sample_app_ug/ipsec_secgw.rst  |  95 ++--
 doc/guides/sample_app_ug/multi_process.rst|  66 +
 doc/guides/sample_app_ug/packet_ordering.rst  |  19 ++-
 doc/guides/sample_app_ug/pipeline.rst |  10 +-
 doc/guides/sample_app_ug/ptpclient.rst|  56 +++
 doc/guides/sample_app_ug/qos_metering.rst |  11 +-
 doc/guides/sample_app_ug/qos_scheduler.rst|  10 +-
 doc/guides/sample_app_ug/service_cores.rst|  41 +++---
 doc/guides/sample_app_ug/test_pipeline.rst|   2 +-
 doc/guides/sample_app_ug/timer.rst|  13 +-
 doc/guides/sample_app_ug/vdpa.rst |  39 ++---
 doc/guides/sample_app_ug/vhost.rst|  51 ---
 doc/guides/sample_app_ug/vhost_blk.rst|  21 +--
 doc/guides/sample_app_ug/vhost_crypto.rst |  15 +-
 .../sample_app_ug/vm_power_management.rst | 138 -
 .../sample_app_ug/vmdq_dcb_forwarding.rst |  77 +-
 doc/guides/sample_app_ug/vmdq_forwarding.rst  |  28 ++--
 21 files changed, 456 insertions(+), 431 deletions(-)

diff --git a/doc/guides/sample_app_ug/dist_app.rst 
b/doc/guides/sample_app_ug/dist_app.rst
index 5c80561187..7a841bff8a 100644
--- a/doc/guides/sample_app_ug/dist_app.rst
+++ b/doc/guides/sample_app_ug/dist_app.rst
@@ -4,7 +4,7 @@
 Distributor Sample Application
 ==
 
-The distributor sample application is a simple example of packet distribution
+The distributor sample application is an example of packet distribution
 to cores using the Data Plane Development Kit (DPDK). It also makes use of
 Intel Speed Select Technology - Base Frequency (Intel SST-BF) to pin the
 distributor to the higher frequency core if available.
@@ -31,7 +31,7 @@ generator as shown in the figure below.
 Compiling the Application
 -
 
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
 
 The application is located in the ``distributor`` sub-directory.
 
@@ -66,7 +66,7 @@ The distributor application consists of four types of 
threads: a receive
 thread (``lcore_rx()``), a distributor thread (``lcore_dist()``), a set of
 worker threads (``lcore_worker()``), and a transmit thread(``lcore_tx()``).
 How these threads work together is shown in :numref:`figure_dist_app` below.
-The ``main()`` function launches  threads of these four types.  Each thread
+The ``main()`` function launches threads of these four types. Each thread
 has a while loop which will be doing processing and which is terminated
 only upon SIGINT or ctrl+C.
 
@@ -86,7 +86,7 @@ the distributor, doing a simple XOR operation on the input 
port mbuf field
 (to indicate the output port which will be used later for packet transmission)
 and then finally returning the packets back to the distributor thread.
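 
 The XOR on the port field pairs each input port with its neighbor, so that
 traffic received on one port of a pair is transmitted on the other. A
 minimal sketch of that pairing (the sample app performs the same operation
 on the mbuf's port field):
 
```python
# With ports grouped in pairs (0<->1, 2<->3, ...), XOR-ing the low bit
# maps each port to its partner, selecting the output port.
def output_port(in_port: int) -> int:
    return in_port ^ 1

assert output_port(0) == 1
assert output_port(1) == 0
assert output_port(2) == 3
assert output_port(3) == 2
```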
 
-The distributor thread will then call the distributor api
+The distributor thread will then call the distributor API
 ``rte_distributor_returned_pkts()`` to get the processed packets, and will 
enqueue
 them to another rte_ring for transfer to the TX thread for transmission on the
 output port. The transmit thread will dequeue the packets from the ring and
@@ -105,7 +105,7 @@ final statistics to the user.
 
 
 Intel SST-BF Support
-
+
 
 In DPDK 19.05, support was added to the power management library for
 Intel-SST-BF, a technology that allows some cores to run at a higher
@@ -114,20 +114,20 @@ and is entitled
 `Intel Speed Select Technology – Base Frequency - Enhancing Performance 
<https://builders.intel.com/docs/networkbuilders/intel-speed-select-technology-base-frequency-enhancing-performance.pdf>`_
 
 The distributor application was also enhanced to be aware of these higher
-frequency SST-BF cores, and when starting the application, if high frequency
+frequency SST-BF cores. When starting the application, if high frequency
 SST-BF cores are present in the core mask, the application will identify these
 cores and pin the workloads appropriately. The distributor core is usually
 the bottleneck, so this is given first choice of the high frequency SST-BF
-cores, followed by the rx core and the tx core.
+cores, followed by the Rx core and the Tx core.
 
 Debug Logging Support
--
+~
 
 Debug logging is provided as part of the application; the user needs to 
uncomment
 the line "#define DEBUG" defined in start of the application in main.c to 
enable debug logs.
 
 Statistics
---
+~~
 
 The main fun

[PATCH v2] doc: reword sample application guides

2025-02-16 Thread Nandini Persad
I have revised these sections to suit the template, but also,
for punctuation, clarity, and removing repetition when necessary.

Signed-off-by: Nandini Persad 
---
 doc/guides/sample_app_ug/dist_app.rst |  24 +--
 .../sample_app_ug/eventdev_pipeline.rst   |  20 +--
 doc/guides/sample_app_ug/fips_validation.rst  |  23 ++-
 doc/guides/sample_app_ug/ip_pipeline.rst  |  12 +-
 doc/guides/sample_app_ug/ipsec_secgw.rst  |  95 ++--
 doc/guides/sample_app_ug/multi_process.rst|  64 
 doc/guides/sample_app_ug/packet_ordering.rst  |  19 ++-
 doc/guides/sample_app_ug/pipeline.rst |  10 +-
 doc/guides/sample_app_ug/ptpclient.rst|  56 +++
 doc/guides/sample_app_ug/qos_metering.rst |  11 +-
 doc/guides/sample_app_ug/qos_scheduler.rst|  10 +-
 doc/guides/sample_app_ug/service_cores.rst|  41 +++---
 doc/guides/sample_app_ug/test_pipeline.rst|   2 +-
 doc/guides/sample_app_ug/timer.rst|  13 +-
 doc/guides/sample_app_ug/vdpa.rst |  39 ++---
 doc/guides/sample_app_ug/vhost.rst|  51 ---
 doc/guides/sample_app_ug/vhost_blk.rst|  21 +--
 doc/guides/sample_app_ug/vhost_crypto.rst |  15 +-
 .../sample_app_ug/vm_power_management.rst | 138 --
 .../sample_app_ug/vmdq_dcb_forwarding.rst |  77 +-
 doc/guides/sample_app_ug/vmdq_forwarding.rst  |  28 ++--
 21 files changed, 397 insertions(+), 372 deletions(-)

diff --git a/doc/guides/sample_app_ug/dist_app.rst 
b/doc/guides/sample_app_ug/dist_app.rst
index 5c80561187..7a841bff8a 100644
--- a/doc/guides/sample_app_ug/dist_app.rst
+++ b/doc/guides/sample_app_ug/dist_app.rst
@@ -4,7 +4,7 @@
 Distributor Sample Application
 ==
 
-The distributor sample application is a simple example of packet distribution
+The distributor sample application is an example of packet distribution
 to cores using the Data Plane Development Kit (DPDK). It also makes use of
 Intel Speed Select Technology - Base Frequency (Intel SST-BF) to pin the
 distributor to the higher frequency core if available.
@@ -31,7 +31,7 @@ generator as shown in the figure below.
 Compiling the Application
 -
 
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
 
 The application is located in the ``distributor`` sub-directory.
 
@@ -66,7 +66,7 @@ The distributor application consists of four types of 
threads: a receive
 thread (``lcore_rx()``), a distributor thread (``lcore_dist()``), a set of
 worker threads (``lcore_worker()``), and a transmit thread(``lcore_tx()``).
 How these threads work together is shown in :numref:`figure_dist_app` below.
-The ``main()`` function launches  threads of these four types.  Each thread
+The ``main()`` function launches threads of these four types. Each thread
 has a while loop which will be doing processing and which is terminated
 only upon SIGINT or ctrl+C.
 
@@ -86,7 +86,7 @@ the distributor, doing a simple XOR operation on the input 
port mbuf field
 (to indicate the output port which will be used later for packet transmission)
 and then finally returning the packets back to the distributor thread.
 
-The distributor thread will then call the distributor api
+The distributor thread will then call the distributor API
 ``rte_distributor_returned_pkts()`` to get the processed packets, and will 
enqueue
 them to another rte_ring for transfer to the TX thread for transmission on the
 output port. The transmit thread will dequeue the packets from the ring and
@@ -105,7 +105,7 @@ final statistics to the user.
 
 
 Intel SST-BF Support
-
+
 
 In DPDK 19.05, support was added to the power management library for
 Intel-SST-BF, a technology that allows some cores to run at a higher
@@ -114,20 +114,20 @@ and is entitled
 `Intel Speed Select Technology – Base Frequency - Enhancing Performance 
<https://builders.intel.com/docs/networkbuilders/intel-speed-select-technology-base-frequency-enhancing-performance.pdf>`_
 
 The distributor application was also enhanced to be aware of these higher
-frequency SST-BF cores, and when starting the application, if high frequency
+frequency SST-BF cores. When starting the application, if high frequency
 SST-BF cores are present in the core mask, the application will identify these
 cores and pin the workloads appropriately. The distributor core is usually
 the bottleneck, so this is given first choice of the high frequency SST-BF
-cores, followed by the rx core and the tx core.
+cores, followed by the Rx core and the Tx core.
 
 Debug Logging Support
--
+~
 
 Debug logging is provided as part of the application; the user needs to 
uncomment
 the line "#define DEBUG" defined in start of the application in main.c to 
enable debug logs.
 
 Statistics
---
+~~
 
 The main function will print s

[PATCH v2] doc: remove known issues

2025-04-10 Thread Nandini Persad
I have uploaded all these known issues into Bugzilla,
so they are not needed here anymore.

Signed-off-by: 
---
 doc/guides/rel_notes/index.rst|   1 -
 doc/guides/rel_notes/known_issues.rst | 875 --
 2 files changed, 876 deletions(-)
 delete mode 100644 doc/guides/rel_notes/known_issues.rst

diff --git a/doc/guides/rel_notes/index.rst b/doc/guides/rel_notes/index.rst
index 6462f01966..fdc30741f9 100644
--- a/doc/guides/rel_notes/index.rst
+++ b/doc/guides/rel_notes/index.rst
@@ -46,5 +46,4 @@ Release Notes
 release_2_1
 release_2_0
 release_1_8
-known_issues
 deprecation
diff --git a/doc/guides/rel_notes/known_issues.rst 
b/doc/guides/rel_notes/known_issues.rst
deleted file mode 100644
index 73c72ba484..00
--- a/doc/guides/rel_notes/known_issues.rst
+++ /dev/null
@@ -1,875 +0,0 @@
-..  SPDX-License-Identifier: BSD-3-Clause
-Copyright(c) 2010-2014 Intel Corporation.
-
-Known Issues and Limitations in Legacy Releases
-===
-
-This section describes known issues with the DPDK software that aren't covered 
in the version specific release
-notes sections.
-
-
-Unit Test for Link Bonding may fail at test_tlb_tx_burst()
---
-
-**Description**:
-   Unit tests will fail in ``test_tlb_tx_burst()`` function with error for 
uneven distribution of packets.
-
-**Implication**:
-   Unit test link_bonding_autotest will fail.
-
-**Resolution/Workaround**:
-   There is no workaround available.
-
-**Affected Environment/Platform**:
-   Fedora 20.
-
-**Driver/Module**:
-   Link Bonding.
-
-
-Pause Frame Forwarding does not work properly on igb
-
-
-**Description**:
-   For igb devices rte_eth_flow_ctrl_set does not work as expected.
-   Pause frames are always forwarded on igb, regardless of the ``RFCE``, 
``MPMCF`` and ``DPF`` registers.
-
-**Implication**:
-   Pause frames will never be rejected by the host on 1G NICs and they will 
always be forwarded.
-
-**Resolution/Workaround**:
-   There is no workaround available.
-
-**Affected Environment/Platform**:
-   All.
-
-**Driver/Module**:
-   Poll Mode Driver (PMD).
-
-
-In packets provided by the PMD, some flags are missing
---
-
-**Description**:
-   In packets provided by the PMD, some flags are missing.
-   The application does not have access to information provided by the hardware
-   (packet is broadcast, packet is multicast, packet is IPv4 and so on).
-
-**Implication**:
-   The ``ol_flags`` field in the ``rte_mbuf`` structure is not correct and 
should not be used.
-
-**Resolution/Workaround**:
-   The application has to parse the Ethernet header itself to get the 
information, which is slower.
-
-**Affected Environment/Platform**:
-   All.
-
-**Driver/Module**:
-   Poll Mode Driver (PMD).
-
-The rte_malloc library is not fully implemented

-
-**Description**:
-   The ``rte_malloc`` library is not fully implemented.
-
-**Implication**:
-   All debugging features of rte_malloc library described in architecture 
documentation are not yet implemented.
-
-**Resolution/Workaround**:
-   No workaround available.
-
-**Affected Environment/Platform**:
-   All.
-
-**Driver/Module**:
-   ``rte_malloc``.
-
-
-HPET reading is slow
-
-
-**Description**:
-   Reading the HPET chip is slow.
-
-**Implication**:
-   An application that calls ``rte_get_hpet_cycles()`` or 
``rte_timer_manage()`` runs slower.
-
-**Resolution/Workaround**:
-   The application should not call these functions too often in the main loop.
-   An alternative is to use the TSC register through ``rte_rdtsc()`` which is 
faster,
-   but specific to an lcore and is a cycle reference, not a time reference.
-
-**Affected Environment/Platform**:
-   All.
-
-**Driver/Module**:
-   Environment Abstraction Layer (EAL).
-
-
-HPET timers do not work on the Osage customer reference platform
-
-
-**Description**:
-   HPET timers do not work on the Osage customer reference platform which 
includes an Intel® Xeon® processor 5500
-   series processor) using the released BIOS from Intel.
-
-**Implication**:
-   On Osage boards, the implementation of the ``rte_delay_us()`` function must 
be changed to not use the HPET timer.
-
-**Resolution/Workaround**:
-   This can be addressed by building the system with ``RTE_LIBEAL_USE_HPET`` 
unset
-   or by using the ``--no-hpet`` EAL option.
-
-**Affected Environment/Platform**:
-   The Osage customer reference platform.
-   Other vendor platforms with Intel®  Xeon® processor 5500 series processors 
should
-   work correctly, provided the BIOS supports HPET.
-
-**Driver/Module**:
-   ``lib/eal/include/rte_cycles.h``
-
-
-Not all variants of supported NIC types have been used in testing
-

[PATCH] doc: reword contributor's guide for grammar/clarity

2025-04-01 Thread Nandini Persad
I'm reviewing the Contributor's Guidelines to be more
clear and concise where necessary.

Signed-off-by: Nandini Persad 
---
 doc/guides/contributing/coding_style.rst | 105 +++
 doc/guides/contributing/design.rst   |   9 +-
 doc/guides/contributing/new_library.rst  |  31 ---
 3 files changed, 77 insertions(+), 68 deletions(-)

diff --git a/doc/guides/contributing/coding_style.rst 
b/doc/guides/contributing/coding_style.rst
index 1ebc79ca3c..c465bd0f1f 100644
--- a/doc/guides/contributing/coding_style.rst
+++ b/doc/guides/contributing/coding_style.rst
@@ -15,14 +15,13 @@ It is based on the Linux Kernel coding guidelines and the 
FreeBSD 7.2 Kernel Dev
 General Guidelines
 --
 
-The rules and guidelines given in this document cannot cover every situation, 
so the following general guidelines should be used as a fallback:
+This document's rules and guidelines cannot cover every scenario, so the 
following general principles
+should be used as a fallback.
 
-* The code style should be consistent within each individual file.
-* In the case of creating new files, the style should be consistent within 
each file in a given directory or module.
-* The primary reason for coding standards is to increase code readability and 
comprehensibility, therefore always use whatever option will make the code 
easiest to read.
-
-Line length is recommended to be not more than 80 characters, including 
comments.
-[Tab stop size should be assumed to be 8-characters wide].
+* Maintain a consistent coding style within each file.
+* When creating new files, ensure consistency with other files in the same 
directory or module.
+* Prioritize readability and clarity. Choose the style that makes the code 
easiest to read.
+* Keep line length within 80 characters, including comments. Assume a tab stop 
size of 8 characters.
 
 .. note::
 
@@ -36,7 +35,7 @@ Usual Comments
 ~~
 
 These comments should be used in normal cases.
-To document a public API, a doxygen-like format must be used: refer to 
:ref:`doxygen_guidelines`.
+To document a public API, a Doxygen-like format must be used. Refer to the 
:ref:`doxygen_guidelines`.
 
 .. code-block:: c
 
@@ -110,15 +109,19 @@ Headers should be protected against multiple inclusion 
with the usual:
 Macros
 ~~
 
-Do not ``#define`` or declare names except with the standard DPDK prefix: 
``RTE_``.
-This is to ensure there are no collisions with definitions in the application 
itself.
+Use only the standard DPDK prefix (RTE_) when defining or declaring names
+to prevent conflicts with application definitions.
+
+Macro Naming:
 
-The names of "unsafe" macros (ones that have side effects), and the names of 
macros for manifest constants, are all in uppercase.
+* "Unsafe" macros (those with side effects) and macros for manifest constants 
must be in uppercase.
+* Expression-like macros should either expand to a single token or be enclosed 
in outer parentheses.
+* If a macro is an inline expansion of a function, give the function a 
lowercase name and the macro the same name in uppercase.
 
-The expansions of expression-like macros are either a single token or have 
outer parentheses.
-If a macro is an inline expansion of a function, the function name is all in 
lowercase and the macro has the same name all in uppercase.
-If the macro encapsulates a compound statement, enclose it in a do-while loop, 
so that it can be used safely in if statements.
-Any final statement-terminating semicolon should be supplied by the macro 
invocation rather than the macro, to make parsing easier for pretty-printers 
and editors.
+Encapsulation:
+
+* Macros that wrap compound statements should be enclosed in a do-while loop 
to ensure safe use in ``if`` statements.
+* The semicolon terminating a statement should be provided by the macro 
invocation, not the macro itself, to improve readability for formatters and 
editors.
 
 For example:
 
@@ -138,38 +141,34 @@ Conditional Compilation
 
 .. note::
 
-   Conditional compilation should be used only when absolutely necessary,
-   as it increases the number of target binaries that need to be built and 
tested.
-   See below for details of some utility macros/defines available
-   to allow ifdefs/macros to be replaced by C conditional in some cases.
-
-Some high-level guidelines on the use of conditional compilation:
-
-* If code can compile on all platforms/systems,
-  but cannot run on some due to lack of support,
-  then regular C conditionals, as described in the next section,
-  should be used instead of conditional compilation.
-* If the code in question cannot compile on all systems,
-  but constitutes only a small fragment of a file,
-  then conditional compilation should be used, as described in this section.
-* If the code for conditional compilation implements an interface in an OS
-  or platform-specific way, then create a file for each OS or platform
-  and select the appropriate file using th

[PATCH] doc: remove known issues

2025-04-01 Thread Nandini Persad
I have uploaded all these known issues into Bugzilla,
so they are not needed here anymore.

Signed-off-by: 
---
 doc/guides/rel_notes/known_issues.rst | 875 --
 1 file changed, 875 deletions(-)
 delete mode 100644 doc/guides/rel_notes/known_issues.rst

diff --git a/doc/guides/rel_notes/known_issues.rst 
b/doc/guides/rel_notes/known_issues.rst
deleted file mode 100644
index 73c72ba484..00
--- a/doc/guides/rel_notes/known_issues.rst
+++ /dev/null
@@ -1,875 +0,0 @@
-..  SPDX-License-Identifier: BSD-3-Clause
-Copyright(c) 2010-2014 Intel Corporation.
-
-Known Issues and Limitations in Legacy Releases
-===
-
-This section describes known issues with the DPDK software that aren't covered 
in the version specific release
-notes sections.
-
-
-Unit Test for Link Bonding may fail at test_tlb_tx_burst()
---
-
-**Description**:
-   Unit tests will fail in ``test_tlb_tx_burst()`` function with error for 
uneven distribution of packets.
-
-**Implication**:
-   Unit test link_bonding_autotest will fail.
-
-**Resolution/Workaround**:
-   There is no workaround available.
-
-**Affected Environment/Platform**:
-   Fedora 20.
-
-**Driver/Module**:
-   Link Bonding.
-
-
-Pause Frame Forwarding does not work properly on igb
-
-
-**Description**:
-   For igb devices rte_eth_flow_ctrl_set does not work as expected.
-   Pause frames are always forwarded on igb, regardless of the ``RFCE``, 
``MPMCF`` and ``DPF`` registers.
-
-**Implication**:
-   Pause frames will never be rejected by the host on 1G NICs and they will 
always be forwarded.
-
-**Resolution/Workaround**:
-   There is no workaround available.
-
-**Affected Environment/Platform**:
-   All.
-
-**Driver/Module**:
-   Poll Mode Driver (PMD).
-
-
-In packets provided by the PMD, some flags are missing
---
-
-**Description**:
-   In packets provided by the PMD, some flags are missing.
-   The application does not have access to information provided by the hardware
-   (packet is broadcast, packet is multicast, packet is IPv4 and so on).
-
-**Implication**:
-   The ``ol_flags`` field in the ``rte_mbuf`` structure is not correct and 
should not be used.
-
-**Resolution/Workaround**:
-   The application has to parse the Ethernet header itself to get the 
information, which is slower.
-
-**Affected Environment/Platform**:
-   All.
-
-**Driver/Module**:
-   Poll Mode Driver (PMD).
-
-The rte_malloc library is not fully implemented

-
-**Description**:
-   The ``rte_malloc`` library is not fully implemented.
-
-**Implication**:
-   All debugging features of rte_malloc library described in architecture 
documentation are not yet implemented.
-
-**Resolution/Workaround**:
-   No workaround available.
-
-**Affected Environment/Platform**:
-   All.
-
-**Driver/Module**:
-   ``rte_malloc``.
-
-
-HPET reading is slow
-
-
-**Description**:
-   Reading the HPET chip is slow.
-
-**Implication**:
-   An application that calls ``rte_get_hpet_cycles()`` or 
``rte_timer_manage()`` runs slower.
-
-**Resolution/Workaround**:
-   The application should not call these functions too often in the main loop.
-   An alternative is to use the TSC register through ``rte_rdtsc()`` which is 
faster,
-   but specific to an lcore and is a cycle reference, not a time reference.
-
-**Affected Environment/Platform**:
-   All.
-
-**Driver/Module**:
-   Environment Abstraction Layer (EAL).
-
-
-HPET timers do not work on the Osage customer reference platform
-
-
-**Description**:
-   HPET timers do not work on the Osage customer reference platform which 
includes an Intel® Xeon® processor 5500
-   series processor) using the released BIOS from Intel.
-
-**Implication**:
-   On Osage boards, the implementation of the ``rte_delay_us()`` function must 
be changed to not use the HPET timer.
-
-**Resolution/Workaround**:
-   This can be addressed by building the system with ``RTE_LIBEAL_USE_HPET`` 
unset
-   or by using the ``--no-hpet`` EAL option.
-
-**Affected Environment/Platform**:
-   The Osage customer reference platform.
-   Other vendor platforms with Intel®  Xeon® processor 5500 series processors 
should
-   work correctly, provided the BIOS supports HPET.
-
-**Driver/Module**:
-   ``lib/eal/include/rte_cycles.h``
-
-
-Not all variants of supported NIC types have been used in testing
--
-
-**Description**:
-   The supported network interface cards can come in a number of variants with 
different device ID's.
-   Not all of these variants have been tested with the DPDK.
-
-   The NIC device identifiers used during testing:
-
-   * Intel® Ethernet Controller XL710 for 40GbE QSFP

[PATCH v3] doc: remove known issues

2025-04-11 Thread Nandini Persad
I have uploaded all these known issues into Bugzilla,
so they are not needed here anymore.

Signed-off-by: Nandini Persad 
---
 doc/guides/rel_notes/index.rst|   1 -
 doc/guides/rel_notes/known_issues.rst | 875 --
 2 files changed, 876 deletions(-)
 delete mode 100644 doc/guides/rel_notes/known_issues.rst

diff --git a/doc/guides/rel_notes/index.rst b/doc/guides/rel_notes/index.rst
index 6462f01966..fdc30741f9 100644
--- a/doc/guides/rel_notes/index.rst
+++ b/doc/guides/rel_notes/index.rst
@@ -46,5 +46,4 @@ Release Notes
 release_2_1
 release_2_0
 release_1_8
-known_issues
 deprecation
diff --git a/doc/guides/rel_notes/known_issues.rst 
b/doc/guides/rel_notes/known_issues.rst
deleted file mode 100644
index 73c72ba484..00
--- a/doc/guides/rel_notes/known_issues.rst
+++ /dev/null
@@ -1,875 +0,0 @@
-..  SPDX-License-Identifier: BSD-3-Clause
-Copyright(c) 2010-2014 Intel Corporation.
-
-Known Issues and Limitations in Legacy Releases
-===
-
-This section describes known issues with the DPDK software that aren't covered 
in the version specific release
-notes sections.
-
-
-Unit Test for Link Bonding may fail at test_tlb_tx_burst()
---
-
-**Description**:
-   Unit tests will fail in ``test_tlb_tx_burst()`` function with error for 
uneven distribution of packets.
-
-**Implication**:
-   Unit test link_bonding_autotest will fail.
-
-**Resolution/Workaround**:
-   There is no workaround available.
-
-**Affected Environment/Platform**:
-   Fedora 20.
-
-**Driver/Module**:
-   Link Bonding.
-
-
-Pause Frame Forwarding does not work properly on igb
-
-
-**Description**:
-   For igb devices rte_eth_flow_ctrl_set does not work as expected.
-   Pause frames are always forwarded on igb, regardless of the ``RFCE``, 
``MPMCF`` and ``DPF`` registers.
-
-**Implication**:
-   Pause frames will never be rejected by the host on 1G NICs and they will 
always be forwarded.
-
-**Resolution/Workaround**:
-   There is no workaround available.
-
-**Affected Environment/Platform**:
-   All.
-
-**Driver/Module**:
-   Poll Mode Driver (PMD).
-
-
-In packets provided by the PMD, some flags are missing
---
-
-**Description**:
-   In packets provided by the PMD, some flags are missing.
-   The application does not have access to information provided by the hardware
-   (packet is broadcast, packet is multicast, packet is IPv4 and so on).
-
-**Implication**:
-   The ``ol_flags`` field in the ``rte_mbuf`` structure is not correct and 
should not be used.
-
-**Resolution/Workaround**:
-   The application has to parse the Ethernet header itself to get the 
information, which is slower.
-
-**Affected Environment/Platform**:
-   All.
-
-**Driver/Module**:
-   Poll Mode Driver (PMD).
-
-The rte_malloc library is not fully implemented

-
-**Description**:
-   The ``rte_malloc`` library is not fully implemented.
-
-**Implication**:
-   All debugging features of rte_malloc library described in architecture 
documentation are not yet implemented.
-
-**Resolution/Workaround**:
-   No workaround available.
-
-**Affected Environment/Platform**:
-   All.
-
-**Driver/Module**:
-   ``rte_malloc``.
-
-
-HPET reading is slow
-
-
-**Description**:
-   Reading the HPET chip is slow.
-
-**Implication**:
-   An application that calls ``rte_get_hpet_cycles()`` or 
``rte_timer_manage()`` runs slower.
-
-**Resolution/Workaround**:
-   The application should not call these functions too often in the main loop.
-   An alternative is to use the TSC register through ``rte_rdtsc()`` which is 
faster,
-   but specific to an lcore and is a cycle reference, not a time reference.
-
-**Affected Environment/Platform**:
-   All.
-
-**Driver/Module**:
-   Environment Abstraction Layer (EAL).
-
-
-HPET timers do not work on the Osage customer reference platform
-
-
-**Description**:
-   HPET timers do not work on the Osage customer reference platform which 
includes an Intel® Xeon® processor 5500
-   series processor) using the released BIOS from Intel.
-
-**Implication**:
-   On Osage boards, the implementation of the ``rte_delay_us()`` function must 
be changed to not use the HPET timer.
-
-**Resolution/Workaround**:
-   This can be addressed by building the system with ``RTE_LIBEAL_USE_HPET`` 
unset
-   or by using the ``--no-hpet`` EAL option.
-
-**Affected Environment/Platform**:
-   The Osage customer reference platform.
-   Other vendor platforms with Intel®  Xeon® processor 5500 series processors 
should
-   work correctly, provided the BIOS supports HPET.
-
-**Driver/Module**:
-   ``lib/eal/include/rte_cycles.h``
-
-
-Not all variants of supported NIC types have been us

[PATCH v2] doc: reword contributor's guide for grammar/clarity

2025-04-11 Thread Nandini Persad
Reviewing the Contributor Guidelines for grammar and
comprehension.

Signed-off-by: Nandini Persad 
---
 doc/guides/contributing/coding_style.rst | 105 +++
 doc/guides/contributing/design.rst   |   9 +-
 doc/guides/contributing/new_library.rst  |  31 ---
 3 files changed, 77 insertions(+), 68 deletions(-)

diff --git a/doc/guides/contributing/coding_style.rst 
b/doc/guides/contributing/coding_style.rst
index 1ebc79ca3c..bc8021e767 100644
--- a/doc/guides/contributing/coding_style.rst
+++ b/doc/guides/contributing/coding_style.rst
@@ -15,14 +15,13 @@ It is based on the Linux Kernel coding guidelines and the 
FreeBSD 7.2 Kernel Dev
 General Guidelines
 --
 
-The rules and guidelines given in this document cannot cover every situation, 
so the following general guidelines should be used as a fallback:
+This document's rules and guidelines cannot cover every scenario, so the 
following general principles
+should be used as a fallback.
 
-* The code style should be consistent within each individual file.
-* In the case of creating new files, the style should be consistent within 
each file in a given directory or module.
-* The primary reason for coding standards is to increase code readability and 
comprehensibility, therefore always use whatever option will make the code 
easiest to read.
-
-Line length is recommended to be not more than 80 characters, including 
comments.
-[Tab stop size should be assumed to be 8-characters wide].
+* Maintain a consistent coding style within each file.
+* When creating new files, ensure consistency with other files in the same 
directory or module.
+* Prioritize readability and clarity. Choose the style that makes the code 
easiest to read.
+* Keep line length within 80 characters, including comments. Assume a tab stop 
size of 8 characters.
 
 .. note::
 
@@ -36,7 +35,7 @@ Usual Comments
 ~~
 
 These comments should be used in normal cases.
-To document a public API, a doxygen-like format must be used: refer to 
:ref:`doxygen_guidelines`.
+To document a public API, a Doxygen-like format must be used. Refer to the 
:ref:`doxygen_guidelines`.
 
 .. code-block:: c
 
@@ -110,15 +109,19 @@ Headers should be protected against multiple inclusion 
with the usual:
 Macros
 ~~
 
-Do not ``#define`` or declare names except with the standard DPDK prefix: 
``RTE_``.
-This is to ensure there are no collisions with definitions in the application 
itself.
+Use only the standard DPDK prefix (``RTE_``) when defining or declaring names
+to prevent conflicts with application definitions.
+
+Macro Naming:
+
+* "Unsafe" macros (those with side effects) and macros for manifest constants 
must be in uppercase.
+* Expression-like macros should either expand to a single token or be enclosed 
in outer parentheses.
+* If a macro is an inline expansion of a function, give the function a 
lowercase name and the macro the same name in uppercase.
 
-The names of "unsafe" macros (ones that have side effects), and the names of 
macros for manifest constants, are all in uppercase.
+Encapsulation:
 
-The expansions of expression-like macros are either a single token or have 
outer parentheses.
-If a macro is an inline expansion of a function, the function name is all in 
lowercase and the macro has the same name all in uppercase.
-If the macro encapsulates a compound statement, enclose it in a do-while loop, 
so that it can be used safely in if statements.
-Any final statement-terminating semicolon should be supplied by the macro 
invocation rather than the macro, to make parsing easier for pretty-printers 
and editors.
+* Macros that wrap compound statements should be enclosed in a do-while loop 
to ensure safe use in ``if`` statements.
+* The semicolon terminating a statement should be provided by the macro 
invocation, not the macro itself, to improve readability for formatters and 
editors.
 
 For example:
 
@@ -138,38 +141,34 @@ Conditional Compilation
 
 .. note::
 
-   Conditional compilation should be used only when absolutely necessary,
-   as it increases the number of target binaries that need to be built and 
tested.
-   See below for details of some utility macros/defines available
-   to allow ifdefs/macros to be replaced by C conditional in some cases.
-
-Some high-level guidelines on the use of conditional compilation:
-
-* If code can compile on all platforms/systems,
-  but cannot run on some due to lack of support,
-  then regular C conditionals, as described in the next section,
-  should be used instead of conditional compilation.
-* If the code in question cannot compile on all systems,
-  but constitutes only a small fragment of a file,
-  then conditional compilation should be used, as described in this section.
-* If the code for conditional compilation implements an interface in an OS
-  or platform-specific way, then create a file for each OS or platform
-  and select the appropriate file using the Meson build system.
-  In mos

Re: [PATCH v2] doc: reword flow offload and api docs

2025-04-24 Thread Nandini Persad
Thanks Stephen. I think I know how to fix it. Might have to submit a new patch

From: Stephen Hemminger 
Sent: Thursday, April 24, 2025 9:08:58 AM
To: Nandini Persad 
Cc: dev@dpdk.org 
Subject: Re: [PATCH v2] doc: reword flow offload and api docs

On Thu, 10 Apr 2025 11:39:31 -0700
Nandini Persad  wrote:

> I removed low level details of the code from the Flow Offload
> section of the Programmer's Guide so that it is more about
> the general usage of the APIs. I moved the specific details
> and definitions of the constants/function to the API docs
> if the details were not already there.
>
> Signed-off-by: Nandini Persad 
> ---
>  doc/guides/prog_guide/ethdev/flow_offload.rst | 3584 +

Looks good but does not apply cleanly to current main branch.
Tried manually fixing it but it was too hard and gave up.


[PATCH] doc: reword contributor's guidelines

2025-05-12 Thread Nandini Persad
I have revised sections 9-12 for grammar and clarity.

Signed-off-by: Nandini Persad 
---
 doc/guides/contributing/linux_uapi.rst|  32 ++--
 doc/guides/contributing/patches.rst   | 177 +++---
 doc/guides/contributing/stable.rst| 163 
 doc/guides/contributing/vulnerability.rst |  59 
 4 files changed, 193 insertions(+), 238 deletions(-)

diff --git a/doc/guides/contributing/linux_uapi.rst 
b/doc/guides/contributing/linux_uapi.rst
index 79bedb478e..72abef2133 100644
--- a/doc/guides/contributing/linux_uapi.rst
+++ b/doc/guides/contributing/linux_uapi.rst
@@ -7,14 +7,14 @@ Linux uAPI header files
 Rationale
 -
 
-The system a DPDK library or driver is built on is not necessarily running the
-same Kernel version than the system that will run it.
-Importing Linux Kernel uAPI headers enable to build features that are not
-supported yet by the build system.
+The system used to build a DPDK library or driver may not be running the same 
kernel version
+as the target system where it will be deployed. To support features that are 
not yet available
+in the build system’s kernel, Linux kernel uAPI headers can be imported.
 
-For example, the build system runs upstream Kernel v5.19 and we would like to
-build a VDUSE application that will use VDUSE_IOTLB_GET_INFO ioctl() introduced
-in Linux Kernel v6.0.
+For example, if the build system runs upstream Kernel v5.19, but you need to 
build a VDUSE application
+that uses the VDUSE_IOTLB_GET_INFO ioctl introduced in Kernel v6.0, importing 
the relevant uAPI headers allows this.
+
+The Linux kernel's syscall license exception permits the inclusion of 
unmodified uAPI header files in such cases.
 
 `Linux Kernel licence exception regarding syscalls
 
<https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/plain/LICENSES/exceptions/Linux-syscall-note>`_
@@ -23,20 +23,19 @@ enable importing unmodified Linux Kernel uAPI header files.
 Importing or updating an uAPI header file
 -
 
-In order to ensure the imported uAPI headers are both unmodified and from a
-released version of the linux Kernel, a helper script is made available and
-MUST be used.
+To ensure that imported uAPI headers are unmodified and sourced from an 
official Linux
+kernel release, a helper script is provided and must be used.
 Below is an example to import ``linux/vduse.h`` file from Linux ``v6.10``:
 
 .. code-block:: console
 
devtools/linux-uapi.sh -i linux/vduse.h -u v6.10
 
-Once imported, the header files should be committed without any other change.
-Note that all the imported headers will be updated to the requested version.
+Once imported, header files must be committed without any modifications. Note 
that all imported
+headers will be updated to match the specified kernel version.
 
-Updating imported headers to a newer released version should only be done on
-a need basis, it can be achieved using the same script:
+Updates to a newer released version should be performed only when necessary, 
and can be done
+using the same helper script.
 
 .. code-block:: console
 
@@ -60,8 +59,9 @@ Note that all the linux-uapi.sh script options can be 
combined. For example:
 Header inclusion into library or driver
 ---
 
-The library or driver willing to make use of imported uAPI headers needs to
-explicitly include the header file with ``uapi/`` prefix in C files.
+Libraries or drivers that rely on imported uAPI headers must explicitly include
+the relevant header using the ``uapi/`` prefix in their C source files.
+
 For example to include VDUSE uAPI:
 
 .. code-block:: c
diff --git a/doc/guides/contributing/patches.rst 
b/doc/guides/contributing/patches.rst
index d21ee288b2..88945b8f5d 100644
--- a/doc/guides/contributing/patches.rst
+++ b/doc/guides/contributing/patches.rst
@@ -5,56 +5,53 @@
 
 Contributing Code to DPDK
 =
+This document provides guidelines for submitting code to DPDK.
 
-This document outlines the guidelines for submitting code to DPDK.
-
-The DPDK development process is modeled (loosely) on the Linux Kernel 
development model so it is worth reading the
-Linux kernel guide on submitting patches:
+The DPDK development process is loosely based on the Linux Kernel development
+model. It is recommended to review the Linux kernel's guide:
 `How to Get Your Change Into the Linux Kernel 
<https://www.kernel.org/doc/html/latest/process/submitting-patches.html>`_.
-The rationale for many of the DPDK guidelines is explained in greater detail 
in the kernel guidelines.
+Many of DPDK's submission guidelines draw from the kernel process,
+and the rationale behind them is often explained in greater depth there.
 
 
 The DPDK Development Process
 
 
-The DPDK development process has the following features:
+The DPDK development process includes the following key

[PATCH] doc: fix anchors namespace in guides

2025-05-27 Thread Nandini Persad
I modified the anchor names within the guides to have
a clear prefix so that they don't collide, based on
advice from Thomas.

Signed-off-by: Nandini Persad 
---
 doc/guides/freebsd_gsg/build_dpdk.rst  |  8 
 doc/guides/freebsd_gsg/build_sample_apps.rst   |  6 +++---
 doc/guides/freebsd_gsg/install_from_ports.rst  |  2 +-
 doc/guides/linux_gsg/build_dpdk.rst|  4 ++--
 .../linux_gsg/cross_build_dpdk_for_arm64.rst   |  4 ++--
 doc/guides/linux_gsg/enable_func.rst   |  6 +++---
 doc/guides/linux_gsg/linux_drivers.rst |  6 +++---
 doc/guides/nics/af_xdp.rst |  6 +++---
 doc/guides/nics/bnx2x.rst  |  2 +-
 doc/guides/nics/build_and_test.rst |  2 +-
 doc/guides/nics/cnxk.rst   |  2 +-
 doc/guides/nics/cxgbe.rst  | 12 ++--
 doc/guides/nics/dpaa.rst   |  2 +-
 doc/guides/nics/dpaa2.rst  |  4 ++--
 doc/guides/nics/enic.rst   |  6 +++---
 doc/guides/nics/i40e.rst   |  4 ++--
 doc/guides/nics/ice.rst|  2 +-
 doc/guides/nics/intel_vf.rst   | 10 +-
 doc/guides/nics/ixgbe.rst  |  2 +-
 doc/guides/nics/mlx4.rst   | 10 +-
 doc/guides/nics/mlx5.rst   |  4 ++--
 doc/guides/nics/mvpp2.rst  | 12 ++--
 doc/guides/nics/netvsc.rst |  4 ++--
 doc/guides/nics/overview.rst   |  2 +-
 doc/guides/nics/qede.rst   |  4 ++--
 doc/guides/nics/virtio.rst |  4 ++--
 doc/guides/nics/vmxnet3.rst|  6 +++---
 doc/guides/platform/cnxk.rst   |  8 
 doc/guides/platform/dpaa.rst   |  2 +-
 doc/guides/platform/dpaa2.rst  |  2 +-
 doc/guides/platform/mlx5.rst   | 18 +-
 doc/guides/platform/octeontx.rst   |  2 +-
 doc/guides/tools/dts.rst   | 10 +-
 33 files changed, 89 insertions(+), 89 deletions(-)

diff --git a/doc/guides/freebsd_gsg/build_dpdk.rst 
b/doc/guides/freebsd_gsg/build_dpdk.rst
index f98292bf41..a14b9e9f24 100644
--- a/doc/guides/freebsd_gsg/build_dpdk.rst
+++ b/doc/guides/freebsd_gsg/build_dpdk.rst
@@ -3,7 +3,7 @@
 
 .. include:: 
 
-.. _building_from_source:
+.. _freebsd_gsg_building_from_source:
 
 Compiling the DPDK Target from Source
 =
@@ -67,7 +67,7 @@ the next section.
 variable.
 
 
-.. _loading_contigmem:
+.. _freebsd_loading_contigmem:
 
 Loading the DPDK contigmem Module
 -
@@ -148,7 +148,7 @@ available and can be verified via dmesg or 
``/var/log/messages``::
 
 To avoid this error, reduce the number of buffers or the buffer size.
 
-.. _loading_nic_uio:
+.. _freebsd_gsg_loading_nic_uio:
 
 Loading the DPDK nic_uio Module
 ---
@@ -185,7 +185,7 @@ already bound to a driver other than ``nic_uio``. The 
following sub-section desc
 how to query and modify the device ownership of the ports to be used by
 DPDK applications.
 
-.. _binding_network_ports:
+.. _freebsd_gsg_binding_network_ports:
 
 Binding Network Ports to the nic_uio Module
 ~~~
diff --git a/doc/guides/freebsd_gsg/build_sample_apps.rst 
b/doc/guides/freebsd_gsg/build_sample_apps.rst
index 7bdd88e56d..535738617b 100644
--- a/doc/guides/freebsd_gsg/build_sample_apps.rst
+++ b/doc/guides/freebsd_gsg/build_sample_apps.rst
@@ -1,7 +1,7 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
 Copyright(c) 2010-2014 Intel Corporation.
 
-.. _compiling_sample_apps:
+.. _freebsd_gsg_compiling_sample_apps:
 
 Compiling and Running Sample Applications
 =
@@ -44,7 +44,7 @@ the installation of DPDK using `meson install` as described 
previously::
 ln -sf helloworld-shared build/helloworld
 
 
-.. _running_sample_app:
+.. _freebsd_gsg_running_sample_app:
 
 Running a Sample Application
 
@@ -96,7 +96,7 @@ Other options, specific to Linux and are not supported under 
FreeBSD are as foll
 
 The ``-c`` or ``-l`` option is mandatory; the others are optional.
 
-.. _running_non_root:
+.. _freebsd_gsg_running_non_root:
 
 Running DPDK Applications Without Root Privileges
 -
diff --git a/doc/guides/freebsd_gsg/install_from_ports.rst 
b/doc/guides/freebsd_gsg/install_from_ports.rst
index 3c98c46b29..b9e9bc4bac 100644
--- a/doc/guides/freebsd_gsg/install_from_ports.rst
+++ b/doc/guides/freebsd_gsg/install_from_ports.rst
@@ -1,7 +1,7 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
 Copyright(c) 2010-2014 Intel Corporation.
 
-.. _install_from_ports:
+.. _freebsd_gsg_install_from_ports:
 
 Installing DPDK from the Ports Colle

[PATCH] doc: reword ethdev guide

2025-08-03 Thread Nandini Persad
With the help of Ajit Khaparde, I spent some time adding minor
information and rewriting this section for grammar and
clarity.

Signed-off-by: Nandini Persad 
---
 doc/guides/prog_guide/ethdev/ethdev.rst | 381 +---
 1 file changed, 207 insertions(+), 174 deletions(-)

diff --git a/doc/guides/prog_guide/ethdev/ethdev.rst 
b/doc/guides/prog_guide/ethdev/ethdev.rst
index 89eb31a48d..aca277c701 100644
--- a/doc/guides/prog_guide/ethdev/ethdev.rst
+++ b/doc/guides/prog_guide/ethdev/ethdev.rst
@@ -4,295 +4,328 @@
 Poll Mode Driver
 
 
-The DPDK includes 1 Gigabit, 10 Gigabit and 40 Gigabit and para virtualized 
virtio Poll Mode Drivers.
+The Data Plane Development Kit (DPDK) supports a wide range of Ethernet speeds,
+from 10 Megabits to 400 Gigabits.
+
+DPDK’s Poll Mode Drivers (PMDs) are high-performance, optimized drivers for 
various
+network interface cards that bypass the traditional kernel network stack to 
reduce
+latency and improve throughput. They access RX and TX descriptors directly in 
a polling
+mode without relying on interrupts (except for Link Status Change 
notifications), enabling
+efficient packet reception and transmission in user-space applications.
+
+This section outlines the requirements of Ethernet PMDs, their design 
principles,
+and presents a high-level architecture along with a generic external API.
 
-A Poll Mode Driver (PMD) consists of APIs, provided through the BSD driver 
running in user space,
-to configure the devices and their respective queues.
-In addition, a PMD accesses the RX and TX descriptors directly without any 
interrupts
-(with the exception of Link Status Change interrupts) to quickly receive,
-process and deliver packets in the user's application.
-This section describes the requirements of the PMDs,
-their global design principles and proposes a high-level architecture and a 
generic external API for the Ethernet PMDs.
 
 Requirements and Assumptions
 
 
-The DPDK environment for packet processing applications allows for two models, 
run-to-completion and pipe-line:
+The DPDK environment for packet processing applications allows for two models: 
run-to-completion and pipe-line:
 
-*   In the *run-to-completion*  model, a specific port's RX descriptor ring is 
polled for packets through an API.
-Packets are then processed on the same core and placed on a port's TX 
descriptor ring through an API for transmission.
+*   In the *run-to-completion*  model, a specific port’s RX descriptor ring is 
polled for packets through an API.
+Packets are then processed on the same core and transmitted via the port’s 
TX descriptor ring using another API.
 
-*   In the *pipe-line*  model, one core polls one or more port's RX descriptor 
ring through an API.
-Packets are received and passed to another core via a ring.
-The other core continues to process the packet which then may be placed on 
a port's TX descriptor ring through an API for transmission.
+*   In the *pipe-line*  model, one core polls the RX descriptor ring(s) of one 
or more ports via an API.
+Received packets are then passed to another core through a ring for 
further processing,
+    which may include transmission through the TX descriptor ring using an 
API.
 
-In a synchronous run-to-completion model,
-each logical core assigned to the DPDK executes a packet processing loop that 
includes the following steps:
+In a synchronous run-to-completion model, each logical core (lcore)
+assigned to DPDK executes a packet processing loop consisting of:
 
-*   Retrieve input packets through the PMD receive API
+*   Retrieving input packets using the PMD receive API
 
-*   Process each received packet one at a time, up to its forwarding
+*   Processing each received packet individually, up to its forwarding
 
-*   Send pending output packets through the PMD transmit API
+*   Transmitting output packets using the PMD transmit API
 
-Conversely, in an asynchronous pipe-line model, some logical cores may be 
dedicated to the retrieval of received packets and
-other logical cores to the processing of previously received packets.
-Received packets are exchanged between logical cores through rings.
-The loop for packet retrieval includes the following steps:
+In contrast, the asynchronous pipeline model assigns some logical cores to 
retrieve received packets
+and others to process them. Packets are exchanged between cores via rings.
+
+The packet retrieval loop includes:
 
 *   Retrieve input packets through the PMD receive API
 
 *   Provide received packets to processing lcores through packet queues
 
-The loop for packet processing includes the following steps:
+The packet processing loop includes:
 
-*   Retrieve the received packet from the packet queue
+*   Dequeuing received packets from the packet queue
 
-*   Process the received packet, up to its retransmission if forwarded
+*   Processing packets, in