The numbers of directed credits, atomic inflight buffer entries, and
history list entries are updated to match what is supported in DLB 2.0.
The Class of Service section is revised, and the obsolete atm_inflights
devargs example is removed.
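As a quick reference, the devargs touched by this revision are used as
follows (illustrative only; ea:00.0 is the placeholder PCI address used
throughout the guide, and <value> is a user-chosen credit count):

```console
# Override the directed credit pool size (default is now nb_events_limit/2)
--allow ea:00.0,num_dir_credits=<value>

# Select a class of service; each of classes 0-3 gets 25% of the bandwidth
--allow ea:00.0,cos=<0..3>

# Enable use of x86 vector instructions (renamed from vector_opts_enabled)
--allow ea:00.0,vector_opts_enable=<y/Y>
```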

Signed-off-by: Rashmi Shetty <rashmi.she...@intel.com>
---
 doc/guides/eventdevs/dlb2.rst | 32 +++++++++++---------------------
 1 file changed, 11 insertions(+), 21 deletions(-)

diff --git a/doc/guides/eventdevs/dlb2.rst b/doc/guides/eventdevs/dlb2.rst
index bce984ca08..c2887a71dc 100644
--- a/doc/guides/eventdevs/dlb2.rst
+++ b/doc/guides/eventdevs/dlb2.rst
@@ -151,7 +151,7 @@ load-balanced queues, and directed credits are used for 
directed queues.
 These pools' sizes are controlled by the nb_events_limit field in struct
 rte_event_dev_config. The load-balanced pool is sized to contain
 nb_events_limit credits, and the directed pool is sized to contain
-nb_events_limit/4 credits. The directed pool size can be overridden with the
+nb_events_limit/2 credits. The directed pool size can be overridden with the
 num_dir_credits devargs argument, like so:
 
     .. code-block:: console
@@ -239,8 +239,8 @@ queue A.
 Due to this, workers should stop retrying after a time, release the events it
 is attempting to enqueue, and dequeue more events. It is important that the
 worker release the events and don't simply set them aside to retry the enqueue
-again later, because the port has limited history list size (by default, twice
-the port's dequeue_depth).
+again later, because the port has a limited history list size (by default,
+the same as the port's dequeue_depth).
 
 Priority
 ~~~~~~~~
@@ -309,17 +309,11 @@ scheduled. The likelihood of this case depends on the 
eventdev configuration,
 traffic behavior, event processing latency, potential for a worker to be
 interrupted or otherwise delayed, etc.
 
-By default, the PMD allocates 16 buffer entries for each load-balanced queue,
-which provides an even division across all 128 queues but potentially wastes
+By default, the PMD allocates 64 buffer entries for each load-balanced queue,
+which provides an even division across all 32 queues but potentially wastes
 buffer space (e.g. if not all queues are used, or aren't used for atomic
 scheduling).
 
-The PMD provides a dev arg to override the default per-queue allocation. To
-increase per-queue atomic-inflight allocation to (for example) 64:
-
-    .. code-block:: console
-
-       --allow ea:00.0,atm_inflights=64
 
 QID Depth Threshold
 ~~~~~~~~~~~~~~~~~~~
@@ -337,7 +331,7 @@ Per queue threshold metrics are tracked in the DLB xstats, 
and are also
 returned in the impl_opaque field of each received event.
 
 The per qid threshold can be specified as part of the device args, and
-can be applied to all queue, a range of queues, or a single queue, as
+can be applied to all queues, a range of queues, or a single queue, as
 shown below.
 
     .. code-block:: console
@@ -350,14 +344,10 @@ Class of service
 ~~~~~~~~~~~~~~~~
 
 DLB supports provisioning the DLB bandwidth into 4 classes of service.
+By default, each of the 4 classes (0-3) corresponds to 25% of the DLB
+hardware bandwidth.
 
-- Class 4 corresponds to 40% of the DLB hardware bandwidth
-- Class 3 corresponds to 30% of the DLB hardware bandwidth
-- Class 2 corresponds to 20% of the DLB hardware bandwidth
-- Class 1 corresponds to 10% of the DLB hardware bandwidth
-- Class 0 corresponds to don't care
-
-The classes are applied globally to the set of ports contained in this
+The classes are applied globally to the set of ports contained in the
 scheduling domain, which is more appropriate for the bifurcated
 PMD than for the PF PMD, since the PF PMD supports just 1 scheduling
 domain.
@@ -366,7 +356,7 @@ Class of service can be specified in the devargs, as follows
 
     .. code-block:: console
 
-       --allow ea:00.0,cos=<0..4>
+       --allow ea:00.0,cos=<0..3>
 
 Use X86 Vector Instructions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -379,4 +369,4 @@ follows
 
     .. code-block:: console
 
-       --allow ea:00.0,vector_opts_enabled=<y/Y>
+       --allow ea:00.0,vector_opts_enable=<y/Y>
-- 
2.25.1
