RE: [EXTERNAL] [PATCH] doc: announce cryptodev change to support EDDSA

2024-07-24 Thread Akhil Goyal
> Announce the additions in cryptodev ABI to support EDDSA algorithm.
> 
> Signed-off-by: Gowrishankar Muthukrishnan 
> --
Acked-by: Akhil Goyal 

> RFC:
>   
> https://patches.dpdk.org/project/dpdk/patch/0ae6a1afadac64050d80b0fd7712c4a6a8599e2c.1701273963.git.gmuthukri...@marvell.com/
> ---
>  doc/guides/rel_notes/deprecation.rst | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index 6948641ff6..fcbec965b1 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -147,3 +147,7 @@ Deprecation Notices
>will be deprecated and subsequently removed in DPDK 24.11 release.
>Before this, the new port library API (functions rte_swx_port_*)
>will gradually transition from experimental to stable status.
> +
> +* cryptodev: The enum ``rte_crypto_asym_xform_type`` and struct ``rte_crypto_asym_op``
> +  will be extended to include new values to support EDDSA. This will break
> +  ABI compatibility with existing applications that use these data types.
> --
> 2.21.0



RE: [EXTERNAL] [PATCH] doc: announce cryptodev changes to offload RSA in VirtIO

2024-07-24 Thread Akhil Goyal
> Announce cryptodev changes to offload RSA asymmetric operation in
> VirtIO PMD.
> 
> Signed-off-by: Gowrishankar Muthukrishnan 
> --
> RFC:
>   
> https://patches.dpdk.org/project/dpdk/patch/20230928095300.1353-2-gmuthukri...@marvell.com/
>   
> https://patches.dpdk.org/project/dpdk/patch/20230928095300.1353-3-gmuthukri...@marvell.com/
> ---
Acked-by: Akhil Goyal 

>  doc/guides/rel_notes/deprecation.rst | 11 +++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index 6948641ff6..26fec84aba 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -147,3 +147,14 @@ Deprecation Notices
>will be deprecated and subsequently removed in DPDK 24.11 release.
>Before this, the new port library API (functions rte_swx_port_*)
>will gradually transition from experimental to stable status.
> +
> +* cryptodev: The struct rte_crypto_rsa_padding will be moved from
> +  rte_crypto_rsa_op_param struct to rte_crypto_rsa_xform struct,
> +  breaking ABI. The new location is recommended to comply with
> +  virtio-crypto specification. Applications and drivers using
> +  this struct will be updated.
> +
> +* cryptodev: The rte_crypto_rsa_xform struct member to hold private key
> +  in either exponent or quintuple format is changed from union to struct
> +  data type. This change is to support ASN.1 syntax (RFC 3447 Appendix A.1.2).
> +  This change will not break existing applications.
> --
> 2.21.0



[PATCH v2] net/gve: Fix TX/RX queue setup and stop

2024-07-24 Thread Tathagat Priyadarshi
The patch updates the TX/RX queue setup/stop routines that are
unique to DQO, so that they may be called for instances that use the DQO
RDA format during dev start/stop.

Signed-off-by: Tathagat Priyadarshi 
Acked-by: Joshua Washington 
---
 drivers/net/gve/gve_ethdev.c | 29 +++--
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index ca92277..a20092e 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -288,11 +288,16 @@ struct gve_queue_page_list *
PMD_DRV_LOG(ERR, "Failed to create %u tx queues.", num_queues);
return ret;
}
-   for (i = 0; i < num_queues; i++)
-   if (gve_tx_queue_start(dev, i) != 0) {
+   for (i = 0; i < num_queues; i++) {
+   if (gve_is_gqi(priv))
+   ret = gve_tx_queue_start(dev, i);
+   else
+   ret = gve_tx_queue_start_dqo(dev, i);
+   if (ret != 0) {
PMD_DRV_LOG(ERR, "Fail to start Tx queue %d", i);
goto err_tx;
}
+   }
 
num_queues = dev->data->nb_rx_queues;
priv->rxqs = (struct gve_rx_queue **)dev->data->rx_queues;
@@ -315,9 +320,15 @@ struct gve_queue_page_list *
return 0;
 
 err_rx:
-   gve_stop_rx_queues(dev);
+   if (gve_is_gqi(priv))
+   gve_stop_rx_queues(dev);
+   else
+   gve_stop_rx_queues_dqo(dev);
 err_tx:
-   gve_stop_tx_queues(dev);
+   if (gve_is_gqi(priv))
+   gve_stop_tx_queues(dev);
+   else
+   gve_stop_tx_queues_dqo(dev);
return ret;
 }
 
@@ -362,10 +373,16 @@ struct gve_queue_page_list *
 static int
 gve_dev_stop(struct rte_eth_dev *dev)
 {
+   struct gve_priv *priv = dev->data->dev_private;
dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
-   gve_stop_tx_queues(dev);
-   gve_stop_rx_queues(dev);
+   if (gve_is_gqi(priv)) {
+   gve_stop_tx_queues(dev);
+   gve_stop_rx_queues(dev);
+   } else {
+   gve_stop_tx_queues_dqo(dev);
+   gve_stop_rx_queues_dqo(dev);
+   }
 
dev->data->dev_started = 0;
 
-- 
1.8.3.1



[PATCH v2] doc: document mlx5 HWS actions order

2024-07-24 Thread Maayan Kashani
Add the supported order of actions in the mlx5 PMD when the HW steering
flow engine is used.
This limitation has existed since the HW steering flow engine was introduced.

Fixes: 22681deead3e ("net/mlx5/hws: enable hardware steering")
Cc: sta...@dpdk.org
Signed-off-by: Maayan Kashani 
Acked-by: Dariusz Sosnowski 
---
 doc/guides/nics/mlx5.rst | 20 
 1 file changed, 20 insertions(+)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 43fc181d8dc..4c00bb7755c 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -251,6 +251,26 @@ Limitations
 IPv6 routing extension matching is not supported in flow template relaxed
matching mode (see ``struct rte_flow_pattern_template_attr::relaxed_matching``).
 
+  - The supported actions order is as below::
+
+  MARK(a)
+  *_DECAP(b)
+  OF_POP_VLAN
+  COUNT | AGE
+  METER_MARK | CONNTRACK
+  OF_PUSH_VLAN
+  MODIFY_FIELD
+  *_ENCAP(c)
+  JUMP | DROP | RSS(a) | QUEUE(a) | REPRESENTED_PORT(d)
+
+a. Only supported on ingress.
+b. Any decapsulation action, including the combination of RAW_ENCAP and RAW_DECAP actions
+   which results in L3 decapsulation.
+   Not supported on egress.
+c. Any encapsulation action, including the combination of RAW_ENCAP and RAW_DECAP actions
+   which results in L3 encap.
+d. Only in transfer (switchdev) mode.
+
 - When using Verbs flow engine (``dv_flow_en`` = 0), flow pattern without any
   specific VLAN will match for VLAN packets as well:
 
-- 
2.21.0



RE: [PATCH 3/3] examples/l3fwd: fix maximum acceptable port ID in routes

2024-07-24 Thread Konstantin Ananyev



> > > > > > > Application is accepting routes for port ID up to UINT8_MAX for
> > > > > > > LPM and EM routes on parsing the given rule file, but only up to
> > > > > > > 32 ports can be enabled as per the variable enabled_port_mask
> > > > > > > which is defined as uint32_t.
> > > > > > >
> > > > > > > This patch restricts the rules parsing code to accept routes for
> > > > > > > port ID up to 31 only to avoid any unnecessary maintenance of
> > > > > > > rules which will never be used.
> > > > > >
> > > > > > If we want to add this extra check, it is probably better to do it
> > > > > > in setup_lpm(), where we already check that the port is enabled,
> > > > > > and if not, the route rule is skipped:
> > > > > >
> > > > > > /* populate the LPM table */
> > > > > > for (i = 0; i < route_num_v4; i++) {
> > > > > > struct in_addr in;
> > > > > >
> > > > > > /* skip unused ports */
> > > > > > if ((1 << route_base_v4[i].if_out &
> > > > > > enabled_port_mask) == 0)
> > > > > > continue;
> > > > > >
> > > > > > Same for EM.
> > > > > I am trying to update the check for the MAX if_out value in the rules
> > > > > config file parsing, which happens before setup_lpm().
> > > > > The reason is to restrict and add only those rules which can be used
> > > > > by the application while populating route_base_v4/v6 in the first
> > > > > step, and to avoid unnecessary memory allocation for local variables
> > > > > to store rules that are not required.
> > > >
> > > > Hmm... but why it is a problem?
> > > Not really a problem, Just trying to optimize wherever it Is possible.
> > >
> > > >
> > > > >
> > > > > > ((1 << route_base_v4[i].if_out &
> > > > > > enabled_port_mask)
> > > > > Looking into this check, it seems the restriction to a maximum port ID
> > > > > of 31 while parsing the rule file becomes even more valid, as this check
> > > > > can pass due to overflow in case the value of route_base_v4[i].if_out is 31+.
> > > >
> > > > Agree, I think we need both, and it probably need to be in setup_lpm().
> > > > Something like:
> > > >
> > > > if (route_base_v4[i].if_out >= sizeof(enabled_port_mask) * CHAR_BIT ||
> > > >((1 << route_base_v4[i].if_out & enabled_port_mask) == 0)) {
> > > >  /* print some error message here */
> > > >  rte_exit(...);  /* or return an error */
> > > > }
> > > >
> > > Yes, I can change it to this.
> >
> > I re-checked the code; IMO we should restrict the rules in
> > read_config_files().
> > Maybe we can move this check to read_config_files().
> > Having this check in the setup can result in an rte_exit() call when no
> > user rule file is given and the application is using the default rules.
> > In that case route_base_v4 will have 16 rules for 16 ports (default rules).
> > So this check will always fail unless the user enables all 16 ports with
> > the -p option.
> 
> Ah yes, you are right.
> That's why probably right now we probably just do 'continue;' here...
> Yeh, probably the easiest way is to put this check before setup_lpm() -
> in parsing code, or straight after that.
> Can I ask you for one more thing: can we add a new function that would
> do this check and use it everywhere (lpm/em/acl).

As an alternative thought - we might add to setup_lpm() an extra parameter
to indicate what we want to do with a rule with an invalid/disabled port -
just skip it or fail.
Another alternative - remove the default route ability altogether, though that
one is a change in behavior and probably there would be some complaints.
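
A minimal sketch of the shared check helper suggested above (the function name
and exact error handling are illustrative only, not taken from any posted
patch) could look like this:

    #include <limits.h>
    #include <stdint.h>

    /*
     * Hypothetical common helper for the lpm/em/acl setup paths: returns 0
     * when the route's output port is both representable in
     * enabled_port_mask and actually enabled, -1 otherwise. Checking the
     * port id against the mask width first avoids the undefined shift for
     * port ids of 32 and above discussed in this thread.
     */
    static int
    check_route_port(uint32_t if_out, uint32_t enabled_port_mask)
    {
            if (if_out >= sizeof(enabled_port_mask) * CHAR_BIT)
                    return -1;
            if (((UINT32_C(1) << if_out) & enabled_port_mask) == 0)
                    return -1;
            return 0;
    }

Callers could then decide between skipping the rule and calling rte_exit(),
which is exactly the policy question discussed above.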

> 
> > >
> > > > >
> > > > > > Another question here - why do we just silently skip a rule with an
> > > > > > invalid port?
> > > > > In read_config_files_lpm() we are calling the rte_exit in case port 
> > > > > ID is 31+.
> > > > > In setup_lpm, rules for ports which are not enabled are skipped
> > > > > without giving an error, I guess probably for ease of use.
> > > > > e.g. user has only single ipv4_routes config file with route rules
> > > > > for port ID 0,1,2,3,4 and want to use same file for multiple test
> > > > > cases like 1. when only port 0 enabled 2. when only port 0 and 1
> > > > > enabled and so on.
> > > > > In this case, user can avoid to have separate route files for each of 
> > > > > the test
> > > case.
> > > >
> > > > The problem as I see it - we are not consistent here.
> > > > In some cases we just silently skip rules with invalid (or disabled)
> > > > port numbers, in other cases we generate an error and exit.
> > > > For me it would be better, if we follow one simple policy (abort with
> > > > error) here for all cases.
> > > Ok, I will add the rte_exit if the route port is invalid or not enabled.
> > > From this change onwards it will be assumed the user will add only those
> > > routes with port IDs which are valid and enabled in the application.
> > >
> > > >
> > > > >
> > > > > > Probably need

[PATCH v5 3/6] fib: add missing vector API header include

2024-07-24 Thread Mattias Rönnblom
The trie implementation of the fib library relied on <rte_vect.h>, but
failed to provide a direct include of this file.

Signed-off-by: Mattias Rönnblom 
Acked-by: Bruce Richardson 
Acked-by: Stephen Hemminger 
---
 lib/fib/trie.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/lib/fib/trie.c b/lib/fib/trie.c
index 09470e7287..74db8863df 100644
--- a/lib/fib/trie.c
+++ b/lib/fib/trie.c
@@ -9,6 +9,7 @@
 #include 
 #include 
 #include 
+#include <rte_vect.h>
 
 #include 
 #include 
-- 
2.34.1



[PATCH v5 0/6] Optionally have rte_memcpy delegate to compiler memcpy

2024-07-24 Thread Mattias Rönnblom
Note: These changes are meant for 24.11, not 24.07.

This patch set makes DPDK library, driver, and application code
optionally use the compiler/libc memcpy() when functions in
<rte_memcpy.h> are invoked.

The new compiler memcpy-based rte_memcpy() implementations may be
enabled by means of a build-time option.

This patch set only makes a difference on x86, PPC and ARM. Loongarch
and RISCV already used compiler/libc memcpy().

This patch set includes a number of fixes in drivers and libraries
which erroneously relied on <rte_memcpy.h> including header files
(i.e., <rte_vect.h>) required by its implementation.

Mattias Rönnblom (6):
  net/octeon_ep: add missing vector API header include
  distributor: add missing vector API header include
  fib: add missing vector API header include
  eal: provide option to use compiler memcpy instead of RTE
  ci: test compiler memcpy
  vhost: optimize memcpy routines when cc memcpy is used

 .ci/linux-build.sh |  5 +++
 .github/workflows/build.yml|  7 +++
 config/meson.build |  1 +
 devtools/test-meson-builds.sh  |  4 +-
 doc/guides/rel_notes/release_24_07.rst | 21 +
 drivers/net/octeon_ep/otx_ep_ethdev.c  |  2 +
 lib/distributor/rte_distributor.c  |  1 +
 lib/eal/arm/include/rte_memcpy.h   |  9 
 lib/eal/include/generic/rte_memcpy.h   | 61 +++---
 lib/eal/loongarch/include/rte_memcpy.h | 54 +--
 lib/eal/ppc/include/rte_memcpy.h   |  9 
 lib/eal/riscv/include/rte_memcpy.h | 54 +--
 lib/eal/x86/include/meson.build|  1 +
 lib/eal/x86/include/rte_memcpy.h   |  9 
 lib/fib/trie.c |  1 +
 lib/vhost/virtio_net.c | 37 +++-
 meson_options.txt  |  2 +
 17 files changed, 164 insertions(+), 114 deletions(-)

-- 
2.34.1



[PATCH v5 6/6] vhost: optimize memcpy routines when cc memcpy is used

2024-07-24 Thread Mattias Rönnblom
In builds where use_cc_memcpy is set to true, the vhost user PMD
suffers a large performance drop on Intel P-cores for small packets,
at least when built by GCC and (to a much lesser extent) clang.

This patch addresses that issue by using a custom virtio
memcpy()-based packet copying routine.

Performance results from a Raptor Lake @ 3.2 GHz:

GCC 12.3.0
64 bytes packets
Core  Mode              Mpps
E     RTE memcpy        9.5
E     cc memcpy         9.7
E     cc memcpy+pktcpy  9.0

P     RTE memcpy        16.4
P     cc memcpy         13.5
P     cc memcpy+pktcpy  16.2

GCC 12.3.0
1500 bytes packets
Core  Mode              Mpps
P     RTE memcpy        5.8
P     cc memcpy         5.9
P     cc memcpy+pktcpy  5.9

clang 15.0.7
64 bytes packets
Core  Mode              Mpps
P     RTE memcpy        13.3
P     cc memcpy         12.9
P     cc memcpy+pktcpy  13.9

"RTE memcpy" is use_cc_memcpy=false, "cc memcpy" is use_cc_memcpy=true
and "pktcpy" is when this patch is applied.

Signed-off-by: Mattias Rönnblom 
---
 lib/vhost/virtio_net.c | 37 +++--
 1 file changed, 35 insertions(+), 2 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 370402d849..63571587a8 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -231,6 +231,39 @@ vhost_async_dma_check_completed(struct virtio_net *dev, 
int16_t dma_id, uint16_t
return nr_copies;
 }
 
+/* The code generated by GCC (and to a lesser extent, clang) with just
+ * a straight memcpy() to copy packets is less than optimal on Intel
+ * P-cores, for small packets. Thus the need of this specialized
+ * memcpy() in builds where use_cc_memcpy is set to true.
+ */
+#if defined(RTE_USE_CC_MEMCPY) && defined(RTE_ARCH_X86_64)
+static __rte_always_inline void
+pktcpy(void *restrict in_dst, const void *restrict in_src, size_t len)
+{
+   void *dst = __builtin_assume_aligned(in_dst, 16);
+   const void *src = __builtin_assume_aligned(in_src, 16);
+
+   if (len <= 256) {
+   size_t left;
+
+   for (left = len; left >= 32; left -= 32) {
+   memcpy(dst, src, 32);
+   dst = RTE_PTR_ADD(dst, 32);
+   src = RTE_PTR_ADD(src, 32);
+   }
+
+   memcpy(dst, src, left);
+   } else
+   memcpy(dst, src, len);
+}
+#else
+static __rte_always_inline void
+pktcpy(void *dst, const void *src, size_t len)
+{
+   rte_memcpy(dst, src, len);
+}
+#endif
+
 static inline void
 do_data_copy_enqueue(struct virtio_net *dev, struct vhost_virtqueue *vq)
__rte_shared_locks_required(&vq->iotlb_lock)
@@ -240,7 +273,7 @@ do_data_copy_enqueue(struct virtio_net *dev, struct 
vhost_virtqueue *vq)
int i;
 
for (i = 0; i < count; i++) {
-   rte_memcpy(elem[i].dst, elem[i].src, elem[i].len);
+   pktcpy(elem[i].dst, elem[i].src, elem[i].len);
vhost_log_cache_write_iova(dev, vq, elem[i].log_addr,
   elem[i].len);
PRINT_PACKET(dev, (uintptr_t)elem[i].dst, elem[i].len, 0);
@@ -257,7 +290,7 @@ do_data_copy_dequeue(struct vhost_virtqueue *vq)
int i;
 
for (i = 0; i < count; i++)
-   rte_memcpy(elem[i].dst, elem[i].src, elem[i].len);
+   pktcpy(elem[i].dst, elem[i].src, elem[i].len);
 
vq->batch_copy_nb_elems = 0;
 }
-- 
2.34.1



[PATCH v5 2/6] distributor: add missing vector API header include

2024-07-24 Thread Mattias Rönnblom
The distributor library relied on <rte_vect.h>, but failed to provide
a direct include of this file.

Signed-off-by: Mattias Rönnblom 
Acked-by: Bruce Richardson 
---
 lib/distributor/rte_distributor.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/lib/distributor/rte_distributor.c 
b/lib/distributor/rte_distributor.c
index e58727cdc2..1389efc03f 100644
--- a/lib/distributor/rte_distributor.c
+++ b/lib/distributor/rte_distributor.c
@@ -15,6 +15,7 @@
 #include 
 #include 
 #include 
+#include <rte_vect.h>
 
 #include "rte_distributor.h"
 #include "rte_distributor_single.h"
-- 
2.34.1



[PATCH v5 1/6] net/octeon_ep: add missing vector API header include

2024-07-24 Thread Mattias Rönnblom
The octeon_ep driver relied on <rte_vect.h>, but failed to provide a
direct include of this file.

Signed-off-by: Mattias Rönnblom 
Acked-by: Stephen Hemminger 
---
 drivers/net/octeon_ep/otx_ep_ethdev.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/octeon_ep/otx_ep_ethdev.c 
b/drivers/net/octeon_ep/otx_ep_ethdev.c
index 46211361a0..b069216629 100644
--- a/drivers/net/octeon_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeon_ep/otx_ep_ethdev.c
@@ -5,6 +5,8 @@
 #include 
 #include 
 
+#include <rte_vect.h>
+
 #include "otx_ep_common.h"
 #include "otx_ep_vf.h"
 #include "otx2_ep_vf.h"
-- 
2.34.1



[PATCH v5 4/6] eal: provide option to use compiler memcpy instead of RTE

2024-07-24 Thread Mattias Rönnblom
Provide a build option to have functions in <rte_memcpy.h> delegate to
the standard compiler/libc memcpy(), instead of using the various
custom DPDK, handcrafted, per-architecture rte_memcpy()
implementations.

A new meson build option 'use_cc_memcpy' is added. By default, the
traditional, custom DPDK rte_memcpy() implementation is used.

The performance benefits of the custom DPDK rte_memcpy()
implementations have been diminishing with every compiler release, and
with current toolchains the use of a custom memcpy() implementation
may even be a liability.

An additional benefit of this change is that compilers and static
analysis tools have an easier time detecting incorrect usage of
rte_memcpy() (e.g., buffer overruns, or overlapping source and
destination buffers).
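
As a contrived illustration of that last point (not part of this patch set):
with use_cc_memcpy=true, rte_memcpy() expands to the compiler's memcpy(), so
a copy that overruns a fixed-size object can be diagnosed at build time by
warnings such as GCC's -Wstringop-overflow, which generally cannot see
through the handcrafted vector implementations.

    #include <rte_memcpy.h>

    struct hdr_buf {
            char data[16];
    };

    /*
     * Contrived sketch, assuming a build with use_cc_memcpy=true: the
     * constant 32-byte copy into a 16-byte field becomes visible to the
     * compiler once rte_memcpy() maps to the standard memcpy(), and is
     * typically reported at normal optimization levels.
     */
    void
    copy_header(struct hdr_buf *dst, const char *src)
    {
            rte_memcpy(dst->data, src, 32); /* overruns dst->data by 16 bytes */
    }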

Signed-off-by: Mattias Rönnblom 
Acked-by: Morten Brørup 

---

PATCH v5:
 o Take a more cautious approach, setting use_cc_memcpy to disabled by
   default.
 o Fix ARM build issue in case RTE_ARCH_ARM64_MEMCPY was set.
 o Use separate macros to indicate that the rte_memcpy() is implemented
   by the compiler, and that use_cc_memcpy is set, to avoid accidental
#undefs.
 o Remove redundant rte_config.h includes.

PATCH:
 o Add entry in release notes.
 o Update meson help text.

RFC v3:
 o Fix missing #endif on loongarch.
 o PPC and RISCV now implemented, meaning all architectures are supported.
 o Unnecessary  include is removed from .

RFC v2:
 * Fix bug where rte_memcpy.h was not installed on x86.
 * Made attempt to make Loongarch compile.
---
 config/meson.build |  1 +
 doc/guides/rel_notes/release_24_07.rst | 21 +
 lib/eal/arm/include/rte_memcpy.h   |  9 
 lib/eal/include/generic/rte_memcpy.h   | 61 +++---
 lib/eal/loongarch/include/rte_memcpy.h | 54 +--
 lib/eal/ppc/include/rte_memcpy.h   |  9 
 lib/eal/riscv/include/rte_memcpy.h | 54 +--
 lib/eal/x86/include/meson.build|  1 +
 lib/eal/x86/include/rte_memcpy.h   |  9 
 meson_options.txt  |  2 +
 10 files changed, 110 insertions(+), 111 deletions(-)

diff --git a/config/meson.build b/config/meson.build
index 8c8b019c25..456056628e 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -353,6 +353,7 @@ endforeach
 # set other values pulled from the build options
 dpdk_conf.set('RTE_MAX_ETHPORTS', get_option('max_ethports'))
 dpdk_conf.set('RTE_LIBEAL_USE_HPET', get_option('use_hpet'))
+dpdk_conf.set('RTE_USE_CC_MEMCPY', get_option('use_cc_memcpy'))
 dpdk_conf.set('RTE_ENABLE_STDATOMIC', get_option('enable_stdatomic'))
 dpdk_conf.set('RTE_ENABLE_TRACE_FP', get_option('enable_trace_fp'))
 dpdk_conf.set('RTE_PKTMBUF_HEADROOM', get_option('pkt_mbuf_headroom'))
diff --git a/doc/guides/rel_notes/release_24_07.rst 
b/doc/guides/rel_notes/release_24_07.rst
index eb2ed1a55f..31af6303b3 100644
--- a/doc/guides/rel_notes/release_24_07.rst
+++ b/doc/guides/rel_notes/release_24_07.rst
@@ -24,6 +24,27 @@ DPDK Release 24.07
 New Features
 
 
+* **Compiler memcpy replaces custom DPDK implementation.**
+
+  The memory copy functions of <rte_memcpy.h> now optionally
+  delegate to the standard memcpy() function, implemented by the
+  compiler and the C runtime (e.g., libc).
+
+  In this release of DPDK, the handcrafted, per-architecture memory
+  copy implementations are still the default. Compiler memcpy is
+  enabled by setting the new ``use_cc_memcpy`` build option to true.
+
+  The performance benefits of the custom DPDK rte_memcpy()
+  implementations have been diminishing with every new compiler
+  release, and with current toolchains the use of a custom memcpy()
+  implementation may even result in worse performance than the
+  standard memcpy().
+
+  An additional benefit of using compiler memcpy is that compilers and
+  static analysis tools have an easier time detecting incorrect usage
+  of rte_memcpy() (e.g., buffer overruns, or overlapping source and
+  destination buffers).
+
 .. This section should contain new features added in this release.
Sample format:
 
diff --git a/lib/eal/arm/include/rte_memcpy.h b/lib/eal/arm/include/rte_memcpy.h
index 47dea9a8cc..5d2ea7dbfa 100644
--- a/lib/eal/arm/include/rte_memcpy.h
+++ b/lib/eal/arm/include/rte_memcpy.h
@@ -5,10 +5,19 @@
 #ifndef _RTE_MEMCPY_ARM_H_
 #define _RTE_MEMCPY_ARM_H_
 
+#if defined(RTE_USE_CC_MEMCPY) || !defined(RTE_ARCH_ARM64_MEMCPY)
+
+#define RTE_CC_MEMCPY
+#include 
+
+#else
+
 #ifdef RTE_ARCH_64
 #include 
 #else
 #include 
 #endif
 
+#endif /* RTE_USE_CC_MEMCPY */
+
 #endif /* _RTE_MEMCPY_ARM_H_ */
diff --git a/lib/eal/include/generic/rte_memcpy.h 
b/lib/eal/include/generic/rte_memcpy.h
index e7f0f8eaa9..cfb0175bd2 100644
--- a/lib/eal/include/generic/rte_memcpy.h
+++ b/lib/eal/include/generic/rte_memcpy.h
@@ -5,12 +5,19 @@
 #ifndef _RTE_MEMCPY_H_
 #define _RTE_MEMCPY_H_
 
+#ifdef __cplusplus
+extern "C" {
+#endif
+
 /**
  * @file
  *
  * Functions for vectorised implementation of memcpy().
 

[PATCH v5 5/6] ci: test compiler memcpy

2024-07-24 Thread Mattias Rönnblom
Add compilation tests for the use_cc_memcpy build option.

Signed-off-by: Mattias Rönnblom 
---
 .ci/linux-build.sh| 5 +
 .github/workflows/build.yml   | 7 +++
 devtools/test-meson-builds.sh | 4 +++-
 3 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index 15ed51e4c1..a873f83d09 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -98,6 +98,11 @@ if [ "$STDATOMIC" = "true" ]; then
 else
OPTS="$OPTS -Dcheck_includes=true"
 fi
+if [ "$CCMEMCPY" = "true" ]; then
+   OPTS="$OPTS -Duse_cc_memcpy=true"
+else
+   OPTS="$OPTS -Duse_cc_memcpy=true"
+fi
 if [ "$MINI" = "true" ]; then
 OPTS="$OPTS -Denable_drivers=net/null"
 OPTS="$OPTS -Ddisable_libs=*"
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index dbf25626d4..cd45d6c6c1 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -31,6 +31,7 @@ jobs:
   RISCV64: ${{ matrix.config.cross == 'riscv64' }}
   RUN_TESTS: ${{ contains(matrix.config.checks, 'tests') }}
   STDATOMIC: ${{ contains(matrix.config.checks, 'stdatomic') }}
+  CCMEMCPY: ${{ contains(matrix.config.checks, 'ccmemcpy') }}
 
 strategy:
   fail-fast: false
@@ -45,6 +46,12 @@ jobs:
   - os: ubuntu-22.04
 compiler: clang
 checks: stdatomic
+  - os: ubuntu-22.04
+compiler: gcc
+checks: ccmemcpy
+  - os: ubuntu-22.04
+compiler: clang
+checks: ccmemcpy
   - os: ubuntu-22.04
 compiler: gcc
 checks: abi+debug+doc+examples+tests
diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index d71bb1ded0..e72146be3b 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -228,12 +228,14 @@ for c in gcc clang ; do
if [ $s = shared ] ; then
abicheck=ABI
stdatomic=-Denable_stdatomic=true
+   ccmemcpy=-Duse_cc_memcpy=true
else
abicheck=skipABI # save time and disk space
stdatomic=-Denable_stdatomic=false
+   ccmemcpy=-Duse_cc_memcpy=false
fi
export CC="$CCACHE $c"
-   build build-$c-$s $c $abicheck $stdatomic --default-library=$s
+   build build-$c-$s $c $abicheck $stdatomic $ccmemcpy --default-library=$s
unset CC
done
 done
-- 
2.34.1



RE: [PATCH v2] doc: announce rte_ipsec API changes

2024-07-24 Thread Aakash Sasidharan
> -Original Message-
> From: Konstantin Ananyev 
> Sent: Tuesday, July 23, 2024 9:35 PM
> To: Aakash Sasidharan 
> Cc: Akhil Goyal ; Jerin Jacob ;
> Anoob Joseph ; Vidya Sagar Velumuri
> ; dev@dpdk.org; konstantin.v.anan...@yandex.ru;
> vladimir.medved...@intel.com
> Subject: [EXTERNAL] RE: [PATCH v2] doc: announce rte_ipsec API changes
> 
> Hi,
> 
> > In case of event mode operations where event device can help in atomic
> > sequence number increment across cores, sequence number need to be
> > provided by the application instead of being updated in rte_ipsec or
> > the PMD. To support this, a new flag
> > ``RTE_IPSEC_SAFLAG_SQN_ASSIGN_DISABLE``
> > will be added to disable sequence number update inside IPsec library
> > and the API rte_ipsec_pkt_crypto_prepare will be extended to include
> > ``sqn`` as an additional parameter to specify sequence number to be
> > used for IPsec from the application.
> 
> Could you probably elaborate a bit more:
> Why such change is necessary for event-dev mode, what exactly will be
> affected in librte_ipsec (would it be for outbound mode, or both), etc.
> 

[Aakash] When using eventdev, it is possible to have multiple cores process 
packets from the same flow at the same time, but still have ordering maintained.

Sequence for IPsec would be like below,
1. Ethdev Rx computes flow hash and submits packets to an ORDERED eventdev 
queue.
One flow would always hit one event dev queue.
One eventdev queue can be attached to multiple eventdev ports.
2. Lcores receive packets via these eventdev ports.
Lcores can now process the packets from the same flow in parallel.
3. Lcores submit the packets to an ATOMIC queue
This is needed as IPsec seq no update needs to be done atomically.
4. After seq no update, packets are moved to ORDERED queue.
Lcores can now process the packets in parallel again.
5. During Tx, eventdev ensures packet ordering based on ORDERED queue.

Since lib IPsec takes care of sequence number assignment, the complete
rte_ipsec_pkt_crypto_prepare() routine needs to be made an ATOMIC stage.
But apart from the seq no update, the rest of the operations can be done in parallel.

In addition, we are also looking at another use case where a set of packets from
a session can be IPsec processed by an rte_security device and some packets from
the same session would need to be SW processed with lib IPsec. Here again the
sequence number assignment would need to occur at a central place so that a
sequence number is not repeated.

Initially we are looking at outbound only, but a similar use case would
be applicable for inbound also.
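
To make the intended call flow concrete, a rough sketch of the ATOMIC stage
described above is shown below. The extra sqn argument reflects the proposed
(not yet final) extension of rte_ipsec_pkt_crypto_prepare() from the
deprecation notice, and the helper name and per-SA counter are purely
illustrative:

    #include <rte_crypto.h>
    #include <rte_ipsec.h>
    #include <rte_mbuf.h>

    /*
     * Sketch only: the SA is assumed to have been created with the proposed
     * RTE_IPSEC_SAFLAG_SQN_ASSIGN_DISABLE flag, so the library does not
     * assign sequence numbers itself. The 5-argument prepare call below is
     * the *proposed* extended API, not the current one.
     */
    void
    ipsec_atomic_stage(const struct rte_ipsec_session *ss,
                       struct rte_mbuf *pkts[], struct rte_crypto_op *cops[],
                       uint16_t nb_pkts, uint64_t *next_sqn)
    {
            /*
             * The eventdev ATOMIC queue guarantees only one lcore runs this
             * stage for a given flow at a time, so a plain read-and-advance
             * of the per-SA counter is sufficient here.
             */
            uint64_t sqn = *next_sqn;

            *next_sqn += nb_pkts;

            /* Application-supplied sequence number (proposed parameter). */
            rte_ipsec_pkt_crypto_prepare(ss, pkts, cops, nb_pkts, sqn);

            /* Packets then move on to an ORDERED queue for parallel processing. */
    }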

> >
> > Signed-off-by: Aakash Sasidharan 
> > ---
> >  doc/guides/rel_notes/deprecation.rst | 7 +++
> >  1 file changed, 7 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst
> > b/doc/guides/rel_notes/deprecation.rst
> > index 6948641ff6..bc1d93cca7 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -133,6 +133,13 @@ Deprecation Notices
> >Since these functions are not called directly by the application,
> >the API remains unaffected.
> >
> > +* ipsec: The rte_ipsec library is updated to support sequence number
> > +  provided by application. A new flag ``RTE_IPSEC_SAFLAG_SQN_ASSIGN_DISABLE``
> > +  is introduced to disable sequence number assignment in lib IPsec.
> > +  The API rte_ipsec_pkt_crypto_prepare is extended to include ``sqn`` as an
> > +  additional parameter allowing application to specify the sequence number
> > +  to be used for the IPsec operation.
> > +
> >  * pipeline: The pipeline library legacy API (functions rte_pipeline_*)
> >will be deprecated and subsequently removed in DPDK 24.11 release.
> >Before this, the new pipeline library API (functions
> > rte_swx_pipeline_*)
> > --
> > 2.25.1



Re: [EXTERNAL] [PATCH v2] examples/ipsec-secgw: fix SA salt endianness problem

2024-07-24 Thread Radu Nicolau



On 23-Jul-24 5:57 PM, Akhil Goyal wrote:

Hi all,

This patch breaks ipsec tests with ipsec-secgw:


./examples/ipsec-secgw/test/run_test.sh -4 trs_aesctr_sha1
...
ERROR: ./examples/ipsec-secgw/test/linux_test.sh failed for dst=192.168.31.14,
sz=1
  test IPv4 trs_aesctr_sha1 finished with status 1
ERROR  test trs_aesctr_sha1 FAILED


The patch seems to be correct.
Please check endianness in the PMD you are testing.


In my opinion the salt should not be affected by endianness and it should be
used as it is in the key parameter. I think the patch is wrong to make
it CPU-endianness dependent before being passed to the PMDs; any PMD
that needs the endianness swapped should do it in the PMD code. Indeed
it's passed around as a 32-bit integer, but it's not used as such, and
when it's actually used it should be evaluated as a byte array.


https://datatracker.ietf.org/doc/html/rfc4106#section-4
https://datatracker.ietf.org/doc/html/rfc4106#section-8.1







On 03/07/2024 18:58, Akhil Goyal wrote:





-Original Message-
From: Akhil Goyal 

Sent: Friday, March 15, 2024 12:42 AM
To: Akhil Goyal 
 ; Chaoyong He

 ; dev@dpdk.org 
Cc: oss-driv...@corigine.com  ; Shihong Wang 
 ;
sta...@dpdk.org 
Subject: RE: [EXTERNAL] [PATCH v2] examples/ipsec-secgw: fix
SA salt
endianness problem


Subject: RE: [EXTERNAL] [PATCH v2] examples/ipsec-
secgw: fix SA salt
endianness problem


From: Shihong Wang
 

The SA salt of struct ipsec_sa is a CPU-endian u32 variable, but its
value is stored in an array of encryption or authentication keys
according to big-endian order. So it may be necessary to convert the
endianness to ensure that the value assigned to the SA salt is CPU-endian.

Fixes: 50d75cae2a2c ("examples/ipsec-secgw:
initialize SA salt")
Fixes: 9413c3901f31 ("examples/ipsec-secgw:
support additional algorithms")
Fixes: 501e9c226adf ("examples/ipsec-secgw:
add AEAD parameters")
Cc: sta...@dpdk.org 

Signed-off-by: Shihong Wang
 
Reviewed-by: Chaoyong He
 


Acked-by: Akhil Goyal 


Applied to dpdk-next-crypto


The patch is pulled back from dpdk-next-crypto.
This change may cause all the PMDs to fail these cases.
Would need acks from PMDs.


Applied to dpdk-next-crypto
No update from PMD owners.
Applying it before RC2 so that we have time for fixes if needed.


--
Regards,
Vladimir


[DPDK/testpmd Bug 1470] Testpmd parameter --max-pkt-len does not update MTU of ports bound to a kernel driver

2024-07-24 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1470
Bug 1470 depends on bug 1483, which changed state.

Bug 1483 Summary: mlx5 PMD does not apply MTU from rte_eth_conf.rxmode.mtu on 
rte_eth_dev_configure()
https://bugs.dpdk.org/show_bug.cgi?id=1483

   What|Removed |Added

 Status|IN_PROGRESS |RESOLVED
 Resolution|--- |FIXED

-- 
You are receiving this mail because:
You are the assignee for the bug.

RE: [EXTERNAL] [PATCH v2] examples/ipsec-secgw: fix SA salt endianness problem

2024-07-24 Thread Akhil Goyal
> 
> On 23-Jul-24 5:57 PM, Akhil Goyal wrote:
> >> Hi all,
> >>
> >> This patch breaks ipsec tests with ipsec-secgw:
> >>
> >>
> >> ./examples/ipsec-secgw/test/run_test.sh -4 trs_aesctr_sha1
> >> ...
> >> ERROR: ./examples/ipsec-secgw/test/linux_test.sh failed for
> dst=192.168.31.14,
> >> sz=1
> >>   test IPv4 trs_aesctr_sha1 finished with status 1
> >> ERROR  test trs_aesctr_sha1 FAILED
> >>
> > The patch seems to be correct.
> > Please check endianness in the PMD you are testing.
> 
> In my opinion salt should not be affected by endianness and it should be
> used as it is in the key parameter. I think the patch is wrong to make
> it CPU endianness dependent before being passed to the PMDs, any PMD
> that needs the endianness swapped should do it in the PMD code. Indeed
> it's passed around as a 32 bit integer but it's not used as such, and
> when it's actually used it should be evaluated as a byte array.
> 
> https://datatracker.ietf.org/doc/html/rfc4106#section-4
> https://datatracker.ietf.org/doc/html/rfc4106#section-8.1

As per the RFC, it should be treated in byte order (i.e. big endian).
But here the problem is that we treat it as uint32_t, which makes it CPU-endian
when stored in the ipsec_sa struct.
The keys are stored as an array of uint8_t, so the keys are stored in byte
order (big endian).

So either we save the salt as "uint8_t salt[4]" or do a cpu_to_be conversion,
so that when it is stored in the PMD/HW and converted from uint32_t to
uint8_t *, there won't be an issue.
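
For illustration (a standalone sketch, not a proposed patch; the struct and
function names are made up), the two options could look like this:

    #include <stdint.h>
    #include <string.h>

    #include <rte_byteorder.h>

    /* The salt is the trailing 4 bytes of the configured AEAD key material. */
    struct example_sa {
            uint8_t salt_bytes[4];  /* option 1: plain byte array, no endianness */
            rte_be32_t salt;        /* option 2: 32-bit field explicitly typed as
                                     * holding big-endian (byte-order) data
                                     */
    };

    static void
    example_set_salt(struct example_sa *sa, const uint8_t key_tail[4])
    {
            /* Option 1: copy the key bytes as-is. */
            memcpy(sa->salt_bytes, key_tail, 4);

            /*
             * Option 2: the in-memory bytes still equal the key bytes; the
             * rte_be32_t type merely documents that the stored value is in
             * byte (big-endian) order, so no rte_cpu_to_be_32() conversion
             * is applied when the value comes straight from the key material.
             */
            memcpy(&sa->salt, key_tail, 4);
    }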

> 
> >
> >
> >>
> >>
> >> On 03/07/2024 18:58, Akhil Goyal wrote:
> >>
> >>
> >>
> >>
> >>
> >>-Original Message-
> >>From: Akhil Goyal 
> >> 
> >>Sent: Friday, March 15, 2024 12:42 AM
> >>To: Akhil Goyal 
> >>  ; Chaoyong He
> >>
> >>  ; dev@dpdk.org 
> >>Cc: oss-driv...@corigine.com  >> driv...@corigine.com> ; Shihong Wang 
> >>  ;
> >>sta...@dpdk.org 
> >>Subject: RE: [EXTERNAL] [PATCH v2] examples/ipsec-secgw: fix
> >> SA salt
> >>endianness problem
> >>
> >>
> >>Subject: RE: [EXTERNAL] [PATCH v2] examples/ipsec-
> >> secgw: fix SA salt
> >>endianness problem
> >>
> >>
> >>From: Shihong Wang
> >>  
> >>
> >>The SA salt of struct ipsec_sa is a CPU-endian
> >> u32 variable, but it’s
> >>value is stored in an array of encryption or
> >> authentication keys
> >>according to big-endian. So it maybe need to
> >> convert the endianness
> >>order to ensure that the value assigned to the
> >> SA salt is CPU-endian.
> >>
> >>Fixes: 50d75cae2a2c ("examples/ipsec-secgw:
> >> initialize SA salt")
> >>Fixes: 9413c3901f31 ("examples/ipsec-secgw:
> >> support additional algorithms")
> >>Fixes: 501e9c226adf ("examples/ipsec-secgw:
> >> add AEAD parameters")
> >>Cc: sta...@dpdk.org 
> >>
> >>Signed-off-by: Shihong Wang
> >>  
> >>Reviewed-by: Chaoyong He
> >>  
> >>
> >>
> >>Acked-by: Akhil Goyal 
> >> 
> >>
> >>Applied to dpdk-next-crypto
> >>
> >>
> >>The patch is pulled back from dpdk-next-crypto.
> >>This change may cause all the PMDs to fail these cases.
> >>Would need acks from PMDs.
> >>
> >>
> >>Applied to dpdk-next-crypto
> >>No update from PMD owners.
> >>Applying it before RC2 so that we have time for fixes if needed.
> >>
> >>
> >> --
> >> Regards,
> >> Vladimir


Re: [EXTERNAL] [PATCH v2] examples/ipsec-secgw: fix SA salt endianness problem

2024-07-24 Thread Radu Nicolau



On 24-Jul-24 12:20 PM, Akhil Goyal wrote:

On 23-Jul-24 5:57 PM, Akhil Goyal wrote:

Hi all,

This patch breaks ipsec tests with ipsec-secgw:


./examples/ipsec-secgw/test/run_test.sh -4 trs_aesctr_sha1
...
ERROR: ./examples/ipsec-secgw/test/linux_test.sh failed for

dst=192.168.31.14,

sz=1
   test IPv4 trs_aesctr_sha1 finished with status 1
ERROR  test trs_aesctr_sha1 FAILED


The patch seems to be correct.
Please check endianness in the PMD you are testing.

In my opinion salt should not be affected by endianness and it should be
used as it is in the key parameter. I think the patch is wrong to make
it CPU endianness dependent before being passed to the PMDs, any PMD
that needs the endianness swapped should do it in the PMD code. Indeed
it's passed around as a 32 bit integer but it's not used as such, and
when it's actually used it should be evaluated as a byte array.

https://datatracker.ietf.org/doc/html/rfc4106#section-4
https://datatracker.ietf.org/doc/html/rfc4106#section-8.1

As per the rfc, it should be treated as byte order(i.e. big endian).
But here the problem is we treat it as uint32_t which makes it CPU endian when 
stored in ipsec_sa struct.
The keys are stored as an array of uint8_t, so keys are stored in byte 
order(Big endian).

So either we save salt as "uint8_t salt[4]" or do a conversion of cpu_to_be
So that when it is stored in PMD/HW, and we convert it from uint32_t to uint_8 
*, there wont be issue.


The RFC treats it as a "four octet value" - there is no endianness until
it's treated like an integer, which it never is. Even if in the code it's
being stored and passed as an unsigned 32-bit integer, it is never
evaluated as such, so its endianness doesn't matter.


I agree that we should have it everywhere as "uint8_t salt[4]" but that 
implies API changes and it doesn't change how the bytes are stored, so 
the patch will still be wrong.









On 03/07/2024 18:58, Akhil Goyal wrote:





-Original Message-
From: Akhil Goyal 

Sent: Friday, March 15, 2024 12:42 AM
To: Akhil Goyal 
 ; Chaoyong He

 ; dev@dpdk.org 
Cc: oss-driv...@corigine.com  ; Shihong Wang 
 ;
sta...@dpdk.org 
Subject: RE: [EXTERNAL] [PATCH v2] examples/ipsec-secgw: fix
SA salt
endianness problem


Subject: RE: [EXTERNAL] [PATCH v2] examples/ipsec-
secgw: fix SA salt
endianness problem


From: Shihong Wang
 

The SA salt of struct ipsec_sa is a CPU-endian
u32 variable, but it’s
value is stored in an array of encryption or
authentication keys
according to big-endian. So it maybe need to
convert the endianness
order to ensure that the value assigned to the
SA salt is CPU-endian.

Fixes: 50d75cae2a2c ("examples/ipsec-secgw:
initialize SA salt")
Fixes: 9413c3901f31 ("examples/ipsec-secgw:
support additional algorithms")
Fixes: 501e9c226adf ("examples/ipsec-secgw:
add AEAD parameters")
Cc: sta...@dpdk.org 

Signed-off-by: Shihong Wang
 
Reviewed-by: Chaoyong He
 


Acked-by: Akhil Goyal 


Applied to dpdk-next-crypto


The patch is pulled back from dpdk-next-crypto.
This change may cause all the PMDs to fail these cases.
Would need acks from PMDs.


Applied to dpdk-next-crypto
No update from PMD owners.
Applying it before RC2 so that we have time for fixes if needed.


--
Regards,
Vladimir


[RFC PATCH 1/2] power: fix power library with --lcores

2024-07-24 Thread Sivaprasad Tummala
This commit fixes an issue in the power library
related to using lcores mapped to different
physical cores (--lcores option in EAL).

Previously, the power library incorrectly accessed
CPU sysfs attributes for power management, treating
lcore IDs as CPU IDs.
e.g. with --lcores '1@128', lcore_id '1' was interpreted
as CPU id '1' instead of '128'.

This patch corrects the cpu_id based on the lcore-to-CPU
mappings. It also disallows power management for lcores
mapped to multiple physical cores/threads.

When multiple lcores are mapped to the same physical core,
invoking frequency scaling APIs on any lcore will apply the
changes effectively.

Signed-off-by: Sivaprasad Tummala 
---
 lib/power/power_acpi_cpufreq.c   | 16 ++--
 lib/power/power_amd_pstate_cpufreq.c | 16 ++--
 lib/power/power_cppc_cpufreq.c   | 16 ++--
 lib/power/power_pstate_cpufreq.c | 16 ++--
 4 files changed, 56 insertions(+), 8 deletions(-)

diff --git a/lib/power/power_acpi_cpufreq.c b/lib/power/power_acpi_cpufreq.c
index 81996e1c13..97a27e09a0 100644
--- a/lib/power/power_acpi_cpufreq.c
+++ b/lib/power/power_acpi_cpufreq.c
@@ -9,6 +9,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "power_acpi_cpufreq.h"
 #include "power_common.h"
@@ -234,7 +235,8 @@ int
 power_acpi_cpufreq_init(unsigned int lcore_id)
 {
struct acpi_power_info *pi;
-   uint32_t exp_state;
+   uint32_t exp_state, cpu;
+   rte_cpuset_t lcore_cpus;
 
if (lcore_id >= RTE_MAX_LCORE) {
POWER_LOG(ERR, "Lcore id %u can not exceeds %u",
@@ -258,7 +260,17 @@ power_acpi_cpufreq_init(unsigned int lcore_id)
return -1;
}
 
-   pi->lcore_id = lcore_id;
+   lcore_cpus = rte_lcore_cpuset(lcore_id);
+   if (CPU_COUNT(&lcore_cpus) != 1) {
+   POWER_LOG(ERR, "Power library doesn't support lcore %u mapping "
+   "to %u cpus", lcore_id, CPU_COUNT(&lcore_cpus));
+   return -1;
+   }
+   for (cpu = 0; cpu < CPU_SETSIZE; cpu++) {
+   if (CPU_ISSET(cpu, &lcore_cpus))
+   break;
+   }
+   pi->lcore_id = cpu;
/* Check and set the governor */
if (power_set_governor_userspace(pi) < 0) {
POWER_LOG(ERR, "Cannot set governor of lcore %u to "
diff --git a/lib/power/power_amd_pstate_cpufreq.c 
b/lib/power/power_amd_pstate_cpufreq.c
index 090a0d96cb..8a3930eac7 100644
--- a/lib/power/power_amd_pstate_cpufreq.c
+++ b/lib/power/power_amd_pstate_cpufreq.c
@@ -8,6 +8,7 @@
 
 #include 
 #include 
+#include 
 
 #include "power_amd_pstate_cpufreq.h"
 #include "power_common.h"
@@ -352,7 +353,8 @@ int
 power_amd_pstate_cpufreq_init(unsigned int lcore_id)
 {
struct amd_pstate_power_info *pi;
-   uint32_t exp_state;
+   uint32_t exp_state, cpu;
+   rte_cpuset_t lcore_cpus;
 
if (lcore_id >= RTE_MAX_LCORE) {
POWER_LOG(ERR, "Lcore id %u can not exceeds %u",
@@ -376,7 +378,17 @@ power_amd_pstate_cpufreq_init(unsigned int lcore_id)
return -1;
}
 
-   pi->lcore_id = lcore_id;
+   lcore_cpus = rte_lcore_cpuset(lcore_id);
+   if (CPU_COUNT(&lcore_cpus) != 1) {
+   POWER_LOG(ERR, "Power library doesn't support lcore %u mapping "
+   "to %u cpus", lcore_id, CPU_COUNT(&lcore_cpus));
+   return -1;
+   }
+   for (cpu = 0; cpu < CPU_SETSIZE; cpu++) {
+   if (CPU_ISSET(cpu, &lcore_cpus))
+   break;
+   }
+   pi->lcore_id = cpu;
/* Check and set the governor */
if (power_set_governor_userspace(pi) < 0) {
POWER_LOG(ERR, "Cannot set governor of lcore %u to "
diff --git a/lib/power/power_cppc_cpufreq.c b/lib/power/power_cppc_cpufreq.c
index 32aaacb948..d51118820b 100644
--- a/lib/power/power_cppc_cpufreq.c
+++ b/lib/power/power_cppc_cpufreq.c
@@ -7,6 +7,7 @@
 
 #include 
 #include 
+#include 
 
 #include "power_cppc_cpufreq.h"
 #include "power_common.h"
@@ -338,7 +339,8 @@ int
 power_cppc_cpufreq_init(unsigned int lcore_id)
 {
struct cppc_power_info *pi;
-   uint32_t exp_state;
+   uint32_t exp_state, cpu;
+   rte_cpuset_t lcore_cpus;
 
if (lcore_id >= RTE_MAX_LCORE) {
POWER_LOG(ERR, "Lcore id %u can not exceeds %u",
@@ -362,7 +364,17 @@ power_cppc_cpufreq_init(unsigned int lcore_id)
return -1;
}
 
-   pi->lcore_id = lcore_id;
+   lcore_cpus = rte_lcore_cpuset(lcore_id);
+   if (CPU_COUNT(&lcore_cpus) != 1) {
+   POWER_LOG(ERR, "Power library doesn't support lcore %u mapping "
+   "to %u cpus", lcore_id, CPU_COUNT(&lcore_cpus));
+   return -1;
+   }
+   for (cpu = 0; cpu < CPU_SETSIZE; cpu++) {
+   if (CPU_ISSET(cpu, &lcore_cpus))
+   break;
+   }
+   pi->lcore_id = cpu;

[RFC PATCH 2/2] test/power: fix power library with --lcores

2024-07-24 Thread Sivaprasad Tummala
When the user requests lcores mapped to different physical
cores using the --lcores EAL option, the power test application accesses
an incorrect CPU sysfs attribute when checking the current frequency.

The patch fixes the cpu_id based on the lcore-to-CPU mappings.

Signed-off-by: Sivaprasad Tummala 
---
 app/test/test_power_cpufreq.c | 22 +++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/app/test/test_power_cpufreq.c b/app/test/test_power_cpufreq.c
index 619b2811c6..63d13614df 100644
--- a/app/test/test_power_cpufreq.c
+++ b/app/test/test_power_cpufreq.c
@@ -9,6 +9,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "test.h"
 
@@ -46,9 +47,10 @@ test_power_caps(void)
 
 static uint32_t total_freq_num;
 static uint32_t freqs[TEST_POWER_FREQS_NUM_MAX];
+static uint32_t cpu_id;
 
 static int
-check_cur_freq(unsigned int lcore_id, uint32_t idx, bool turbo)
+check_cur_freq(__rte_unused unsigned int lcore_id, uint32_t idx, bool turbo)
 {
 #define TEST_POWER_CONVERT_TO_DECIMAL 10
 #define MAX_LOOP 100
@@ -62,13 +64,13 @@ check_cur_freq(unsigned int lcore_id, uint32_t idx, bool 
turbo)
int i;
 
if (snprintf(fullpath, sizeof(fullpath),
-   TEST_POWER_SYSFILE_CPUINFO_FREQ, lcore_id) < 0) {
+   TEST_POWER_SYSFILE_CPUINFO_FREQ, cpu_id) < 0) {
return 0;
}
f = fopen(fullpath, "r");
if (f == NULL) {
if (snprintf(fullpath, sizeof(fullpath),
-   TEST_POWER_SYSFILE_SCALING_FREQ, lcore_id) < 0) {
+   TEST_POWER_SYSFILE_SCALING_FREQ, cpu_id) < 0) {
return 0;
}
f = fopen(fullpath, "r");
@@ -497,6 +499,20 @@ test_power_cpufreq(void)
 {
int ret = -1;
enum power_management_env env;
+   rte_cpuset_t lcore_cpus;
+
+   lcore_cpus = rte_lcore_cpuset(TEST_POWER_LCORE_ID);
+   if (CPU_COUNT(&lcore_cpus) != 1) {
+   printf("Power management doesn't support "
+   "lcore %u mapping to %u cpus\n",
+   TEST_POWER_LCORE_ID,
+   CPU_COUNT(&lcore_cpus));
+   return TEST_SKIPPED;
+   }
+   for (cpu_id = 0; cpu_id < CPU_SETSIZE; cpu_id++) {
+   if (CPU_ISSET(cpu_id, &lcore_cpus))
+   break;
+   }
 
/* Test initialisation of a valid lcore */
ret = rte_power_init(TEST_POWER_LCORE_ID);
-- 
2.34.1



RE: [EXTERNAL] [PATCH v2] examples/ipsec-secgw: fix SA salt endianness problem

2024-07-24 Thread Akhil Goyal
> On 24-Jul-24 12:20 PM, Akhil Goyal wrote:
> >> On 23-Jul-24 5:57 PM, Akhil Goyal wrote:
>  Hi all,
> 
>  This patch breaks ipsec tests with ipsec-secgw:
> 
> 
>  ./examples/ipsec-secgw/test/run_test.sh -4 trs_aesctr_sha1
>  ...
>  ERROR: ./examples/ipsec-secgw/test/linux_test.sh failed for
> >> dst=192.168.31.14,
>  sz=1
> test IPv4 trs_aesctr_sha1 finished with status 1
>  ERROR  test trs_aesctr_sha1 FAILED
> 
> >>> The patch seems to be correct.
> >>> Please check endianness in the PMD you are testing.
> >> In my opinion salt should not be affected by endianness and it should be
> >> used as it is in the key parameter. I think the patch is wrong to make
> >> it CPU endianness dependent before being passed to the PMDs, any PMD
> >> that needs the endianness swapped should do it in the PMD code. Indeed
> >> it's passed around as a 32 bit integer but it's not used as such, and
> >> when it's actually used it should be evaluated as a byte array.
> >>
> > As per the rfc, it should be treated as byte order(i.e. big endian).
> > But here the problem is we treat it as uint32_t which makes it CPU endian
> when stored in ipsec_sa struct.
> > The keys are stored as an array of uint8_t, so keys are stored in byte 
> > order(Big
> endian).
> >
> > So either we save salt as "uint8_t salt[4]" or do a conversion of cpu_to_be
> > So that when it is stored in PMD/HW, and we convert it from uint32_t to 
> > uint_8
> *, there wont be issue.
> 
> RFC treats it as a "four octet value" - there is no endianness until
> it's treated like an integer, which it never is. Even if it code it's
> being stored and passed as an unsigned 32bit integer it is never
> evaluated as such so its endianness doesn't matter.

The endianness matters the moment it is stored as uint32_t in ipsec_sa.
It means the value is stored in CPU endianness in that integer unless
specified otherwise.

Now looking at the code again, I see the patch is incomplete for the case of
lookaside crypto, where the salt is copied as cnt_blk into the mbuf private
area without conversion.

So, this patch can be reverted and a simple fix can be added to mark
ipsec_sa->salt as rte_be32_t:
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index a83fd2283b..1fe6b97168 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -117,7 +117,7 @@ struct __rte_cache_aligned ipsec_sa {
uint32_t spi;
struct cdev_qp *cqp[RTE_MAX_LCORE];
uint64_t seq;
-   uint32_t salt;
+   rte_be32_t salt;
uint32_t fallback_sessions;
enum rte_crypto_cipher_algorithm cipher_algo;
enum rte_crypto_auth_algorithm auth_algo;

Can you verify and send the patch?
And this may be updated in cryptodev and security lib as well in next release.


> 
> I agree that we should have it everywhere as "uint8_t salt[4]" but that
> implies API changes and it doesn't change how the bytes are stored, so
> the patch will still be wrong.
> 
> 
> >
> >>>
> 
>  On 03/07/2024 18:58, Akhil Goyal wrote:
> 
> 
> 
> 
> 
>   -Original Message-
>   From: Akhil Goyal 
>  
>   Sent: Friday, March 15, 2024 12:42 AM
>   To: Akhil Goyal 
>   ; Chaoyong He
>   
>   ; dev@dpdk.org
> 
>   Cc: oss-driv...@corigine.com   driv...@corigine.com> ; Shihong Wang 
>   ;
>   sta...@dpdk.org 
>   Subject: RE: [EXTERNAL] [PATCH v2] examples/ipsec-secgw: fix
>  SA salt
>   endianness problem
> 
> 
>   Subject: RE: [EXTERNAL] [PATCH v2] examples/ipsec-
>  secgw: fix SA salt
>   endianness problem
> 
> 
>   From: Shihong Wang
>   
> 
>   The SA salt of struct ipsec_sa is a CPU-endian
>  u32 variable, but it’s
>   value is stored in an array of encryption or
>  authentication keys
>   according to big-endian. So it maybe need to
>  convert the endianness
>   order to ensure that the value assigned to the
>  SA salt is CPU-endian.
> 
>   Fixes: 50d75cae2a2c ("examples/ipsec-secgw:
>  initialize SA salt")
>   Fixes: 9413c3901f31 ("examples/ipsec-secgw:
>  support additional algorithms")
>   Fixes: 501e9c226adf ("examples/ipsec-secgw:
>  add AEAD parameters")
>   Cc: sta...@dpdk.org 
> 
>   Signed-off-by: Shihong Wang
>   

[PATCH v1] net/ntnic: add missing SPDX tag

2024-07-24 Thread Mykola Kostenok
Add missing SPDX tags.
Remove the exception added to './devtools/check-spdx-tag.sh' for these files.

Signed-off-by: Mykola Kostenok 
---
 devtools/check-spdx-tag.sh   | 1 -
 .../clock_profiles/NT200A02_U23_Si5340_adr0_v5-Registers.h   | 4 
 .../net/ntnic/nthw/supported/nthw_fpga_9563_055_039_.c   | 5 +
 drivers/net/ntnic/nthw/supported/nthw_fpga_instances.c   | 4 
 drivers/net/ntnic/nthw/supported/nthw_fpga_instances.h   | 4 
 drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h| 5 +
 drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c | 4 
 drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.h | 4 
 drivers/net/ntnic/nthw/supported/nthw_fpga_param_defs.h  | 5 +
 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h| 5 +
 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_gfg.h| 5 +
 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_gmf.h| 5 +
 .../net/ntnic/nthw/supported/nthw_fpga_reg_defs_gpio_phy.h   | 5 +
 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_hif.h| 5 +
 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_i2cm.h   | 5 +
 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_iic.h| 5 +
 .../net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_pcs.h| 5 +
 .../net/ntnic/nthw/supported/nthw_fpga_reg_defs_pci_rd_tg.h  | 5 +
 .../net/ntnic/nthw/supported/nthw_fpga_reg_defs_pci_wr_tg.h  | 5 +
 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_pcie3.h  | 5 +
 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rac.h| 5 +
 .../net/ntnic/nthw/supported/nthw_fpga_reg_defs_rst9563.h| 5 +
 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sdc.h| 5 +
 23 files changed, 105 insertions(+), 1 deletion(-)

diff --git a/devtools/check-spdx-tag.sh b/devtools/check-spdx-tag.sh
index 6b04395db6..1ffccdaecc 100755
--- a/devtools/check-spdx-tag.sh
+++ b/devtools/check-spdx-tag.sh
@@ -27,7 +27,6 @@ check_spdx() {
':^*.cocci' ':^*.abignore' \
':^*.map' ':^*.ini' ':^*.data' ':^*.json' ':^*.cfg' ':^*.txt' \
':^*.svg' ':^*.png' \
-   ':^drivers/net/ntnic/nthw/supported/*' 
':^drivers/net/ntnic/*Registers.h' \
> $tmpfile
 
 errors=$(wc -l < $tmpfile)
diff --git 
a/drivers/net/ntnic/nthw/core/nt200a0x/clock_profiles/NT200A02_U23_Si5340_adr0_v5-Registers.h
 
b/drivers/net/ntnic/nthw/core/nt200a0x/clock_profiles/NT200A02_U23_Si5340_adr0_v5-Registers.h
index 956d725b7f..a6422e321c 100644
--- 
a/drivers/net/ntnic/nthw/core/nt200a0x/clock_profiles/NT200A02_U23_Si5340_adr0_v5-Registers.h
+++ 
b/drivers/net/ntnic/nthw/core/nt200a0x/clock_profiles/NT200A02_U23_Si5340_adr0_v5-Registers.h
@@ -1,3 +1,7 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
 
 /*
  * Si5340 Rev D Configuration Register Export Header File
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_039_.c 
b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_039_.c
index 03b827bc10..e26a275958 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_039_.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_039_.c
@@ -1,3 +1,8 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
 /*
  * Auto-generated file - do *NOT* edit
  */
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_instances.c 
b/drivers/net/ntnic/nthw/supported/nthw_fpga_instances.c
index ff6efc6e75..4fe7f7ea94 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_instances.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_instances.c
@@ -1,3 +1,7 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
 
 
 #include "nthw_fpga_instances.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_instances.h 
b/drivers/net/ntnic/nthw/supported/nthw_fpga_instances.h
index 05c0b94bbe..d0e6f7b429 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_instances.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_instances.h
@@ -1,3 +1,7 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
 
 
 #include "fpga_model.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h 
b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 0c746f8f53..0b69d09cde 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -1,3 +1,8 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
 /*
  * nthw_fpga_mod_defs.h
  *
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c 
b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
index d7bc686fe8..150b9dd976 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
@@ -1,3 +1,7 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * 

[PATCH] net/gve: Fix TX/RX queue setup and stop

2024-07-24 Thread Tathagat Priyadarshi
The patch updates the TX/RX queue setup/stop routines that are
unique to DQO, so that they may be called for instances that use the
DQO RDA format during dev start/stop.

Signed-off-by: Tathagat Priyadarshi 
Acked-by: Joshua Washington 
---
 drivers/net/gve/gve_ethdev.c | 29 +++--
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index ca92277..a20092e 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -288,11 +288,16 @@ struct gve_queue_page_list *
PMD_DRV_LOG(ERR, "Failed to create %u tx queues.", num_queues);
return ret;
}
-   for (i = 0; i < num_queues; i++)
-   if (gve_tx_queue_start(dev, i) != 0) {
+   for (i = 0; i < num_queues; i++) {
+   if (gve_is_gqi(priv))
+   ret = gve_tx_queue_start(dev, i);
+   else
+   ret = gve_tx_queue_start_dqo(dev, i);
+   if (ret != 0) {
PMD_DRV_LOG(ERR, "Fail to start Tx queue %d", i);
goto err_tx;
}
+   }
 
num_queues = dev->data->nb_rx_queues;
priv->rxqs = (struct gve_rx_queue **)dev->data->rx_queues;
@@ -315,9 +320,15 @@ struct gve_queue_page_list *
return 0;
 
 err_rx:
-   gve_stop_rx_queues(dev);
+   if (gve_is_gqi(priv))
+   gve_stop_rx_queues(dev);
+   else
+   gve_stop_rx_queues_dqo(dev);
 err_tx:
-   gve_stop_tx_queues(dev);
+   if (gve_is_gqi(priv))
+   gve_stop_tx_queues(dev);
+   else
+   gve_stop_tx_queues_dqo(dev);
return ret;
 }
 
@@ -362,10 +373,16 @@ struct gve_queue_page_list *
 static int
 gve_dev_stop(struct rte_eth_dev *dev)
 {
+   struct gve_priv *priv = dev->data->dev_private;
dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
-   gve_stop_tx_queues(dev);
-   gve_stop_rx_queues(dev);
+   if (gve_is_gqi(priv)) {
+   gve_stop_tx_queues(dev);
+   gve_stop_rx_queues(dev);
+   } else {
+   gve_stop_tx_queues_dqo(dev);
+   gve_stop_rx_queues_dqo(dev);
+   }
 
dev->data->dev_started = 0;
 
-- 
1.8.3.1



[PATCH v5 0/3] Improve interactive shell output gathering and logging

2024-07-24 Thread jspewock
From: Jeremy Spewock 

v5:
 * rebased on main
 * pulled tags forward from previous versions since there has been no
   change to the series outside of rebases since then. 

Jeremy Spewock (3):
  dts: Improve output gathering in interactive shells
  dts: Add missing docstring from XML-RPC server
  dts: Improve logging for interactive shells

 dts/framework/exception.py| 66 ---
 dts/framework/remote_session/dpdk_shell.py|  3 +-
 .../single_active_interactive_shell.py| 58 +++-
 dts/framework/remote_session/testpmd_shell.py |  2 +
 .../testbed_model/traffic_generator/scapy.py  | 50 +-
 5 files changed, 138 insertions(+), 41 deletions(-)

-- 
2.45.2



[PATCH v5 1/3] dts: Improve output gathering in interactive shells

2024-07-24 Thread jspewock
From: Jeremy Spewock 

The current implementation of consuming output from interactive shells
relies on being able to find an expected prompt somewhere within the
output buffer after sending the command. This is useful in situations
where the prompt does not appear in the output itself, but in some
practical cases (such as the starting of an XML-RPC server for scapy)
the prompt exists in one of the commands sent to the shell, and this can
cause the command to exit early and create a race condition between the
server starting and the first command being sent to the server.

This patch addresses this problem by searching for a line that strictly
ends with the provided prompt, rather than one that simply contains it,
so that the detection that a command is finished is more consistent. It
also adds a catch to detect when a command times out before finding the
prompt or the underlying SSH session dies so that the exception can be
wrapped into a more explicit one and be more consistent with the
non-interactive shells.
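
As a rough illustration of the stricter matching described above (the helper
below is a hypothetical sketch, not the framework's actual code):

    # Hedged sketch: a command is treated as finished only when a buffered line
    # ends with the prompt, not when the prompt merely appears somewhere in it.
    def command_finished(buffered_lines: list[str], prompt: str) -> bool:
        return any(line.rstrip().endswith(prompt) for line in buffered_lines)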

Bugzilla ID: 1359
Fixes: 88489c0501af ("dts: add smoke tests")

Signed-off-by: Jeremy Spewock 
Reviewed-by: Juraj Linkeš 
Reviewed-by: Luca Vizzarro 
Reviewed-by: Nicholas Pratte 
---
 dts/framework/exception.py| 66 ---
 .../single_active_interactive_shell.py| 49 ++
 2 files changed, 79 insertions(+), 36 deletions(-)

diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 74fd2af3b6..f45f789825 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -51,26 +51,6 @@ class DTSError(Exception):
 severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR
 
 
-class SSHTimeoutError(DTSError):
-"""The SSH execution of a command timed out."""
-
-#:
-severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
-_command: str
-
-def __init__(self, command: str):
-"""Define the meaning of the first argument.
-
-Args:
-command: The executed command.
-"""
-self._command = command
-
-def __str__(self) -> str:
-"""Add some context to the string representation."""
-return f"{self._command} execution timed out."
-
-
 class SSHConnectionError(DTSError):
 """An unsuccessful SSH connection."""
 
@@ -98,8 +78,42 @@ def __str__(self) -> str:
 return message
 
 
-class SSHSessionDeadError(DTSError):
-"""The SSH session is no longer alive."""
+class _SSHTimeoutError(DTSError):
+"""The execution of a command via SSH timed out.
+
+This class is private and meant to be raised as its interactive and 
non-interactive variants.
+"""
+
+#:
+severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+_command: str
+
+def __init__(self, command: str):
+"""Define the meaning of the first argument.
+
+Args:
+command: The executed command.
+"""
+self._command = command
+
+def __str__(self) -> str:
+"""Add some context to the string representation."""
+return f"{self._command} execution timed out."
+
+
+class SSHTimeoutError(_SSHTimeoutError):
+"""The execution of a command on a non-interactive SSH session timed 
out."""
+
+
+class InteractiveSSHTimeoutError(_SSHTimeoutError):
+"""The execution of a command on an interactive SSH session timed out."""
+
+
+class _SSHSessionDeadError(DTSError):
+"""The SSH session is no longer alive.
+
+This class is private and meant to be raised as its interactive and 
non-interactive variants.
+"""
 
 #:
 severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
@@ -118,6 +132,14 @@ def __str__(self) -> str:
 return f"SSH session with {self._host} has died."
 
 
+class SSHSessionDeadError(_SSHSessionDeadError):
+"""Non-interactive SSH session has died."""
+
+
+class InteractiveSSHSessionDeadError(_SSHSessionDeadError):
+"""Interactive SSH session as died."""
+
+
 class ConfigurationError(DTSError):
 """An invalid configuration."""
 
diff --git a/dts/framework/remote_session/single_active_interactive_shell.py 
b/dts/framework/remote_session/single_active_interactive_shell.py
index 38094c0fe2..0e5a04885f 100644
--- a/dts/framework/remote_session/single_active_interactive_shell.py
+++ b/dts/framework/remote_session/single_active_interactive_shell.py
@@ -27,7 +27,11 @@
 from paramiko import Channel, channel  # type: ignore[import-untyped]
 from typing_extensions import Self
 
-from framework.exception import InteractiveCommandExecutionError
+from framework.exception import (
+InteractiveCommandExecutionError,
+InteractiveSSHSessionDeadError,
+InteractiveSSHTimeoutError,
+)
 from framework.logger import DTSLogger
 from framework.params import Params
 from framework.settings import SETTINGS
@@ -71,7 +75,10 @@ class SingleActiveInteractiveShell(ABC):
 
 #: Extra characters to add to the end of every command
 #: before sending them. This is often overrid

[PATCH v5 2/3] dts: Add missing docstring from XML-RPC server

2024-07-24 Thread jspewock
From: Jeremy Spewock 

When this XML-RPC server implementation was added, the docstring had to
be shortened in order to reduce the chances of this race condition being
encountered. Now that this race condition issue is resolved, the full
docstring can be restored.

Signed-off-by: Jeremy Spewock 
Reviewed-by: Juraj Linkeš 
Reviewed-by: Luca Vizzarro 
Reviewed-by: Nicholas Pratte 
---
 .../testbed_model/traffic_generator/scapy.py  | 46 ++-
 1 file changed, 45 insertions(+), 1 deletion(-)

diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py 
b/dts/framework/testbed_model/traffic_generator/scapy.py
index 7f0cc2bc18..08e1f4ae7e 100644
--- a/dts/framework/testbed_model/traffic_generator/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -128,9 +128,53 @@ def scapy_send_packets(xmlrpc_packets: 
list[xmlrpc.client.Binary], send_iface: s
 
 
 class QuittableXMLRPCServer(SimpleXMLRPCServer):
-"""Basic XML-RPC server.
+r"""Basic XML-RPC server.
 
 The server may be augmented by functions serializable by the 
:mod:`marshal` module.
+
+Example:
+::
+
+def hello_world():
+# to be sent to the XML-RPC server
+print("Hello World!")
+
+# start the XML-RPC server on the remote node
+# this is done by starting a Python shell on the remote node
+from framework.remote_session import PythonShell
+# the example assumes you're already connected to a tg_node
+session = tg_node.create_interactive_shell(PythonShell, timeout=5, 
privileged=True)
+
+# then importing the modules needed to run the server
+# and the modules for any functions later added to the server
+session.send_command("import xmlrpc")
+session.send_command("from xmlrpc.server import 
SimpleXMLRPCServer")
+
+# sending the source code of this class to the Python shell
+from xmlrpc.server import SimpleXMLRPCServer
+src = inspect.getsource(QuittableXMLRPCServer)
+src = "\n".join([l for l in src.splitlines() if not l.isspace() 
and l != ""])
+spacing = "\n" * 4
+session.send_command(spacing + src + spacing)
+
+# then starting the server with:
+command = "s = QuittableXMLRPCServer(('0.0.0.0', 
{listen_port}));s.serve_forever()"
+session.send_command(command, "XMLRPC OK")
+
+# now the server is running on the remote node and we can add 
functions to it
+# first connect to the server from the execution node
+import xmlrpc.client
+server_url = f"http://{tg_node.config.hostname}:8000";
+rpc_server_proxy = xmlrpc.client.ServerProxy(server_url)
+
+# get the function bytes to send
+import marshal
+function_bytes = marshal.dumps(hello_world.__code__)
+rpc_server_proxy.add_rpc_function(hello_world.__name__, 
function_bytes)
+
+# now we can execute the function on the server
+xmlrpc_binary_recv: xmlrpc.client.Binary = 
rpc_server_proxy.hello_world()
+print(str(xmlrpc_binary_recv))
 """
 
 def __init__(self, *args, **kwargs):
-- 
2.45.2



[PATCH v5 3/3] dts: Improve logging for interactive shells

2024-07-24 Thread jspewock
From: Jeremy Spewock 

The messages being logged by interactive shells currently are using the
same logger as the node they were created from. Because of this, when
sending interactive commands, the logs make no distinction between when
you are sending a command directly to the host and when you are using an
interactive shell on the host. This change adds names to interactive
shells so that they are able to use their own loggers with distinct
names.
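
For illustration only, a minimal sketch of the naming scheme this enables
(stdlib logging is used here for self-containment; the node and shell names are
made-up examples, not framework code):

    import logging

    node_name = "sut1"            # assumed node name
    shell_name = "TestPmdShell"   # assumed shell name
    # Messages are now attributed to "<node>.<shell>" instead of just the node.
    logger = logging.getLogger(f"{node_name}.{shell_name}")
    logger.warning("this message is clearly tied to the interactive shell")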

Signed-off-by: Jeremy Spewock 
Reviewed-by: Juraj Linkeš 
Tested-by: Nicholas Pratte 
Reviewed-by: Nicholas Pratte 
Reviewed-by: Luca Vizzarro 
---
 dts/framework/remote_session/dpdk_shell.py   | 3 ++-
 .../remote_session/single_active_interactive_shell.py| 9 +++--
 dts/framework/remote_session/testpmd_shell.py| 2 ++
 dts/framework/testbed_model/traffic_generator/scapy.py   | 4 +++-
 4 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/dts/framework/remote_session/dpdk_shell.py 
b/dts/framework/remote_session/dpdk_shell.py
index 950c6ca670..c5f5c2d116 100644
--- a/dts/framework/remote_session/dpdk_shell.py
+++ b/dts/framework/remote_session/dpdk_shell.py
@@ -82,6 +82,7 @@ def __init__(
 ascending_cores: bool = True,
 append_prefix_timestamp: bool = True,
 app_params: EalParams = EalParams(),
+name: str | None = None,
 ) -> None:
 """Extends :meth:`~.interactive_shell.InteractiveShell.__init__`.
 
@@ -96,7 +97,7 @@ def __init__(
 append_prefix_timestamp,
 )
 
-super().__init__(node, privileged, timeout, app_params)
+super().__init__(node, privileged, timeout, app_params, name)
 
 def _update_real_path(self, path: PurePath) -> None:
 """Extends 
:meth:`~.interactive_shell.InteractiveShell._update_real_path`.
diff --git a/dts/framework/remote_session/single_active_interactive_shell.py 
b/dts/framework/remote_session/single_active_interactive_shell.py
index 0e5a04885f..701d0c 100644
--- a/dts/framework/remote_session/single_active_interactive_shell.py
+++ b/dts/framework/remote_session/single_active_interactive_shell.py
@@ -32,7 +32,7 @@
 InteractiveSSHSessionDeadError,
 InteractiveSSHTimeoutError,
 )
-from framework.logger import DTSLogger
+from framework.logger import DTSLogger, get_dts_logger
 from framework.params import Params
 from framework.settings import SETTINGS
 from framework.testbed_model.node import Node
@@ -92,6 +92,7 @@ def __init__(
 privileged: bool = False,
 timeout: float = SETTINGS.timeout,
 app_params: Params = Params(),
+name: str | None = None,
 ) -> None:
 """Create an SSH channel during initialization.
 
@@ -102,9 +103,13 @@ def __init__(
 shell. This timeout is for collecting output, so if reading 
from the buffer
 and no output is gathered within the timeout, an exception is 
thrown.
 app_params: The command line parameters to be passed to the 
application on startup.
+name: Name for the interactive shell to use for logging. This name 
will be appended to
+the name of the underlying node which it is running on.
 """
 self._node = node
-self._logger = node._logger
+if name is None:
+name = type(self).__name__
+self._logger = get_dts_logger(f"{node.name}.{name}")
 self._app_params = app_params
 self._privileged = privileged
 self._timeout = timeout
diff --git a/dts/framework/remote_session/testpmd_shell.py 
b/dts/framework/remote_session/testpmd_shell.py
index eda6eb320f..43e9f56517 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -604,6 +604,7 @@ def __init__(
 lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = 
LogicalCoreCount(),
 ascending_cores: bool = True,
 append_prefix_timestamp: bool = True,
+name: str | None = None,
 **app_params: Unpack[TestPmdParamsDict],
 ) -> None:
 """Overrides :meth:`~.dpdk_shell.DPDKShell.__init__`. Changes 
app_params to kwargs."""
@@ -615,6 +616,7 @@ def __init__(
 ascending_cores,
 append_prefix_timestamp,
 TestPmdParams(**app_params),
+name,
 )
 
 def start(self, verify: bool = True) -> None:
diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py 
b/dts/framework/testbed_model/traffic_generator/scapy.py
index 08e1f4ae7e..13fc1107aa 100644
--- a/dts/framework/testbed_model/traffic_generator/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -261,7 +261,9 @@ def __init__(self, tg_node: Node, config: 
ScapyTrafficGeneratorConfig):
 self._tg_node.config.os == OS.linux
 ), "Linux is the only supported OS for scapy traffic generation"
 
-self.session = PythonShell(self._tg_node, timeout=5, privileged=True)
+self.session 

Re: [RFC PATCH 1/2] power: fix power library with --lcores

2024-07-24 Thread Stephen Hemminger
On Wed, 24 Jul 2024 13:03:35 +
Sivaprasad Tummala  wrote:

> + POWER_LOG(ERR, "Power library doesn't support lcore %u mapping "
> + "to %u cpus", lcore_id, CPU_COUNT(&lcore_cpus));

Please don't break a single format error string across lines. It makes it harder
for users to find the error in the source. Instead wrap the line after ERR,


Re: [RFC PATCH 1/2] power: fix power library with --lcores

2024-07-24 Thread Stephen Hemminger
On Wed, 24 Jul 2024 13:03:35 +
Sivaprasad Tummala  wrote:

> + lcore_cpus = rte_lcore_cpuset(lcore_id);
> + if (CPU_COUNT(&lcore_cpus) != 1) {
> + POWER_LOG(ERR, "Power library doesn't support lcore %u mapping "
> + "to %u cpus", lcore_id, CPU_COUNT(&lcore_cpus));
> + return -1;
> + }
> + for (cpu = 0; cpu < CPU_SETSIZE; cpu++) {
> + if (CPU_ISSET(cpu, &lcore_cpus))
> + break;
> + }

You are copying and pasting the same code into multiple places, which
indicates it should be an API function.


Re: [EXTERNAL] [PATCH v2] examples/ipsec-secgw: fix SA salt endianness problem

2024-07-24 Thread Radu Nicolau



On 24-Jul-24 2:04 PM, Akhil Goyal wrote:

On 24-Jul-24 12:20 PM, Akhil Goyal wrote:

On 23-Jul-24 5:57 PM, Akhil Goyal wrote:

Hi all,

This patch breaks ipsec tests with ipsec-secgw:


./examples/ipsec-secgw/test/run_test.sh -4 trs_aesctr_sha1
...
ERROR: ./examples/ipsec-secgw/test/linux_test.sh failed for dst=192.168.31.14, sz=1
test IPv4 trs_aesctr_sha1 finished with status 1
ERROR  test trs_aesctr_sha1 FAILED


The patch seems to be correct.
Please check endianness in the PMD you are testing.

In my opinion the salt should not be affected by endianness and it should be
used as it is in the key parameter. I think the patch is wrong to make
it CPU endianness dependent before being passed to the PMDs; any PMD
that needs the endianness swapped should do it in the PMD code. Indeed
it's passed around as a 32-bit integer, but it's not used as such, and
when it's actually used it should be evaluated as a byte array.


As per the RFC, it should be treated as being in byte order (i.e. big endian).
But here the problem is that we treat it as uint32_t, which makes it CPU endian
when stored in the ipsec_sa struct.

The keys are stored as an array of uint8_t, so keys are stored in byte order
(big endian).

So either we save the salt as "uint8_t salt[4]" or do a cpu_to_be conversion,
so that when it is stored in the PMD/HW and we convert it from uint32_t to a
uint8_t pointer, there won't be an issue.

The RFC treats it as a "four octet value" - there is no endianness until
it's treated like an integer, which it never is. Even if in the code it's
being stored and passed as an unsigned 32-bit integer, it is never
evaluated as such, so its endianness doesn't matter.

The endianness matters the moment it is stored as uint32_t in ipsec_sa.
It means the value is stored in CPU endianness in that integer unless
specified otherwise.


What matters is that the four byte value in the key ends up in the 
memory in the same order, and that was always the case before the patch, 
regardless of the endianness of the CPU because load and store 
operations are not affected by endianness. With the patch the same four 
bytes from the configuration file will be stored differently in memory 
depending on the CPU. There is no need to fix the endianness of the 
salt, just as there is no need to fix the byte order of the key itself.




Now looking at the code again, I see the patch is incomplete for the case of
lookaside crypto, where the salt is copied as cnt_blk in the mbuf priv without
conversion.

So, this patch can be reverted and a simple fix can be added to mark
ipsec_sa->salt as rte_be32_t:
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index a83fd2283b..1fe6b97168 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -117,7 +117,7 @@ struct __rte_cache_aligned ipsec_sa {
 uint32_t spi;
 struct cdev_qp *cqp[RTE_MAX_LCORE];
 uint64_t seq;
-   uint32_t salt;
+   rte_be32_t salt;
 uint32_t fallback_sessions;
 enum rte_crypto_cipher_algorithm cipher_algo;
 enum rte_crypto_auth_algorithm auth_algo;

Can you verify and send the patch?
And this may be updated in the cryptodev and security libs as well in the next release.



I agree that we should have it everywhere as "uint8_t salt[4]" but that
implies API changes and it doesn't change how the bytes are stored, so
the patch will still be wrong.



On 03/07/2024 18:58, Akhil Goyal wrote:

-Original Message-
From: Akhil Goyal
Sent: Friday, March 15, 2024 12:42 AM
To: Akhil Goyal; Chaoyong He; dev@dpdk.org
Cc: oss-driv...@corigine.com; Shihong Wang; sta...@dpdk.org
Subject: RE: [EXTERNAL] [PATCH v2] examples/ipsec-secgw: fix SA salt endianness problem

From: Shihong Wang

The SA salt of struct ipsec_sa is a CPU-endian u32 variable, but its
value is stored in an array of encryption or authentication keys
according to big-endian. So it may need an endianness conversion to
ensure that the value assigned to the SA salt is CPU-endian.

Fixes: 50d75cae2a2c ("examples/ipsec-secgw: initialize SA salt")
Fixes: 9413c3901f31 ("examples/ipsec-secgw: support additional algorithms")
Fixes: 501e9c226adf ("examples/ipsec-secgw: add AEAD parameters")

[PATCH v3 0/4] dts: add dynamic queue configuration test suite

2024-07-24 Thread jspewock
From: Jeremy Spewock 

v3:
 * rebase on rc3

Jeremy Spewock (4):
  dts: add send_packets to test suites and rework packet addressing
  dts: add port queue modification and forwarding stats to testpmd
  dts: add dynamic queue test suite
  dts: add dynamic queue conf to the yaml schema

 dts/framework/config/conf_yaml_schema.json|   3 +-
 dts/framework/remote_session/testpmd_shell.py | 233 +-
 dts/framework/test_suite.py   |  74 +++--
 dts/framework/testbed_model/tg_node.py|   9 +
 dts/tests/TestSuite_dynamic_queue_conf.py | 286 ++
 5 files changed, 581 insertions(+), 24 deletions(-)
 create mode 100644 dts/tests/TestSuite_dynamic_queue_conf.py

-- 
2.45.2



[PATCH v3 1/4] dts: add send_packets to test suites and rework packet addressing

2024-07-24 Thread jspewock
From: Jeremy Spewock 

Currently the only method provided in the test suite class for sending
packets sends a single packet and then captures the results. There is,
in some cases, a need to send multiple packets at once while not really
needing to capture any traffic received back. The method to do this
exists in the traffic generator already, but this patch exposes the
method to test suites.

This patch also updates the _adjust_addresses method of test suites so
that addresses of packets are only modified if the developer did not
configure them beforehand. This allows for developers to have more
control over the content of their packets when sending them through the
framework.
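
A hedged usage sketch of what this enables inside a test case (addresses and
payloads are arbitrary examples; send_packets is the method added by this patch):

    from scapy.layers.inet import IP
    from scapy.layers.l2 import Ether

    # A preset source MAC is kept as-is; fields left at their defaults are
    # filled in by the framework before sending, per the behaviour above.
    packets = [
        Ether(src="aa:bb:cc:dd:ee:01") / IP() / ("X" * 20),
        Ether() / IP(dst="192.0.2.1") / ("X" * 20),
    ]
    # Inside a TestSuite test case, where no captured traffic is needed:
    # self.send_packets(packets)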

Signed-off-by: Jeremy Spewock 
---
 dts/framework/test_suite.py| 74 ++
 dts/framework/testbed_model/tg_node.py |  9 
 2 files changed, 62 insertions(+), 21 deletions(-)

diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 694b2eba65..0b678ed62d 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -199,7 +199,7 @@ def send_packet_and_capture(
 Returns:
 A list of received packets.
 """
-packet = self._adjust_addresses(packet)
+packet = self._adjust_addresses([packet])[0]
 return self.tg_node.send_packet_and_capture(
 packet,
 self._tg_port_egress,
@@ -208,6 +208,18 @@ def send_packet_and_capture(
 duration,
 )
 
+def send_packets(
+self,
+packets: list[Packet],
+) -> None:
+"""Send packets using the traffic generator and do not capture 
received traffic.
+
+Args:
+packets: Packets to send.
+"""
+packets = self._adjust_addresses(packets)
+self.tg_node.send_packets(packets, self._tg_port_egress)
+
 def get_expected_packet(self, packet: Packet) -> Packet:
 """Inject the proper L2/L3 addresses into `packet`.
 
@@ -219,39 +231,59 @@ def get_expected_packet(self, packet: Packet) -> Packet:
 """
 return self._adjust_addresses(packet, expected=True)
 
-def _adjust_addresses(self, packet: Packet, expected: bool = False) -> 
Packet:
+def _adjust_addresses(self, packets: list[Packet], expected: bool = False) 
-> list[Packet]:
 """L2 and L3 address additions in both directions.
 
+Only missing addresses are added to packets, existing addresses will
not be overridden.
+
 Assumptions:
 Two links between SUT and TG, one link is TG -> SUT, the other SUT 
-> TG.
 
 Args:
-packet: The packet to modify.
+packets: The packets to modify.
 expected: If :data:`True`, the direction is SUT -> TG,
 otherwise the direction is TG -> SUT.
 """
-if expected:
-# The packet enters the TG from SUT
-# update l2 addresses
-packet.src = self._sut_port_egress.mac_address
-packet.dst = self._tg_port_ingress.mac_address
+ret_packets = []
+for packet in packets:
+default_pkt_src = type(packet)().src
+default_pkt_dst = type(packet)().dst
+default_pkt_payload_src = IP().src if hasattr(packet.payload, 
"src") else None
+default_pkt_payload_dst = IP().dst if hasattr(packet.payload, 
"dst") else None
+# If `expected` is :data:`True`, the packet enters the TG from 
SUT, otherwise the
+# packet leaves the TG towards the SUT
 
-# The packet is routed from TG egress to TG ingress
-# update l3 addresses
-packet.payload.src = self._tg_ip_address_egress.ip.exploded
-packet.payload.dst = self._tg_ip_address_ingress.ip.exploded
-else:
-# The packet leaves TG towards SUT
 # update l2 addresses
-packet.src = self._tg_port_egress.mac_address
-packet.dst = self._sut_port_ingress.mac_address
+if packet.src == default_pkt_src:
+packet.src = (
+self._sut_port_egress.mac_address
+if expected
+else self._tg_port_egress.mac_address
+)
+if packet.dst == default_pkt_dst:
+packet.dst = (
+self._tg_port_ingress.mac_address
+if expected
+else self._sut_port_ingress.mac_address
+)
+
+# The packet is routed from TG egress to TG ingress regardless of 
if it is expected or
+# not.
 
-# The packet is routed from TG egress to TG ingress
 # update l3 addresses
-packet.payload.src = self._tg_ip_address_egress.ip.exploded
-packet.payload.dst = self._tg_ip_address_ingress.ip.exploded
-
-return Ether(packet.build())
+if (
+default_pkt_payload_src is not None
+   

[PATCH v3 2/4] dts: add port queue modification and forwarding stats to testpmd

2024-07-24 Thread jspewock
From: Jeremy Spewock 

This patch adds methods for querying and modifying port queue state and
configuration. In addition to this, it also adds the ability to capture
the forwarding statistics that get outputted when you send the "stop"
command in testpmd. Querying of port queue information is handled
through a TextParser dataclass in case there is future need for using
more of the output from the command used to query the information.
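
A rough usage sketch of the additions described above (the port and queue IDs
are arbitrary; `testpmd` is assumed to be an already-running TestPmdShell from
this series):

    def check_queue_and_collect_stats(testpmd) -> str:
        # Query the Rx queue info added by this patch.
        info = testpmd.show_port_queue_info(port_id=0, queue_id=0, is_rx_queue=True)
        assert info.is_queue_started
        testpmd.start()           # start packet forwarding
        return testpmd.stop()     # stop() now returns the forwarding stats text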

Signed-off-by: Jeremy Spewock 
---
 dts/framework/remote_session/testpmd_shell.py | 233 +-
 1 file changed, 231 insertions(+), 2 deletions(-)

diff --git a/dts/framework/remote_session/testpmd_shell.py 
b/dts/framework/remote_session/testpmd_shell.py
index eda6eb320f..45b379c808 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -19,7 +19,7 @@
 from dataclasses import dataclass, field
 from enum import Flag, auto
 from pathlib import PurePath
-from typing import ClassVar
+from typing import ClassVar, cast
 
 from typing_extensions import Self, Unpack
 
@@ -541,6 +541,56 @@ class TestPmdPort(TextParser):
 )
 
 
+@dataclass
+class TestPmdPortQueue(TextParser):
+"""Dataclass representation of the common parts of the testpmd `show 
rxq/txq info` commands."""
+
+#:
+prefetch_threshold: int = field(metadata=TextParser.find_int(r"prefetch 
threshold: (\d+)"))
+#:
+host_threshold: int = field(metadata=TextParser.find_int(r"host threshold: 
(\d+)"))
+#:
+writeback_threshold: int = field(metadata=TextParser.find_int(r"writeback 
threshold: (\d+)"))
+#:
+free_threshold: int = field(metadata=TextParser.find_int(r"free threshold: 
(\d+)"))
+#:
+deferred_start: bool = field(metadata=TextParser.find("deferred start: 
on"))
+#: The number of RXD/TXDs is just the ring size of the queue.
+ring_size: int = field(metadata=TextParser.find_int(r"Number of 
(?:RXDs|TXDs): (\d+)"))
+#:
+is_queue_started: bool = field(metadata=TextParser.find("queue state: 
started"))
+#:
+burst_mode: str | None = field(
+default=None, metadata=TextParser.find(r"Burst mode: ([^\r\n]+)")
+)
+
+
+@dataclass
+class TestPmdTxPortQueue(TestPmdPortQueue):
+"""Dataclass representation for testpmd `show txq info` command."""
+
+#:
+rs_threshold: int | None = field(
+default=None, metadata=TextParser.find_int(r"RS threshold: (\d+)")
+)
+
+
+@dataclass
+class TestPmdRxPortQueue(TestPmdPortQueue):
+"""Dataclass representation for testpmd `show rxq info` command."""
+
+#:
+mempool: str | None = field(default=None, 
metadata=TextParser.find(r"Mempool: ([^\r\n]+)"))
+#:
+can_drop_packets: bool | None = field(
+default=None, metadata=TextParser.find(r"drop packets: on")
+)
+#:
+is_scattering_packets: bool | None = field(
+default=None, metadata=TextParser.find(r"scattered packets: on")
+)
+
+
 @dataclass
 class TestPmdPortStats(TextParser):
 """Port statistics."""
@@ -643,7 +693,7 @@ def start(self, verify: bool = True) -> None:
 "Not all ports came up after starting packet 
forwarding in testpmd."
 )
 
-def stop(self, verify: bool = True) -> None:
+def stop(self, verify: bool = True) -> str:
 """Stop packet forwarding.
 
 Args:
@@ -651,6 +701,9 @@ def stop(self, verify: bool = True) -> None:
 forwarding was stopped successfully or not started. If neither 
is found, it is
 considered an error.
 
+Returns:
+Output gathered from sending the stop command.
+
 Raises:
 InteractiveCommandExecutionError: If `verify` is :data:`True` and 
the command to stop
 forwarding results in an error.
@@ -663,6 +716,7 @@ def stop(self, verify: bool = True) -> None:
 ):
 self._logger.debug(f"Failed to stop packet forwarding: 
\n{stop_cmd_output}")
 raise InteractiveCommandExecutionError("Testpmd failed to stop 
packet forwarding.")
+return stop_cmd_output
 
 def get_devices(self) -> list[TestPmdDevice]:
 """Get a list of device names that are known to testpmd.
@@ -804,6 +858,181 @@ def show_port_stats(self, port_id: int) -> 
TestPmdPortStats:
 
 return TestPmdPortStats.parse(output)
 
+def show_port_queue_info(
+self, port_id: int, queue_id: int, is_rx_queue: bool
+) -> TestPmdPortQueue:
+"""Get the info for a queue on a given port.
+
+Args:
+port_id: ID of the port where the queue resides.
+queue_id: ID of the queue to query.
+is_rx_queue: Whether to check an RX or TX queue. If :data:`True` 
an RX queue will be
+queried, otherwise a TX queue will be queried.
+
+Raises:
+InteractiveCommandExecutionError: If there is a failure when 
getting the info for the
+queue.
+
+   

[PATCH v3 3/4] dts: add dynamic queue test suite

2024-07-24 Thread jspewock
From: Jeremy Spewock 

This patch adds a new test suite that is designed to test the stopping
and modification of port queues at runtime. Specifically, there are
test cases that display the port's ability to stop some queues but still
send and receive traffic on others, as well as the ability to configure
the ring size of the queue without blocking the traffic on other queues.
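
As a hedged sketch of that flow (the exact testpmd command strings below are
assumptions used for illustration and are not quoted from this suite; `testpmd`
is an interactive TestPmdShell):

    def stop_and_resize_rx_queue(testpmd, port: int = 0, queue: int = 0) -> None:
        # Stop a single Rx queue, change its ring size, then start it again,
        # all without stopping the whole device.
        testpmd.send_command(f"port {port} rxq {queue} stop")
        testpmd.send_command(f"port config {port} rxq {queue} ring_size 512")
        testpmd.send_command(f"port {port} rxq {queue} start")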

Signed-off-by: Jeremy Spewock 
---
 dts/tests/TestSuite_dynamic_queue_conf.py | 286 ++
 1 file changed, 286 insertions(+)
 create mode 100644 dts/tests/TestSuite_dynamic_queue_conf.py

diff --git a/dts/tests/TestSuite_dynamic_queue_conf.py 
b/dts/tests/TestSuite_dynamic_queue_conf.py
new file mode 100644
index 00..f5c667cdeb
--- /dev/null
+++ b/dts/tests/TestSuite_dynamic_queue_conf.py
@@ -0,0 +1,286 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2024 University of New Hampshire
+
+"""Dynamic configuration of port queues test suite.
+
+This test suite tests the support of being able to either stop or reconfigure 
port queues at
+runtime without stopping the entire device. Previously, to configure a DPDK 
ethdev, the application
+first specifies how many Tx and Rx queues to include in the ethdev and then 
application sets up
+each queue individually. Only once all the queues have been set up can the 
application then start
+the device, and at this point traffic can flow. If device stops, this halts 
the flow of traffic on
+all queues in the ethdev completely. Dynamic queue is a capability present on 
some NICs that
+specifies whether the NIC is able to delay the configuration of queues on its 
port. This capability
+allows for the support of stopping and reconfiguring queues on a port at 
runtime without stopping
+the entire device.
+
+Support of this capability is shown by starting the Poll Mode Driver with 
multiple Rx and Tx queues
+configured and stopping some prior to forwarding packets, then examining 
whether or not the stopped
+ports and the unmodified ports were able to handle traffic. In addition to 
just stopping the ports,
+the ports must also show that they support configuration changes on their 
queues at runtime without
+stopping the entire device. This is shown by changing the ring size of the 
queues.
+
+If the Poll Mode Driver is able to stop some queues on a port and modify them 
then handle traffic
+on the unmodified queues while the others are stopped, then it is the case 
that the device properly
+supports dynamic configuration of its queues.
+"""
+
+import random
+from typing import Callable, ClassVar, MutableSet
+
+from scapy.layers.inet import IP  # type: ignore[import-untyped]
+from scapy.layers.l2 import Ether  # type: ignore[import-untyped]
+from scapy.packet import Raw  # type: ignore[import-untyped]
+
+from framework.exception import InteractiveCommandExecutionError
+from framework.params.testpmd import PortTopology, SimpleForwardingModes
+from framework.remote_session.testpmd_shell import TestPmdShell
+from framework.test_suite import TestSuite
+
+
+def setup_and_teardown_test(
+test_meth: Callable[
+["TestDynamicQueueConf", int, MutableSet, MutableSet, TestPmdShell, 
bool], None
+],
+) -> Callable[["TestDynamicQueueConf", bool], None]:
+"""Decorator that provides a setup and teardown for testing methods.
+
+This decorator provides a method that sets up the environment for testing, 
runs the test
+method, and then does a clean-up verification step after the queues are 
started again. The
+decorated method will be provided with all the variables it should need to 
run testing
+including: The ID of the port where the queues for testing reside, 
disjoint sets of IDs for
+queues that are/aren't modified, a testpmd session to run testing with, 
and a flag that
+indicates whether or not testing should be done on Rx or Tx queues.
+
+Args:
+test_meth: The decorated method that tests configuration of port 
queues at runtime.
+This method must have the following parameters in order: An int 
that represents a
+port ID, a set of queues for testing, a set of unmodified queues, 
a testpmd
+interactive shell, and a boolean that, when :data:`True`, does Rx 
testing,
+otherwise does Tx testing. This method must also be a member of the
+:class:`TestDynamicQueueConf` class.
+
+Returns:
+A method that sets up the environment, runs the decorated method, then 
re-enables all
+queues and validates they can still handle traffic.
+"""
+
+def wrap(self: "TestDynamicQueueConf", is_rx_testing: bool) -> None:
+"""Setup environment, run test function, then cleanup.
+
+Start a testpmd shell and stop ports for testing, then call the 
decorated function that
+performs the testing. After the decorated function is finished running 
its testing,
+start the stopped queues and send packets to validate that these ports 
can prope

[PATCH v3 4/4] dts: add dynamic queue conf to the yaml schema

2024-07-24 Thread jspewock
From: Jeremy Spewock 

Adds the ability to run the test suite using the yaml configuration
file.

Signed-off-by: Jeremy Spewock 
---
 dts/framework/config/conf_yaml_schema.json | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/dts/framework/config/conf_yaml_schema.json 
b/dts/framework/config/conf_yaml_schema.json
index f02a310bb5..d83a2f51c5 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -187,7 +187,8 @@
   "enum": [
 "hello_world",
 "os_udp",
-"pmd_buffer_scatter"
+"pmd_buffer_scatter",
+"dynamic_queue_conf"
   ]
 },
 "test_target": {
-- 
2.45.2



[PATCH v12 0/3] dts: refactored VLAN test suite

2024-07-24 Thread Dean Marx
Refactored the suite to be compatible with the context manager and fixed a bug
in the show port info regex for the VLAN offload flags. Minor formatting fixes
are included as well.
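
One plausible way the show port info regex could fail on the VLAN offload line
(a hedged illustration, not the confirmed root cause) is a trailing carriage
return defeating an end-of-line anchor:

    import re

    line = "VLAN offload: strip off, filter off, extend off, qinq strip off\r"
    # With a trailing "$" the match fails because of the "\r"; without it, it works.
    assert re.search(r"qinq strip (?P<qinq_strip>on|off)$", line) is None
    assert re.search(r"qinq strip (?P<qinq_strip>on|off)", line) is not None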

Dean Marx (3):
  dts: add VLAN methods to testpmd shell
  dts: VLAN test suite implementation
  dts: config schema

 dts/framework/config/conf_yaml_schema.json|   3 +-
 dts/framework/remote_session/testpmd_shell.py | 245 +-
 dts/tests/TestSuite_vlan.py   | 168 
 3 files changed, 414 insertions(+), 2 deletions(-)
 create mode 100644 dts/tests/TestSuite_vlan.py

-- 
2.44.0



[PATCH v12 1/3] dts: add VLAN methods to testpmd shell

2024-07-24 Thread Dean Marx
Added the following methods to the testpmd shell class:
vlan set filter on/off, rx vlan add/rm,
vlan set strip on/off, port stop/start all/port,
tx vlan set/reset, set promisc/verbose.

Fixed a bug in the vlan_offload parsing for
show port info by removing the "$" at the end of the regex.
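
A hedged usage sketch of the helpers listed above (port and VLAN IDs are
arbitrary; `testpmd` is an already-running TestPmdShell with the methods from
this patch):

    def enable_vlan_filtering(testpmd, port: int = 0, vlan_id: int = 100) -> None:
        testpmd.vlan_filter_set(port=port, on=True)         # vlan set filter on
        testpmd.rx_vlan(vlan=vlan_id, port=port, add=True)  # rx_vlan add
        testpmd.vlan_strip_set(port=port, on=False)         # keep the tag on Rx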

Signed-off-by: Dean Marx 
---
 dts/framework/remote_session/testpmd_shell.py | 245 +-
 1 file changed, 244 insertions(+), 1 deletion(-)

diff --git a/dts/framework/remote_session/testpmd_shell.py 
b/dts/framework/remote_session/testpmd_shell.py
index eda6eb320f..8e5a1c084a 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -98,7 +98,7 @@ def make_parser(cls) -> ParserFn:
 r"strip (?Pon|off), "
 r"filter (?Pon|off), "
 r"extend (?Pon|off), "
-r"qinq strip (?Pon|off)$",
+r"qinq strip (?Pon|off)",
 re.MULTILINE,
 named=True,
 ),
@@ -804,6 +804,249 @@ def show_port_stats(self, port_id: int) -> 
TestPmdPortStats:
 
 return TestPmdPortStats.parse(output)
 
+def vlan_filter_set(self, port: int, on: bool, verify: bool = True) -> 
None:
+"""Set vlan filter on.
+
+Args:
+port: The port number to enable VLAN filter on, should be within 
0-32.
+on: Sets filter on if :data:`True`, otherwise turns off.
+verify: If :data:`True`, the output of the command and show port 
info
+is scanned to verify that vlan filtering was enabled 
successfully.
+If not, it is considered an error.
+
+Raises:
+InteractiveCommandExecutionError: If `verify` is :data:`True` and 
the filter
+fails to update.
+"""
+filter_cmd_output = self.send_command(f"vlan set filter {'on' if on 
else 'off'} {port}")
+if verify:
+if on ^ ("FILTER" in 
str(self.show_port_info(port_id=port).vlan_offload)):
+self._logger.debug(f"Failed to set filter on port {port}: 
\n{filter_cmd_output}")
+raise InteractiveCommandExecutionError(
+f"Testpmd failed to set VLAN filter on port {port}."
+)
+
+def rx_vlan(self, vlan: int, port: int, add: bool, verify: bool = True) -> 
None:
+"""Add specified vlan tag to the filter list on a port.
+
+Args:
+vlan: The vlan tag to add, should be within 1-1005, 1-4094 
extended.
+port: The port number to add the tag on, should be within 0-32.
+add: adds the tag if :data:`True`, otherwise removes tag.
+verify: If :data:`True`, the output of the command is scanned to 
verify that
+the vlan tag was added to the filter list on the specified 
port. If not, it is
+considered an error.
+
+Raises:
+InteractiveCommandExecutionError: If `verify` is :data:`True` and 
the tag
+is not added.
+"""
+rx_output = self.send_command(f"rx_vlan {'add' if add else 'rm'} 
{vlan} {port}")
+if verify:
+if (
+"VLAN-filtering disabled" in rx_output
+or "Invalid vlan_id" in rx_output
+or "Bad arguments" in rx_output
+):
+self._logger.debug(
+f"Failed to {'add' if add else 'remove'} tag {vlan} port 
{port}: \n{rx_output}"
+)
+raise InteractiveCommandExecutionError(
+f"Testpmd failed to {'add' if add else 'remove'} tag 
{vlan} on port {port}."
+)
+
+def vlan_strip_set(self, port: int, on: bool, verify: bool = True) -> None:
+"""Enable vlan stripping on the specified port.
+
+Args:
+port: The port number to use, should be within 0-32.
+on: If :data:`True`, will turn strip on, otherwise will turn off.
+verify: If :data:`True`, the output of the command and show port 
info
+is scanned to verify that vlan stripping was enabled on the 
specified port.
+If not, it is considered an error.
+
+Raises:
+InteractiveCommandExecutionError: If `verify` is :data:`True` and 
stripping
+fails to update.
+"""
+strip_output = self.send_command(f"vlan set strip {'on' if on else 
'off'} {port}")
+if verify:
+if on ^ ("STRIP" in 
str(self.show_port_info(port_id=port).vlan_offload)):
+self._logger.debug(
+f"Failed to set strip {'on' if on else 'off'} port {port}: 
\n{strip_output}"
+)
+raise InteractiveCommandExecutionError(
+f"Testpmd failed to set strip {'on' if on else 'off'} port 
{port}."
+)
+
+def port_stop_all(self, verify: bool = True) -> None:
+"""Stop all ports.
+
+Args:
+port: Spec

[PATCH v12 2/3] dts: VLAN test suite implementation

2024-07-24 Thread Dean Marx
Test suite for verifying VLAN filtering, stripping, and insertion
functionality on Poll Mode Driver.

Signed-off-by: Dean Marx 
---
 dts/tests/TestSuite_vlan.py | 168 
 1 file changed, 168 insertions(+)
 create mode 100644 dts/tests/TestSuite_vlan.py

diff --git a/dts/tests/TestSuite_vlan.py b/dts/tests/TestSuite_vlan.py
new file mode 100644
index 00..3193c559f3
--- /dev/null
+++ b/dts/tests/TestSuite_vlan.py
@@ -0,0 +1,168 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2024 University of New Hampshire
+
+"""Test the support of VLAN Offload Features by Poll Mode Drivers.
+
+The test suite ensures that with the correct configuration, a port
+will not drop a VLAN tagged packet. In order for this to be successful,
+packet header stripping and packet receipts must be enabled on the Poll Mode 
Driver.
+The test suite checks that when these conditions are met, the packet is 
received without issue.
+The suite also checks to ensure that when these conditions are not met, as in 
the cases where
+stripping is disabled, or VLAN packet receipts are disabled, the packet is not 
received.
+Additionally, it checks the case where VLAN header insertion is enabled in 
transmitted packets,
+which should be successful if the previous cases pass.
+
+"""
+
+from scapy.layers.l2 import Dot1Q, Ether  # type: ignore[import-untyped]
+from scapy.packet import Raw  # type: ignore[import-untyped]
+
+from framework.remote_session.testpmd_shell import SimpleForwardingModes, 
TestPmdShell
+from framework.test_suite import TestSuite
+
+
+class TestVlan(TestSuite):
+"""DPDK VLAN test suite.
+
+Ensures VLAN packet reception, stripping, and insertion on the Poll Mode 
Driver
+when the appropriate conditions are met. The suite contains four test 
cases:
+
+1. VLAN reception no stripping - verifies that a vlan packet with a tag
+within the filter list is received.
+2. VLAN reception stripping - verifies that a vlan packet with a tag
+within the filter list is received without the vlan tag.
+3. VLAN no reception - verifies that a vlan packet with a tag not within
+the filter list is dropped.
+4. VLAN insertion - verifies that a non vlan packet is received with a vlan
+tag when insertion is enabled.
+"""
+
+def set_up_suite(self) -> None:
+"""Set up the test suite.
+
+Setup:
+Verify that at least two ports are open for session.
+"""
+self.verify(len(self._port_links) > 1, "Not enough ports")
+
+def send_vlan_packet_and_verify(self, should_receive: bool, strip: bool, 
vlan_id: int) -> None:
+"""Generate a vlan packet, send and verify packet with same payload is 
received on the dut.
+
+Args:
+should_receive: Indicate whether the packet should be successfully 
received.
+strip: Indicates whether stripping is on or off, and when the vlan 
tag is
+checked for a match.
+vlan_id: Expected vlan ID.
+"""
+packet = Ether() / Dot1Q(vlan=vlan_id) / Raw(load="x")
+received_packets = self.send_packet_and_capture(packet)
+test_packet = None
+for packet in received_packets:
+if b"x" in packet.load:
+test_packet = packet
+break
+if should_receive:
+self.verify(
+test_packet is not None, "Packet was dropped when it should 
have been received"
+)
+if test_packet is not None:
+if strip:
+self.verify(
+not test_packet.haslayer(Dot1Q), "Vlan tag was not 
stripped successfully"
+)
+else:
+self.verify(
+test_packet.vlan == vlan_id,
+"The received tag did not match the expected tag",
+)
+else:
+self.verify(
+test_packet is None,
+"Packet was received when it should have been dropped",
+)
+
+def send_packet_and_verify_insertion(self, expected_id: int) -> None:
+"""Generate a packet with no vlan tag, send and verify on the dut.
+
+Args:
+expected_id: The vlan id that is being inserted through tx_offload 
configuration.
+"""
+packet = Ether() / Raw(load="x")
+received_packets = self.send_packet_and_capture(packet)
+test_packet = None
+for packet in received_packets:
+if b"x" in packet.load:
+test_packet = packet
+break
+self.verify(test_packet is not None, "Packet was dropped when it 
should have been received")
+if test_packet is not None:
+self.verify(test_packet.haslayer(Dot1Q), "The received packet did 
not have a vlan tag")
+self.verify(
+test_packet.vlan == expected_id, "The 

[PATCH v12 3/3] dts: config schema

2024-07-24 Thread Dean Marx
Configuration to run the VLAN test suite.

Signed-off-by: Dean Marx 
---
 dts/framework/config/conf_yaml_schema.json | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/dts/framework/config/conf_yaml_schema.json 
b/dts/framework/config/conf_yaml_schema.json
index f02a310bb5..cd45902cc4 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -187,7 +187,8 @@
   "enum": [
 "hello_world",
 "os_udp",
-"pmd_buffer_scatter"
+"pmd_buffer_scatter",
+"vlan"
   ]
 },
 "test_target": {
-- 
2.44.0



RE: [PATCH v2] doc: announce rte_ipsec API changes

2024-07-24 Thread Konstantin Ananyev


> > > In case of event mode operations where event device can help in atomic
> > > sequence number increment across cores, sequence number need to be
> > > provided by the application instead of being updated in rte_ipsec or
> > > the PMD. To support this, a new flag
> > > ``RTE_IPSEC_SAFLAG_SQN_ASSIGN_DISABLE``
> > > will be added to disable sequence number update inside IPsec library
> > > and the API rte_ipsec_pkt_crypto_prepare will be extended to include
> > > ``sqn`` as an additional parameter to specify sequence number to be
> > > used for IPsec from the application.
> >
> > Could you probably elaborate a bit more:
> > Why such change is necessary for event-dev mode, what exactly will be
> > affected in librte_ipsec (would it be for outbound mode, or both), etc.
> >
> 
> [Aakash] When using eventdev, it is possible to have multiple cores process 
> packets from the same flow at the same time, but still
> have ordering maintained.
> 
> Sequence for IPsec would be like below,
> 1. Ethdev Rx computes flow hash and submits packets to an ORDERED eventdev 
> queue.
> One flow would always hit one event dev queue.
> One eventdev queue can be attached to multiple eventdev ports.
> 2. Lcores receive packets via these eventdev ports.
> Lcores can now process the packets from the same flow in parallel.
> 3. Lcores submit the packets to an ATOMIC queue
> This is needed as IPsec seq no update needs to be done atomically.
> 4. After seq no update, packets are moved to ORDERED queue.
> Lcores can now processes the packets in parallel again.
> 5. During Tx, eventdev ensures packet ordering based on ORDERED queue.
> 
> Since lib IPsec takes care of sequence number assignment, the complete
> rte_ipsec_pkt_crypto_prepare() routine needs to be made an
> ATOMIC stage.
> But apart from the seq no update, the rest of the operations can be done in parallel.

Thanks for explanation.
Basically you are seeking the ability to split rte_ipsec_pkt_crypto_prepare()
for outbound into two stages:
1. update sqn
2. all other preps
To be able to do step #2 in parallel, correct?
My thought always was that step #2 is not that expensive in terms of
performance,
and there is probably not much point in making it parallel.
But I suppose you measured the step #2 overhead on your platform
and concluded that it is worth it...

One concern I have with the way you suggested -
now we need to store/update sa.sqn via some external entity.
Another thing - I don't really want to pollute the crypto_prepare() API
with new parameters whose meaning is a bit obscure and depends on
other API calls...

Wouldn't it be easier and probably more straightforward to just introduce 2 new
functions here that would represent step #1 and step #2?
Then we can keep crypto_prepare() intact, and the user will have a choice:
- either use the original crypto_prepare() - nothing needs to be changed
- or instead call these new functions on their own, if they want to.

> In addition, we are also looking at another use case where a set of packets
> from a session can be IPsec processed by an rte_security
> device and some packets from the same session would need to be SW processed
> with lib IPsec. Here again the sequence number
> assignment would need to occur at a central place so that a sequence number is
> not repeated.

Interesting, and how SW/HW SQN will be synchronized in that case? 
 
> Initially we are looking at outbound only. But a similar kind of use case would
> be applicable for inbound also.
> 
> > >
> > > Signed-off-by: Aakash Sasidharan 
> > > ---
> > >  doc/guides/rel_notes/deprecation.rst | 7 +++
> > >  1 file changed, 7 insertions(+)
> > >
> > > diff --git a/doc/guides/rel_notes/deprecation.rst
> > > b/doc/guides/rel_notes/deprecation.rst
> > > index 6948641ff6..bc1d93cca7 100644
> > > --- a/doc/guides/rel_notes/deprecation.rst
> > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > @@ -133,6 +133,13 @@ Deprecation Notices
> > >Since these functions are not called directly by the application,
> > >the API remains unaffected.
> > >
> > > +* ipsec: The rte_ipsec library is updated to support sequence number
> > > +provided
> > > +  by application. A new flag ``RTE_IPSEC_SAFLAG_SQN_ASSIGN_DISABLE``
> > > +is introduced
> > > +  to disable sequence number assignment in lib IPsec.
> > > +  The API rte_ipsec_pkt_crypto_prepare is extended to include ``sqn``
> > > +as an
> > > +  additional parameter allowing application to specify the sequence
> > > +number to be
> > > +  used for the IPsec operation.
> > > +
> > >  * pipeline: The pipeline library legacy API (functions rte_pipeline_*)
> > >will be deprecated and subsequently removed in DPDK 24.11 release.
> > >Before this, the new pipeline library API (functions
> > > rte_swx_pipeline_*)
> > > --
> > > 2.25.1
> 



[PATCH v3 0/2] dts: add test suite for dual VLANs

2024-07-24 Thread jspewock
From: Jeremy Spewock 

v3:
 * rebase on rc3

Jeremy Spewock (2):
  dts: add dual_vlan testing suite
  dts: add dual_vlan test suite to the yaml schema

 dts/framework/config/conf_yaml_schema.json |   3 +-
 dts/tests/TestSuite_dual_vlan.py   | 268 +
 2 files changed, 270 insertions(+), 1 deletion(-)
 create mode 100644 dts/tests/TestSuite_dual_vlan.py

-- 
2.45.2



[PATCH v3 1/2] dts: add dual_vlan testing suite

2024-07-24 Thread jspewock
From: Jeremy Spewock 

This patch ports over the functionality of the dual_vlan suite from old
DTS to the new framework. This test suite exists to test the
functionality of VLAN functions such as stripping, inserting, and
filtering in the presence of two VLAN headers.

There are some test cases which were left out in this refactored version
including test cases that test the functionality of VLAN functions on a
packet with only one VLAN header, as this is something that is tested in
another test suite which is currently in development. Additionally,
this series does not include test cases for testing the adjustment of
TPID or extended VLAN ranges, as these things were included in the old
test suite specifically for testing on Intel hardware and they are not
universally supported on every NIC. There could be further reason to add
these test cases in the future once the capabilities feature is fully
implemented. Extended mode for VLANs seems to be exposed through offload
capabilities of the port, but there doesn't seem to be anything as
obvious for TPID modification.
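
For reference, a minimal sketch of the kind of double-tagged packet this suite
exercises (the tag values mirror the suite's outer/inner constants and the
payload is the 20-character marker the suite uses to spot its own packets):

    from scapy.layers.l2 import Dot1Q, Ether
    from scapy.packet import Raw

    # Outer tag 1, inner tag 2, payload used by the suite to identify its traffic.
    pkt = Ether() / Dot1Q(vlan=1) / Dot1Q(vlan=2) / Raw(load="X" * 20)
    pkt.show()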

depends-on: patch-142103 ("dts: add VLAN methods to testpmd shell")

Signed-off-by: Jeremy Spewock 
---
 dts/tests/TestSuite_dual_vlan.py | 268 +++
 1 file changed, 268 insertions(+)
 create mode 100644 dts/tests/TestSuite_dual_vlan.py

diff --git a/dts/tests/TestSuite_dual_vlan.py b/dts/tests/TestSuite_dual_vlan.py
new file mode 100644
index 00..3a8a52afeb
--- /dev/null
+++ b/dts/tests/TestSuite_dual_vlan.py
@@ -0,0 +1,268 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2024 University of New Hampshire
+
+"""Dual VLAN functionality testing suite.
+
+The main objective of this test suite is to ensure that standard VLAN 
functions such as stripping
+and filtering both still carry out their expected behavior in the presence of 
a packet which
+contains two VLAN headers. These functions should carry out said behavior not 
just in isolation,
+but also when other VLAN functions are configured on the same port. In 
addition to this, the
+priority attributes of VLAN headers should be unchanged in the case of 
multiple VLAN headers
+existing on a single packet, and a packet with only a single VLAN header 
should be able to have one
+additional VLAN inserted into it.
+"""
+from enum import Flag, auto
+from typing import ClassVar
+
+from scapy.layers.l2 import Dot1Q, Ether  # type: ignore[import-untyped]
+from scapy.packet import Packet, Raw  # type: ignore[import-untyped]
+
+from framework.params.testpmd import SimpleForwardingModes
+from framework.remote_session.testpmd_shell import TestPmdShell
+from framework.test_suite import TestSuite
+
+
+class TestDualVlan(TestSuite):
+"""DPDK Dual VLAN test suite.
+
+This suite tests the behavior of VLAN functions and properties in the 
presence of two VLAN
+headers. All VLAN functions which are tested in this suite are specified 
using the inner class
+:class:`TestCaseOptions` and should have cases for configuring them in
+:meth:`configure_testpmd` as well as cases for testing their behavior in
+:meth:`verify_vlan_functions`. Every combination of VLAN functions being 
enabled should be
+tested. Additionally, attributes of VLAN headers, such as priority, are 
tested to ensure they
+are not modified in the case of two VLAN headers.
+"""
+
+class TestCaseOptions(Flag):
+"""Flag for specifying which VLAN functions to configure."""
+
+#:
+VLAN_STRIP = auto()
+#:
+VLAN_FILTER_INNER = auto()
+#:
+VLAN_FILTER_OUTER = auto()
+
+#: ID to set on inner VLAN tags.
+inner_vlan_tag: ClassVar[int] = 2
+#: ID to set on outer VLAN tags.
+outer_vlan_tag: ClassVar[int] = 1
+#: ID to use when inserting VLAN tags.
+vlan_insert_tag: ClassVar[int] = 3
+#:
+rx_port: ClassVar[int] = 0
+#:
+tx_port: ClassVar[int] = 1
+
+def is_relevant_packet(self, pkt: Packet) -> bool:
+"""Check if a packet was sent by functions in this suite.
+
+All functions in this test suite send packets with a payload that is 
packed with 20 "X"
+characters. This method, therefore, can determine if the packet was 
sent by this test suite
+by just checking to see if this payload exists on the received packet.
+
+Args:
+pkt: Packet to check for relevancy.
+
+Returns:
+:data:`True` if the packet contains the expected payload, 
:data:`False` otherwise.
+"""
+return hasattr(pkt, "load") and "X" * 20 in str(pkt.load)
+
+def pkt_payload_contains_layers(self, pkt: Packet, *expected_layers: 
Dot1Q) -> bool:
+"""Verify that the payload of the packet matches `expected_layers`.
+
+The layers in the payload of `pkt` must match the type and the 
user-defined fields of the
+layers in `expected_layers` in order.
+
+Args:
+pkt: Packet to check the payload of.
+   

[PATCH v3 2/2] dts: add dual_vlan test suite to the yaml schema

2024-07-24 Thread jspewock
From: Jeremy Spewock 

Adds the test suite name to the yaml schema to allow for it to be run.

Signed-off-by: Jeremy Spewock 
---
 dts/framework/config/conf_yaml_schema.json | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/dts/framework/config/conf_yaml_schema.json 
b/dts/framework/config/conf_yaml_schema.json
index f02a310bb5..b8ad5b37b3 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -187,7 +187,8 @@
   "enum": [
 "hello_world",
 "os_udp",
-"pmd_buffer_scatter"
+"pmd_buffer_scatter",
+"dual_vlan"
   ]
 },
 "test_target": {
-- 
2.45.2



[PATCH v8 0/3] dts: refactored queue start/stop suite

2024-07-24 Thread Dean Marx
Refactored queue start/stop test suite to be compatible with context
manager. Updated queue setup, stop/start, and show queue info
methods/dataclasses in testpmd shell.

Dean Marx (3):
  dts: add functions to testpmd shell
  dts: initial queue start/stop suite implementation
  dts: queue suite conf schema

 dts/framework/config/conf_yaml_schema.json|   3 +-
 dts/framework/remote_session/testpmd_shell.py | 232 +-
 dts/tests/TestSuite_queue_start_stop.py   |  91 +++
 3 files changed, 324 insertions(+), 2 deletions(-)
 create mode 100644 dts/tests/TestSuite_queue_start_stop.py

-- 
2.44.0



[PATCH v8 1/3] dts: add functions to testpmd shell

2024-07-24 Thread Dean Marx
Added set promisc, set verbose, and port stop
commands to the testpmd shell.

Signed-off-by: Dean Marx 
---
 dts/framework/remote_session/testpmd_shell.py | 232 +-
 1 file changed, 231 insertions(+), 1 deletion(-)

diff --git a/dts/framework/remote_session/testpmd_shell.py 
b/dts/framework/remote_session/testpmd_shell.py
index eda6eb320f..bbc495bdba 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -19,7 +19,7 @@
 from dataclasses import dataclass, field
 from enum import Flag, auto
 from pathlib import PurePath
-from typing import ClassVar
+from typing import ClassVar, cast
 
 from typing_extensions import Self, Unpack
 
@@ -577,6 +577,56 @@ class TestPmdPortStats(TextParser):
 tx_bps: int = field(metadata=TextParser.find_int(r"Tx-bps:\s+(\d+)"))
 
 
+@dataclass
+class TestPmdPortQueue(TextParser):
+"""Dataclass representation of the common parts of the testpmd `show 
rxq/txq info` commands."""
+
+#:
+prefetch_threshold: int = field(metadata=TextParser.find_int(r"prefetch 
threshold: (\d+)"))
+#:
+host_threshold: int = field(metadata=TextParser.find_int(r"host threshold: 
(\d+)"))
+#:
+writeback_threshold: int = field(metadata=TextParser.find_int(r"writeback 
threshold: (\d+)"))
+#:
+free_threshold: int = field(metadata=TextParser.find_int(r"free threshold: 
(\d+)"))
+#:
+deferred_start: bool = field(metadata=TextParser.find("deferred start: 
on"))
+#: The number of RXD/TXDs is just the ring size of the queue.
+ring_size: int = field(metadata=TextParser.find_int(r"Number of 
(?:RXDs|TXDs): (\d+)"))
+#:
+is_queue_started: bool = field(metadata=TextParser.find("queue state: 
started"))
+#:
+burst_mode: str | None = field(
+default=None, metadata=TextParser.find(r"Burst mode: ([^\r\n]+)")
+)
+
+
+@dataclass
+class TestPmdTxPortQueue(TestPmdPortQueue):
+"""Dataclass representation for testpmd `show txq info` command."""
+
+#:
+rs_threshold: int | None = field(
+default=None, metadata=TextParser.find_int(r"RS threshold: (\d+)")
+)
+
+
+@dataclass
+class TestPmdRxPortQueue(TestPmdPortQueue):
+"""Dataclass representation for testpmd `show rxq info` command."""
+
+#:
+mempool: str | None = field(default=None, 
metadata=TextParser.find(r"Mempool: ([^\r\n]+)"))
+#:
+can_drop_packets: bool | None = field(
+default=None, metadata=TextParser.find(r"drop packets: on")
+)
+#:
+is_scattering_packets: bool | None = field(
+default=None, metadata=TextParser.find(r"scattered packets: on")
+)
+
+
 class TestPmdShell(DPDKShell):
 """Testpmd interactive shell.
 
@@ -804,6 +854,186 @@ def show_port_stats(self, port_id: int) -> 
TestPmdPortStats:
 
 return TestPmdPortStats.parse(output)
 
+def show_port_queue_info(
+self, port_id: int, queue_id: int, is_rx_queue: bool
+) -> TestPmdPortQueue:
+"""Get the info for a queue on a given port.
+
+Args:
+port_id: ID of the port where the queue resides.
+queue_id: ID of the queue to query.
+is_rx_queue: Whether to check an RX or TX queue. If :data:`True` 
an RX queue will be
+queried, otherwise a TX queue will be queried.
+
+Raises:
+InteractiveCommandExecutionError: If there is a failure when 
getting the info for the
+queue.
+
+Returns:
+Information about the queue on the given port.
+"""
+queue_type = "rxq" if is_rx_queue else "txq"
+queue_info = self.send_command(
+f"show {queue_type} info {port_id} {queue_id}", 
skip_first_line=True
+)
+if queue_info.startswith("ETHDEV: Invalid"):
+raise InteractiveCommandExecutionError(
+f"Could not get the info for {queue_type} {queue_id} on port 
{port_id}"
+)
+return (
+TestPmdRxPortQueue.parse(queue_info)
+if is_rx_queue
+else TestPmdTxPortQueue.parse(queue_info)
+)
+
+def show_port_rx_queue_info(self, port_id: int, queue_id: int) -> 
TestPmdRxPortQueue:
+"""Get port queue info and cast to :class:`TestPmdRxPortQueue`.
+
+Wrapper around :meth:`show_port_queue_info` that casts the more 
generic type into the
+correct subclass.
+
+Args:
+port_id: ID of the port where the queue resides.
+queue_id: ID of the queue to query.
+
+Returns:
+Information about the Rx queue on the given port.
+"""
+return cast(TestPmdRxPortQueue, self.show_port_queue_info(port_id, 
queue_id, True))
+
+def show_port_tx_queue_info(self, port_id: int, queue_id: int) -> 
TestPmdTxPortQueue:
+"""Get port queue info and cast to :class:`TestPmdTxPortQueue`.
+
+Wrapper around :meth:`show_port_queue_info` that casts the more 
generic type into th

[PATCH v8 2/3] dts: initial queue start/stop suite implementation

2024-07-24 Thread Dean Marx
This suite tests the ability of the Poll Mode Driver to enable
and disable Rx/Tx queues on a port.

Signed-off-by: Dean Marx 
---
 dts/tests/TestSuite_queue_start_stop.py | 91 +
 1 file changed, 91 insertions(+)
 create mode 100644 dts/tests/TestSuite_queue_start_stop.py

diff --git a/dts/tests/TestSuite_queue_start_stop.py 
b/dts/tests/TestSuite_queue_start_stop.py
new file mode 100644
index 00..7533f0b395
--- /dev/null
+++ b/dts/tests/TestSuite_queue_start_stop.py
@@ -0,0 +1,91 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2024 University of New Hampshire
+
+"""Rx/Tx queue start and stop functionality suite.
+
+This suite tests the ability of the poll mode driver to start and
+stop either the Rx or Tx queue (depending on the port) during runtime,
+and verify that packets are not received when one is disabled.
+
+Given a paired port topology, the Rx queue will be disabled on port 0,
+and the Tx queue will be disabled on port 1.
+
+"""
+
+from scapy.layers.inet import IP  # type: ignore[import-untyped]
+from scapy.layers.l2 import Ether  # type: ignore[import-untyped]
+from scapy.packet import Raw  # type: ignore[import-untyped]
+
+from framework.remote_session.testpmd_shell import SimpleForwardingModes, 
TestPmdShell
+from framework.test_suite import TestSuite
+
+
+class TestQueueStartStop(TestSuite):
+"""DPDK Queue start/stop test suite.
+
+Ensures Rx/Tx queue on a port can be disabled and enabled.
+Verifies packets are not received when either queue is disabled.
+The suite contains two test cases, Rx queue start/stop and
+Tx queue start/stop, which each disable the corresponding
+queue and verify that packets are not received/forwarded.
+"""
+
+def set_up_suite(self) -> None:
+"""Set up the test suite.
+
+Setup:
+Verify that at least two ports are open for session.
+"""
+self.verify(len(self._port_links) > 1, "Not enough ports")
+
+def send_packet_and_verify(self, should_receive: bool = True) -> None:
+"""Generate a packet, send to the DUT, and verify it is forwarded back.
+
+Args:
+should_receive: Indicate whether the packet should be received.
+"""
+packet = Ether() / IP() / Raw(load="x")
+received = self.send_packet_and_capture(packet)
+contains_packet = any(
+packet.haslayer(Raw) and b"x" in packet.load for packet in 
received
+)
+self.verify(
+should_receive == contains_packet,
+f"Packet was {'dropped' if should_receive else 'received'}",
+)
+
+def test_rx_queue_start_stop(self) -> None:
+"""Verify packets are not received by port 0 when Rx queue is disabled.
+
+Test:
+Create an interactive testpmd session, stop Rx queue on port 0, 
verify
+packets are not received.
+"""
+with TestPmdShell(node=self.sut_node) as testpmd:
+testpmd.set_forward_mode(SimpleForwardingModes.mac)
+testpmd.stop_port_queue(0, 0, True)
+testpmd.start()
+self.send_packet_and_verify(should_receive=False)
+stats = testpmd.show_port_stats(port_id=0)
+self.verify(
+stats.rx_packets == 0,
+"Packets were received on Rx queue when it should've been 
disabled",
+)
+
+def test_tx_queue_start_stop(self) -> None:
+"""Verify packets are not forwarded by port 1 when Tx queue is 
disabled.
+
+Test:
+Create an interactive testpmd session, stop Tx queue on port 1, 
verify
+packets are not forwarded.
+"""
+with TestPmdShell(node=self.sut_node) as testpmd:
+testpmd.set_forward_mode(SimpleForwardingModes.mac)
+testpmd.stop_port_queue(1, 0, False)
+testpmd.start()
+self.send_packet_and_verify(should_receive=False)
+stats = testpmd.show_port_stats(port_id=1)
+self.verify(
+stats.tx_packets == 0,
+"Packets were forwarded on Tx queue when it should've been 
disabled",
+)
-- 
2.44.0



[PATCH v8 3/3] dts: queue suite conf schema

2024-07-24 Thread Dean Marx
Configuration schema for the queue_start_stop suite.

Signed-off-by: Dean Marx 
---
 dts/framework/config/conf_yaml_schema.json | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/dts/framework/config/conf_yaml_schema.json 
b/dts/framework/config/conf_yaml_schema.json
index f02a310bb5..9882ddb3d8 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -187,7 +187,8 @@
   "enum": [
 "hello_world",
 "os_udp",
-"pmd_buffer_scatter"
+"pmd_buffer_scatter",
+"queue_start_stop"
   ]
 },
 "test_target": {
-- 
2.44.0



[PATCH v6 0/3] Improve interactive shell output gathering and logging

2024-07-24 Thread jspewock
From: Jeremy Spewock 

v6:
 * Fix error catch for retries. This series changed the error that
   is thrown in the case of a timeout, but it was originally overlooked
   that the context manager patch added a catch that is looking for the
   old timeout error. This version fixes the patch by adjusting the
   error that is expected in the context manager patch to match what
   this series changes it to.

Jeremy Spewock (3):
  dts: Improve output gathering in interactive shells
  dts: Add missing docstring from XML-RPC server
  dts: Improve logging for interactive shells

 dts/framework/exception.py| 66 ---
 dts/framework/remote_session/dpdk_shell.py|  3 +-
 .../single_active_interactive_shell.py| 60 -
 dts/framework/remote_session/testpmd_shell.py |  2 +
 .../testbed_model/traffic_generator/scapy.py  | 50 +-
 5 files changed, 139 insertions(+), 42 deletions(-)

-- 
2.45.2



[PATCH v6 1/3] dts: Improve output gathering in interactive shells

2024-07-24 Thread jspewock
From: Jeremy Spewock 

The current implementation of consuming output from interactive shells
relies on being able to find an expected prompt somewhere within the
output buffer after sending the command. This is useful in situations
where the prompt does not appear in the output itself, but in some
practical cases (such as the starting of an XML-RPC server for scapy)
the prompt exists in one of the commands sent to the shell; this can
cause the command to be treated as finished early and creates a race
condition between the server starting and the first command being sent to it.

This patch addresses this problem by searching for a line that strictly
ends with the provided prompt, rather than one that simply contains it,
so that detecting when a command has finished is more consistent. It
also adds a catch for when a command times out before finding the
prompt, or when the underlying SSH session dies, so that the exception can be
wrapped into a more explicit one and be more consistent with the
non-interactive shells.
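
For illustration only (hypothetical helper, not the framework's actual code),
the stricter check amounts to something like:

    # Illustrative sketch: consider output complete only when a line
    # ends with the prompt, not when it merely contains it.
    def command_finished(output_lines: list[str], prompt: str) -> bool:
        return any(line.rstrip().endswith(prompt) for line in output_lines)

    # A command that echoes the prompt mid-line no longer ends collection early:
    assert not command_finished(["starting server on >>> port 8000 ..."], ">>>")
    assert command_finished(["server started", ">>>"], ">>>")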

Bugzilla ID: 1359
Fixes: 88489c0501af ("dts: add smoke tests")

Signed-off-by: Jeremy Spewock 
Reviewed-by: Juraj Linkeš 
Reviewed-by: Luca Vizzarro 
Reviewed-by: Nicholas Pratte 
---
 dts/framework/exception.py| 66 ---
 .../single_active_interactive_shell.py| 51 +-
 2 files changed, 80 insertions(+), 37 deletions(-)

diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 74fd2af3b6..f45f789825 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -51,26 +51,6 @@ class DTSError(Exception):
 severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR
 
 
-class SSHTimeoutError(DTSError):
-"""The SSH execution of a command timed out."""
-
-#:
-severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
-_command: str
-
-def __init__(self, command: str):
-"""Define the meaning of the first argument.
-
-Args:
-command: The executed command.
-"""
-self._command = command
-
-def __str__(self) -> str:
-"""Add some context to the string representation."""
-return f"{self._command} execution timed out."
-
-
 class SSHConnectionError(DTSError):
 """An unsuccessful SSH connection."""
 
@@ -98,8 +78,42 @@ def __str__(self) -> str:
 return message
 
 
-class SSHSessionDeadError(DTSError):
-"""The SSH session is no longer alive."""
+class _SSHTimeoutError(DTSError):
+"""The execution of a command via SSH timed out.
+
+This class is private and meant to be raised as its interactive and 
non-interactive variants.
+"""
+
+#:
+severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+_command: str
+
+def __init__(self, command: str):
+"""Define the meaning of the first argument.
+
+Args:
+command: The executed command.
+"""
+self._command = command
+
+def __str__(self) -> str:
+"""Add some context to the string representation."""
+return f"{self._command} execution timed out."
+
+
+class SSHTimeoutError(_SSHTimeoutError):
+"""The execution of a command on a non-interactive SSH session timed 
out."""
+
+
+class InteractiveSSHTimeoutError(_SSHTimeoutError):
+"""The execution of a command on an interactive SSH session timed out."""
+
+
+class _SSHSessionDeadError(DTSError):
+"""The SSH session is no longer alive.
+
+This class is private and meant to be raised as its interactive and 
non-interactive variants.
+"""
 
 #:
 severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
@@ -118,6 +132,14 @@ def __str__(self) -> str:
 return f"SSH session with {self._host} has died."
 
 
+class SSHSessionDeadError(_SSHSessionDeadError):
+"""Non-interactive SSH session has died."""
+
+
+class InteractiveSSHSessionDeadError(_SSHSessionDeadError):
+"""Interactive SSH session as died."""
+
+
 class ConfigurationError(DTSError):
 """An invalid configuration."""
 
diff --git a/dts/framework/remote_session/single_active_interactive_shell.py 
b/dts/framework/remote_session/single_active_interactive_shell.py
index 38094c0fe2..38318aa764 100644
--- a/dts/framework/remote_session/single_active_interactive_shell.py
+++ b/dts/framework/remote_session/single_active_interactive_shell.py
@@ -27,7 +27,11 @@
 from paramiko import Channel, channel  # type: ignore[import-untyped]
 from typing_extensions import Self
 
-from framework.exception import InteractiveCommandExecutionError
+from framework.exception import (
+InteractiveCommandExecutionError,
+InteractiveSSHSessionDeadError,
+InteractiveSSHTimeoutError,
+)
 from framework.logger import DTSLogger
 from framework.params import Params
 from framework.settings import SETTINGS
@@ -71,7 +75,10 @@ class SingleActiveInteractiveShell(ABC):
 
 #: Extra characters to add to the end of every command
 #: before sending them. This is often overrid

[PATCH v6 2/3] dts: Add missing docstring from XML-RPC server

2024-07-24 Thread jspewock
From: Jeremy Spewock 

When this XML-RPC server implementation was added, the docstring had to
be shortened in order to reduce the chances of this race condition being
encountered. Now that this race condition issue is resolved, the full
docstring can be restored.

Signed-off-by: Jeremy Spewock 
Reviewed-by: Juraj Linkeš 
Reviewed-by: Luca Vizzarro 
Reviewed-by: Nicholas Pratte 
---
 .../testbed_model/traffic_generator/scapy.py  | 46 ++-
 1 file changed, 45 insertions(+), 1 deletion(-)

diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py 
b/dts/framework/testbed_model/traffic_generator/scapy.py
index 7f0cc2bc18..08e1f4ae7e 100644
--- a/dts/framework/testbed_model/traffic_generator/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -128,9 +128,53 @@ def scapy_send_packets(xmlrpc_packets: 
list[xmlrpc.client.Binary], send_iface: s
 
 
 class QuittableXMLRPCServer(SimpleXMLRPCServer):
-"""Basic XML-RPC server.
+r"""Basic XML-RPC server.
 
 The server may be augmented by functions serializable by the 
:mod:`marshal` module.
+
+Example:
+::
+
+def hello_world():
+# to be sent to the XML-RPC server
+print("Hello World!")
+
+# start the XML-RPC server on the remote node
+# this is done by starting a Python shell on the remote node
+from framework.remote_session import PythonShell
+# the example assumes you're already connected to a tg_node
+session = tg_node.create_interactive_shell(PythonShell, timeout=5, 
privileged=True)
+
+# then importing the modules needed to run the server
+# and the modules for any functions later added to the server
+session.send_command("import xmlrpc")
+session.send_command("from xmlrpc.server import 
SimpleXMLRPCServer")
+
+# sending the source code of this class to the Python shell
+from xmlrpc.server import SimpleXMLRPCServer
+src = inspect.getsource(QuittableXMLRPCServer)
+src = "\n".join([l for l in src.splitlines() if not l.isspace() 
and l != ""])
+spacing = "\n" * 4
+session.send_command(spacing + src + spacing)
+
+# then starting the server with:
+command = "s = QuittableXMLRPCServer(('0.0.0.0', 
{listen_port}));s.serve_forever()"
+session.send_command(command, "XMLRPC OK")
+
+# now the server is running on the remote node and we can add 
functions to it
+# first connect to the server from the execution node
+import xmlrpc.client
+server_url = f"http://{tg_node.config.hostname}:8000";
+rpc_server_proxy = xmlrpc.client.ServerProxy(server_url)
+
+# get the function bytes to send
+import marshal
+function_bytes = marshal.dumps(hello_world.__code__)
+rpc_server_proxy.add_rpc_function(hello_world.__name__, 
function_bytes)
+
+# now we can execute the function on the server
+xmlrpc_binary_recv: xmlrpc.client.Binary = 
rpc_server_proxy.hello_world()
+print(str(xmlrpc_binary_recv))
 """
 
 def __init__(self, *args, **kwargs):
-- 
2.45.2



[PATCH v6 3/3] dts: Improve logging for interactive shells

2024-07-24 Thread jspewock
From: Jeremy Spewock 

The messages being logged by interactive shells currently are using the
same logger as the node they were created from. Because of this, when
sending interactive commands, the logs make no distinction between when
you are sending a command directly to the host and when you are using an
interactive shell on the host. This change adds names to interactive
shells so that they are able to use their own loggers with distinct
names.

Signed-off-by: Jeremy Spewock 
Reviewed-by: Juraj Linkeš 
Tested-by: Nicholas Pratte 
Reviewed-by: Nicholas Pratte 
Reviewed-by: Luca Vizzarro 
---
 dts/framework/remote_session/dpdk_shell.py   | 3 ++-
 .../remote_session/single_active_interactive_shell.py| 9 +++--
 dts/framework/remote_session/testpmd_shell.py| 2 ++
 dts/framework/testbed_model/traffic_generator/scapy.py   | 4 +++-
 4 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/dts/framework/remote_session/dpdk_shell.py 
b/dts/framework/remote_session/dpdk_shell.py
index 950c6ca670..c5f5c2d116 100644
--- a/dts/framework/remote_session/dpdk_shell.py
+++ b/dts/framework/remote_session/dpdk_shell.py
@@ -82,6 +82,7 @@ def __init__(
 ascending_cores: bool = True,
 append_prefix_timestamp: bool = True,
 app_params: EalParams = EalParams(),
+name: str | None = None,
 ) -> None:
 """Extends :meth:`~.interactive_shell.InteractiveShell.__init__`.
 
@@ -96,7 +97,7 @@ def __init__(
 append_prefix_timestamp,
 )
 
-super().__init__(node, privileged, timeout, app_params)
+super().__init__(node, privileged, timeout, app_params, name)
 
 def _update_real_path(self, path: PurePath) -> None:
 """Extends 
:meth:`~.interactive_shell.InteractiveShell._update_real_path`.
diff --git a/dts/framework/remote_session/single_active_interactive_shell.py 
b/dts/framework/remote_session/single_active_interactive_shell.py
index 38318aa764..77a4dcefdf 100644
--- a/dts/framework/remote_session/single_active_interactive_shell.py
+++ b/dts/framework/remote_session/single_active_interactive_shell.py
@@ -32,7 +32,7 @@
 InteractiveSSHSessionDeadError,
 InteractiveSSHTimeoutError,
 )
-from framework.logger import DTSLogger
+from framework.logger import DTSLogger, get_dts_logger
 from framework.params import Params
 from framework.settings import SETTINGS
 from framework.testbed_model.node import Node
@@ -92,6 +92,7 @@ def __init__(
 privileged: bool = False,
 timeout: float = SETTINGS.timeout,
 app_params: Params = Params(),
+name: str | None = None,
 ) -> None:
 """Create an SSH channel during initialization.
 
@@ -102,9 +103,13 @@ def __init__(
 shell. This timeout is for collecting output, so if reading 
from the buffer
 and no output is gathered within the timeout, an exception is 
thrown.
 app_params: The command line parameters to be passed to the 
application on startup.
+name: Name for the interactive shell to use for logging. This name 
will be appended to
+the name of the underlying node which it is running on.
 """
 self._node = node
-self._logger = node._logger
+if name is None:
+name = type(self).__name__
+self._logger = get_dts_logger(f"{node.name}.{name}")
 self._app_params = app_params
 self._privileged = privileged
 self._timeout = timeout
diff --git a/dts/framework/remote_session/testpmd_shell.py 
b/dts/framework/remote_session/testpmd_shell.py
index eda6eb320f..43e9f56517 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -604,6 +604,7 @@ def __init__(
 lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = 
LogicalCoreCount(),
 ascending_cores: bool = True,
 append_prefix_timestamp: bool = True,
+name: str | None = None,
 **app_params: Unpack[TestPmdParamsDict],
 ) -> None:
 """Overrides :meth:`~.dpdk_shell.DPDKShell.__init__`. Changes 
app_params to kwargs."""
@@ -615,6 +616,7 @@ def __init__(
 ascending_cores,
 append_prefix_timestamp,
 TestPmdParams(**app_params),
+name,
 )
 
 def start(self, verify: bool = True) -> None:
diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py 
b/dts/framework/testbed_model/traffic_generator/scapy.py
index 08e1f4ae7e..13fc1107aa 100644
--- a/dts/framework/testbed_model/traffic_generator/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -261,7 +261,9 @@ def __init__(self, tg_node: Node, config: 
ScapyTrafficGeneratorConfig):
 self._tg_node.config.os == OS.linux
 ), "Linux is the only supported OS for scapy traffic generation"
 
-self.session = PythonShell(self._tg_node, timeout=5, privileged=True)
+self.session 

[PATCH v4 0/3] Add packet dissector

2024-07-24 Thread Stephen Hemminger
While debugging TAP rte_flow discovered that test pmd verbose output
was confusing and unhelpful. Instead, made a simple dissector that
prints one line per packet like this in test-pmd with verbose level 4.

Seq#   TimePort:Que R Description
 1 0.00:0   R :: → ff02::16 ICMP 143
 2 0.00:0   R :: → ff02::1:ffbc:89e5 ICMP 135
 3 0.00:0   R :: → ff02::16 ICMP 143
...
38 0.3138682350:0   T fe80::7442:55ff:febc:89e5 → ff02::fb UDP 70 5353 
→ 5353
39 4.6351210710:0   R fe80::7442:55ff:febc:89e5 → ff02::2 ICMPv6 Router 
Solicitation
40 4.6351385350:0   T fe80::7442:55ff:febc:89e5 → ff02::2 ICMPv6 Router 
Solicitation

v4 - add direction flag to output
   - fix build on Windows

Stephen Hemminger (3):
  net: add new packet dissector
  test: add test for packet dissector
  test-pmd: add more packet verbose decode options

 app/test-pmd/cmdline_flow.c |   3 +-
 app/test-pmd/config.c   |  33 +-
 app/test-pmd/testpmd.h  |  11 +
 app/test-pmd/util.c |  77 +++-
 app/test/meson.build|   1 +
 app/test/test_dissect.c | 245 
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |   5 +-
 lib/net/meson.build |   2 +
 lib/net/rte_dissect.c   | 404 
 lib/net/rte_dissect.h   |  42 ++
 lib/net/version.map |   7 +
 11 files changed, 812 insertions(+), 18 deletions(-)
 create mode 100644 app/test/test_dissect.c
 create mode 100644 lib/net/rte_dissect.c
 create mode 100644 lib/net/rte_dissect.h

-- 
2.43.0



[PATCH v4 1/3] net: add new packet dissector

2024-07-24 Thread Stephen Hemminger
The function rte_dissect_mbuf is used to decode the contents
of an mbuf into a human-readable format similar to what tshark uses.

For now, handles IP, IPv6, TCP, UDP, ICMP and ARP.
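
As a rough usage sketch (the exact prototype is in rte_dissect.h and not shown
in full here; the argument order below is assumed from the internal helpers and
the test in patch 2/3):

    /* Hedged sketch: rte_dissect_mbuf() prototype assumed, not confirmed. */
    #include <stdio.h>
    #include <rte_mbuf.h>
    #include <rte_dissect.h>

    static void
    print_one_line(const struct rte_mbuf *m)
    {
        char line[1024];

        /* Decode the packet headers into one printable line; a dump
         * length of 0 is assumed to mean "no length limit". */
        rte_dissect_mbuf(line, sizeof(line), m, 0);
        printf("%s\n", line);
    }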

Signed-off-by: Stephen Hemminger 
---
 lib/net/meson.build   |   2 +
 lib/net/rte_dissect.c | 404 ++
 lib/net/rte_dissect.h |  42 +
 lib/net/version.map   |   7 +
 4 files changed, 455 insertions(+)
 create mode 100644 lib/net/rte_dissect.c
 create mode 100644 lib/net/rte_dissect.h

diff --git a/lib/net/meson.build b/lib/net/meson.build
index 0b69138949..48edf17ea3 100644
--- a/lib/net/meson.build
+++ b/lib/net/meson.build
@@ -2,6 +2,7 @@
 # Copyright(c) 2017-2020 Intel Corporation
 
 headers = files(
+'rte_dissect.h',
 'rte_ip.h',
 'rte_tcp.h',
 'rte_udp.h',
@@ -30,6 +31,7 @@ headers = files(
 
 sources = files(
 'rte_arp.c',
+'rte_dissect.c',
 'rte_ether.c',
 'rte_net.c',
 'rte_net_crc.c',
diff --git a/lib/net/rte_dissect.c b/lib/net/rte_dissect.c
new file mode 100644
index 00..acaa62ec4d
--- /dev/null
+++ b/lib/net/rte_dissect.c
@@ -0,0 +1,404 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Stephen Hemminger 
+ *
+ * Print packets in format similar to tshark.
+ * Output should be one line per mbuf
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+/* Forward declaration - Ethernet can be nested */
+static void
+dissect_eth(char *buf, size_t size, const struct rte_mbuf *mb,
+   uint32_t offset, uint32_t dump_len);
+
+/*
+ * Read data from segmented mbuf and put it into buf , but stop if would go 
past max length
+ * See rte_pktmbuf_read()
+ */
+static const void *
+dissect_read(const struct rte_mbuf *m, uint32_t offset, uint32_t len,
+void *buf, uint32_t dump_len)
+{
+   /* If this header would be past the requested length
+* then unwind back to end the string.
+*/
+   if (dump_len > 0 && offset + len > dump_len)
+   return NULL;
+
+   return rte_pktmbuf_read(m, offset, len, buf);
+}
+
+/*
+ * Print to string buffer and adjust result
+ * Returns true on success, false if buffer is exhausted.
+ */
+static __rte_format_printf(3, 4) bool
+dissect_print(char **buf, size_t *sz, const char *fmt, ...)
+{
+   va_list ap;
+   int cnt;
+
+   va_start(ap, fmt);
+   cnt = vsnprintf(*buf, *sz, fmt, ap);
+   va_end(ap);
+
+   /* error or string is full */
+   if (cnt < 0 || cnt >= (int)*sz) {
+   *sz = 0;
+   return false;
+   }
+
+   *buf += cnt;
+   *sz -= cnt;
+   return true;
+}
+
+static void
+dissect_arp(char *buf, size_t size, const struct rte_mbuf *mb, uint32_t 
offset, uint32_t dump_len)
+{
+   const struct rte_arp_hdr *arp;
+   struct rte_arp_hdr _arp;
+   char abuf[64];
+
+   arp = dissect_read(mb, offset, sizeof(_arp), &_arp, dump_len);
+   if (arp == NULL)
+   return;
+   offset += sizeof(_arp);
+
+   uint16_t ar_op = rte_be_to_cpu_16(arp->arp_opcode);
+   switch (ar_op) {
+   case RTE_ARP_OP_REQUEST:
+   inet_ntop(AF_INET, &arp->arp_data.arp_tip, abuf, sizeof(abuf));
+   if (!dissect_print(&buf, &size, "Who has %s? ", abuf))
+   return;
+
+   rte_ether_format_addr(abuf, sizeof(abuf), 
&arp->arp_data.arp_sha);
+   if (!dissect_print(&buf, &size, "Tell %s ", abuf))
+   return;
+
+   break;
+   case RTE_ARP_OP_REPLY:
+   inet_ntop(AF_INET, &arp->arp_data.arp_sip, abuf, sizeof(abuf));
+   if (!dissect_print(&buf, &size, "%s is at", abuf))
+   return;
+
+   rte_ether_format_addr(abuf, sizeof(abuf), 
&arp->arp_data.arp_sha);
+   if (!dissect_print(&buf, &size, "%s ", abuf))
+   return;
+   break;
+   case RTE_ARP_OP_INVREQUEST:
+   rte_ether_format_addr(abuf, sizeof(abuf), 
&arp->arp_data.arp_tha);
+   if (!dissect_print(&buf, &size, "Who is %s? ", abuf))
+   return;
+
+   rte_ether_format_addr(abuf, sizeof(abuf), 
&arp->arp_data.arp_sha);
+   if (!dissect_print(&buf, &size, "Tell %s ", abuf))
+   return;
+   break;
+
+   case RTE_ARP_OP_INVREPLY:
+   rte_ether_format_addr(abuf, sizeof(abuf), 
&arp->arp_data.arp_sha);
+   if (!dissect_print(&buf, &size, "%s is at ", abuf))
+   return;
+
+   inet_ntop(AF_INET, &arp->arp_data.arp_sip, abuf, sizeof(abuf));
+   if (!dissect_print(&buf, &size, "%s ", abuf))
+   return;
+   break;
+   default:
+   if (!dissect_print(&buf

[PATCH v4 2/3] test: add test for packet dissector

2024-07-24 Thread Stephen Hemminger
Some tests for new packet dissector.

Signed-off-by: Stephen Hemminger 
---
 app/test/meson.build|   1 +
 app/test/test_dissect.c | 245 
 2 files changed, 246 insertions(+)
 create mode 100644 app/test/test_dissect.c

diff --git a/app/test/meson.build b/app/test/meson.build
index e29258e6ec..9cd2051320 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -62,6 +62,7 @@ source_file_deps = {
 'test_debug.c': [],
 'test_devargs.c': ['kvargs'],
 'test_dispatcher.c': ['dispatcher'],
+'test_dissect.c': ['net'],
 'test_distributor.c': ['distributor'],
 'test_distributor_perf.c': ['distributor'],
 'test_dmadev.c': ['dmadev', 'bus_vdev'],
diff --git a/app/test/test_dissect.c b/app/test/test_dissect.c
new file mode 100644
index 00..2c79acf766
--- /dev/null
+++ b/app/test/test_dissect.c
@@ -0,0 +1,245 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2024 Stephen Hemminger 
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "test.h"
+
+#ifndef LINE_MAX
+#define LINE_MAX   2048
+#endif
+
+#define TOTAL_PACKETS  100
+#define PACKET_LEN 1000
+#define ETH_IP_UDP_VXLAN_SIZE (sizeof(struct rte_ether_hdr) + \
+  sizeof(struct rte_ipv4_hdr) + \
+  sizeof(struct rte_udp_hdr) + \
+  sizeof(struct rte_vxlan_hdr))
+
+
+static uint16_t port_id;
+static const char null_dev[] = "net_null0";
+
+static void
+add_header(struct rte_mbuf *mb, uint32_t plen,
+  rte_be16_t src_port, rte_be16_t dst_port)
+{
+   struct {
+   struct rte_ether_hdr eth;
+   struct rte_ipv4_hdr ip;
+   struct rte_udp_hdr udp;
+   } pkt = {
+   .eth = {
+   .dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+   .ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4),
+   },
+   .ip = {
+   .version_ihl = RTE_IPV4_VHL_DEF,
+   .time_to_live = 1,
+   .next_proto_id = IPPROTO_UDP,
+   .src_addr = rte_cpu_to_be_32(RTE_IPV4_LOOPBACK),
+   .dst_addr = rte_cpu_to_be_32(RTE_IPV4_BROADCAST),
+   },
+   .udp = {
+   .dst_port = dst_port,
+   .src_port = src_port,
+   },
+   };
+
+   rte_eth_random_addr(pkt.eth.src_addr.addr_bytes);
+
+   plen -= sizeof(struct rte_ether_hdr);
+   pkt.ip.total_length = rte_cpu_to_be_16(plen);
+   pkt.ip.hdr_checksum = rte_ipv4_cksum(&pkt.ip);
+
+   plen -= sizeof(struct rte_ipv4_hdr);
+   pkt.udp.src_port = rte_rand();
+   pkt.udp.dgram_len = rte_cpu_to_be_16(plen);
+
+   /* Copy header into mbuf */
+   memcpy(rte_pktmbuf_append(mb, sizeof(pkt)), &pkt, sizeof(pkt));
+}
+
+static void
+add_vxlan(struct rte_mbuf *mb, rte_be32_t vni)
+{
+   struct rte_vxlan_hdr *vxlan;
+
+   vxlan = (struct rte_vxlan_hdr *)rte_pktmbuf_append(mb, sizeof(*vxlan));
+   memset(vxlan, 0, sizeof(*vxlan));
+   vxlan->flag_i = 1;
+   vxlan->vx_vni = vni;
+}
+
+
+static void
+fill_data(struct rte_mbuf *mb, uint32_t len)
+{
+   uint32_t i;
+   char *ptr = rte_pktmbuf_append(mb, len);
+   char c = '!';
+
+   /* traditional barber pole pattern */
+   for (i = 0; i < len; i++) {
+   ptr[i] = c++;
+   if (c == 0x7f)
+   c = '!';
+   }
+}
+
+static void
+mbuf_prep(struct rte_mbuf *mb, uint8_t buf[], uint32_t buf_len)
+{
+   mb->buf_addr = buf;
+   rte_mbuf_iova_set(mb, (uintptr_t)buf);
+   mb->buf_len = buf_len;
+   rte_mbuf_refcnt_set(mb, 1);
+
+   /* set pool pointer to dummy value, test doesn't use it */
+   mb->pool = (void *)buf;
+
+   rte_pktmbuf_reset(mb);
+}
+
+static int
+test_setup(void)
+{
+   port_id = rte_eth_dev_count_avail();
+
+   /* Make a dummy null device to snoop on */
+   if (rte_vdev_init(null_dev, NULL) != 0) {
+   fprintf(stderr, "Failed to create vdev '%s'\n", null_dev);
+   goto fail;
+   }
+   return 0;
+
+fail:
+   rte_vdev_uninit(null_dev);
+   return -1;
+}
+
+static void
+test_cleanup(void)
+{
+   rte_vdev_uninit(null_dev);
+}
+
+
+static int
+test_simple(void)
+{
+   struct rte_mbuf mb;
+   uint8_t buf[RTE_MBUF_DEFAULT_BUF_SIZE];
+   uint32_t data_len = PACKET_LEN;
+   uint16_t src_port = rte_rand();
+   const uint16_t dst_port = rte_cpu_to_be_16(9); /* Discard port */
+   char obuf[LINE_MAX];
+
+   /* make a dummy packet */
+   mbuf_prep(&mb, buf, sizeof(buf));
+   add_header(&mb, data_len, src_port, dst_port);
+   fill_data(&mb, data_len - mb.data_off);
+
+   rte_dissect_mbuf(obuf, 

[PATCH v4 3/3] test-pmd: add more packet verbose decode options

2024-07-24 Thread Stephen Hemminger
The existing verbose levels 1..3 provide a messy multi-line
output per packet. I found this unhelpful when diagnosing many
types of problems like packet flow.

This patch keeps the previous levels and adds two new levels:
4: one line per packet is printed in a format resembling
   tshark output, with addresses and protocol info.

5: dump packet in hex.
   Useful if the driver is messing up the data.

Signed-off-by: Stephen Hemminger 
---
 app/test-pmd/cmdline_flow.c |  3 +-
 app/test-pmd/config.c   | 33 ++---
 app/test-pmd/testpmd.h  | 11 +++
 app/test-pmd/util.c | 77 +++--
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  5 +-
 5 files changed, 111 insertions(+), 18 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index d04280eb3e..a010fcf61a 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -14143,7 +14143,8 @@ cmd_set_raw_parsed(const struct buffer *in)
upper_layer = proto;
}
}
-   if (verbose_level & 0x1)
+
+   if (verbose_level > 0)
printf("total data size is %zu\n", (*total_size));
RTE_ASSERT((*total_size) <= ACTION_RAW_ENCAP_MAX_DATA);
memmove(data, (data_tail - (*total_size)), *total_size);
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 6f0beafa27..b5b5f3b464 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -6246,26 +6246,37 @@ configure_rxtx_dump_callbacks(uint16_t verbose)
return;
 #endif
 
-   RTE_ETH_FOREACH_DEV(portid)
-   {
-   if (verbose == 1 || verbose > 2)
+   RTE_ETH_FOREACH_DEV(portid) {
+   switch (verbose) {
+   case VERBOSE_OFF:
+   remove_rx_dump_callbacks(portid);
+   remove_tx_dump_callbacks(portid);
+   break;
+   case VERBOSE_RX:
add_rx_dump_callbacks(portid);
-   else
+   remove_tx_dump_callbacks(portid);
+   break;
+   case VERBOSE_TX:
+   add_tx_dump_callbacks(portid);
remove_rx_dump_callbacks(portid);
-   if (verbose >= 2)
+   break;
+   default:
+   add_rx_dump_callbacks(portid);
add_tx_dump_callbacks(portid);
-   else
-   remove_tx_dump_callbacks(portid);
+   }
}
 }
 
 void
 set_verbose_level(uint16_t vb_level)
 {
-   printf("Change verbose level from %u to %u\n",
-  (unsigned int) verbose_level, (unsigned int) vb_level);
-   verbose_level = vb_level;
-   configure_rxtx_dump_callbacks(verbose_level);
+   if (vb_level < VERBOSE_MAX) {
+   printf("Change verbose level from %u to %u\n", verbose_level, 
vb_level);
+   verbose_level = vb_level;
+   configure_rxtx_dump_callbacks(verbose_level);
+   } else {
+   fprintf(stderr, "Verbose level %u is out of range\n", vb_level);
+   }
 }
 
 void
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 9facd7f281..3d7a2b6dac 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -489,6 +489,17 @@ enum dcb_mode_enable
 
 extern uint8_t xstats_hide_zero; /**< Hide zero values for xstats display */
 
+enum verbose_mode {
+   VERBOSE_OFF = 0,
+   VERBOSE_RX,
+   VERBOSE_TX,
+   VERBOSE_BOTH,
+   VERBOSE_DISSECT,
+   VERBOSE_HEX,
+   VERBOSE_MAX
+};
+
+
 /* globals used for configuration */
 extern uint8_t record_core_cycles; /**< Enables measurement of CPU cycles */
 extern uint8_t record_burst_stats; /**< Enables display of RX and TX bursts */
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index bf9b639d95..f277e7f035 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -5,9 +5,11 @@
 
 #include 
 
+#include 
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -16,6 +18,7 @@
 #include "testpmd.h"
 
 #define MAX_STRING_LEN 8192
+#define MAX_DUMP_LEN   1024
 
 #define MKDUMPSTR(buf, buf_size, cur_len, ...) \
 do { \
@@ -67,9 +70,10 @@ get_timestamp(const struct rte_mbuf *mbuf)
timestamp_dynfield_offset, rte_mbuf_timestamp_t *);
 }
 
-static inline void
-dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
- uint16_t nb_pkts, int is_rx)
+/* More verbose older style packet decode */
+static void
+dump_pkt_verbose(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
+uint16_t nb_pkts, int is_rx)
 {
struct rte_mbuf  *mb;
const struct rte_ether_hdr *eth_hdr;
@@ -90,8 +94,6 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct 
rte_mbuf *pkts[],
size_t cur_len = 0;
uint64_t 

Re: [PATCH] eal: add support for TRNG with Arm RNG feature

2024-07-24 Thread Mattias Rönnblom

On 2024-07-24 18:16, Stephen Hemminger wrote:




   > +/**
   > + * Get a true random value.
   > + *
   > + * The generator is cryptographically secure.

If you want to extend  with a cryptographically secure
random number generator, that's fine.

To have an API that's only available on certain ARM CPUs is not.

NAK

A new function should be called something with "secure", rather than
"true" (which is a bit silly, since we might well live in a completely
deterministic universe). "secure" would more clearly communicate the
intent, and also doesn't imply any particular implementation.


Agree with Mattias. What constitutes a secure random number generator
is a matter of much debate. Most HW random generators take diode
(Schottky) noise and apply transforms to it to get something uniform.

If a key-generation type API was added, why not just use the existing and more
researched kernel getrandom()?
   


Ideally, you want to avoid system calls on lcore workers doing packet
processing. If you have to do system calls (which I believe is the case
here), it's better to a simple call, not so often.

getentropy() seems to need about 800 core clock cycles on my x86_64, on
average. (rte_rand() needs ~11 cc/call.) 800 cc is not too horrible, but
system calls tend to have some pretty bad tail latencies.

To improve efficiency, one could do a getentropy() on a relatively large
buffer, and cache the result on a per-lcore basis, amortizing the system
call overhead over many calls.
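
A minimal sketch of that amortization idea, assuming POSIX getentropy() and a
per-thread cache (illustrative only, not a proposed DPDK API):

    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    #define ENTROPY_CACHE_SIZE 256 /* getentropy() caps one request at 256 bytes */

    static __thread uint8_t entropy_cache[ENTROPY_CACHE_SIZE];
    static __thread size_t entropy_avail;

    /* Fill 'out' with cached entropy, refilling the per-thread cache as needed,
     * so the system call cost is spread over many small requests. */
    static int
    secure_random_bytes(void *out, size_t len)
    {
        uint8_t *dst = out;

        while (len > 0) {
            if (entropy_avail == 0) {
                if (getentropy(entropy_cache, sizeof(entropy_cache)) != 0)
                    return -1;
                entropy_avail = sizeof(entropy_cache);
            }
            size_t chunk = len < entropy_avail ? len : entropy_avail;
            memcpy(dst, entropy_cache + (sizeof(entropy_cache) - entropy_avail), chunk);
            entropy_avail -= chunk;
            dst += chunk;
            len -= chunk;
        }
        return 0;
    }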

You still have the tail latency issue to deal with. We could have a
control thread providing entropy for the lcores, but that seems like
massive overkill.



Getrandom is a vsyscall on current kernels, and it manages use of entropy across
multiple sources. If you are doing lots of key generation, you don't want to
hit the hardware every time.

https://lwn.net/Articles/974468/




If I understand things correctly, the getrandom() vDSO support was 
mainlined *today*, so you need to be current indeed to have a vDSO 
getrandom(). :)


The above benchmark (rand_perf_autotest with rte_rand() implemented with 
getentropy()) was run on Linux 5.15 and glibc 2.35, so a regular system 
call was used.


(getentropy() delegates to getrandom(), so the performance is the same.)


[PATCH v5 0/3] dts: refactored dynamic config test suite

2024-07-24 Thread Dean Marx
Refactored dynamic configuration suite to be compatible with context
manager.

Dean Marx (3):
  dts: add multicast set function to shell
  dts: dynamic config conf schema
  dts: dynamic config test suite

 dts/framework/config/conf_yaml_schema.json|   3 +-
 dts/framework/remote_session/testpmd_shell.py |  46 ++
 dts/tests/TestSuite_dynamic_config.py | 145 ++
 3 files changed, 193 insertions(+), 1 deletion(-)
 create mode 100644 dts/tests/TestSuite_dynamic_config.py

-- 
2.44.0



[PATCH v5 1/3] dts: add multicast set function to shell

2024-07-24 Thread Dean Marx
added set multicast function for changing allmulticast mode within testpmd.

Signed-off-by: Dean Marx 
---
 dts/framework/remote_session/testpmd_shell.py | 46 +++
 1 file changed, 46 insertions(+)

diff --git a/dts/framework/remote_session/testpmd_shell.py 
b/dts/framework/remote_session/testpmd_shell.py
index eda6eb320f..b7af2adb14 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -804,6 +804,52 @@ def show_port_stats(self, port_id: int) -> 
TestPmdPortStats:
 
 return TestPmdPortStats.parse(output)
 
+def set_promisc(self, port: int, on: bool, verify: bool = True):
+"""Turns promiscuous mode on/off for the specified port.
+
+Args:
+port: Port number to use, should be within 0-32.
+on: If :data:`True`, turn promisc mode on, otherwise turn off.
+verify: If :data:`True` an additional command will be sent to 
verify that promisc mode
+is properly set. Defaults to :data:`True`.
+
+Raises:
+InteractiveCommandExecutionError: If `verify` is :data:`True` and 
promisc mode
+is not correctly set.
+"""
+promisc_output = self.send_command(f"set promisc {port} {'on' if on 
else 'off'}")
+if verify:
+stats = self.show_port_info(port_id=port)
+if on ^ stats.is_promiscuous_mode_enabled:
+self._logger.debug(f"Failed to set promisc mode on port 
{port}: \n{promisc_output}")
+raise InteractiveCommandExecutionError(
+f"Testpmd failed to set promisc mode on port {port}."
+)
+
+def set_multicast_all(self, on: bool, verify: bool = True):
+"""Turns multicast mode on/off for the specified port.
+
+Args:
+on: If :data:`True`, turns multicast mode on, otherwise turns off.
+verify: If :data:`True` an additional command will be sent to 
verify
+that multicast mode is properly set. Defaults to :data:`True`.
+
+Raises:
+InteractiveCommandExecutionError: If `verify` is :data:`True` and 
multicast
+mode is not properly set.
+"""
+multicast_output = self.send_command(f"set allmulti all {'on' if on 
else 'off'}")
+if verify:
+stats0 = self.show_port_info(port_id=0)
+stats1 = self.show_port_info(port_id=1)
+if on ^ (stats0.is_allmulticast_mode_enabled and 
stats1.is_allmulticast_mode_enabled):
+self._logger.debug(
+f"Failed to set multicast mode on all ports.: 
\n{multicast_output}"
+)
+raise InteractiveCommandExecutionError(
+"Testpmd failed to set multicast mode on all ports."
+)
+
 def _close(self) -> None:
 """Overrides :meth:`~.interactive_shell.close`."""
 self.stop()
-- 
2.44.0



[PATCH v5 2/3] dts: dynamic config conf schema

2024-07-24 Thread Dean Marx
configuration schema to run dynamic configuration test suite.

Signed-off-by: Dean Marx 
---
 dts/framework/config/conf_yaml_schema.json | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/dts/framework/config/conf_yaml_schema.json 
b/dts/framework/config/conf_yaml_schema.json
index f02a310bb5..d7b4afed7d 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -187,7 +187,8 @@
   "enum": [
 "hello_world",
 "os_udp",
-"pmd_buffer_scatter"
+"pmd_buffer_scatter",
+"dynamic_config"
   ]
 },
 "test_target": {
-- 
2.44.0



[PATCH v5 3/3] dts: dynamic config test suite

2024-07-24 Thread Dean Marx
Suite for testing the ability of the Poll Mode Driver to turn promiscuous
mode on/off and allmulticast mode on/off, and to show the expected behavior
when sending packets with known, unknown, broadcast, and multicast
destination MAC addresses.

Depends-on: patch-1142113 ("add send_packets to test suites and rework
packet addressing")

Signed-off-by: Dean Marx 
---
 dts/tests/TestSuite_dynamic_config.py | 145 ++
 1 file changed, 145 insertions(+)
 create mode 100644 dts/tests/TestSuite_dynamic_config.py

diff --git a/dts/tests/TestSuite_dynamic_config.py 
b/dts/tests/TestSuite_dynamic_config.py
new file mode 100644
index 00..55e2a2ba12
--- /dev/null
+++ b/dts/tests/TestSuite_dynamic_config.py
@@ -0,0 +1,145 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2024 University of New Hampshire
+
+"""Dynamic configuration capabilities test suite.
+
+This suite checks that it is possible to change the configuration of a port
+dynamically. The Poll Mode Driver should be able to enable and disable
+promiscuous mode on each port, as well as check the Rx and Tx packets of
+each port. Promiscuous mode in networking passes all traffic a NIC receives
+to the CPU, rather than just frames with matching MAC addresses. Each test
+case sends a packet with a matching address, and one with an unknown address,
+to ensure this behavior is shown.
+
+If packets should be received and forwarded, or received and not forwarded,
+depending on the configuration, the port info should match the expected 
behavior.
+"""
+
+from scapy.layers.inet import IP  # type: ignore[import-untyped]
+from scapy.layers.l2 import Ether  # type: ignore[import-untyped]
+from scapy.packet import Raw  # type: ignore[import-untyped]
+
+from framework.params.testpmd import SimpleForwardingModes
+from framework.remote_session.testpmd_shell import TestPmdShell
+from framework.test_suite import TestSuite
+
+
+class TestDynamicConfig(TestSuite):
+"""Dynamic config suite.
+
+Use the show port commands to see the MAC address and promisc mode status
+of the Rx port on the DUT. The suite will check the Rx and Tx packets
+of each port after configuring promiscuous, multicast, and default mode
+on the DUT to verify the expected behavior. It consists of four test cases:
+
+1. Default mode: verify packets are received and forwarded.
+2. Disable promiscuous mode: verify that packets are received
+only for the packet with destination address matching the port address.
+3. Disable promiscuous mode broadcast: verify that packets with destination
+MAC address not matching the port are received and not forwarded, and 
verify
+that broadcast packets are received and forwarded.
+4. Disable promiscuous mode multicast: verify that packets with destination
+MAC address not matching the port are received and not forwarded, and 
verify
+that multicast packets are received and forwarded.
+"""
+
+def set_up_suite(self) -> None:
+"""Set up the test suite.
+
+Setup:
+Verify that at least two ports are open for session.
+"""
+self.verify(len(self._port_links) > 1, "Not enough ports")
+
+def send_packet_and_verify(self, should_receive: bool, mac_address: str) 
-> None:
+"""Generate, send and verify packets.
+
+Generate a packet and send to the DUT, verify that packet is forwarded 
from DUT to
+traffic generator if that behavior is expected.
+
+Args:
+should_receive: Indicate whether the packet should be received.
+mac_address: Destination MAC address to generate in packet.
+"""
+packet = Ether(dst=mac_address) / IP() / Raw(load="x")
+received = self.send_packet_and_capture(packet)
+contains_packet = any(
+packet.haslayer(Raw) and b"x" in packet.load for packet in 
received
+)
+self.verify(
+should_receive == contains_packet,
+f"Packet was {'dropped' if should_receive else 'received'}",
+)
+
+def disable_promisc_setup(self, testpmd: TestPmdShell, port_id: int) -> 
TestPmdShell:
+"""Sets up testpmd shell config for cases where promisc mode is 
disabled.
+
+Args:
+testpmd: Testpmd session that is being configured.
+port_id: Port number to disable promisc mode on.
+
+Returns:
+TestPmdShell: interactive testpmd shell object.
+"""
+testpmd.start()
+testpmd.set_promisc(port=port_id, on=False)
+testpmd.set_forward_mode(SimpleForwardingModes.io)
+return testpmd
+
+def test_default_mode(self) -> None:
+"""Tests default configuration.
+
+Creates a testpmd shell, verifies that promiscuous mode is enabled by 
default,
+and sends two packets; one matching source MAC address and one unknown.
+Verifies that both are received.
+"""
+with TestPmdShell(node=self.sut_nod

Re: [PATCH] eal: add support for TRNG with Arm RNG feature

2024-07-24 Thread Stephen Hemminger
On Wed, 24 Jul 2024 21:14:30 +0200
Mattias Rönnblom  wrote:

> >> Ideally, you want to avoid system calls on lcore workers doing packet
> >> processing. If you have to do system calls (which I believe is the case
> >> here), it's better to a simple call, not so often.
> >>
> >> getentropy() seems to need about 800 core clock cycles on my x86_64, on
> >> average. (rte_rand() needs ~11 cc/call.) 800 cc is not too horrible, but
> >> system calls tend to have some pretty bad tail latencies.
> >>
> >> To improve efficiency, one could do a getentropy() on a relatively large
> >> buffer, and cache the result on a per-lcore basis, amortizing the system
> >> call overhead over many calls.
> >>
> >> You still have the tail latency issue to deal with. We could have a
> >> control thread providing entropy for the lcores, but that seems like
> >> massive overkill.  
> > 
> > 
> > Getrandom is a vsyscall on current kernels, and it manages use of entropy 
> > across
> > multiple sources. If you are doing lots of key generation, you don't want to
> > hit the hardware every time.
> > 
> > https://lwn.net/Articles/974468/
> > 
> >   
> 
> If I understand things correctly, the getrandom() vDSO support was 
> mainlined *today*, so you need to be current indeed to have a vDSO 
> getrandom(). :)

Yes, it is headed for 6.11, but doubt that any reasonable workload
is going to be constrained by crypto key generation.

> 
> The above benchmark (rand_perf_autotest with rte_rand() implemented with 
> getentropy()) was run on Linux 5.15 and glibc 2.35, so a regular system 
> call was used.
> 
> (getentropy() delegates to getrandom(), so the performance is the same.)

I would trust the upstream kernel support for secure random more than
anything DPDK could develop. As soon as we get deeper into crypto it
opens up a whole new security domain and attack surface.



RE: [EXTERNAL] [PATCH v2] examples/ipsec-secgw: fix SA salt endianness problem

2024-07-24 Thread Akhil Goyal
> On 24-Jul-24 2:04 PM, Akhil Goyal wrote:
> >> On 24-Jul-24 12:20 PM, Akhil Goyal wrote:
>  On 23-Jul-24 5:57 PM, Akhil Goyal wrote:
> >> Hi all,
> >>
> >> This patch breaks ipsec tests with ipsec-secgw:
> >>
> >>
> >> ./examples/ipsec-secgw/test/run_test.sh -4 trs_aesctr_sha1
> >> ...
> >> ERROR: ./examples/ipsec-secgw/test/linux_test.sh failed for
>  dst=192.168.31.14,
> >> sz=1
> >> test IPv4 trs_aesctr_sha1 finished with status 1
> >> ERROR  test trs_aesctr_sha1 FAILED
> >>
> > The patch seems to be correct.
> > Please check endianness in the PMD you are testing.
>  In my opinion salt should not be affected by endianness and it should be
>  used as it is in the key parameter. I think the patch is wrong to make
>  it CPU endianness dependent before being passed to the PMDs, any PMD
>  that needs the endianness swapped should do it in the PMD code. Indeed
>  it's passed around as a 32 bit integer but it's not used as such, and
>  when it's actually used it should be evaluated as a byte array.
> 
> >>> As per the rfc, it should be treated as byte order(i.e. big endian).
> >>> But here the problem is we treat it as uint32_t which makes it CPU endian
> >> when stored in ipsec_sa struct.
> >>> The keys are stored as an array of uint8_t, so keys are stored in byte
> order(Big
> >> endian).
> >>> So either we save salt as "uint8_t salt[4]" or do a conversion of 
> >>> cpu_to_be
> >>> So that when it is stored in PMD/HW, and we convert it from uint32_t to
> uint_8
> >> *, there wont be issue.
> >>
> >> RFC treats it as a "four octet value" - there is no endianness until
> >> it's treated like an integer, which it never is. Even if it code it's
> >> being stored and passed as an unsigned 32bit integer it is never
> >> evaluated as such so its endianness doesn't matter.
> > The endianness matters the moment it is stored as uint32_t in ipsec_sa
> > It means the value is stored in CPU endianness in that integer unless it is
> specified.
> 
> What matters is that the four byte value in the key ends up in the
> memory in the same order, and that was always the case before the patch,
> regardless of the endianness of the CPU because load and store
> operations are not affected by endianness. With the patch the same four
> bytes from the configuration file will be stored differently in memory
> depending on the CPU. There is no need to fix the endianness of the
> salt, just as there is no need to fix the byte order of the key itself.
> 

When a uint32_t is used, it is not clear whether the value is in BE or LE;
that is where the confusion comes from. To any new reader, uint32_t implies
CPU byte order. This patch tried to address that, but it failed to cover all
the cases. So my simple suggestion is: instead of fixing the byte order in
every place, let's just change ipsec_sa->salt from uint32_t to rte_be32_t and
revert this patch. That will make things clear. I am also suggesting
converting the salt from uint32_t to rte_be32_t in the library in the next
release. This change does not swap anything; it only makes it explicitly
clear that the salt is in BE.
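
For illustration only (function name hypothetical; the typedef mirrors what
rte_byteorder.h provides and is restated here as an assumption), the byte-copy
handling stays endian-neutral:

    #include <stdint.h>
    #include <string.h>

    typedef uint32_t rte_be32_t; /* assumption: stand-in for DPDK's typedef */

    /* Copy the four salt octets from the end of the keying material.
     * A byte copy preserves the big-endian (wire) order on both LE and BE
     * CPUs, so no rte_cpu_to_be_32()/rte_be_to_cpu_32() swap is needed;
     * the rte_be32_t type only documents that fact. */
    static rte_be32_t
    salt_from_key(const uint8_t key_tail[4])
    {
        rte_be32_t salt;

        memcpy(&salt, key_tail, sizeof(salt));
        return salt;
    }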

> >
> > Now looking at the code again, I see the patch is incomplete for the case of
> lookaside crypto
> > Where the salt is copied as cnt_blk in the mbuf priv without conversion.
> >
> > So, this patch can be reverted and a simple fix can be added to mark 
> > ipsec_sa->
> salt as rte_be32_t
> > diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
> > index a83fd2283b..1fe6b97168 100644
> > --- a/examples/ipsec-secgw/ipsec.h
> > +++ b/examples/ipsec-secgw/ipsec.h
> > @@ -117,7 +117,7 @@ struct __rte_cache_aligned ipsec_sa {
> >  uint32_t spi;
> >  struct cdev_qp *cqp[RTE_MAX_LCORE];
> >  uint64_t seq;
> > -   uint32_t salt;
> > +   rte_be32_t salt;
> >  uint32_t fallback_sessions;
> >  enum rte_crypto_cipher_algorithm cipher_algo;
> >  enum rte_crypto_auth_algorithm auth_algo;
> >
> > Can you verify and send the patch?
> > And this may be updated in cryptodev and security lib as well in next 
> > release.
> >
> >
> >> I agree that we should have it everywhere as "uint8_t salt[4]" but that
> >> implies API changes and it doesn't change how the bytes are stored, so
> >> the patch will still be wrong.
> >>
> >>
> >> On 03/07/2024 18:58, Akhil Goyal wrote:
> >>
> >>
> >>
> >>
> >>
> >>-Original Message-
> >>From: Akhil Goyal 
> >> 
> >>Sent: Friday, March 15, 2024 12:42 AM
> >>To: Akhil Goyal 
> >>  ; Chaoyong He
> >>
> >>  ; dev@dpdk.org
> >> 
> >>Cc: oss-driv...@corigine.com  >> driv...@corigine.com> ; Shihong Wang 
> >> 

Re: [PATCH] eal: add support for TRNG with Arm RNG feature

2024-07-24 Thread Mattias Rönnblom

On 2024-07-24 22:02, Stephen Hemminger wrote:

On Wed, 24 Jul 2024 21:14:30 +0200
Mattias Rönnblom  wrote:


Ideally, you want to avoid system calls on lcore workers doing packet
processing. If you have to do system calls (which I believe is the case
here), it's better to a simple call, not so often.

getentropy() seems to need about 800 core clock cycles on my x86_64, on
average. (rte_rand() needs ~11 cc/call.) 800 cc is not too horrible, but
system calls tend to have some pretty bad tail latencies.

To improve efficiency, one could do a getentropy() on a relatively large
buffer, and cache the result on a per-lcore basis, amortizing the system
call overhead over many calls.

You still have the tail latency issue to deal with. We could have a
control thread providing entropy for the lcores, but that seems like
massive overkill.



Getrandom is a vsyscall on current kernels, and it manages use of entropy across
multiple sources. If you are doing lots of key generation, you don't want to
hit the hardware every time.

https://lwn.net/Articles/974468/

   


If I understand things correctly, the getrandom() vDSO support was
mainlined *today*, so you need to be current indeed to have a vDSO
getrandom(). :)


Yes, it is headed for 6.11, but doubt that any reasonable workload
is going to be constrained by crypto key generation.



For certain *very* latency-sensitive applications the main concern with 
a non-vDSO system call would be jitter, and not necessarily average 
latency (i.e., a throughput degradation).




The above benchmark (rand_perf_autotest with rte_rand() implemented with
getentropy()) was run on Linux 5.15 and glibc 2.35, so a regular system
call was used.

(getentropy() delegates to getrandom(), so the performance is the same.)


I would trust the upstream kernel support for secure random more than
anything DPDK could develop. As soon as we get deeper into crypto it
opens up a whole new security domain and attack surface.



I much agree here.

What potentially would be useful is an EAL-level OS wrapper. So 
getrandom() for UNIX-like OSes, and something else for Windows. In 
addition, you could make larger getrandom() calls to shave off some 
cycles on the average (at least for the non-vDSO case).
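
A rough sketch of what such a wrapper could look like (the function name and
the Windows branch are hypothetical, not an existing DPDK API):

    #include <stddef.h>

    #ifdef RTE_EXEC_ENV_WINDOWS
    #include <windows.h>
    #include <bcrypt.h>

    static int
    eal_secure_random(void *buf, size_t len)
    {
        /* BCRYPT_USE_SYSTEM_PREFERRED_RNG uses the system RNG without an
         * explicit algorithm handle; 0 (STATUS_SUCCESS) means success. */
        return BCryptGenRandom(NULL, (PUCHAR)buf, (ULONG)len,
                BCRYPT_USE_SYSTEM_PREFERRED_RNG) == 0 ? 0 : -1;
    }
    #else
    #include <sys/types.h>
    #include <sys/random.h>

    static int
    eal_secure_random(void *buf, size_t len)
    {
        unsigned char *p = buf;

        /* getrandom() may return fewer bytes than requested; loop until done. */
        while (len > 0) {
            ssize_t n = getrandom(p, len, 0);
            if (n < 0)
                return -1;
            p += n;
            len -= (size_t)n;
        }
        return 0;
    }
    #endif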


It seems to me we should defer the introduction of anything like that 
until a) it's needed by a DPDK library, or b) someone on the application 
side is asking for it.