[DPDK/core Bug 1501] rte_cpu_get_flag_enabled() segmentation fault

2024-07-26 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1501

Bug ID: 1501
   Summary: rte_cpu_get_flag_enabled() segmentation fault
   Product: DPDK
   Version: 21.11
  Hardware: All
OS: All
Status: UNCONFIRMED
  Severity: normal
  Priority: Normal
 Component: core
  Assignee: dev@dpdk.org
  Reporter: seyhmus.kar...@karel.com.tr
  Target Milestone: ---

I create an rte_fib structure for routing purposes, and I free and recreate it
periodically to update the routes. rte_fib_create() uses
rte_cpu_get_flag_enabled() to check whether my CPU has AVX512F and selects the
lookup function accordingly. My CPU does not have AVX512F, so
rte_cpu_get_flag_enabled() returns 0 as expected and rte_fib_create() selects
the scalar lookup function. However, at some point rte_cpu_get_flag_enabled()
causes a segmentation fault (SIGILL), although it runs without any problem
until that moment.
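
For reference, a minimal sketch of the create/free/recreate pattern described
above (the rte_fib_conf values and the rebuild_fib() wrapper are placeholders,
not taken from the reporter's application):

#include <stddef.h>
#include <rte_fib.h>

static struct rte_fib *
rebuild_fib(struct rte_fib *old)
{
    struct rte_fib_conf conf = {
        .type = RTE_FIB_DIR24_8,
        .default_nh = 0,
        .max_routes = 1 << 16,
        .dir24_8 = {
            .nh_sz = RTE_FIB_DIR24_8_4B,
            .num_tbl8 = 1 << 12,
        },
    };

    /* Periodically free and recreate the FIB to refresh the routes. */
    if (old != NULL)
        rte_fib_free(old);
    /* As described above, rte_fib_create() calls rte_cpu_get_flag_enabled()
     * to check for AVX512F when selecting a lookup function; the reported
     * crash happens inside that check after many such rebuild cycles. */
    return rte_fib_create("routes", 0 /* NUMA socket */, &conf);
}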

-- 
You are receiving this mail because:
You are the assignee for the bug.

[PATCH V2] doc: add tested Intel platforms with Intel NICs

2024-07-26 Thread Lingli Chen
Add tested Intel platforms with Intel NICs to v24.07 release note.

Signed-off-by: Lingli Chen 
---
 doc/guides/rel_notes/release_24_07.rst | 137 +
 1 file changed, 137 insertions(+)

diff --git a/doc/guides/rel_notes/release_24_07.rst b/doc/guides/rel_notes/release_24_07.rst
index eb2ed1a55f..6b0166fa95 100644
--- a/doc/guides/rel_notes/release_24_07.rst
+++ b/doc/guides/rel_notes/release_24_07.rst
@@ -286,3 +286,140 @@ Tested Platforms
This section is a comment. Do not overwrite or remove it.
Also, make sure to start the actual text at the margin.
===
+
+* Intel\ |reg| platforms with Intel\ |reg| NICs combinations
+
+  * CPU
+
+* Intel Atom\ |reg| P5342 processor
+* Intel\ |reg| Atom\ |trade| CPU C3758 @ 2.20GHz
+* Intel\ |reg| Xeon\ |reg| CPU D-1553N @ 2.30GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2699 v4 @ 2.20GHz
+* Intel\ |reg| Xeon\ |reg| D-1747NTE CPU @ 2.50GHz
+* Intel\ |reg| Xeon\ |reg| D-2796NT CPU @ 2.00GHz
+* Intel\ |reg| Xeon\ |reg| Gold 6139 CPU @ 2.30GHz
+* Intel\ |reg| Xeon\ |reg| Gold 6140M CPU @ 2.30GHz
+* Intel\ |reg| Xeon\ |reg| Gold 6252N CPU @ 2.30GHz
+* Intel\ |reg| Xeon\ |reg| Gold 6348 CPU @ 2.60GHz
+* Intel\ |reg| Xeon\ |reg| Platinum 8180 CPU @ 2.50GHz
+* Intel\ |reg| Xeon\ |reg| Platinum 8280M CPU @ 2.70GHz
+* Intel\ |reg| Xeon\ |reg| Platinum 8380 CPU @ 2.30GHz
+* Intel\ |reg| Xeon\ |reg| Platinum 8468H
+* Intel\ |reg| Xeon\ |reg| Platinum 8490H
+
+  * OS:
+
+* CBL Mariner 2.0
+* Fedora 40
+* FreeBSD 14.0
+* OpenAnolis OS 8.8
+* openEuler 22.03 (LTS-SP3)
+* Red Hat Enterprise Linux Server release 9.0
+* Red Hat Enterprise Linux Server release 9.4
+* Ubuntu 22.04.3
+* Ubuntu 24.04
+
+  * NICs:
+
+* Intel\ |reg| Ethernet Controller E810-C for SFP (4x25G)
+
+  * Firmware version: 4.50 0x8001d8b5 1.3597.0
+  * Device id (pf/vf): 8086:1593 / 8086:1889
+  * Driver version(out-tree): 1.14.11 (ice)
+  * Driver version(in-tree): 6.8.0-31-generic (Ubuntu24.04) /
+5.14.0-427.13.1.el9_4.x86_64+rt (RHEL9.4) (ice)
+  * OS Default DDP: 1.3.36.0
+  * COMMS DDP: 1.3.46.0
+  * Wireless Edge DDP: 1.3.14.0
+
+* Intel\ |reg| Ethernet Controller E810-C for QSFP (2x100G)
+
+  * Firmware version: 4.50 0x8001d8b6 1.3597.0
+  * Device id (pf/vf): 8086:1592 / 8086:1889
+  * Driver version(out-tree): 1.14.11 (ice)
+  * Driver version(in-tree): 5.15.55.1-1.cm2-5464b22cac7+ (CBL Mariner 
2.0) (ice)
+  * OS Default DDP: 1.3.36.0
+  * COMMS DDP: 1.3.46.0
+  * Wireless Edge DDP: 1.3.14.0
+
+* Intel\ |reg| Ethernet Controller E810-XXV for SFP (2x25G)
+
+  * Firmware version: 4.50 0x8001d8c2 1.3597.0
+  * Device id (pf/vf): 8086:159b / 8086:1889
+  * Driver version: 1.14.11 (ice)
+  * OS Default DDP: 1.3.36.0
+  * COMMS DDP: 1.3.46.0
+
+* Intel\ |reg| Ethernet Connection E823-C for QSFP
+
+  * Firmware version: 3.39 0x8001db5f 1.3597.0
+  * Device id (pf/vf): 8086:188b / 8086:1889
+  * Driver version: 1.14.11 (ice)
+  * OS Default DDP: 1.3.36.0
+  * COMMS DDP: 1.3.46.0
+  * Wireless Edge DDP: 1.3.14.0
+
+* Intel\ |reg| Ethernet Connection E823-L for QSFP
+
+  * Firmware version: 3.39 0x8001da47 1.3534.0
+  * Device id (pf/vf): 8086:124c / 8086:1889
+  * Driver version: 1.14.11 (ice)
+  * OS Default DDP: 1.3.36.0
+  * COMMS DDP: 1.3.46.0
+  * Wireless Edge DDP: 1.3.14.0
+
+* Intel\ |reg| Ethernet Connection E822-L for backplane
+
+  * Firmware version: 3.39 0x8001d9b6 1.3353.0
+  * Device id (pf/vf): 8086:1897 / 8086:1889
+  * Driver version: 1.14.11 (ice)
+  * OS Default DDP: 1.3.36.0
+  * COMMS DDP: 1.3.46.0
+  * Wireless Edge DDP: 1.3.14.0
+
+* Intel\ |reg| 82599ES 10 Gigabit Ethernet Controller
+
+  * Firmware version: 0x000161bf
+  * Device id (pf/vf): 8086:10fb / 8086:10ed
+  * Driver version(out-tree): 5.20.9 (ixgbe)
+  * Driver version(in-tree): 6.8.0-31-generic (Ubuntu24.04) /
+5.14.0-427.13.1.el9_4.x86_64 (RHEL9.4)(ixgbe)
+
+* Intel\ |reg| Ethernet Converged Network Adapter X710-DA4 (4x10G)
+
+  * Firmware version: 9.50 0x8000f145 1.3597.0
+  * Device id (pf/vf): 8086:1572 / 8086:154c
+  * Driver version(out-tree): 2.25.9 (i40e)
+
+* Intel\ |reg| Corporation Ethernet Connection X722 for 10GbE SFP+ (2x10G)
+
+  * Firmware version: 6.50 0x80004216 1.3597.0
+  * Device id (pf/vf): 8086:37d0 / 8086:37cd
+  * Driver version(out-tree): 2.25.9 (i40e)
+  * Driver version(in-tree): 5.14.0-427.13.1.el9_4.x86_64 (RHEL9.4)(i40e)
+
+* Intel\ |reg| Ethernet Converged Network Adapter XXV710-DA2 (2x25G)
+
+  * Firmware version: 9.50 0x8000f167 1.3597.0
+  * Device id (pf/vf): 8086:158b / 8086:154c
+  * Driver version(out-tree): 2.25.9 (i40e)
+  * Driver version(in-tree): 6.8.0-31-

[V1] doc: announce deprecation of flow item VXLAN-GPE

2024-07-26 Thread Gavin Li
Adding the deprecation notice as reminder for removing
RTE_FLOW_ITEM_TYPE_VXLAN_GPE and its related structures,
eg. rte_vxlan_gpe_hdr, rte_flow_item_vxlan_gpe, etc.

The proposed time of the removal is DPDK release 25.11.

Signed-off-by: Gavin Li 
---
 doc/guides/rel_notes/deprecation.rst | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 6948641ff6..5c04f88557 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -115,6 +115,13 @@ Deprecation Notices
   The legacy actions should be removed
   once ``MODIFY_FIELD`` alternative is implemented in drivers.
 
+ * ethdev,net: The flow item ``RTE_FLOW_ITEM_TYPE_VXLAN_GPE`` is replaced with ``RTE_FLOW_ITEM_TYPE_VXLAN``.
+   The struct ``rte_flow_item_vxlan_gpe`` and its mask ``rte_flow_item_vxlan_gpe_mask`` are replaced with
+   ``rte_flow_item_vxlan`` and its mask ``rte_flow_item_vxlan_mask``.
+   The item ``RTE_FLOW_ITEM_TYPE_VXLAN_GPE``, the struct ``rte_flow_item_vxlan_gpe``, its mask ``rte_flow_item_vxlan_gpe_mask``,
+   and the header struct ``rte_vxlan_gpe_hdr`` with the macro ``RTE_ETHER_VXLAN_GPE_HLEN``
+   will be removed in DPDK 25.11.
+
 * cryptodev: The function ``rte_cryptodev_cb_fn`` will be updated
   to have another parameter ``qp_id`` to return the queue pair ID
   which got error interrupt to the application,
-- 
2.34.1
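
For context, a minimal sketch of what a rule using the generic
RTE_FLOW_ITEM_TYPE_VXLAN item could look like after the removal. This is
illustrative only and not part of the patch; the create_vxlan_rule() wrapper,
port id and queue index are placeholders, and whether GPE-specific fields can
be matched through the generic item is driver-dependent:

#include <stdint.h>
#include <rte_flow.h>

/* Steer VXLAN-encapsulated traffic on a given port to Rx queue 0. */
static struct rte_flow *
create_vxlan_rule(uint16_t port_id, struct rte_flow_error *err)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_action_queue queue = { .index = 0 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_UDP },
        /* Generic VXLAN item; no spec/mask given, so any VNI matches. */
        { .type = RTE_FLOW_ITEM_TYPE_VXLAN },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, err);
}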



[DPDK/examples Bug 1502] [dpdk-24.07] l3fwdacl/l3fwdacl_acl_rule: core dump when receiving packets

2024-07-26 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1502

Bug ID: 1502
   Summary: [dpdk-24.07] l3fwdacl/l3fwdacl_acl_rule: core dump
when receiving packets
   Product: DPDK
   Version: 24.07
  Hardware: x86
OS: Linux
Status: UNCONFIRMED
  Severity: normal
  Priority: Normal
 Component: examples
  Assignee: dev@dpdk.org
  Reporter: songx.ji...@intel.com
  Target Milestone: ---

[Environment]
DPDK version: 
dpdk24.07-rc3:82c47f005b9a0a1e3a649664b7713443d18abe43
Other software versions: N/A.
OS: Anolis OS 8.8/5.10.134-13.an8.x86_64
Compiler: gcc version 8.5.0 20210514
Hardware platform: Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz
NIC hardware: Intel Corporation Ethernet Controller E810-C for SFP [8086:1593]
(rev 01)
NIC driver: ice-1.14.11
NIC firmware: 4.50 0x8001d8b6 1.3597.0

[Test Setup]
1.compile dpdk
rm -rf x86_64-native-linuxapp-gcc
CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static
-Db_sanitize=address -Dexamples=l3fwd x86_64-native-linuxapp-gcc
ninja -C x86_64-native-linuxapp-gcc  
2. bind pf to vfio-pci 
./usertools/dpdk-devbind.py -b vfio-pci 18:00.0 18:00.1

3.prepare acl rules 
echo '' > /root/rule_ipv4.db
echo 'R0.0.0.0/0 0.0.0.0/0 0 : 65535 0 : 65535 0x00/0x00 1' >>
/root/rule_ipv4.db
echo '' > /root/rule_ipv6.db
echo 'R0:0:0:0:0:0:0:0/0 0:0:0:0:0:0:0:0/0 0 : 65535 0 : 65535 0x00/0x00 1' >>
/root/rule_ipv6.db
echo '' > /root/rule_ipv4.db
echo @200.10.0.1/32 0.0.0.0/0 0 : 65535 0 : 65535 0x00/0x00  >>
/root/rule_ipv4.db
echo R0.0.0.0/0 0.0.0.0/0 0 : 65535 0 : 65535 0x00/0x00 1 >> /root/rule_ipv4.db

4.launch dpdk-l3fwd 
x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd -l 1-4 -n 4 -a :18:00.0 -a
:18:00.1 --force-max-simd-bitwidth=0 -- -p 0x3 --lookup acl
--config="(0,0,2),(1,0,3)" --rule_ipv4="/root/rule_ipv4.db"
--rule_ipv6="/root/rule_ipv6.db" --parse-ptype  
5.Send packets
sendp([Ether(dst="{rx_port_mac}")/IP(src="200.10.0.1",dst="100.10.0.1")/UDP(sport=11,dport=101)],iface="ens7",count=1,inter=0,verbose=False)

[Show the output from the previous commands.]

Segmentation fault

[Expected Result]

The application can receive and send packets normally.

[Regression]
Is this issue a regression: (Y/N)Y

commit aa7c6077c19bd39b48ac17cd844b91f0dd03319f
Author: Konstantin Ananyev 
Date:   Thu May 2 16:28:16 2024 +0100

examples/l3fwd: avoid packets reorder in ACL mode

In ACL mode l3fwd first do classify() and send() for ipv4 packets,
then the same procedure for ipv6.
That might cause packets reordering within one ingress queue.
Probably not a big deal, as order within each flow are still preserved,
but better to be avoided anyway.
Specially considering that in other modes (lpm, fib, em)
l3fwd does preserve the order no matter of packet's IP version.
This patch aims to make ACL mode to behave in the same manner
and preserve packet's order within the same ingress queue.
Also these changes allow ACL mode to use common
(and hopefully better optimized) send_packets_multi() function at Tx path.

Signed-off-by: Konstantin Ananyev 

 examples/l3fwd/l3fwd_acl.c| 125 --
 examples/l3fwd/l3fwd_acl_scalar.h |  71 --
 2 files changed, 118 insertions, 78 deletions

-- 
You are receiving this mail because:
You are the assignee for the bug.

Re: [PATCH v2] net/gve: Fix TX/RX queue setup and stop

2024-07-26 Thread Tathagat Priyadarshi
Hi @Ferruh Yigit 

I have updated v2
https://patches.dpdk.org/project/dpdk/patch/1721914264-2394611-1-git-send-email-tathagat.d...@gmail.com/
and sent it in reply to the previous message id (
https://patches.dpdk.org/project/dpdk/patch/1721828129-2393364-1-git-send-email-tathagat.d...@gmail.com/)
, let me know if this is fine.



On Thu, Jul 25, 2024 at 6:59 PM Tathagat Priyadarshi <
tathagat.d...@gmail.com> wrote:

> The PR aims to update the TX/RQ queue setup/stop routines that are
> unique to DQO, so that they may be called for instances that use the
> DQO RDA format during dev start/stop
>
> Signed-off-by: Tathagat Priyadarshi 
> ---
>  drivers/net/gve/gve_ethdev.c | 29 +++--
>  1 file changed, 23 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
> index ca92277..a20092e 100644
> --- a/drivers/net/gve/gve_ethdev.c
> +++ b/drivers/net/gve/gve_ethdev.c
> @@ -288,11 +288,16 @@ struct gve_queue_page_list *
> PMD_DRV_LOG(ERR, "Failed to create %u tx queues.",
> num_queues);
> return ret;
> }
> -   for (i = 0; i < num_queues; i++)
> -   if (gve_tx_queue_start(dev, i) != 0) {
> +   for (i = 0; i < num_queues; i++) {
> +   if (gve_is_gqi(priv))
> +   ret = gve_tx_queue_start(dev, i);
> +   else
> +   ret = gve_tx_queue_start_dqo(dev, i);
> +   if (ret != 0) {
> PMD_DRV_LOG(ERR, "Fail to start Tx queue %d", i);
> goto err_tx;
> }
> +   }
>
> num_queues = dev->data->nb_rx_queues;
> priv->rxqs = (struct gve_rx_queue **)dev->data->rx_queues;
> @@ -315,9 +320,15 @@ struct gve_queue_page_list *
> return 0;
>
>  err_rx:
> -   gve_stop_rx_queues(dev);
> +   if (gve_is_gqi(priv))
> +   gve_stop_rx_queues(dev);
> +   else
> +   gve_stop_rx_queues_dqo(dev);
>  err_tx:
> -   gve_stop_tx_queues(dev);
> +   if (gve_is_gqi(priv))
> +   gve_stop_tx_queues(dev);
> +   else
> +   gve_stop_tx_queues_dqo(dev);
> return ret;
>  }
>
> @@ -362,10 +373,16 @@ struct gve_queue_page_list *
>  static int
>  gve_dev_stop(struct rte_eth_dev *dev)
>  {
> +   struct gve_priv *priv = dev->data->dev_private;
> dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
>
> -   gve_stop_tx_queues(dev);
> -   gve_stop_rx_queues(dev);
> +   if (gve_is_gqi(priv)) {
> +   gve_stop_tx_queues(dev);
> +   gve_stop_rx_queues(dev);
> +   } else {
> +   gve_stop_tx_queues_dqo(dev);
> +   gve_stop_rx_queues_dqo(dev);
> +   }
>
> dev->data->dev_started = 0;
>
> --
> 1.8.3.1
>
>


Re: [PATCH v6 0/3] Improve interactive shell output gathering and logging

2024-07-26 Thread Juraj Linkeš

For series:
Reviewed-by: Juraj Linkeš 

On 24. 7. 2024 20:39, jspew...@iol.unh.edu wrote:

From: Jeremy Spewock 

v6:
  * Fix error catch for retries. This series changed the error that
is thrown in the case of a timeout, but it was originally overlooked
that the context manager patch added a catch that is looking for the
old timeout error. This version fixes the patch by adjusting the
error that is expected in the context manager patch to match what
this series changes it to.



Here's the diff for anyone interested:
diff --git 
a/dts/framework/remote_session/single_active_interactive_shell.py 
b/dts/framework/remote_session/single_active_interactive_shell.py

index 701d0c..77a4dcefdf 100644
--- a/dts/framework/remote_session/single_active_interactive_shell.py
+++ b/dts/framework/remote_session/single_active_interactive_shell.py
@@ -150,7 +150,7 @@ def _start_application(self) -> None:
 try:
 self.send_command(start_command)
 break
-except TimeoutError:
+except InteractiveSSHTimeoutError:
 self._logger.info(
 f"Interactive shell failed to start (attempt 
{attempt+1} out of "

 f"{self._init_attempts})"

self.send_command raises InteractiveSSHTimeoutError (and not 
TimeoutError) which is why we needed this change.



Jeremy Spewock (3):
   dts: Improve output gathering in interactive shells
   dts: Add missing docstring from XML-RPC server
   dts: Improve logging for interactive shells

  dts/framework/exception.py| 66 ---
  dts/framework/remote_session/dpdk_shell.py|  3 +-
  .../single_active_interactive_shell.py| 60 -
  dts/framework/remote_session/testpmd_shell.py |  2 +
  .../testbed_model/traffic_generator/scapy.py  | 50 +-
  5 files changed, 139 insertions(+), 42 deletions(-)



Re: [V1] doc: announce deprecation of flow item VXLAN-GPE

2024-07-26 Thread Ferruh Yigit
On 7/26/2024 9:51 AM, Gavin Li wrote:
> Adding the deprecation notice as reminder for removing
> RTE_FLOW_ITEM_TYPE_VXLAN_GPE and its related structures,
> eg. rte_vxlan_gpe_hdr, rte_flow_item_vxlan_gpe, etc.
> 
> The proposed time of the removal is DPDK release 25.11.
> 
> Signed-off-by: Gavin Li 
> ---
>  doc/guides/rel_notes/deprecation.rst | 7 +++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 6948641ff6..5c04f88557 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -115,6 +115,13 @@ Deprecation Notices
>The legacy actions should be removed
>once ``MODIFY_FIELD`` alternative is implemented in drivers.
>  
> + * ethdev,net: The flow item ``RTE_FLOW_ITEM_TYPE_VXLAN_GPE`` is replaced with ``RTE_FLOW_ITEM_TYPE_VXLAN``.
> +   The struct ``rte_flow_item_vxlan_gpe`` and its mask ``rte_flow_item_vxlan_gpe_mask`` are replaced with
> +   ``rte_flow_item_vxlan`` and its mask ``rte_flow_item_vxlan_mask``.
> +   The item ``RTE_FLOW_ITEM_TYPE_VXLAN_GPE``, the struct ``rte_flow_item_vxlan_gpe``, its mask ``rte_flow_item_vxlan_gpe_mask``,
> +   and the header struct ``rte_vxlan_gpe_hdr`` with the macro ``RTE_ETHER_VXLAN_GPE_HLEN``
> +   will be removed in DPDK 25.11.
> +
>  * cryptodev: The function ``rte_cryptodev_cb_fn`` will be updated
>to have another parameter ``qp_id`` to return the queue pair ID
>which got error interrupt to the application,
>

Acked-by: Ferruh Yigit 


Re: [PATCH v2] net/gve: Fix TX/RX queue setup and stop

2024-07-26 Thread Ferruh Yigit
On 7/26/2024 11:37 AM, Tathagat Priyadarshi wrote:
> Hi @Ferruh Yigit  
> 
> I have updated v2 https://patches.dpdk.org/project/dpdk/
> patch/1721914264-2394611-1-git-send-email-tathagat.d...@gmail.com/
> and sent it in reply to the previous message id (https://
> patches.dpdk.org/project/dpdk/patch/1721828129-2393364-1-git-send-email-
> tathagat.d...@gmail.com/), let me know if this is fine.
> 
> 

Looks good, thanks.

I will wait for Joshua's ack.



RE: Portable alternative to inet_ntop?

2024-07-26 Thread Morten Brørup
> From: Stephen Hemminger [mailto:step...@networkplumber.org]
> Sent: Wednesday, 24 July 2024 18.21
> 
> The function inet_ntop is useful to make printable addresses for
> debugging.
> It is available on Linux and FreeBSD but not on Windows.
> 
> There are some alternatives:
>   - add yet another OS shim in lib/eal/windows/include.
> Win32 has similar InetNtoP but it uses wide characters.
> 
>   - copy/paste code from FreeBSD into some new functions.
> 
> Hate duplicating code, but portability is a problem here.

+1 for "duplicating code" in this case.

> 
> diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
> index 0d103d4127..a9404b4b41 100644
> --- a/lib/net/rte_ip.h
> +++ b/lib/net/rte_ip.h
> @@ -839,6 +839,27 @@ rte_ipv6_get_next_ext(const uint8_t *p, int proto,
> size_t *ext_len)
> return next_proto;
>  }
> 
> +
> +#define RTE_IPV4_ADDR_FMT_SIZE 16
> +#define RTE_IPV6_ADDR_FMT_SIZE 46
> +
> +__rte_experimental
> +int
> +rte_ipv4_format_addr(char *buf, uint16_t size, const void *addr);

This resembles rte_ether_format_addr() [1], so I agree to passing the address 
by reference instead of by value.

[1]: https://elixir.bootlin.com/dpdk/v24.07-rc3/source/lib/net/rte_ether.h#L264

I consider rte_be32_t the "official" type for an IPv4 address in network byte 
order. This is the type used in the IPv4 header [2].

[2]: https://elixir.bootlin.com/dpdk/v24.07-rc3/source/lib/net/rte_ip.h#L41

With this in mind, please change the address parameter's type from "const void 
*" to "const rte_be32_t *".
I speculate that you used "void *" to be compatible with both the types 
unsigned char[4] and rte_be32_t, and avoid alignment issues with the latter.
I fear this could set a very bad precedent; using "void *" instead of the 
proper type would make APIs difficult to read, because the actual types are 
omitted. We don't want to start using "void *" for API parameters to avoid type 
casting.

(C++ would allow providing the same function with a variety of parameter types, 
but we are stuck with C.)

> +
> +__rte_experimental
> +void
> +rte_ipv4_unformat_addr(const char *str, void *addr);

Same as above; please change output type from void* to rte_be32_t*.

> +
> +__rte_experimental
> +void
> +rte_ipv6_format_addr(char *buf, uint16_t size, const void *addr);

I suppose this outputs the IPv6 address string in packed format (using "::" if 
possible), as I suppose the IPv4 address string is output without leading 
zeroes (%u, not %03u).
Alternatively, consider adding a formatting flags parameter to specify the 
output format (leading zeroes or not, and "::" or not).

Same as above; addr parameter should be "const uint8_t addr[16]", reflecting 
the "official" IPv6 address type. This will be updated to "const struct 
rte_ipv6_addr *addr" with Robin's coming 24.11 patch series.

> +
> +__rte_experimental
> +void
> +rte_ipv4_unformat_addr(const char *str, void *addr);

Same as above; addr parameter type should reflect the "official" IPv6 address 
type.

And a typo in the function name: ipv4 -> ipv6
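
To make the above concrete, a sketch of what the declarations (and an IPv4
formatter) could look like with the types suggested in this review;
illustrative only, not the actual patch:

#include <stdint.h>
#include <stdio.h>
#include <rte_byteorder.h>

#define RTE_IPV4_ADDR_FMT_SIZE 16
#define RTE_IPV6_ADDR_FMT_SIZE 46

/* Suggested signatures: IPv4 passed as rte_be32_t, IPv6 as a 16-byte array. */
int rte_ipv4_format_addr(char *buf, uint16_t size, const rte_be32_t *addr);
void rte_ipv4_unformat_addr(const char *str, rte_be32_t *addr);
void rte_ipv6_format_addr(char *buf, uint16_t size, const uint8_t addr[16]);
void rte_ipv6_unformat_addr(const char *str, uint8_t addr[16]);

/* A possible IPv4 implementation, printing without leading zeroes (%u). */
int
rte_ipv4_format_addr(char *buf, uint16_t size, const rte_be32_t *addr)
{
    uint32_t ip = rte_be_to_cpu_32(*addr);

    return snprintf(buf, size, "%u.%u.%u.%u",
            (ip >> 24) & 0xff, (ip >> 16) & 0xff,
            (ip >> 8) & 0xff, ip & 0xff);
}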



[RFC PATCH v1 0/1] Add Visual Studio Code configuration script

2024-07-26 Thread Anatoly Burakov
Lots of developers (myself included) use Visual Studio Code as their primary
IDE for DPDK development. I have been successfully using various incarnations
of this script internally to quickly set up my development trees whenever I
need a new configuration, so this script is being shared in hopes that it will
be useful both to new developers starting with DPDK, and to seasoned DPDK
developers who are already using Visual Studio Code. It makes starting to work
on DPDK in Visual Studio Code so much easier!

Philosophy behind this script is as follows:

- The assumption is made that a developer will not be using wildly different
  configurations from build to build - usually, they build the same things,
  work with the same set of apps/drivers for a while, then switch to something
  else, at which point a new configuration is needed
- Some configurations I consider to be "common" are included: debug build,
  debug optimized build, release build with docs, and ASan build
  (feel free to make suggestions here!)
- By default, the script will suggest enabling test, testpmd, and the
  helloworld example
- No drivers are enabled by default - the user needs to explicitly enable them
  (another option could be to leave things as default and build everything,
  but I rather prefer minimalistic builds as they're faster to compile, and it
  would be semantically weird to not have any drivers selected yet all of them
  being built)
- All parameters that can be adjusted by TUI are also available as command line
  arguments, so while user interaction is the default (using whiptail), it's
  actually not required and can be bypassed.
- I usually work as a local user, not as root, so by default the script will
  attempt to use "gdbsudo" (a "sudo gdb $@" script in /usr/local/bin) for
  launch tasks, and stop if it is not available.

Currently, it is only possible to define custom per-build configurations, while
any "global" meson settings would have to involve editing the settings.json
file. This can be changed easily if required, but I've never needed this
functionality.

Please feel free to make any suggestions!

Anatoly Burakov (1):
  devtools: add vscode configuration generator

 devtools/gen-vscode-config.py | 640 ++
 1 file changed, 640 insertions(+)
 create mode 100755 devtools/gen-vscode-config.py

-- 
2.43.5



[RFC PATCH v1 1/1] devtools: add vscode configuration generator

2024-07-26 Thread Anatoly Burakov
A lot of developers use Visual Studio Code as their primary IDE. This
script generates a configuration file for VSCode that sets up basic build
tasks, launch tasks, as well as C/C++ code analysis settings that will
take into account compile_commands.json that is automatically generated
by meson.

Files generated by script:
 - .vscode/settings.json: stores variables needed by other files
 - .vscode/tasks.json: defines build tasks
 - .vscode/launch.json: defines launch tasks
 - .vscode/c_cpp_properties.json: defines code analysis settings

The script uses a combination of globbing and meson file parsing to
discover available apps, examples, and drivers, and generates a
project-wide settings file, so that the user can later switch between
debug/release/etc. configurations while keeping their desired apps,
examples, and drivers, built by meson, and ensuring launch configurations
still work correctly whatever the configuration selected.

This script uses whiptail as TUI, which is expected to be universally
available as it is shipped by default on most major distributions.
However, the script is also designed to be scriptable and can be run
without user interaction, and have its configuration supplied from
command-line arguments.

Signed-off-by: Anatoly Burakov 
---
 devtools/gen-vscode-config.py | 640 ++
 1 file changed, 640 insertions(+)
 create mode 100755 devtools/gen-vscode-config.py

diff --git a/devtools/gen-vscode-config.py b/devtools/gen-vscode-config.py
new file mode 100755
index 00..0d291b6c17
--- /dev/null
+++ b/devtools/gen-vscode-config.py
@@ -0,0 +1,640 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2024 Intel Corporation
+#
+
+"""Visual Studio Code configuration generator script."""
+
+import os
+import json
+import argparse
+import fnmatch
+import shutil
+from typing import List, Dict, Tuple, Any
+from sys import exit as _exit, stderr
+from subprocess import run, CalledProcessError, PIPE
+from mesonbuild import mparser
+from mesonbuild.mesonlib import MesonException
+
+
+class DPDKBuildTask:
+"""A build task for DPDK"""
+
+def __init__(self, label: str, description: str, param: str):
+# label as it appears in build configuration
+self.label = label
+# description to be given in menu
+self.description = description
+# task-specific configuration parameters
+self.param = param
+
+def to_json_dict(self) -> Dict[str, Any]:
+"""Generate JSON dictionary for this task"""
+return {
+"label": f"Configure {self.label}",
+"detail": self.description,
+"type": "shell",
+"dependsOn": "Remove builddir",
+"command": f"meson setup ${{config:BUILDCONFIG}} {self.param} 
${{config:BUILDDIR}}",
+"problemMatcher": [],
+"group": "build"
+}
+
+
+class CmdlineCtx:
+"""POD class to set up command line parameters"""
+
+def __init__(self):
+self.use_ui = False
+self.use_gdbsudo = False
+self.build_dir: str = ""
+self.dpdk_dir: str = ""
+self.gdb_path: str = ""
+
+self.avail_configs: List[Tuple[str, str, str]] = []
+self.avail_apps: List[str] = []
+self.avail_examples: List[str] = []
+self.avail_drivers: List[str] = []
+
+self.enabled_configs: List[Tuple[str, str, str]] = []
+self.enabled_apps: List[str] = []
+self.enabled_examples: List[str] = []
+self.enabled_drivers: List[str] = []
+
+self.driver_dep_map: Dict[str, List[str]] = {}
+
+
+class DPDKLaunchTask:
+"""A launch task for DPDK"""
+
+def __init__(self, label: str, exe: str, gdb_path: str):
+# label as it appears in launch configuration
+self.label = label
+# path to executable
+self.exe = exe
+self.gdb_path = gdb_path
+
+def to_json_dict(self) -> Dict[str, Any]:
+"""Generate JSON dictionary for this task"""
+return {
+"name": f"Run {self.label}",
+"type": "cppdbg",
+"request": "launch",
+"program": f"${{config:BUILDDIR}}/{self.exe}",
+"args": [],
+"stopAtEntry": False,
+"cwd": "${workspaceFolder}",
+"externalConsole": False,
+"preLaunchTask": "Build",
+"MIMode": "gdb",
+"miDebuggerPath": self.gdb_path,
+"setupCommands": [
+{
+"description": "Enable pretty-printing for gdb",
+"text": "-gdb-set print pretty on",
+"ignoreFailures": True
+}
+]
+}
+
+
+class VSCodeConfig:
+"""Configuration for VSCode"""
+
+def __init__(self, builddir: str, commoncfg: str):
+# where will our build dir be located
+self.builddir = builddir
+# meson configuration common to all config

[PATCH v1] dts: add flow rule dataclass to testpmd shell

2024-07-26 Thread Dean Marx
add dataclass for passing in flow rule creation arguments, as well as a
__str__ method for converting to a sendable testpmd command. Add
flow_create method to TestPmdShell class for initializing flow rules.

Signed-off-by: Dean Marx 
---
 dts/framework/remote_session/testpmd_shell.py | 66 ---
 1 file changed, 57 insertions(+), 9 deletions(-)

diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
index 71d27c6c2a..61c1c935a6 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -19,7 +19,7 @@
 from dataclasses import dataclass, field
 from enum import Flag, auto
 from pathlib import PurePath
-from typing import ClassVar
+from typing import ClassVar, Optional
 
 from typing_extensions import Self, Unpack
 
@@ -577,6 +577,43 @@ class TestPmdPortStats(TextParser):
 tx_bps: int = field(metadata=TextParser.find_int(r"Tx-bps:\s+(\d+)"))
 
 
+@dataclass
+class flow_func:
+"""Dataclass for setting flow create parameters."""
+
+#:
+port_id: int
+#:
+ingress: bool
+#:
+pattern: str
+#:
+actions: str
+
+#:
+group_id: Optional[int] = None
+#:
+priority_level: Optional[int] = None
+#:
+user_id: Optional[int] = None
+
+def __str__(self) -> str:
+"""Returns the string representation of a flow_func instance.
+
+In this case, a properly formatted flow create command that can be 
sent to testpmd.
+"""
+ret = []
+ret.append(f"flow create {self.port_id} ")
+ret.append(f"group {self.group_id} " if self.group_id is not None else 
"")
+ret.append(f"priority {self.priority_level} " if self.priority_level 
is not None else "")
+ret.append("ingress " if self.ingress else "egress ")
+ret.append(f"user_id {self.user_id} " if self.user_id is not None else 
"")
+ret.append(f"pattern {self.pattern} ")
+ret.append(" / end actions ")
+ret.append(f"{self.actions} / end")
+return "".join(ret)
+
+
 class TestPmdShell(DPDKShell):
 """Testpmd interactive shell.
 
@@ -804,16 +841,27 @@ def show_port_stats(self, port_id: int) -> 
TestPmdPortStats:
 
 return TestPmdPortStats.parse(output)
 
+def flow_create(self, cmd: str, verify: bool = True) -> None:
+"""Creates a flow rule in the testpmd session.
+
+Args:
+cmd: String from flow_func instance to send as a flow rule.
+verify: If :data:`True`, the output of the command is scanned
+to ensure the flow rule was created successfully.
+
+Raises:
+InteractiveCommandExecutionError: If flow rule is invalid.
+"""
+flow_output = self.send_command(cmd)
+if verify:
+if "created" not in flow_output:
+self._logger.debug(f"Failed to create flow 
rule:\n{flow_output}")
+raise InteractiveCommandExecutionError(
+f"Failed to create flow rule:\n{flow_output}"
+)
+
 def _close(self) -> None:
 """Overrides :meth:`~.interactive_shell.close`."""
 self.stop()
-<<< HEAD
-<<< HEAD
-self.send_command("quit", "Bye...")
-===
-self.send_command("quit", "")
->>> dec6a393bf (dts: add context manager for interactive shells)
-===
 self.send_command("quit", "Bye...")
->>> c4dc8483a8 (dts: improve starting and stopping interactive shells)
 return super()._close()
-- 
2.44.0



[RFC PATCH v3 0/2] Initial Implementation For Jumbo Frames

2024-07-26 Thread Nicholas Pratte
v3:
  * Refactored to use TestPMDShell context manager.

NOTE: Assessing the boundaries and discerning the correct assumptions
for Ethernet overhead is still to be discussed. Thus, while each
individual test case may pass, the test cases may not yet be precise.

Nicholas Pratte (2):
  dts: add port config mtu options to testpmd shell
  dts: Initial Implementation For Jumbo Frames Test Suite

 dts/framework/config/conf_yaml_schema.json|   3 +-
 dts/framework/remote_session/testpmd_shell.py |  20 +-
 dts/tests/TestSuite_jumboframes.py| 182 ++
 3 files changed, 203 insertions(+), 2 deletions(-)
 create mode 100644 dts/tests/TestSuite_jumboframes.py

-- 
2.44.0



[RFC PATCH v3 1/2] dts: add port config mtu options to testpmd shell

2024-07-26 Thread Nicholas Pratte
Testpmd offers MTU configuration options that omit Ethernet overhead
calculations when set. This patch adds easy-to-use methods to leverage
these runtime options.

Bugzilla ID: 1421

Signed-off-by: Nicholas Pratte 
---
 dts/framework/remote_session/testpmd_shell.py | 20 ++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
index eda6eb320f..83f7961359 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -804,7 +804,25 @@ def show_port_stats(self, port_id: int) -> 
TestPmdPortStats:
 
 return TestPmdPortStats.parse(output)
 
-def _close(self) -> None:
+def configure_port_mtu(self, port_id: int, mtu_length: int) -> None:
+"""Set the MTU length on a designated port.
+
+Args:
+port_id: The ID of the port being configured.
+mtu_length: The length, in bytes, of the MTU being set.
+"""
+self.send_command(f"port config mtu {port_id} {mtu_length}")
+
+def configure_port_mtu_all(self, mtu_length: int) -> None:
+"""Set the MTU length on all designated ports.
+
+Args:
+mtu_length: The MTU length to be set on all ports.
+"""
+for port in self.show_port_info_all():
+self.send_command(f"port config mtu {port.id} {mtu_length}")
+
+def close(self) -> None:
 """Overrides :meth:`~.interactive_shell.close`."""
 self.stop()
 self.send_command("quit", "Bye...")
-- 
2.44.0



[RFC PATCH v3 2/2] dts: Initial Implementation For Jumbo Frames Test Suite

2024-07-26 Thread Nicholas Pratte
The following test suite reflects the fundamental outline for how the
jumbo frames test suite may be designed. The test suite consists of five
individual test cases, each of which assesses the behavior of packet
transmissions for both 1518 byte and 9000 byte frames.

The edge cases are ripped directly from the old DTS framework, and the
general methodology is the same as well. The process, at this point, has
been refactored to operate within the new DTS framework.

Bugzilla ID: 1421

Signed-off-by: Nicholas Pratte 
---
 dts/framework/config/conf_yaml_schema.json |   3 +-
 dts/tests/TestSuite_jumboframes.py | 182 +
 2 files changed, 184 insertions(+), 1 deletion(-)
 create mode 100644 dts/tests/TestSuite_jumboframes.py

diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index f02a310bb5..a1028f128b 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -187,7 +187,8 @@
   "enum": [
 "hello_world",
 "os_udp",
-"pmd_buffer_scatter"
+"pmd_buffer_scatter",
+"jumboframes"
   ]
 },
 "test_target": {
diff --git a/dts/tests/TestSuite_jumboframes.py b/dts/tests/TestSuite_jumboframes.py
new file mode 100644
index 00..dd8092f2a4
--- /dev/null
+++ b/dts/tests/TestSuite_jumboframes.py
@@ -0,0 +1,182 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023-2024 University of New Hampshire
+"""Jumbo frame consistency and compatibility test suite.
+
+The test suite ensures the consistency of jumbo frames transmission within
+Poll Mode Drivers using a series of individual test cases. If a Poll Mode
+Driver receives a packet that is greater than its assigned MTU length, then
+that packet will be dropped, and thus not received. Likewise, if a Poll Mode 
Driver
+receives a packet that is less than or equal to a its designated MTU length, 
then the
+packet should be transmitted by the Poll Mode Driver, completing a cycle 
within the
+testbed and getting received by the traffic generator. Thus, the following 
test suite
+evaluates the behavior within all possible edge cases, ensuring that a test 
Poll
+Mode Driver strictly abides by the above implications.
+"""
+
+from scapy.layers.inet import IP  # type: ignore[import-untyped]
+from scapy.layers.l2 import Ether  # type: ignore[import-untyped]
+from scapy.packet import Raw  # type: ignore[import-untyped]
+
+from framework.remote_session.testpmd_shell import TestPmdShell
+from framework.test_suite import TestSuite
+
+IP_HEADER_LEN = 20
+ETHER_STANDARD_FRAME = 1500
+ETHER_JUMBO_FRAME_MTU = 9000
+
+
+class TestJumboframes(TestSuite):
+"""DPDK PMD jumbo frames test suite.
+
+Asserts the expected behavior of frames greater than, less then, or equal 
to
+a designated MTU size in the testpmd application. If a packet size greater
+than the designated testpmd MTU length is retrieved, the test fails. If a
+packet size less than or equal to the designated testpmd MTU length is 
retrieved,
+the test passes.
+"""
+
+def set_up_suite(self) -> None:
+"""Set up the test suite.
+
+Setup:
+Set traffic generator MTU lengths to a size greater than scope of 
all
+test cases.
+"""
+self.tg_node.main_session.configure_port_mtu(
+ETHER_JUMBO_FRAME_MTU + 200, self._tg_port_egress
+)
+self.tg_node.main_session.configure_port_mtu(
+ETHER_JUMBO_FRAME_MTU + 200, self._tg_port_ingress
+)
+
+def send_packet_and_verify(self, pktsize: int, should_receive: bool = 
True) -> None:
+"""Generate, send, and capture packets to verify that the sent packet 
was received or not.
+
+Generates a packet based on a specified size and sends it to the SUT. 
The desired packet's
+payload size is calculated, and arbitrary, byte-sized characters are 
inserted into the
+packet before sending. Packets are captured, and depending on the test 
case, packet
+payloads are checked to determine if the sent payload was received.
+
+Args:
+pktsize: Size of packet to be generated and sent.
+should_receive: Indicate whether the test case expects to receive 
the packet or not.
+"""
+padding = pktsize - IP_HEADER_LEN
+# Insert extra space for placeholder 'CRC' Error correction.
+packet = Ether() / Raw("") / IP(len=pktsize) / Raw(load="X" * 
padding)
+received_packets = self.send_packet_and_capture(packet)
+found = any(
+("X" * padding) in str(packets.load)
+for packets in received_packets
+if hasattr(packets, "load")
+)
+
+if should_receive:
+self.verify(found, "Did not receive packet")
+else:
+self.verify(not found, "Received packet")
+
+def test_jumboframes_normal_nojumbo(self) -> None:
+   


[PATCH v2] dts: add flow rule dataclass to testpmd shell

2024-07-26 Thread Dean Marx
add dataclass for passing in flow rule creation arguments, as well as a
__str__ method for converting to a sendable testpmd command. Add
flow_create method to TestPmdShell class for initializing flow rules.

Signed-off-by: Dean Marx 
---
 dts/framework/remote_session/testpmd_shell.py | 58 ++-
 1 file changed, 57 insertions(+), 1 deletion(-)

diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
index eda6eb320f..d6c111da0a 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -19,7 +19,7 @@
 from dataclasses import dataclass, field
 from enum import Flag, auto
 from pathlib import PurePath
-from typing import ClassVar
+from typing import ClassVar, Optional
 
 from typing_extensions import Self, Unpack
 
@@ -577,6 +577,43 @@ class TestPmdPortStats(TextParser):
 tx_bps: int = field(metadata=TextParser.find_int(r"Tx-bps:\s+(\d+)"))
 
 
+@dataclass
+class flow_func:
+"""Dataclass for setting flow rule parameters."""
+
+#:
+port_id: int
+#:
+ingress: bool
+#:
+pattern: str
+#:
+actions: str
+
+#:
+group_id: Optional[int] = None
+#:
+priority_level: Optional[int] = None
+#:
+user_id: Optional[int] = None
+
+def __str__(self) -> str:
+"""Returns the string representation of a flow_func instance.
+
+In this case, a properly formatted flow create command that can be 
sent to testpmd.
+"""
+ret = []
+ret.append(f"flow create {self.port_id} ")
+ret.append(f"group {self.group_id} " if self.group_id is not None else 
"")
+ret.append(f"priority {self.priority_level} " if self.priority_level 
is not None else "")
+ret.append("ingress " if self.ingress else "egress ")
+ret.append(f"user_id {self.user_id} " if self.user_id is not None else 
"")
+ret.append(f"pattern {self.pattern} ")
+ret.append(" / end actions ")
+ret.append(f"{self.actions} / end")
+return "".join(ret)
+
+
 class TestPmdShell(DPDKShell):
 """Testpmd interactive shell.
 
@@ -804,6 +841,25 @@ def show_port_stats(self, port_id: int) -> 
TestPmdPortStats:
 
 return TestPmdPortStats.parse(output)
 
+def flow_create(self, cmd: str, verify: bool = True) -> None:
+"""Creates a flow rule in the testpmd session.
+
+Args:
+cmd: String from flow_func instance to send as a flow rule.
+verify: If :data:`True`, the output of the command is scanned
+to ensure the flow rule was created successfully.
+
+Raises:
+InteractiveCommandExecutionError: If flow rule is invalid.
+"""
+flow_output = self.send_command(cmd)
+if verify:
+if "created" not in flow_output:
+self._logger.debug(f"Failed to create flow 
rule:\n{flow_output}")
+raise InteractiveCommandExecutionError(
+f"Failed to create flow rule:\n{flow_output}"
+)
+
 def _close(self) -> None:
 """Overrides :meth:`~.interactive_shell.close`."""
 self.stop()
-- 
2.44.0



[PATCH v1] dts: add flow rule dataclass to testpmd shell

2024-07-26 Thread Dean Marx
add dataclass for passing in flow rule creation arguments, as well as a
__str__ method for converting to a sendable testpmd command. Add
flow_create method to TestPmdShell class for initializing flow rules.

Signed-off-by: Dean Marx 
---
 dts/framework/remote_session/testpmd_shell.py | 58 ++-
 1 file changed, 57 insertions(+), 1 deletion(-)

diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
index eda6eb320f..d6c111da0a 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -19,7 +19,7 @@
 from dataclasses import dataclass, field
 from enum import Flag, auto
 from pathlib import PurePath
-from typing import ClassVar
+from typing import ClassVar, Optional
 
 from typing_extensions import Self, Unpack
 
@@ -577,6 +577,43 @@ class TestPmdPortStats(TextParser):
 tx_bps: int = field(metadata=TextParser.find_int(r"Tx-bps:\s+(\d+)"))
 
 
+@dataclass
+class flow_func:
+"""Dataclass for setting flow rule parameters."""
+
+#:
+port_id: int
+#:
+ingress: bool
+#:
+pattern: str
+#:
+actions: str
+
+#:
+group_id: Optional[int] = None
+#:
+priority_level: Optional[int] = None
+#:
+user_id: Optional[int] = None
+
+def __str__(self) -> str:
+"""Returns the string representation of a flow_func instance.
+
+In this case, a properly formatted flow create command that can be 
sent to testpmd.
+"""
+ret = []
+ret.append(f"flow create {self.port_id} ")
+ret.append(f"group {self.group_id} " if self.group_id is not None else 
"")
+ret.append(f"priority {self.priority_level} " if self.priority_level 
is not None else "")
+ret.append("ingress " if self.ingress else "egress ")
+ret.append(f"user_id {self.user_id} " if self.user_id is not None else 
"")
+ret.append(f"pattern {self.pattern} ")
+ret.append(" / end actions ")
+ret.append(f"{self.actions} / end")
+return "".join(ret)
+
+
 class TestPmdShell(DPDKShell):
 """Testpmd interactive shell.
 
@@ -804,6 +841,25 @@ def show_port_stats(self, port_id: int) -> 
TestPmdPortStats:
 
 return TestPmdPortStats.parse(output)
 
+def flow_create(self, cmd: str, verify: bool = True) -> None:
+"""Creates a flow rule in the testpmd session.
+
+Args:
+cmd: String from flow_func instance to send as a flow rule.
+verify: If :data:`True`, the output of the command is scanned
+to ensure the flow rule was created successfully.
+
+Raises:
+InteractiveCommandExecutionError: If flow rule is invalid.
+"""
+flow_output = self.send_command(cmd)
+if verify:
+if "created" not in flow_output:
+self._logger.debug(f"Failed to create flow 
rule:\n{flow_output}")
+raise InteractiveCommandExecutionError(
+f"Failed to create flow rule:\n{flow_output}"
+)
+
 def _close(self) -> None:
 """Overrides :meth:`~.interactive_shell.close`."""
 self.stop()
-- 
2.44.0



Re: [PATCH v3 1/4] dts: add send_packets to test suites and rework packet addressing

2024-07-26 Thread Nicholas Pratte
This is great, I'll be using this in favor of the boolean solution
that I implemented! Just to bring this to your attention: I am
currently working on some Generic Routing Encapsulation suites that
require multiple IP layers at packet creation; they look something
like:

Ether() / IP() / GRE() / IP() / UDP() / Raw(load='x'*80)

I have to take a deeper look to see how multiple IP layers affect the
declaration of src and dst variables. I'll let you know what I find, as
some changes might be needed in this implementation to avoid future
bugs. Once I figure it out, I'll leave a review tag for you.

On Wed, Jul 24, 2024 at 11:07 AM  wrote:
>
> From: Jeremy Spewock 
>
> Currently the only method provided in the test suite class for sending
> packets sends a single packet and then captures the results. There is,
> in some cases, a need to send multiple packets at once while not really
> needing to capture any traffic received back. The method to do this
> exists in the traffic generator already, but this patch exposes the
> method to test suites.
>
> This patch also updates the _adjust_addresses method of test suites so
> that addresses of packets are only modified if the developer did not
> configure them beforehand. This allows for developers to have more
> control over the content of their packets when sending them through the
> framework.
>
> Signed-off-by: Jeremy Spewock 
> ---
>  dts/framework/test_suite.py| 74 ++
>  dts/framework/testbed_model/tg_node.py |  9 
>  2 files changed, 62 insertions(+), 21 deletions(-)
>
> diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
> index 694b2eba65..0b678ed62d 100644
> --- a/dts/framework/test_suite.py
> +++ b/dts/framework/test_suite.py
> @@ -199,7 +199,7 @@ def send_packet_and_capture(
>  Returns:
>  A list of received packets.
>  """
> -packet = self._adjust_addresses(packet)
> +packet = self._adjust_addresses([packet])[0]
>  return self.tg_node.send_packet_and_capture(
>  packet,
>  self._tg_port_egress,
> @@ -208,6 +208,18 @@ def send_packet_and_capture(
>  duration,
>  )
>
> +def send_packets(
> +self,
> +packets: list[Packet],
> +) -> None:
> +"""Send packets using the traffic generator and do not capture 
> received traffic.
> +
> +Args:
> +packets: Packets to send.
> +"""
> +packets = self._adjust_addresses(packets)
> +self.tg_node.send_packets(packets, self._tg_port_egress)
> +
>  def get_expected_packet(self, packet: Packet) -> Packet:
>  """Inject the proper L2/L3 addresses into `packet`.
>
> @@ -219,39 +231,59 @@ def get_expected_packet(self, packet: Packet) -> Packet:
>  """
>  return self._adjust_addresses(packet, expected=True)
>
> -def _adjust_addresses(self, packet: Packet, expected: bool = False) -> 
> Packet:
> +def _adjust_addresses(self, packets: list[Packet], expected: bool = 
> False) -> list[Packet]:
>  """L2 and L3 address additions in both directions.
>
> +Only missing addresses are added to packets, existing addressed will 
> not be overridden.
> +
>  Assumptions:
>  Two links between SUT and TG, one link is TG -> SUT, the other 
> SUT -> TG.
>
>  Args:
> -packet: The packet to modify.
> +packets: The packets to modify.
>  expected: If :data:`True`, the direction is SUT -> TG,
>  otherwise the direction is TG -> SUT.
>  """
> -if expected:
> -# The packet enters the TG from SUT
> -# update l2 addresses
> -packet.src = self._sut_port_egress.mac_address
> -packet.dst = self._tg_port_ingress.mac_address
> +ret_packets = []
> +for packet in packets:
> +default_pkt_src = type(packet)().src
> +default_pkt_dst = type(packet)().dst
> +default_pkt_payload_src = IP().src if hasattr(packet.payload, 
> "src") else None
> +default_pkt_payload_dst = IP().dst if hasattr(packet.payload, 
> "dst") else None
> +# If `expected` is :data:`True`, the packet enters the TG from 
> SUT, otherwise the
> +# packet leaves the TG towards the SUT
>
> -# The packet is routed from TG egress to TG ingress
> -# update l3 addresses
> -packet.payload.src = self._tg_ip_address_egress.ip.exploded
> -packet.payload.dst = self._tg_ip_address_ingress.ip.exploded
> -else:
> -# The packet leaves TG towards SUT
>  # update l2 addresses
> -packet.src = self._tg_port_egress.mac_address
> -packet.dst = self._sut_port_ingress.mac_address
> +if packet.src == default_pkt_src:
> +packet.src = (
> +self._sut_port_egress.mac_address
> +   

[DPDK/DTS Bug 1503] Port over vxlan gpe support suite from old DTS

2024-07-26 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1503

Bug ID: 1503
   Summary: Port over vxlan gpe support suite from old DTS
   Product: DPDK
   Version: unspecified
  Hardware: All
OS: All
Status: UNCONFIRMED
  Severity: normal
  Priority: Normal
 Component: DTS
  Assignee: dev@dpdk.org
  Reporter: dm...@iol.unh.edu
CC: juraj.lin...@pantheon.tech, pr...@iol.unh.edu
  Target Milestone: ---

-- 
You are receiving this mail because:
You are the assignee for the bug.

Re: [RFC PATCH v1 1/1] devtools: add vscode configuration generator

2024-07-26 Thread Stephen Hemminger
On Fri, 26 Jul 2024 13:42:56 +0100
Anatoly Burakov  wrote:

> A lot of developers use Visual Studio Code as their primary IDE. This
> script generates a configuration file for VSCode that sets up basic build
> tasks, launch tasks, as well as C/C++ code analysis settings that will
> take into account compile_commands.json that is automatically generated
> by meson.
> 
> Files generated by script:
>  - .vscode/settings.json: stores variables needed by other files
>  - .vscode/tasks.json: defines build tasks
>  - .vscode/launch.json: defines launch tasks
>  - .vscode/c_cpp_properties.json: defines code analysis settings
> 
> The script uses a combination of globbing and meson file parsing to
> discover available apps, examples, and drivers, and generates a
> project-wide settings file, so that the user can later switch between
> debug/release/etc. configurations while keeping their desired apps,
> examples, and drivers, built by meson, and ensuring launch configurations
> still work correctly whatever the configuration selected.
> 
> This script uses whiptail as TUI, which is expected to be universally
> available as it is shipped by default on most major distributions.
> However, the script is also designed to be scriptable and can be run
> without user interaction, and have its configuration supplied from
> command-line arguments.
> 
> Signed-off-by: Anatoly Burakov 

The TUI doesn't matter much since I would expect this gets run
100% on Windows.

In general looks good, you might want to address
$ flake8 ./devtools/gen-vscode-config.py  --max-line 100
./devtools/gen-vscode-config.py:352:47: E741 ambiguous variable name 'l'
./devtools/gen-vscode-config.py:499:16: E713 test for membership should be 'not 
in'
./devtools/gen-vscode-config.py:546:101: E501 line too long (120 > 100 
characters)


Re: [RFC PATCH v1 1/1] devtools: add vscode configuration generator

2024-07-26 Thread Burakov, Anatoly

On 7/26/2024 5:36 PM, Stephen Hemminger wrote:

On Fri, 26 Jul 2024 13:42:56 +0100
Anatoly Burakov  wrote:


A lot of developers use Visual Studio Code as their primary IDE. This
script generates a configuration file for VSCode that sets up basic build
tasks, launch tasks, as well as C/C++ code analysis settings that will
take into account compile_commands.json that is automatically generated
by meson.

Files generated by script:
  - .vscode/settings.json: stores variables needed by other files
  - .vscode/tasks.json: defines build tasks
  - .vscode/launch.json: defines launch tasks
  - .vscode/c_cpp_properties.json: defines code analysis settings

The script uses a combination of globbing and meson file parsing to
discover available apps, examples, and drivers, and generates a
project-wide settings file, so that the user can later switch between
debug/release/etc. configurations while keeping their desired apps,
examples, and drivers, built by meson, and ensuring launch configurations
still work correctly whatever the configuration selected.

This script uses whiptail as TUI, which is expected to be universally
available as it is shipped by default on most major distributions.
However, the script is also designed to be scriptable and can be run
without user interaction, and have its configuration supplied from
command-line arguments.

Signed-off-by: Anatoly Burakov 


The TUI doesn't matter much since I would expect this gets run
100% on Windows.


I run it on Linux using Remote SSH, and that's the primary target 
audience as far as I'm concerned (a lot of people do the same at our 
office). Just in case it wasn't clear, this is not for *Visual Studio* 
the Windows IDE, this is for *Visual Studio Code* the cross-platform 
code editor.


I didn't actually think of testing this on Windows. I assume Windows 
doesn't have whiptail, so this will most likely refuse to run in TUI 
mode (unless run under WSL - I assume WSL ships whiptail).




In general looks good, you might want to address
$ flake8 ./devtools/gen-vscode-config.py  --max-line 100
./devtools/gen-vscode-config.py:352:47: E741 ambiguous variable name 'l'
./devtools/gen-vscode-config.py:499:16: E713 test for membership should be 'not 
in'
./devtools/gen-vscode-config.py:546:101: E501 line too long (120 > 100 
characters)


Thanks, I had Pylance linter but not flake8.

--
Thanks,
Anatoly



[PATCH] doc: announce fib configuration structure changes

2024-07-26 Thread Vladimir Medvedkin
Announce addition of the flags field into rte_fib_conf structure.

Signed-off-by: Vladimir Medvedkin 
---
 doc/guides/rel_notes/deprecation.rst | 4 
 1 file changed, 4 insertions(+)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 6948641ff6..d4d6290288 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -147,3 +147,7 @@ Deprecation Notices
   will be deprecated and subsequently removed in DPDK 24.11 release.
   Before this, the new port library API (functions rte_swx_port_*)
   will gradually transition from experimental to stable status.
+
+* fib: A new flags field will be introduced in rte_fib_conf structure
+  in DPDK 24.11. This field will be used to pass extra configuration
+  settings when creating rte_fib.
-- 
2.34.1



Re: How does CI system get updated?

2024-07-26 Thread Patrick Robb
Okay, I understand better now how we ended up with an older mingw64
version. The DPDK docs for the Windows compile direct folks to
https://sourceforge.net/projects/mingw-w64/files/ to get the
prebuilt binaries, but the latest toolchain published there is Mingw64
v8.*, whereas the current version is v11.*. So, when we upgraded to
the "latest" published version, we upgraded to that v8.* from years
ago. If you look at the mingw64 website downloads page
(https://www.mingw-w64.org/downloads/), it directs people over to
winlibs.com to download the prebuilt binaries for v11.

I have replaced the Windows Server 2019 CI VM's old mingw64 binaries
with the new (v11.*) ones downloaded from winlibs.com, and I see that
Stephen's patch now passes the compile test. I can issue a retest for
your series once I am all done making the update for the server 2022
machine too.

I guess this also raises the question of whether the DPDK docs for the
Windows mingw64 compile process should be updated to point to
winlibs.com instead of sourceforge.net (which only hosts the source code).

https://doc.dpdk.org/guides/windows_gsg/build_dpdk.html#option-2-mingw-w64-toolchain

On Thu, Jul 25, 2024 at 3:06 PM Patrick Robb  wrote:
>
> Hi Stephen,
>
> This is a UNH Lab system.
>
> We review our systems for updates once every 4 months. The idea is we
> do it early in each DPDK release's development cycle. So, we update
> Dockerfiles (for container environments), we apply updates where
> needed to persistent systems (for VMs, or baremetal servers).
> Obviously the Mingw version for the windows system was a check we have
> not been doing. Thank you for spotting this and letting us know.
>
> We will apply the update and let you know when it's ready.
>
>
> On Thu, Jul 25, 2024 at 11:03 AM Stephen Hemminger
>  wrote:
> >
> >
> >
> > This warning is due to a very old version of Mingw installed in CI system.
> >
> >  20 line log output for Windows Server 2019 (dpdk_mingw64_compile): 
> > In file included from ..\lib\net/rte_ip.h:21,
> > from ../lib/net/rte_dissect.c:20:
> > C:/mingw64/mingw64/x86_64-w64-mingw32/include/ws2tcpip.h:447:63: note: 
> > expected 'PVOID' {aka 'void *'} but argument is of type 'const uint8_t *' 
> > {aka 'const unsigned char *'}
> > WINSOCK_API_LINKAGE LPCSTR WSAAPI InetNtopA(INT Family, PVOID pAddr, LPSTR 
> > pStringBuf, size_t StringBufSize);
> > ~~^
> > ../lib/net/rte_dissect.c:292:29: error: passing argument 2 of 'inet_ntop' 
> > discards 'const' qualifier from pointer target type 
> > [-Werror=discarded-qualifiers]
> > inet_ntop(AF_INET6, ip6_hdr->dst_addr, dbuf, sizeof(dbuf));
> > ~~~^~
> > In file included from ..\lib\net/rte_ip.h:21,
> > from ../lib/net/rte_dissect.c:20:
> > C:/mingw64/mingw64/x86_64-w64-mingw32/include/ws2tcpip.h:447:63: note: 
> > expected 'PVOID' {aka 'void *'} but argument is of type 'const uint8_t *' 
> > {aka 'const unsigned char *'}
> > WINSOCK_API_LINKAGE LPCSTR WSAAPI InetNtopA(INT Family, PVOID pAddr, LPSTR 
> > pStringBuf, size_t StringBufSize);
> > ~~^
> >
> > It was fixed upstream in Mingw 4 years ago.


Re: [PATCH] doc: announce fib configuration structure changes

2024-07-26 Thread Robin Jarry

Vladimir Medvedkin, Jul 26, 2024 at 18:13:

Announce addition of the flags field into rte_fib_conf structure.

Signed-off-by: Vladimir Medvedkin 


Acked-by: Robin Jarry 



[PATCH v4 0/2] Mac Filter Port to New DTS

2024-07-26 Thread Nicholas Pratte
v4:
  * Refactored test suite to use context manager.
  * Added dependencies for vlan testpmd methods and adjust
addresses.

Nicholas Pratte (2):
  dts: add methods for setting mac and multicast addresses
  dts: mac filter test suite refactored for new dts

 dts/framework/config/conf_yaml_schema.json|   3 +-
 dts/framework/remote_session/testpmd_shell.py |  59 +
 dts/tests/TestSuite_mac_filter.py | 217 ++
 3 files changed, 278 insertions(+), 1 deletion(-)
 create mode 100644 dts/tests/TestSuite_mac_filter.py

-- 
2.44.0



[PATCH v4 1/2] dts: add methods for setting mac and multicast addresses

2024-07-26 Thread Nicholas Pratte
New methods have been added to TestPMDShell in order to support the mac
filter's individual test cases:
 - set_mac_addr
 - set_multicast_mac_addr

set_mac_addr and set_multicast_mac_addr were created for the mac filter test
suite, enabling users to add or remove mac and multicast
addresses based on a boolean 'add or remove' parameter. The success or
failure of each call can be verified if a user deems it necessary.
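
For illustration, here is a minimal usage sketch of the two methods. The port ID
and addresses below are hypothetical, and the shell is assumed to be opened the
same way the mac filter suite does:

    with TestPmdShell(node=self.sut_node) as testpmd:
        # Allow an extra unicast address on port 0 and verify the command output.
        testpmd.set_mac_addr(port_id=0, mac_address="02:00:00:00:00:01", add=True)
        # Remove it again, skipping output verification this time.
        testpmd.set_mac_addr(port_id=0, mac_address="02:00:00:00:00:01", add=False, verify=False)
        # Add a multicast address to port 0's filter; a failure raises
        # InteractiveCommandExecutionError while verify is left at its default.
        testpmd.set_multicast_mac_addr(port_id=0, multi_addr="01:00:5e:00:00:01", add=True)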

Bugzilla ID: 1454
Signed-off-by: Nicholas Pratte 
---
 dts/framework/remote_session/testpmd_shell.py | 59 +++
 1 file changed, 59 insertions(+)

diff --git a/dts/framework/remote_session/testpmd_shell.py 
b/dts/framework/remote_session/testpmd_shell.py
index 8e5a1c084a..64ffb23439 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -765,6 +765,65 @@ def show_port_info(self, port_id: int) -> TestPmdPort:
 
 return TestPmdPort.parse(output)
 
+def set_mac_addr(self, port_id: int, mac_address: str, add: bool, verify: 
bool = True) -> None:
+"""Add or remove a mac address on a given port's Allowlist.
+
+Args:
+port_id: The port ID the mac address is set on.
+mac_address: The mac address to be added to or removed from the 
specified port.
+add: If :data:`True`, add the specified mac address. If 
:data:`False`, remove specified
+mac address.
+verify: If :data:`True`, assert that the 'mac_addr' operation was 
successful. If
:data:`False`, run the command and skip this assertion.
+
+Raises:
+InteractiveCommandExecutionError: If the set mac address operation 
fails.
+"""
+mac_cmd = "add" if add else "remove"
+output = self.send_command(f"mac_addr {mac_cmd} {port_id} 
{mac_address}")
+if "Bad arguments" in output:
+self._logger.debug("Invalid argument provided to mac_addr")
+raise InteractiveCommandExecutionError("Invalid argument provided")
+
+if verify:
+if "mac_addr_cmd error:" in output:
+self._logger.debug(f"Failed to {mac_cmd} {mac_address} on port 
{port_id}")
+raise InteractiveCommandExecutionError(
+f"Failed to {mac_cmd} {mac_address} on port {port_id} 
\n{output}"
+)
+
+def set_multicast_mac_addr(
+self, port_id: int, multi_addr: str, add: bool, verify: bool = True
+) -> None:
+"""Add or remove multicast mac address to a specified port's filter.
+
+Args:
+port_id: The port ID the multicast address is set on.
+multi_addr: The multicast address to be added to the filter.
+add: If :data:`True`, add the specified multicast address to the 
port filter.
+If :data:`False`, remove the specified multicast address from 
the port filter.
+verify: If :data:`True`, assert that the 'mcast_addr' operation 
was successful.
+If :data:`False`, execute the 'mcast_addr' operation and skip 
the assertion.
+
+Raises:
+InteractiveCommandExecutionError: If either the 'add' or 'remove' 
operations fails.
+"""
+mcast_cmd = "add" if add else "remove"
+output = self.send_command(f"mcast_addr {mcast_cmd} {port_id} 
{multi_addr}")
+if "Bad arguments" in output:
+self._logger.debug("Invalid arguments provided to mcast_addr")
+raise InteractiveCommandExecutionError("Invalid argument provided")
+
+if verify:
+if (
+"Invalid multicast_addr" in output
+or f'multicast address {"already" if add else "not"} filtered 
by port' in output
+):
+self._logger.debug(f"Failed to {mcast_cmd} {multi_addr} on 
port {port_id}")
+raise InteractiveCommandExecutionError(
+f"Failed to {mcast_cmd} {multi_addr} on port {port_id} 
\n{output}"
+)
+
 def show_port_stats_all(self) -> list[TestPmdPortStats]:
 """Returns the statistics of all the ports.
 
-- 
2.44.0



[PATCH v4 2/2] dts: mac filter test suite refactored for new dts

2024-07-26 Thread Nicholas Pratte
The mac address filter test suite, whose test cases are based on old
DTS's test cases, has been refactored to interface with the new DTS
framework.

In porting this test suite over to the new framework, some
adjustments were made, namely in the EAL and TestPMD parameters provided
before executing the application. While the original test plan was
referenced, by and large, only for the individual test cases, I'll leave
the parameters the original test plan was asking for below for the sake
of discussion:

--burst=1 --rxpt=0 --rxht=0 --rxwt=0 --txpt=36 --txht=0 --txwt=0
--txfreet=32 --rxfreet=64 --mbcache=250 --portmask=0x3

depends-on: patch-142691 ("dts: add send_packets to test suites and
rework packet addressing")
depends-on: patch-142696 ("dts: add VLAN methods to testpmd shell")

Bugzilla ID: 1454
Signed-off-by: Nicholas Pratte 

---
v2:
 * Refactored the address pool capacity tests to use all available
   octets in the mac address.
 * Change the payload to 'X' characters instead of 'P' characters.
v4:
 * Refactored TestPMD sessions to interface with context manager.
---
 dts/framework/config/conf_yaml_schema.json |   3 +-
 dts/tests/TestSuite_mac_filter.py  | 217 +
 2 files changed, 219 insertions(+), 1 deletion(-)
 create mode 100644 dts/tests/TestSuite_mac_filter.py

diff --git a/dts/framework/config/conf_yaml_schema.json 
b/dts/framework/config/conf_yaml_schema.json
index f02a310bb5..ad1f3757f7 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -187,7 +187,8 @@
   "enum": [
 "hello_world",
 "os_udp",
-"pmd_buffer_scatter"
+"pmd_buffer_scatter",
+"mac_filter"
   ]
 },
 "test_target": {
diff --git a/dts/tests/TestSuite_mac_filter.py 
b/dts/tests/TestSuite_mac_filter.py
new file mode 100644
index 00..9d61eb514d
--- /dev/null
+++ b/dts/tests/TestSuite_mac_filter.py
@@ -0,0 +1,217 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023-2024 University of New Hampshire
+"""Mac address filtering test suite.
+
+This test suite ensures proper and expected behavior of Allowlist filtering 
via mac
+addresses on devices bound to the Poll Mode Driver. If a packet received on a 
device
+contains a mac address not contained within its mac address pool, the packet should
+be dropped. Alternatively, if a packet is received that contains a destination mac
+within the device's address pool, the packet should be accepted and forwarded. This
+behavior should remain consistent across all packets, whether they contain dot1q
+tags or not.
+
+The following test suite assesses behaviors based on the aforementioned logic.
+Additionally, testing is done within the PMD itself to ensure that the mac 
address
+allow list is behaving as expected.
+"""
+
+from scapy.layers.inet import IP  # type: ignore[import-untyped]
+from scapy.layers.l2 import Dot1Q, Ether  # type: ignore[import-untyped]
+from scapy.packet import Raw  # type: ignore[import-untyped]
+
+from framework.exception import InteractiveCommandExecutionError
+from framework.remote_session.testpmd_shell import TestPmdShell
+from framework.test_suite import TestSuite
+
+
+class TestMacFilter(TestSuite):
+"""Mac address allowlist filtering test suite.
+
+Configure mac address filtering on a given port, and test the port's 
filtering behavior
+using both a given port's hardware address as well as dummy addresses. If 
a port accepts
+a packet that is not contained within its mac address allowlist, then a 
given test case
+fails. Alternatively, if a port drops a packet that is designated within 
its mac address
+allowlist, a given test case will fail.
+
+Moreover, a given port should demonstrate proper behavior when bound to 
the Poll Mode
+Driver. A port should not have a mac address allowlist that exceeds its 
designated size.
+A port's default hardware address should not be removed from its address 
pool, and invalid
+addresses should not be included in the allowlist. If a port abides by the 
above rules, the
+test case passes.
+"""
+
+def send_packet_and_verify(
+self,
+mac_address: str,
+add_vlan: bool = False,
+should_receive: bool = True,
+) -> None:
+"""Generate, send, and verify a packet based on specified parameters.
+
+Test cases within this suite utilize this method to create, send, and 
verify
+packets based on criteria relating to the packet's destination mac 
address,
+vlan tag, and whether or not the packet should be received. Packets
+are verified using an inserted payload. Assuming the test case expects 
to
+receive a specified packet, if the list of received packets contains 
this
+payload within any of its packets, the test case passes. 
Alternatively, if
+the designed packet should not be received, and the packet payload is 
not,
+  


[PATCH v4 2/4] config/arm: adds Arm Neoverse N3 SoC

2024-07-26 Thread Wathsala Vithanage
Add Arm Neoverse N3 part number to build configuration.

Signed-off-by: Wathsala Vithanage 
Reviewed-by: Dhruv Tripathi 

---
 config/arm/meson.build | 31 ++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/config/arm/meson.build b/config/arm/meson.build
index 012935d5d7..acf8e933ab 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -116,6 +116,27 @@ part_number_config_arm = {
 ['RTE_MAX_LCORE', 144],
 ['RTE_MAX_NUMA_NODES', 2]
 ]
+},
+'0xd8e': {
+# Only when -march=armv9-a+wfxt is used will the WFET
+# feature be compiled with armv9 instructions.
+# However, +wfxt is not supported by GCC at the moment.
+# Although armv9-a is the fitting version of Arm ISA for
+# Neoverse N3, it cannot be used when enabling wfxt for
+# the above reasons.
+# The workaround for this is to use armv8.7-a, which
+# doesn't require +wfxt for binutils version 2.36 or
+# greater.
+'march': 'armv8.7-a',
+'march_features': ['sve2'],
+'fallback_march': 'armv8.5-a',
+'flags': [
+['RTE_MACHINE', '"neoverse-n3"'],
+['RTE_ARM_FEATURE_ATOMICS', true],
+['RTE_ARM_FEATURE_WFXT', true],
+['RTE_MAX_LCORE', 192],
+['RTE_MAX_NUMA_NODES', 2]
+]
 }
 }
 implementer_arm = {
@@ -572,6 +593,13 @@ soc_n2 = {
 'numa': false
 }
 
+soc_n3 = {
+'description': 'Arm Neoverse N3',
+'implementer': '0x41',
+'part_number': '0xd8e',
+'numa': false
+}
+
 soc_odyssey = {
 'description': 'Marvell Odyssey',
 'implementer': '0x41',
@@ -699,6 +727,7 @@ socs = {
 'kunpeng930': soc_kunpeng930,
 'n1sdp': soc_n1sdp,
 'n2': soc_n2,
+'n3': soc_n3,
 'odyssey' : soc_odyssey,
 'stingray': soc_stingray,
 'thunderx2': soc_thunderx2,
@@ -852,7 +881,7 @@ if update_flags
 if part_number_config.get('force_march', false)
 candidate_march = part_number_config['march']
 else
-supported_marchs = ['armv9-a', 'armv8.6-a', 'armv8.5-a', 
'armv8.4-a', 'armv8.3-a',
+supported_marchs = ['armv9-a', 'armv8.7-a', 'armv8.6-a', 
'armv8.5-a', 'armv8.4-a', 'armv8.3-a',
 'armv8.2-a', 'armv8.1-a', 'armv8-a']
 check_compiler_support = false
 foreach supported_march: supported_marchs
-- 
2.34.1



[PATCH v4 3/4] eal: add Arm WFET in power management intrinsics

2024-07-26 Thread Wathsala Vithanage
Wait for event with timeout (WFET) puts the CPU in a low power
mode, where it stays until an event is signalled (SEV), an
exclusive monitor is lost, or the timeout expires.
WFET is enabled selectively by checking FEAT_WFxT in the Linux
auxiliary vector. If FEAT_WFxT is not available, power management
falls back to WFE.
WFE is available on all the Arm platforms supported by DPDK.
Therefore, the RTE_ARM_USE_WFE macro is not required to enable
the WFE feature for PMD power monitoring.
RTE_ARM_USE_WFE is used at build time to use the WFE instruction
where applicable in the code at the developer's discretion, rather
than as an indicator of the instruction's availability.

Signed-off-by: Wathsala Vithanage 
Reviewed-by: Dhruv Tripathi 
Reviewed-by: Honnappa Nagarahalli 
Reviewed-by: Jack Bond-Preston 
Reviewed-by: Nick Connolly 
Reviewed-by: Vinod Krishna 

---
 .mailmap  |  1 +
 app/test/test_cpuflags.c  |  3 +++
 lib/eal/arm/include/rte_cpuflags_64.h |  3 +++
 lib/eal/arm/include/rte_pause_64.h| 16 +--
 lib/eal/arm/rte_cpuflags.c|  1 +
 lib/eal/arm/rte_power_intrinsics.c| 39 ++-
 6 files changed, 49 insertions(+), 14 deletions(-)

diff --git a/.mailmap b/.mailmap
index 9c28b74655..a5c49d3702 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1540,6 +1540,7 @@ Vincent Li 
 Vincent S. Cojot 
 Vinh Tran 
 Vipin Padmam Ramesh 
+Vinod Krishna 
 Vipin Varghese  
 Vipul Ashri 
 Visa Hankala 
diff --git a/app/test/test_cpuflags.c b/app/test/test_cpuflags.c
index a0ff74720c..22ab4dff0a 100644
--- a/app/test/test_cpuflags.c
+++ b/app/test/test_cpuflags.c
@@ -156,6 +156,9 @@ test_cpuflags(void)
 
printf("Check for SVEBF16:\t");
CHECK_FOR_FLAG(RTE_CPUFLAG_SVEBF16);
+
+   printf("Check for WFXT:\t");
+   CHECK_FOR_FLAG(RTE_CPUFLAG_WFXT);
 #endif
 
 #if defined(RTE_ARCH_X86_64) || defined(RTE_ARCH_I686)
diff --git a/lib/eal/arm/include/rte_cpuflags_64.h 
b/lib/eal/arm/include/rte_cpuflags_64.h
index afe70209c3..993d980a02 100644
--- a/lib/eal/arm/include/rte_cpuflags_64.h
+++ b/lib/eal/arm/include/rte_cpuflags_64.h
@@ -36,6 +36,9 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_SVEF64MM,
RTE_CPUFLAG_SVEBF16,
RTE_CPUFLAG_AARCH64,
+
+   /* WFET and WFIT instructions */
+   RTE_CPUFLAG_WFXT,
 };
 
 #include "generic/rte_cpuflags.h"
diff --git a/lib/eal/arm/include/rte_pause_64.h 
b/lib/eal/arm/include/rte_pause_64.h
index 8224f09ba7..809403bffa 100644
--- a/lib/eal/arm/include/rte_pause_64.h
+++ b/lib/eal/arm/include/rte_pause_64.h
@@ -24,15 +24,27 @@ static inline void rte_pause(void)
asm volatile("yield" ::: "memory");
 }
 
-/* Send a local event to quit WFE. */
+/* Send a local event to quit WFE/WFxT. */
 #define __RTE_ARM_SEVL() { asm volatile("sevl" : : : "memory"); }
 
-/* Send a global event to quit WFE for all cores. */
+/* Send a global event to quit WFE/WFxT for all cores. */
 #define __RTE_ARM_SEV() { asm volatile("sev" : : : "memory"); }
 
 /* Put processor into low power WFE(Wait For Event) state. */
 #define __RTE_ARM_WFE() { asm volatile("wfe" : : : "memory"); }
 
+/* Put processor into low power WFET (WFE with Timeout) state. */
+#ifdef RTE_ARM_FEATURE_WFXT
+#define __RTE_ARM_WFET(t) {  \
+   asm volatile("wfet %x[to]"\
+   : \
+   : [to] "r" (t)\
+   : "memory");  \
+   }
+#else
+#define __RTE_ARM_WFET(t) { RTE_SET_USED(t); }
+#endif
+
 /*
  * Atomic exclusive load from addr, it returns the 8-bit content of
  * *addr while making it 'monitored', when it is written by someone
diff --git a/lib/eal/arm/rte_cpuflags.c b/lib/eal/arm/rte_cpuflags.c
index 29884c285f..88e10c6da0 100644
--- a/lib/eal/arm/rte_cpuflags.c
+++ b/lib/eal/arm/rte_cpuflags.c
@@ -115,6 +115,7 @@ const struct feature_entry rte_cpu_feature_table[] = {
FEAT_DEF(SVEF32MM,  REG_HWCAP2,   10)
FEAT_DEF(SVEF64MM,  REG_HWCAP2,   11)
FEAT_DEF(SVEBF16,   REG_HWCAP2,   12)
+   FEAT_DEF(WFXT,  REG_HWCAP2,   31)
FEAT_DEF(AARCH64,   REG_PLATFORM,  0)
 };
 #endif /* RTE_ARCH */
diff --git a/lib/eal/arm/rte_power_intrinsics.c 
b/lib/eal/arm/rte_power_intrinsics.c
index b0056cce8b..6475bbca04 100644
--- a/lib/eal/arm/rte_power_intrinsics.c
+++ b/lib/eal/arm/rte_power_intrinsics.c
@@ -4,19 +4,32 @@
 
 #include 
 
+#include "rte_cpuflags.h"
 #include "rte_power_intrinsics.h"
 
 /**
- * This function uses WFE instruction to make lcore suspend
+ *  Set wfet_en if WFET is supported
+ */
+#ifdef RTE_ARCH_64
+static uint8_t wfet_en;
+#endif /* RTE_ARCH_64 */
+
+RTE_INIT(rte_power_intrinsics_init)
+{
+#ifdef RTE_ARCH_64
+   if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_WFXT))
+   wfet_en = 1;
+#endif /* RTE_ARCH_64 */
+}
+
+/**
+ * This function uses WFE/WFET instruction to make lcore sus

[PATCH v4 4/4] eal: describe Arm CPU features including WFXT

2024-07-26 Thread Wathsala Vithanage
Add descriptive comments to each Arm feature listed in rte_cpu_flag_t.

Signed-off-by: Wathsala Vithanage 
Reviewed-by: Honnappa Nagarahalli 
Reviewed-by: Dhruv Tripathi 

---
 lib/eal/arm/include/rte_cpuflags_64.h | 48 +++
 1 file changed, 48 insertions(+)

diff --git a/lib/eal/arm/include/rte_cpuflags_64.h 
b/lib/eal/arm/include/rte_cpuflags_64.h
index 993d980a02..eed67bf6ec 100644
--- a/lib/eal/arm/include/rte_cpuflags_64.h
+++ b/lib/eal/arm/include/rte_cpuflags_64.h
@@ -13,28 +13,76 @@ extern "C" {
  * Enumeration of all CPU features supported
  */
 enum rte_cpu_flag_t {
+   /* Floating point capability */
RTE_CPUFLAG_FP = 0,
+
+   /* Arm Neon extension */
RTE_CPUFLAG_NEON,
+
+   /* Generic timer event stream */
RTE_CPUFLAG_EVTSTRM,
+
+   /* AES instructions */
RTE_CPUFLAG_AES,
+
+   /* Polynomial multiply long instruction */
RTE_CPUFLAG_PMULL,
+
+   /* SHA1 instructions */
RTE_CPUFLAG_SHA1,
+
+   /* SHA2 instructions */
RTE_CPUFLAG_SHA2,
+
+   /* CRC32 instruction */
RTE_CPUFLAG_CRC32,
+
+   /*
+* LDADD, LDCLR, LDEOR, LDSET, LDSMAX, LDSMIN, LDUMAX, LDUMIN, CAS,
+* CASP, and SWP instructions
+*/
RTE_CPUFLAG_ATOMICS,
+
+   /* Arm SVE extension */
RTE_CPUFLAG_SVE,
+
+   /* Arm SVE2 extension */
RTE_CPUFLAG_SVE2,
+
+   /* SVE-AES instructions */
RTE_CPUFLAG_SVEAES,
+
+   /* SVE-PMULL instruction */
RTE_CPUFLAG_SVEPMULL,
+
+   /* SVE bit permute instructions */
RTE_CPUFLAG_SVEBITPERM,
+
+   /* SVE-SHA3 instructions */
RTE_CPUFLAG_SVESHA3,
+
+   /* SVE-SM4 instructions */
RTE_CPUFLAG_SVESM4,
+
+   /* CFINV, RMIF, SETF16, SETF8, AXFLAG, and XAFLAG instructions */
RTE_CPUFLAG_FLAGM2,
+
+   /* FRINT32Z, FRINT32X, FRINT64Z, and FRINT64X instructions */
RTE_CPUFLAG_FRINT,
+
+   /* SVE Int8 matrix multiplication instructions */
RTE_CPUFLAG_SVEI8MM,
+
+   /* SVE FP32 floating-point matrix multiplication instructions */
RTE_CPUFLAG_SVEF32MM,
+
+   /* SVE FP64 floating-point matrix multiplication instructions */
RTE_CPUFLAG_SVEF64MM,
+
+   /* SVE BFloat16 instructions */
RTE_CPUFLAG_SVEBF16,
+
+   /* 64 bit execution state of the Arm architecture */
RTE_CPUFLAG_AARCH64,
 
/* WFET and WFIT instructions */
-- 
2.34.1



[PATCH v4 1/4] eal: expand the availability of WFE and related instructions

2024-07-26 Thread Wathsala Vithanage
The availability of the __RTE_ARM_WFE, __RTE_ARM_SEV, __RTE_ARM_SEVL,
and __RTE_ARM_LOAD_EXC_* macros for other applications, such as
PMD power management, should not depend on whether these
instructions are used in the rte_wait_until_equal_N functions.
Therefore, this patch moves these macros out from under the control
of the RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED macro.

Signed-off-by: Wathsala Vithanage 
Reviewed-by: Dhruv Tripathi 

---
 .mailmap   | 1 +
 lib/eal/arm/include/rte_pause_64.h | 4 ++--
 lib/eal/arm/rte_cpuflags.c | 4 ++--
 lib/eal/arm/rte_power_intrinsics.c | 9 -
 4 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/.mailmap b/.mailmap
index f1e64286a1..9c28b74655 100644
--- a/.mailmap
+++ b/.mailmap
@@ -338,6 +338,7 @@ Dexia Li 
 Dexuan Cui 
 Dharmik Thakkar  
 Dheemanth Mallikarjun 
+Dhruv Tripathi 
 Diana Wang 
 Didier Pallard 
 Dilshod Urazov 
diff --git a/lib/eal/arm/include/rte_pause_64.h 
b/lib/eal/arm/include/rte_pause_64.h
index 9e2dbf3531..8224f09ba7 100644
--- a/lib/eal/arm/include/rte_pause_64.h
+++ b/lib/eal/arm/include/rte_pause_64.h
@@ -24,8 +24,6 @@ static inline void rte_pause(void)
asm volatile("yield" ::: "memory");
 }
 
-#ifdef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
-
 /* Send a local event to quit WFE. */
 #define __RTE_ARM_SEVL() { asm volatile("sevl" : : : "memory"); }
 
@@ -148,6 +146,8 @@ static inline void rte_pause(void)
__RTE_ARM_LOAD_EXC_128(src, dst, memorder) \
 }
 
+#ifdef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
+
 static __rte_always_inline void
 rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
rte_memory_order memorder)
diff --git a/lib/eal/arm/rte_cpuflags.c b/lib/eal/arm/rte_cpuflags.c
index 7ba4f8ba97..29884c285f 100644
--- a/lib/eal/arm/rte_cpuflags.c
+++ b/lib/eal/arm/rte_cpuflags.c
@@ -163,7 +163,7 @@ void
 rte_cpu_get_intrinsics_support(struct rte_cpu_intrinsics *intrinsics)
 {
memset(intrinsics, 0, sizeof(*intrinsics));
-#ifdef RTE_ARM_USE_WFE
+#ifdef RTE_ARCH_64
intrinsics->power_monitor = 1;
-#endif
+#endif /* RTE_ARCH_64 */
 }
diff --git a/lib/eal/arm/rte_power_intrinsics.c 
b/lib/eal/arm/rte_power_intrinsics.c
index f54cf59e80..b0056cce8b 100644
--- a/lib/eal/arm/rte_power_intrinsics.c
+++ b/lib/eal/arm/rte_power_intrinsics.c
@@ -17,7 +17,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 {
RTE_SET_USED(tsc_timestamp);
 
-#ifdef RTE_ARM_USE_WFE
+#ifdef RTE_ARCH_64
const unsigned int lcore_id = rte_lcore_id();
uint64_t cur_value;
 
@@ -57,7 +57,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
RTE_SET_USED(pmc);
 
return -ENOTSUP;
-#endif
+#endif /* RTE_ARCH_64 */
 }
 
 /**
@@ -81,13 +81,12 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 {
RTE_SET_USED(lcore_id);
 
-#ifdef RTE_ARM_USE_WFE
+#ifdef RTE_ARCH_64
__RTE_ARM_SEV()
-
return 0;
 #else
return -ENOTSUP;
-#endif
+#endif /* RTE_ARCH_64 */
 }
 
 int
-- 
2.34.1



Re: How does CI system get updated?

2024-07-26 Thread Stephen Hemminger
On Fri, 26 Jul 2024 12:34:25 -0400
Patrick Robb  wrote:

> Okay I understand better now how we ended up with an older mingw64
> version. The DPDK Docs for windows compile direct folks over to
> (https://sourceforge.net/projects/mingw-w64/files/) to get the
> prebuilt binaries, but the latest toolchain published there is Mingw64
> v8.*, whereas the current version is v11.*. So, when we upgraded to
> the "latest" published version, we upgraded to that v8.* from years
> ago. If you look at the mingw64 website downloads page
> (https://www.mingw-w64.org/downloads/), it directs people over to
> winlibs.com to download the prebuilt binaries for v11.
> 
> I have replaced the Windows Server 2019 CI VM's old mingw64 binaries
> with the new (v11.*) ones downloaded from winlibs.com, and I see that
> Stephen's patch now passes the compile test. I can issue a retest for
> your series once I am all done making the update for the server 2022
> machine too.
> 
> I guess this also raises the question of whether the DPDK docs for the
> windows mingw64 compile process should be updated to point to
> winlibs.com instead of sourceforge.net (only has the source code).
> 
> https://doc.dpdk.org/guides/windows_gsg/build_dpdk.html#option-2-mingw-w64-toolchain

Yes, projects move; we need to keep links up to date.


RE: [PATCH] eal: add support for TRNG with Arm RNG feature

2024-07-26 Thread Shunzhi Wen
> I'm missing a rationale here. Why is this useful?
>
This creates an API for HW that supports cryptographically secure random number 
generation.

> If you want to extend  with a cryptographically secure
> random number generator, that's fine.
>
> To have an API that's only available on certain ARM CPUs is not.
>
> NAK
>
The primary goal of this patch is to provide a direct interface to HW,
instead of letting the kernel handle it. This is not an API just for Arm
CPUs, as other vendors also have similar HW features. For instance,
Intel and AMD have support for the x86 RDRAND and RDSEED instructions, and thus
can easily implement this API.

> A new function should be called something with "secure", rather than "true"
> (which is a bit silly, since we might well live in a completely deterministic
> universe). "secure" would more clearly communicate the intent, and also
> doesn't imply any particular implementation.
>
Regarding the terminology, “cryptographically secure random number”
is a more accurate and meaningful term than “true random number.”
This change will be made in the description, and the function name will
be replaced with rte_csrand.



Re: [PATCH v3 1/4] dts: add send_packets to test suites and rework packet addressing

2024-07-26 Thread Nicholas Pratte
I'll make sure to look over the other parts of this series and leave
reviews at some point next week, but I prioritized this since I will
be using this patch at some point in my GRE suites.


> diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
> index 694b2eba65..0b678ed62d 100644
> --- a/dts/framework/test_suite.py
> +++ b/dts/framework/test_suite.py
> @@ -199,7 +199,7 @@ def send_packet_and_capture(
>  Returns:
>  A list of received packets.
>  """
> -packet = self._adjust_addresses(packet)
> +packet = self._adjust_addresses([packet])[0]
>  return self.tg_node.send_packet_and_capture(
>  packet,
>  self._tg_port_egress,
> @@ -208,6 +208,18 @@ def send_packet_and_capture(
>  duration,
>  )
>
> +def send_packets(
> +self,
> +packets: list[Packet],
> +) -> None:
> +"""Send packets using the traffic generator and do not capture 
> received traffic.
> +
> +Args:
> +packets: Packets to send.
> +"""
> +packets = self._adjust_addresses(packets)
> +self.tg_node.send_packets(packets, self._tg_port_egress)
> +
>  def get_expected_packet(self, packet: Packet) -> Packet:
>  """Inject the proper L2/L3 addresses into `packet`.
>
> @@ -219,39 +231,59 @@ def get_expected_packet(self, packet: Packet) -> Packet:
>  """
>  return self._adjust_addresses(packet, expected=True)
>
> -def _adjust_addresses(self, packet: Packet, expected: bool = False) -> 
> Packet:
> +def _adjust_addresses(self, packets: list[Packet], expected: bool = 
> False) -> list[Packet]:
>  """L2 and L3 address additions in both directions.
>
> +Only missing addresses are added to packets, existing addressed will 
> not be overridden.

addressed should be addresses. Only saw this because of Chrome's
built-in grammar correction.

> +
>  Assumptions:
>  Two links between SUT and TG, one link is TG -> SUT, the other 
> SUT -> TG.
>
>  Args:
> -packet: The packet to modify.
> +packets: The packets to modify.
>  expected: If :data:`True`, the direction is SUT -> TG,
>  otherwise the direction is TG -> SUT.
>  """
> -if expected:
> -# The packet enters the TG from SUT
> -# update l2 addresses
> -packet.src = self._sut_port_egress.mac_address
> -packet.dst = self._tg_port_ingress.mac_address
> +ret_packets = []
> +for packet in packets:
> +default_pkt_src = type(packet)().src
> +default_pkt_dst = type(packet)().dst

This is really just a probing question for my sake, but what is the
difference between the solution you have above type(packet)().src and
Ether().src? Is there a preferred means of doing this?

> +default_pkt_payload_src = IP().src if hasattr(packet.payload, 
> "src") else None
> +default_pkt_payload_dst = IP().dst if hasattr(packet.payload, 
> "dst") else None
> +# If `expected` is :data:`True`, the packet enters the TG from 
> SUT, otherwise the
> +# packet leaves the TG towards the SUT
>
> -# The packet is routed from TG egress to TG ingress
> -# update l3 addresses
> -packet.payload.src = self._tg_ip_address_egress.ip.exploded
> -packet.payload.dst = self._tg_ip_address_ingress.ip.exploded

This is where it gets a little tricky. There will be circumstances,
albeit probably infrequently, where a user-created packet has more
than one IP layer, such as the ones I am using in the ipgre and nvgre
test suites that I am writing. In these cases, you need to specify an
index of the IP layer you want to modify, otherwise it will modify the
outermost IP layer in the packet (the IP layer outside the GRE layer.
See my previous comment for an example packet). Should be pretty easy
to fix, you just need to check if a packet contains a GRE layer, and
if it does, modify the packet by doing something like
packet[IP][1].src = self._tg_ip_address_egress.ip.exploded.
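
For example, a small scapy sketch of what addressing the inner IP header could
look like (the addresses are hypothetical; getlayer(IP, 2) is one way to select
the second, i.e. encapsulated, IP layer):

    from scapy.layers.inet import IP
    from scapy.layers.l2 import Ether, GRE

    pkt = Ether() / IP(dst="10.0.0.1") / GRE() / IP() / b"payload"
    inner_ip = pkt.getlayer(IP, 2)   # the IP header inside the GRE layer
    inner_ip.src = "192.168.1.1"     # modifies the encapsulated header in place
    inner_ip.dst = "192.168.1.2"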

> -else:
> -# The packet leaves TG towards SUT
>  # update l2 addresses
> -packet.src = self._tg_port_egress.mac_address
> -packet.dst = self._sut_port_ingress.mac_address

You wouldn't need to make changes to how Ether addresses get allocated
if accounting for GRE as described above, since I'm pretty sure there
aren't really circumstances where packets would have more than one
Ethernet header, at least not that I've seen (GRE packets only have
one).

> +if packet.src == default_pkt_src:
> +packet.src = (
> +self._sut_port_egress.mac_address
> +if expected
> +else self._tg_port_egress.mac_address
> +)
> +if packet.dst 

Re: [PATCH] eal: add support for TRNG with Arm RNG feature

2024-07-26 Thread Stephen Hemminger
On Fri, 26 Jul 2024 18:34:44 +
Shunzhi Wen  wrote:

> > I'm missing a rationale here. Why is this useful?
> >  
> This creates an API for HW that supports cryptographically secure random 
> number generation.
> 
> > If you want to extend  with a cryptographically secure
> > random number generator, that's fine.
> >
> > To have an API that's only available on certain ARM CPUs is not.
> >
> > NAK
> >  
> The primary goal of this patch is to provide a direct interface to HW,
> instead of letting kernel handle it. This is not an API just for Arm
> CPUs, as other vendors also have similar HW features. For instance,
> Intel and AMD has support for x86 RDRAND and RDSEED instructions, thus
> can easily implement this API.
> 
> > A new function should be called something with "secure", rather than "true"
> > (which is a bit silly, since we might well live in a completely 
> > deterministic
> > universe). "secure" would more clearly communicate the intent, and also
> > doesn't imply any particular implementation.
> >  
> Regarding the terminology, “cryptographically secure random number”
> is a more accurate and meaningful term than “true random number.”
> This change will be made in the description, and the function name will
> be replaced with rte_csrand.

If you decide on rte_csrand(), it should fall back to get_random or get_entropy.

Note: many people don't fully trust RDRAND or ARM CPU instructions. 
That is why the Linux entropy calls do not use only the HW instructions.



Re: [PATCH v3 1/4] dts: add send_packets to test suites and rework packet addressing

2024-07-26 Thread Jeremy Spewock
Thanks for the comments, I just had one clarifying question about
them, but otherwise I will address them in the next version.

On Fri, Jul 26, 2024 at 3:00 PM Nicholas Pratte  wrote:
>
> I'll make sure to look over the other parts of this series and leave
> reviews at some point next week, but I prioritized this since I will
> be using this patch at some point in my GRE suites.
>
>
> > diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
> > index 694b2eba65..0b678ed62d 100644
> > --- a/dts/framework/test_suite.py
> > +++ b/dts/framework/test_suite.py
> > @@ -199,7 +199,7 @@ def send_packet_and_capture(
> >  Returns:
> >  A list of received packets.
> >  """
> > -packet = self._adjust_addresses(packet)
> > +packet = self._adjust_addresses([packet])[0]
> >  return self.tg_node.send_packet_and_capture(
> >  packet,
> >  self._tg_port_egress,
> > @@ -208,6 +208,18 @@ def send_packet_and_capture(
> >  duration,
> >  )
> >
> > +def send_packets(
> > +self,
> > +packets: list[Packet],
> > +) -> None:
> > +"""Send packets using the traffic generator and do not capture 
> > received traffic.
> > +
> > +Args:
> > +packets: Packets to send.
> > +"""
> > +packets = self._adjust_addresses(packets)
> > +self.tg_node.send_packets(packets, self._tg_port_egress)
> > +
> >  def get_expected_packet(self, packet: Packet) -> Packet:
> >  """Inject the proper L2/L3 addresses into `packet`.
> >
> > @@ -219,39 +231,59 @@ def get_expected_packet(self, packet: Packet) -> 
> > Packet:
> >  """
> >  return self._adjust_addresses(packet, expected=True)
> >
> > -def _adjust_addresses(self, packet: Packet, expected: bool = False) -> 
> > Packet:
> > +def _adjust_addresses(self, packets: list[Packet], expected: bool = 
> > False) -> list[Packet]:
> >  """L2 and L3 address additions in both directions.
> >
> > +Only missing addresses are added to packets, existing addressed 
> > will not be overridden.
>
> addressed should be addresses. Only saw this because of Chrome's
> built-in grammar correction.

Good catch.

>
> > +
> >  Assumptions:
> >  Two links between SUT and TG, one link is TG -> SUT, the other 
> > SUT -> TG.
> >
> >  Args:
> > -packet: The packet to modify.
> > +packets: The packets to modify.
> >  expected: If :data:`True`, the direction is SUT -> TG,
> >  otherwise the direction is TG -> SUT.
> >  """
> > -if expected:
> > -# The packet enters the TG from SUT
> > -# update l2 addresses
> > -packet.src = self._sut_port_egress.mac_address
> > -packet.dst = self._tg_port_ingress.mac_address
> > +ret_packets = []
> > +for packet in packets:
> > +default_pkt_src = type(packet)().src
> > +default_pkt_dst = type(packet)().dst
>
> This is really just a probing question for my sake, but what is the
> difference between the solution you have above type(packet)().src and
> Ether().src? Is there a preferred means of doing this?

There isn't really a functional difference at all under the assumption
that every packet we send will start with an Ethernet header. This
obviously isn't an unreasonable assumption to make, so maybe I was
reaching for flexibility that isn't really needed here by making it
work with any theoretical first layer that has a source address. I
wanted to do the same thing for the payload, but that causes issues
when the following layer with an address isn't the very next layer
after Ether.
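
As a quick sketch of that equivalence (assuming, as above, that every packet
starts with an Ethernet header):

    from scapy.layers.inet import IP
    from scapy.layers.l2 import Ether

    packet = Ether() / IP() / b"x"
    # type(packet) is Ether here, so both expressions build a fresh Ether()
    # and resolve to the same default source address; the type(packet)() form
    # simply avoids hard-coding Ether as the first layer.
    default_src_generic = type(packet)().src
    default_src_explicit = Ether().src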

>
> > +default_pkt_payload_src = IP().src if hasattr(packet.payload, 
> > "src") else None
> > +default_pkt_payload_dst = IP().dst if hasattr(packet.payload, 
> > "dst") else None
> > +# If `expected` is :data:`True`, the packet enters the TG from 
> > SUT, otherwise the
> > +# packet leaves the TG towards the SUT
> >
> > -# The packet is routed from TG egress to TG ingress
> > -# update l3 addresses
> > -packet.payload.src = self._tg_ip_address_egress.ip.exploded
> > -packet.payload.dst = self._tg_ip_address_ingress.ip.exploded
>
> This is where it gets a little tricky. There will be circumstances,
> albeit probably infrequently, where a user-created packet has more
> than one IP layer, such as the ones I am using in the ipgre and nvgre
> test suites that I am writing. In these cases, you need to specify an
> index of the IP layer you want to modify, otherwise it will modify the
> outermost IP layer in the packet (the IP layer outside the GRE layer.
> See my previous comment for an example packet). Should be pretty easy
> to fix, you just need to check if a packet contains an GRE layer, and
> if it does, modify the 

Re: [PATCH v8 1/3] dts: add functions to testpmd shell

2024-07-26 Thread Jeremy Spewock
On Wed, Jul 24, 2024 at 2:32 PM Dean Marx  wrote:
>
> added set promisc, set verbose, and port stop
> commands to testpmd shell.
>
> Signed-off-by: Dean Marx 

Reviewed-by: Jeremy Spewock 


Re: [PATCH v8 3/3] dts: queue suite conf schema

2024-07-26 Thread Jeremy Spewock
On Wed, Jul 24, 2024 at 2:32 PM Dean Marx  wrote:
>
> Configuration schema for the queue_start_stop suite.
>
> Signed-off-by: Dean Marx 

Reviewed-by: Jeremy Spewock 


Re: [PATCH v8 2/3] dts: initial queue start/stop suite implementation

2024-07-26 Thread Jeremy Spewock
On Wed, Jul 24, 2024 at 2:32 PM Dean Marx  wrote:
>
> This suite tests the ability of the Poll Mode Driver to enable
> and disable Rx/Tx queues on a port.
>
> Signed-off-by: Dean Marx 

Reviewed-by: Jeremy Spewock 


Re: [PATCH v5 2/3] dts: dynamic config conf schema

2024-07-26 Thread Jeremy Spewock
On Wed, Jul 24, 2024 at 3:21 PM Dean Marx  wrote:
>
> configuration schema to run dynamic configuration test suite.
>
> Signed-off-by: Dean Marx 

Reviewed-by: Jeremy Spewock 


Re: [PATCH v5 3/3] dts: dynamic config test suite

2024-07-26 Thread Jeremy Spewock
On Wed, Jul 24, 2024 at 3:21 PM Dean Marx  wrote:
>
> Suite for testing ability of Poll Mode Driver to turn promiscuous
> mode on/off, allmulticast mode on/off, and show expected behavior
> when sending packets with known, unknown, broadcast, and multicast
> destination MAC addresses.
>
> Depends-on: patch-1142113 ("add send_packets to test suites and rework
> packet addressing")
>
> Signed-off-by: Dean Marx 

Reviewed-by: Jeremy Spewock 


[PATCH] examples/l3fwd: fix read beyond array boundaries

2024-07-26 Thread Konstantin Ananyev
From: Konstantin Ananyev 

ASAN report:
ERROR: AddressSanitizer: unknown-crash on address 0x7ef92e32 at pc 
0x0053d1e9 bp 0x7ef92c00 sp 0x7ef92bf8
READ of size 16 at 0x7ef92e32 thread T0
#0 0x53d1e8 in _mm_loadu_si128 
/usr/lib64/gcc/x86_64-suse-linux/11/include/emmintrin.h:703
#1 0x53d1e8 in send_packets_multi ../examples/l3fwd/l3fwd_sse.h:125
#2 0x53d1e8 in acl_send_packets ../examples/l3fwd/l3fwd_acl.c:1048
#3 0x53ec18 in acl_main_loop ../examples/l3fwd/l3fwd_acl.c:1127
#4 0x12151eb in rte_eal_mp_remote_launch 
../lib/eal/common/eal_common_launch.c:83
#5 0x5bf2df in main ../examples/l3fwd/main.c:1647
#6 0x7f6d42a0d2bc in __libc_start_main (/lib64/libc.so.6+0x352bc)
#7 0x527499 in _start 
(/home/kananyev/dpdk-l3fwd-acl/x86_64-native-linuxapp-gcc-dbg-b1/examples/dpdk-l3fwd+0x527499)

The reason is that send_packets_multi() uses 16B loads to access the
input dst_port[] and might read beyond the array boundaries.
Right now, it doesn't cause any real issue - junk values are ignored, and
inside l3fwd we always allocate the dst_port[] array on the stack, so
memory beyond it is always available.
Anyway, it probably needs to be fixed.
The patch below simply allocates extra space for dst_port[], so
send_packets_multi() will never read beyond its boundaries.

Probably a better fix would be to change send_packets_multi()
itself to avoid access beyond 'nb_rx' entries.

Bugzilla ID: 1502
Fixes: 94c54b4158d5 ("examples/l3fwd: rework exact-match")
Cc: sta...@dpdk.org

Signed-off-by: Konstantin Ananyev 
---
 examples/l3fwd/l3fwd_acl.c   | 2 +-
 examples/l3fwd/l3fwd_altivec.h   | 6 +-
 examples/l3fwd/l3fwd_common.h| 7 +++
 examples/l3fwd/l3fwd_em_hlm.h| 2 +-
 examples/l3fwd/l3fwd_em_sequential.h | 2 +-
 examples/l3fwd/l3fwd_fib.c   | 2 +-
 examples/l3fwd/l3fwd_lpm_altivec.h   | 2 +-
 examples/l3fwd/l3fwd_lpm_neon.h  | 2 +-
 examples/l3fwd/l3fwd_lpm_sse.h   | 2 +-
 examples/l3fwd/l3fwd_neon.h  | 6 +-
 examples/l3fwd/l3fwd_sse.h   | 6 +-
 11 files changed, 29 insertions(+), 10 deletions(-)

diff --git a/examples/l3fwd/l3fwd_acl.c b/examples/l3fwd/l3fwd_acl.c
index b635011ef7..baa01e6dde 100644
--- a/examples/l3fwd/l3fwd_acl.c
+++ b/examples/l3fwd/l3fwd_acl.c
@@ -1056,7 +1056,7 @@ int
 acl_main_loop(__rte_unused void *dummy)
 {
struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
-   uint16_t hops[MAX_PKT_BURST];
+   uint16_t hops[SENDM_PORT_OVERHEAD(MAX_PKT_BURST)];
unsigned int lcore_id;
uint64_t prev_tsc, diff_tsc, cur_tsc;
int i, nb_rx;
diff --git a/examples/l3fwd/l3fwd_altivec.h b/examples/l3fwd/l3fwd_altivec.h
index e45e138e59..b91a6b5587 100644
--- a/examples/l3fwd/l3fwd_altivec.h
+++ b/examples/l3fwd/l3fwd_altivec.h
@@ -11,6 +11,9 @@
 #include "altivec/port_group.h"
 #include "l3fwd_common.h"
 
+#undef SENDM_PORT_OVERHEAD
+#define SENDM_PORT_OVERHEAD(x) ((x) + 2 * FWDSTEP)
+
 /*
  * Update source and destination MAC addresses in the ethernet header.
  * Perform RFC1812 checks and updates for IPV4 packets.
@@ -117,7 +120,8 @@ process_packet(struct rte_mbuf *pkt, uint16_t *dst_port)
  */
 static __rte_always_inline void
 send_packets_multi(struct lcore_conf *qconf, struct rte_mbuf **pkts_burst,
-   uint16_t dst_port[MAX_PKT_BURST], int nb_rx)
+   uint16_t dst_port[SENDM_PORT_OVERHEAD(MAX_PKT_BURST)],
+   int nb_rx)
 {
int32_t k;
int j = 0;
diff --git a/examples/l3fwd/l3fwd_common.h b/examples/l3fwd/l3fwd_common.h
index 224b1c08e8..d94e5f1357 100644
--- a/examples/l3fwd/l3fwd_common.h
+++ b/examples/l3fwd/l3fwd_common.h
@@ -18,6 +18,13 @@
 /* Minimum value of IPV4 total length (20B) in network byte order. */
 #defineIPV4_MIN_LEN_BE (sizeof(struct rte_ipv4_hdr) << 8)
 
+/*
+ * send_packets_multi() specific number of dest ports:
+ * due to the implementation we need to allocate an array bigger than
+ * the actual max number of elements in the array.
+ */
+#define SENDM_PORT_OVERHEAD(x) (x)
+
 /*
  * From http://www.rfc-editor.org/rfc/rfc1812.txt section 5.2.2:
  * - The IP version number must be 4.
diff --git a/examples/l3fwd/l3fwd_em_hlm.h b/examples/l3fwd/l3fwd_em_hlm.h
index 31cda9ddc1..c1d819997a 100644
--- a/examples/l3fwd/l3fwd_em_hlm.h
+++ b/examples/l3fwd/l3fwd_em_hlm.h
@@ -249,7 +249,7 @@ static inline void
 l3fwd_em_send_packets(int nb_rx, struct rte_mbuf **pkts_burst, uint16_t portid,
  struct lcore_conf *qconf)
 {
-   uint16_t dst_port[MAX_PKT_BURST];
+   uint16_t dst_port[SENDM_PORT_OVERHEAD(MAX_PKT_BURST)];
 
l3fwd_em_process_packets(nb_rx, pkts_burst, dst_port, portid, qconf, 0);
send_packets_multi(qconf, pkts_burst, dst_port, nb_rx);
diff --git a/examples/l3fwd/l3fwd_em_sequential.h 
b/examples/l3fwd/l3fwd_em_sequential.h
index 067f23889a..3a40b2e434 100644
--- a/examples/l3fwd/l3fwd_em_sequential.h
+++ b/examples/l3fwd/l3fwd_em_sequential.h
@@ -79,7 +79,7 @@ l3fwd_em

RE: [RFC v2] ethdev: an API for cache stashing hints

2024-07-26 Thread Wathsala Wathawana Vithanage
> rte_eth_X_get_capability()
> 

rte_eth_dev_stashing_hints_discover is somewhat similar.

> Instead of adding RTE_ETH_DEV_CAPA_ macro and contaminating
> 'rte_eth_dev_info' with this edge use case, what do you think follow above
> design and have dedicated get capability API?

I think it's better to have a dedicated API, given that we already have a fine
grained capabilities discovery function. I will add this feedback to V3 of the
RFC.

> 
> And I can see set() has two different APIs, 'rte_eth_dev_stashing_hints_rx' &
> 'rte_eth_dev_stashing_hints_tx', is there a reason to have two separate APIs
> instead of having one which gets RX & TX as argument, as done in internal
> device ops?

Some types/hints may only apply to a single queue direction, so I thought it
would be better to split them into separate Rx and Tx APIs for ease
of comprehension/use for the developer.
In fact, underneath, it uses one API for both Rx and Tx.



RE: [PATCH] eal: add support for TRNG with Arm RNG feature

2024-07-26 Thread Wathsala Wathawana Vithanage
> 
> If you decide to rte_csrand() it should fallback to get_random or get_entropy.
> 
> Note: many people don't fully trust RDRAND or ARM CPU instructions.
> That is why the Linux entropy calls do not use only the HW instructions.

Thanks Stephen. I understand the concern. Would it be acceptable to you if
rte_csrand() by default used the Linux entropy calls and still had a runtime 
parameter
to enable the HW-based csrng on supported CPUs for those who need it?


Re: [RFC v1 1/3] dts: add UDP tunnel command to testpmd shell

2024-07-26 Thread Jeremy Spewock
Hey Dean, these changes look good to me, I just had a few minor
comments/suggestions.

One thing I did notice was that the methods added here don't have
type hints for their return types. Obviously it makes no functional
difference since they don't return anything, but just adding the annotation
that says they return None is helpful for type checkers and for
understanding the method at a glance.
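
As a trivial sketch of the suggestion (hypothetical wrapper class, signature
taken from this patch):

    class TestPmdShellSketch:
        def set_verbose(self, level: int, verify: bool = True) -> None:
            """Annotating the return type as None helps type checkers."""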

On Thu, Jul 25, 2024 at 12:23 PM Dean Marx  wrote:
>
> add udp_tunnel_port command to testpmd shell class,
> also ports over set verbose method from vlan suite
>
> Signed-off-by: Dean Marx 
> ---
>  dts/framework/remote_session/testpmd_shell.py | 51 ++-
>  1 file changed, 50 insertions(+), 1 deletion(-)
>
> diff --git a/dts/framework/remote_session/testpmd_shell.py 
> b/dts/framework/remote_session/testpmd_shell.py
> index eda6eb320f..26114091d6 100644
> --- a/dts/framework/remote_session/testpmd_shell.py
> +++ b/dts/framework/remote_session/testpmd_shell.py
> @@ -804,7 +804,56 @@ def show_port_stats(self, port_id: int) -> 
> TestPmdPortStats:
>
>  return TestPmdPortStats.parse(output)
>
> -def _close(self) -> None:

It looks like this method might have been renamed by mistake in a
rebase; the name on main right now is _close. This could cause some
weird behavior in your testing since this is what the context manager
uses to close the session, but I don't think it would have any drastic
effect since the channel is still closed.

> +def set_verbose(self, level: int, verify: bool = True):
> +"""Set debug verbosity level.
> +
> +Args:
> +level: 0 - silent except for error
> +1 - fully verbose except for Tx packets
> +2 - fully verbose except for Rx packets
> +>2 - fully verbose
> +verify: If :data:`True` the command output will be scanned to 
> verify that verbose level
> +is properly set. Defaults to :data:`True`.
> +
> +Raises:
> +InteractiveCommandExecutionError: If `verify` is :data:`True` 
> and verbose level
> +is not correctly set.
> +"""
> +verbose_output = self.send_command(f"set verbose {level}")
> +if verify:
> +if "Change verbose level" not in verbose_output:
> +self._logger.debug(f"Failed to set verbose level to {level}: 
> \n{verbose_output}")
> +raise InteractiveCommandExecutionError(
> +f"Testpmd failed to set verbose level to {level}."
> +)
> +
> +def udp_tunnel_port(
> +self, port_id: int, add: bool, udp_port: int, protocol: str, verify: 
> bool = True
> +):
> +"""Configures a UDP tunnel on the specified port, for the specified 
> protocol.
> +
> +Args:
> +port_id: ID of the port to configure tunnel on.
> +add: If :data:`True`, adds tunnel, otherwise removes tunnel.
> +udp_port: ID of the UDP port to configure tunnel on.
> +protocol: Name of tunnelling protocol to use; options are vxlan, 
> geneve, ecpri

If there are explicit choices that this has to be, it might
be better to put these options into an enum and then pass that in as
the parameter here. That way it is very clear from just calling the
method what your options are.
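
For instance, a hedged sketch of what that could look like (the enum name and
its use in udp_tunnel_port are hypothetical, not part of this patch):

    from enum import Enum

    class TunnelProtocol(Enum):
        """Tunnel protocols listed in this patch's udp_tunnel_port docstring."""

        vxlan = "vxlan"
        geneve = "geneve"
        ecpri = "ecpri"

    # udp_tunnel_port() could then accept protocol: TunnelProtocol and interpolate
    # protocol.value when building the testpmd command string.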



> +verify: If :data:`True`, checks the output of the command to 
> verify that
> +no errors were thrown.
> +
> +Raises:
> +InteractiveCommandExecutionError: If verify is :data:`True` and 
> command
> +output shows an error.
> +"""
> +action = "add" if add else "rm"
> +cmd_output = self.send_command(
> +f"port config {port_id} udp_tunnel_port {action} {protocol} 
> {udp_port}"
> +)
> +if verify:
> +if "Operation not supported" in cmd_output or "Bad arguments" in 
> cmd_output:
> +self._logger.debug(f"Failed to set UDP tunnel: 
> \n{cmd_output}")
> +raise InteractiveCommandExecutionError(f"Failed to set UDP 
> tunnel: \n{cmd_output}")
> +
> +def close(self) -> None:
>  """Overrides :meth:`~.interactive_shell.close`."""
>  self.stop()
>  self.send_command("quit", "Bye...")
> --
> 2.44.0
>


Re: [RFC v1 2/3] dts: VXLAN gpe support test suite

2024-07-26 Thread Jeremy Spewock
This all makes sense and looks good to me; I just had one
suggestion about verification below.

On Thu, Jul 25, 2024 at 12:23 PM Dean Marx  wrote:
>
> Test suite for verifying vxlan gpe support on NIC, as well as expected
> behavior while sending vxlan packets through tunnel
>
> Signed-off-by: Dean Marx 
> ---
>  dts/tests/TestSuite_vxlan_gpe_support.py | 77 
>  1 file changed, 77 insertions(+)
>  create mode 100644 dts/tests/TestSuite_vxlan_gpe_support.py
>
> diff --git a/dts/tests/TestSuite_vxlan_gpe_support.py 
> b/dts/tests/TestSuite_vxlan_gpe_support.py
> new file mode 100644
> index 00..981f878a4c
> --- /dev/null
> +++ b/dts/tests/TestSuite_vxlan_gpe_support.py
> @@ -0,0 +1,77 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2024 University of New Hampshire
> +
> +"""VXLAN-GPE support test suite.
> +
> +This suite verifies virtual extensible local area network packets
> +are only received in the same state when a UDP tunnel port for VXLAN tunneling
> +protocols is enabled. GPE is the Generic Protocol Extension for VXLAN,
> +which is used for configuring fields in the VXLAN header through GPE tunnels.
> +
> +If a GPE tunnel is configured for the corresponding UDP port within a sent packet,
> +that packet should be received with its VXLAN layer. If there is no GPE tunnel,
> +the packet should be received without its VXLAN layer.
> +
> +"""
> +
> +from scapy.layers.inet import IP, UDP  # type: ignore[import-untyped]
> +from scapy.layers.l2 import Ether  # type: ignore[import-untyped]
> +from scapy.layers.vxlan import VXLAN  # type: ignore[import-untyped]
> +from scapy.packet import Raw  # type: ignore[import-untyped]
> +
> +from framework.params.testpmd import SimpleForwardingModes
> +from framework.remote_session.testpmd_shell import TestPmdShell
> +from framework.test_suite import TestSuite
> +
> +
> +class TestVxlanGpeSupport(TestSuite):
> +    """DPDK VXLAN-GPE test suite.
> +
> +    This suite consists of one test case (Port 4790 is designated for VXLAN-GPE streams):
> +    1. VXLAN-GPE ipv4 packet detect - configures a GPE tunnel on port 4790
> +       and sends packets with a matching UDP destination port. This packet
> +       should be received by the traffic generator with its VXLAN layer.
> +       Then, remove the GPE tunnel, send the same packet, and verify that
> +       the packet is received without its VXLAN layer.
> +    """
> +
> +    def set_up_suite(self) -> None:
> +        """Set up the test suite.
> +
> +        Setup:
> +            Verify that we have at least 2 port links in the current test run.
> +        """
> +        self.verify(
> +            len(self._port_links) > 1,
> +            "There must be at least two port links to run the VXLAN-GPE test suite",
> +        )
> +
> +    def send_vxlan_packet_and_verify(self, udp_dport: int, should_receive_vxlan: bool) -> None:
> +        """Generate a VXLAN GPE packet with the given UDP destination port, send and verify.
> +
> +        Args:
> +            udp_dport: The destination UDP port to generate in the packet.
> +            should_receive_vxlan: Indicates whether the packet should be
> +                received by the traffic generator with its VXLAN layer.
> +        """
> +        packet = Ether() / IP() / UDP(dport=udp_dport) / VXLAN(flags=12) / IP() / Raw(load="x")
> +        received = self.send_packet_and_capture(packet)
> +        print(f"Received packets = {received}")
> +        has_vxlan = any(
> +            "VXLAN" in packet.summary() and "x" in str(packet.load) for packet in received

Scapy actually allows for checking if a layer exists in a packet using
the real types in a few different ways. You could use
packet.haslayer(VXLAN), or you could do basically the same as what you
have without the string comparison if you do `VXLAN in packet`. This
might end up being a little shorter and it saves you from having to
deal with the string provided from the packet summary.
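
A minimal sketch of that check, assuming `received` is the list of Scapy
packets returned by send_packet_and_capture (as in the patch above):

    from scapy.layers.vxlan import VXLAN  # already imported in this module
    from scapy.packet import Raw

    has_vxlan = any(
        p.haslayer(VXLAN) and p.haslayer(Raw) and b"x" in p[Raw].load
        for p in received
    )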


> +        )
> +        self.verify(
> +            not (has_vxlan ^ should_receive_vxlan), "Expected packet did not match received packet."
> +        )
> +
> +    def test_gpe_tunneling(self) -> None:
> +        """Verifies expected behavior of VXLAN packets through a GPE tunnel."""
> +        GPE_port = 4790
> +        with TestPmdShell(node=self.sut_node) as testpmd:
> +            testpmd.set_forward_mode(SimpleForwardingModes.io)
> +            testpmd.set_verbose(level=1)
> +            testpmd.start()
> +            testpmd.udp_tunnel_port(port_id=0, add=True, udp_port=GPE_port, protocol="vxlan")
> +            self.send_vxlan_packet_and_verify(udp_dport=GPE_port, should_receive_vxlan=True)
> +            testpmd.udp_tunnel_port(port_id=0, add=False, udp_port=GPE_port, protocol="vxlan")
> +            self.send_vxlan_packet_and_verify(udp_dport=GPE_port, should_receive_vxlan=False)
> --
> 2.44.0
>


Re: [PATCH] eal: add support for TRNG with Arm RNG feature

2024-07-26 Thread Mattias Rönnblom

On 2024-07-26 20:34, Shunzhi Wen wrote:

I'm missing a rationale here. Why is this useful?


This creates an API for HW that supports cryptographically secure random number 
generation.


If you want to extend  with a cryptographically secure
random number generator, that's fine.

To have an API that's only available on certain ARM CPUs is not.

NAK


The primary goal of this patch is to provide a direct interface to the HW,
instead of letting the kernel handle it. This is not an API just for Arm
CPUs, as other vendors also have similar HW features. For instance,
Intel and AMD have support for the x86 RDRAND and RDSEED instructions,
and thus can easily implement this API.



No DPDK library (or PMD) currently needs this functionality, and no 
application, to my knowledge, has asked for it. If an app or a DPDK 
library did require cryptographically secure random numbers, it would 
most likely require them on all CPU/OS platforms (and with all DPDK -march 
flags).


RDRAND is only available on certain x86_64 CPUs, and is incredibly slow 
- slower than getting entropy via the kernel, even with non-vDSO syscalls.


Agner Fog lists the RDRAND latency as ~3700 cc for Zen 2. Later 
generations of both AMD and Intel CPUs have much shorter latencies, but 
a reciprocal throughput so poor that one has to wait thousands of clock 
cycles before issuing another RDRAND, or risk stalling the core.


My Raptor Lake seems to require ~1000 cc to retire RDRAND, which is ~11x 
slower than getting entropy (in bulk) via getentropy().


What is the latency for the ARM equivalent? Does it also have a 
reciprocal throughput issue?



A new function should be called something with "secure", rather than "true"
(which is a bit silly, since we might well live in a completely deterministic
universe). "secure" would more clearly communicate the intent, and also
doesn't imply any particular implementation.


Regarding the terminology, “cryptographically secure random number”
is a more accurate and meaningful term than “true random number.”
This change will be made in the description, and the function name will
be replaced with rte_csrand.
