I tested the 18 scenarios below on RHEL9 and did not find any new DPDK issues.
Guest with device assignment (PF) throughput testing (1G hugepage size): PASS
Guest with device assignment (PF) throughput testing (2M hugepage size): PASS
Guest with device assignment (VF) throughput testing: PASS
PVP (host dpd
Hi Yanghang,
Thanks for the test and confirmation!
Best Regards,
Xueming Li
From: YangHang Liu
Sent: 12/27/2023 16:17
To: Xueming(Steven) Li
Cc: sta...@dpdk.org; dev@dpdk.org; Abhishek Marathe; Ali Alnubani;
benjamin.wal...@intel.com; David Christensen; Hemant Agrawal;
Ian Stokes; Jer
The type of NAT64 action will be parsed.
Usage example with template API:
...
flow actions_template 0 create ingress actions_template_id 1 \
template count / nat64 / jump / end mask count / nat64 / \
jump / end
flow template_table 0 create group 1 priority 0 ingress table_id \
0x
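For reference, a minimal C-level sketch of attaching the same kind of action list to a rule, assuming the names introduced by the ethdev patch (RTE_FLOW_ACTION_TYPE_NAT64, struct rte_flow_action_nat64, RTE_FLOW_NAT64_6TO4); the jump group is illustrative only:

#include <rte_flow.h>

/* NAT64 translating IPv6 headers to IPv4, then jump to the next table. */
static struct rte_flow_action_nat64 nat64_conf = { .type = RTE_FLOW_NAT64_6TO4 };
static struct rte_flow_action_jump jump_conf = { .group = 2 }; /* illustrative */
static const struct rte_flow_action nat64_actions[] = {
    { .type = RTE_FLOW_ACTION_TYPE_NAT64, .conf = &nat64_conf },
    { .type = RTE_FLOW_ACTION_TYPE_JUMP,  .conf = &jump_conf },
    { .type = RTE_FLOW_ACTION_TYPE_END },
};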
This patchset introduces NAT64 action support for rte_flow.
Bing Zhao (7):
ethdev: introduce NAT64 action
app/testpmd: add support for NAT64 in the command line
net/mlx5: fetch the available registers for NAT64
common/mlx5: add new modify field definitions
net/mlx5: create NAT64 act
In order to support communication between IPv4 and IPv6 nodes in
the network, different technologies are used, such as dual stacks,
tunneling, and NAT64. On some IPv4-only clients, it is hard to deploy
new software and/or hardware to support the IPv6 protocol.
NAT64 is a choice and it will also reduc
This commit adds TCP data offset, IPv4 total length, IPv4 IHL, and
IPv6 payload length to the modify field operation.
It also redefines the out protocol (next header) for both IPv4 and IPv6.
Signed-off-by: Bing Zhao
---
drivers/common/mlx5/mlx5_prm.h | 5 +
1 file changed, 5 insertions(+)
diff --git a/
REG_C_6 is used as the 1st one, and since it is reserved internally
by default, there is no impact.
The remaining 2 registers will be fetched from the available TAGs
array from right to left. They will not be masked in the array,
because not all rules will use the NAT64 action.
Signed-
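Not the actual PMD code, but the selection logic described above amounts to roughly the following sketch (names and types are illustrative):

/* Pick the last two usable registers from the available TAG array, scanning
 * right to left, without masking them out (not every rule uses NAT64). */
static int
pick_nat64_tag_regs(const int *avail_tags, int n_tags, int picked[2])
{
    int found = 0;

    for (int i = n_tags - 1; i >= 0 && found < 2; i--) {
        if (avail_tags[i] >= 0)
            picked[found++] = avail_tags[i];
    }
    return found == 2 ? 0 : -1;
}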
The NAT64 DR actions can be shared among the tables. All these
actions can be created while configuring the flow queues and saved
for future use.
Even though the actions can be shared now, inside each flow rule the
actual hardware resources are unique.
Signed-off-by: Bing Zhao
---
doc/guid
From: Erez Shitrit
Add support for the new action mlx5dr_action_create_nat64.
The new action allows modifying IP packets from one version to the other,
IPv6 to IPv4 and vice versa.
Signed-off-by: Erez Shitrit
Signed-off-by: Bing Zhao
---
drivers/net/mlx5/hws/mlx5dr.h| 29 ++
drivers/net/mlx5/hw
The action will handle the IPv4 and IPv6 header translation. It will
add / remove the IPv6 address prefix by default.
To use a user-specific address, another rule that modifies the
addresses of the IP header is needed.
Signed-off-by: Bing Zhao
---
drivers/net/mlx5/mlx5_flow_hw.c | 22 +++
NAT64 is treated as a modify header action. The action order and
limitations should be the same as those of modify header in each
domain.
Since the last 2 TAG registers will be used implicitly in the
address backup mode, the values in these registers are no longer
valid after the NAT64 action. The a
Hello,
Cc: 22.11 stable maintainer for info
On Wed, Dec 27, 2023 at 4:14 AM Linzhe Lee wrote:
>
> Dear Team,
>
> I hope this message finds you well.
>
> We have encountered a recurring deadlock issue within the function
> rte_rwlock_write_lock in the DPDK version 22.11.3 LTS.
>
> It appears to b
The only way to enable diagnostics for the Tx path is to modify the
application source code, which makes it difficult to diagnose faults.
In this patch, the devarg option "mbuf_check" is introduced and its
parameters are configured to enable the corresponding diagnostics.
Supported cases: mbuf, size, seg
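A hedged example of what enabling it could look like on the testpmd command line; the device address and the exact value syntax are assumptions, so check the driver documentation for the accepted case list:

dpdk-testpmd -a 0000:18:01.0,mbuf_check=size -- -i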
The goal of the proposed API changes is to reduce the overhead of
performance-critical asynchronous flow API functions at the library level,
specifically the functions which can be called while processing packets
received by the application in the data path.
The plan for the changes is as follows:
1. Fast pat
> -Original Message-
> From: Deng, KaiwenX
> Sent: Thursday, December 7, 2023 10:31 AM
> To: dev@dpdk.org
> Cc: sta...@dpdk.org; Yang, Qiming; Zhou, YidingX; Deng, KaiwenX;
> Zhang, Qi Z; Ting Xu; Kevin Liu; Ajit Khaparde;
> Andrew Rybchenko; Jerin Jacob; Hemant Agrawal;
> -Original Message-
> From: Mingjin Ye
> Sent: Wednesday, December 27, 2023 6:17 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming; Ye, MingjinX; Wu, Jingjing; Xing, Beilei
>
> Subject: [PATCH v4 1/2] net/iavf: add diagnostic support in TX path
>
> The only way to enable diagnostics for
Hi Andrew, Stephen, Ferruh and Thomas,
> -Original Message-
> From: Andrew Rybchenko
> Sent: Saturday, December 16, 2023 11:04 AM
>
> On 12/15/23 19:21, Thomas Monjalon wrote:
> > 15/12/2023 14:44, Ferruh Yigit:
> >> On 12/14/2023 5:26 PM, Stephen Hemminger wrote:
> >>> On Thu, 14 Dec 20
On Wed, 27 Dec 2023 12:57:09 +0200
Dariusz Sosnowski wrote:
> +/**
> + * @internal
> + *
> + * Fast path async flow functions and related data are held in a flat array,
> + * one entry per ethdev.
> + * It is assumed that each entry is read-only and cache aligned.
> + */
> +struct rte_flow_fp_ops {
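For illustration only (the field names below are made up, not the actual rte_flow_fp_ops members), the idea is a per-port, cache-aligned dispatch table that fast-path wrappers index directly by port_id:

#include <stdint.h>
#include <rte_ethdev.h>

/* Illustrative sketch of the flat per-ethdev fast-path table. */
struct fp_ops_sketch {
    int (*async_create)(uint16_t port_id, void *user_data);
    int (*async_destroy)(uint16_t port_id, void *user_data);
} __rte_cache_aligned;

static struct fp_ops_sketch fp_ops_tbl[RTE_MAX_ETHPORTS];

static inline int
fast_path_async_create(uint16_t port_id, void *user_data)
{
    /* No device lookup or parameter validation on the hot path. */
    return fp_ops_tbl[port_id].async_create(port_id, user_data);
}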
Added Testpmd CLI support for dumping the Tx scheduling tree.
Usage:
testpmd> txsched dump
The output file is in "dot" format, which can be converted
into an image file using Graphviz.
- In "brief" mode, all scheduling nodes in the tree are displayed.
- In "detail" mode, each node's configuration
Document the CLI for diagnostic purposes.
Signed-off-by: Qi Zhang
---
doc/guides/nics/ice.rst | 36
1 file changed, 36 insertions(+)
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 820a385b06..29309abe4d 100644
--- a/doc/guides/nics/ice.rst
+++
This patch set adds an IAVF testpmd command "set tx lldp on|off" which
will register an mbuf dynfield IAVF_TX_LLDP_DYNFIELD to indicate
the need to send an LLDP packet. The Tx port needs to be closed first,
then "set tx lldp on" issued, and the port reopened to select the
correct Tx path; it only supports turning on f
This patch adds an mbuf dynfield IAVF_TX_LLDP_DYNFIELD to determine
whether or not to fill the SWTCH_UPLINK bit in the Tx context descriptor
to send an LLDP packet.
Signed-off-by: Zhichao Zeng
---
drivers/net/iavf/iavf_ethdev.c | 5 +
drivers/net/iavf/iavf_rxtx.c | 16 ++--
drive
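For an application that wants to flag an mbuf for LLDP transmission, the dynfield lookup and set would look roughly like the sketch below; the registered name string and the field width are assumptions (check the driver for the exact values):

#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

static int lldp_dynfield_offset = -1;

/* Look up the LLDP dynfield registered by the PMD and set it on an mbuf.
 * The name string and uint8_t width below are assumptions. */
static int
mark_mbuf_as_lldp(struct rte_mbuf *m)
{
    if (lldp_dynfield_offset < 0) {
        lldp_dynfield_offset =
            rte_mbuf_dynfield_lookup("IAVF_TX_LLDP_DYNFIELD", NULL);
        if (lldp_dynfield_offset < 0)
            return -1; /* driver did not register the field */
    }
    *RTE_MBUF_DYNFIELD(m, lldp_dynfield_offset, uint8_t *) = 1;
    return 0;
}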
This patch adds an AVX512 ctx Tx path that supports the context descriptor,
filling in the SWTCH_UPLINK bit based on the mbuf
dynfield IAVF_TX_LLDP_DYNFIELD to support sending LLDP packets.
Signed-off-by: Zhichao Zeng
---
drivers/net/iavf/iavf_rxtx.c| 5 +
drivers/net/iavf/iavf_rxtx.h
This patch adds an IAVF testpmd command "set tx lldp on|off" which
will register an mbuf dynfield IAVF_TX_LLDP_DYNFIELD to indicate
the need to send an LLDP packet.
The Tx port needs to be closed first, then "set tx lldp on" issued, and
the port reopened to select the correct Tx path; it only supports turning on for
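Based on the description above, the stop / set / reopen order in testpmd would look like this (port number illustrative):

testpmd> port stop 0
testpmd> set tx lldp on
testpmd> port start 0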
Hi,
Testing on 22.11.4-rc3 confirms that this issue has been resolved.
Thank you very much.
David Marchand wrote on Wed, Dec 27, 2023 at 18:14:
>
> Hello,
>
> Cc: 22.11 stable maintainer for info
>
> On Wed, Dec 27, 2023 at 4:14 AM Linzhe Lee wrote:
> >
> > Dear Team,
> >
> > I hope this message finds yo