[Bug 1239] VMXNET 3 returned the wrong error code in initializing

2023-05-28 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1239

Bug ID: 1239
   Summary: VMXNET 3 returned the wrong error code in initializing
   Product: DPDK
   Version: unspecified
  Hardware: All
OS: All
Status: UNCONFIRMED
  Severity: minor
  Priority: Low
 Component: ethdev
  Assignee: dev@dpdk.org
  Reporter: corez...@gmail.com
  Target Milestone: ---

rte_eth_dev_start() starts a device by calling the function pointer
(dev_start) that was registered by the specific driver, and it uses the
integer returned from dev_start to decide whether to continue or to
return an error.

In the VMXNET3 driver, vmxnet3_dev_start() is the function registered as
dev_start, and it calls vmxnet3_dev_rxtx_init().

In vmxnet3_dev_rxtx_init(), a wrong error code may be returned after it
invokes vmxnet3_post_rx_bufs(), because it negates the error code before
returning it*. This causes rte_eth_dev_start() to hand a positive number
to the caller, when it should be negative, as described in the comments.

*: At vmxnet3_rxtx.c:1318
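
For illustration, a minimal C sketch of the sign flip (hypothetical
helper, not the actual vmxnet3 code):

#include <errno.h>

/* Stand-in for vmxnet3_post_rx_bufs(): on failure it reports a negative
 * errno, e.g. -ENOMEM. */
static int post_rx_bufs(void) { return -ENOMEM; }

/* Pattern at vmxnet3_rxtx.c:1318: negating an already-negative error code
 * turns it positive, which rte_eth_dev_start() then propagates, violating
 * its documented "negative errno on failure" contract. */
static int rxtx_init_sketch(void)
{
	int err = post_rx_bufs();

	if (err <= 0)
		return -err; /* BUG: -(-ENOMEM) == ENOMEM, a positive value */
	return 0;
}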

-- 
You are receiving this mail because:
You are the assignee for the bug.

RE: [EXT] Re: [PATCH v4] lib: set/get max memzone segments

2023-05-28 Thread Alok Prasad


> -Original Message-
> From: David Marchand 
> Sent: 26 May 2023 15:25
> To: Devendra Singh Rawat ; Alok Prasad 
> 
> Cc: dev@dpdk.org; Bruce Richardson ; Ophir Munk 
> ; Matan Azrad
> ; Thomas Monjalon ; Lior Margalit 
> 
> Subject: [EXT] Re: [PATCH v4] lib: set/get max memzone segments
> 
> External Email
> 
> --
> On Thu, May 25, 2023 at 12:26 AM Ophir Munk  wrote:
> >
> > Currently, the max memzones count constant (RTE_MAX_MEMZONE) is used to
> > decide how many memzones a DPDK application can have. This value could
> > technically be changed by manually editing `rte_config.h` before
> > compilation, but if DPDK is already compiled, that option is not useful.
> > There are certain use cases that would benefit from making this value
> > configurable.
> >
> > This commit addresses the issue by adding a new API to set the max
> > number of memzones before EAL initialization (while using the old
> > constant as default value), as well as an API to get current maximum
> > number of memzones.
> >
> > Signed-off-by: Ophir Munk 
> > Acked-by: Morten Brørup 
> 
> qede maintainers, can you double check the change on your driver please?
> Thanks.
> 
> 
> --
> David Marchand

Acked-by: Alok Prasad 


RE: [PATCH] net/mlx5: fix drop action attribute validation

2023-05-28 Thread Raslan Darawsheh
Hi,

> -Original Message-
> From: Dariusz Sosnowski 
> Sent: Wednesday, May 17, 2023 11:36 PM
> To: Ori Kam ; Suanming Mou ;
> Matan Azrad ; Slava Ovsiienko
> ; Jiawei(Jonny) Wang 
> Cc: dev@dpdk.org; sta...@dpdk.org
> Subject: [PATCH] net/mlx5: fix drop action attribute validation
> 
> Before this patch, DROP flow action was rejected for all egress
> flow rules, which was not correct for all cases.
> 
> When Verbs flow engine is used (dv_flow_en=0) DROP flow action
> is implemented using IBV_FLOW_SPEC_ACTION_DROP IBV action.
> This action is supported on ingress only.
> This patch amends the DROP flow action validation to allow it only on
> ingress.
> 
> When DV flow engine is used (dv_flow_en=1) there are 2 implementation
> options for DROP flow action:
> 
> - DR drop action (allocated through mlx5dv_dr_action_create_drop() API),
> - dedicated drop queue.
> 
> When flow rules are created on non-root flow tables DR drop action can
> be used on all steering domains. On root flow table however, this action
> can be used if and only if it is supported by rdma-core and kernel
> drivers. mlx5 PMD dynamically checks if DR drop action is supported
> on root tables during device probing
> (it is checked in mlx5_flow_discover_dr_action_support()).
> If DR drop action is not supported on root table, then dedicated
> drop queue must be used and as a result, DROP flow action on root
> is supported only for ingress flow rules.
> This patch amends the DROP flow action validation with this logic
> for DV flow engine.
> 
> This patch also renames the dr_drop_action_en field in device's private
> data to dr_root_drop_action_en to align the name with field's meaning.
> 
> Fixes: 3c4338a42134 ("net/mlx5: optimize device spawn time with
> representors")
> Fixes: 45633c460c22 ("net/mlx5: workaround drop action with old kernel")
> Fixes: da845ae9d7c1 ("net/mlx5: fix drop action for Direct Rules/Verbs")
> Cc: suanmi...@nvidia.com
> Cc: viachesl...@nvidia.com
> Cc: jiaw...@nvidia.com
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Dariusz Sosnowski 
> Acked-by: Viacheslav Ovsiienko 

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
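
For readers following the thread, a condensed sketch of the validation
logic described in the commit message (parameter and helper names are
illustrative, not the actual mlx5 PMD code):

#include <errno.h>
#include <stdbool.h>

static int
drop_action_validate(bool dv_flow_en, bool dr_root_drop_action_en,
		     bool root_table, bool ingress)
{
	if (!dv_flow_en)
		/* Verbs engine: IBV_FLOW_SPEC_ACTION_DROP is ingress-only. */
		return ingress ? 0 : -ENOTSUP;
	if (!root_table)
		/* DV engine, non-root table: DR drop works on all domains. */
		return 0;
	if (dr_root_drop_action_en)
		/* Root table, and rdma-core/kernel support DR drop. */
		return 0;
	/* Otherwise the dedicated drop queue is used: ingress-only. */
	return ingress ? 0 : -ENOTSUP;
}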


RE: [PATCH v5 2/2] ethdev: add indirect list METER_MARK update structures

2023-05-28 Thread Ori Kam
Hi Gregory,

> -Original Message-
> From: Gregory Etelson 
> Sent: Thursday, May 25, 2023 11:12 AM
> 
> In the indirect list API, update action and update flow contexts
> are mutually exclusive.
> The patch splits the legacy METER_MARK update structure to support
> indirect list API:
> 
> `struct rte_flow_indirect_update_action_meter_mark` defines
> METER_MARK
> action context that is shared between all flows that reference a given
> indirect list handle.
> 
> `struct rte_flow_indirect_update_flow_meter_mark` defines METER_MARK
> context private to a specific flow.
> 
> Signed-off-by: Gregory Etelson 
> ---
>  lib/ethdev/rte_flow.h | 32 
>  1 file changed, 32 insertions(+)
> 
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index a0d01a97e7..ce1aa336f2 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -3891,6 +3891,38 @@ struct rte_flow_update_meter_mark {
>   uint32_t reserved:27;
>  };
> 
> +/**
> + * @see RTE_FLOW_ACTION_TYPE_METER_MARK
> + * @see RTE_FLOW_ACTION_TYPE_INDIRECT_LIST
> + *
> + * Update action mutable context.
> + */
> +struct rte_flow_indirect_update_action_meter_mark {
> + /** New meter_mark parameters to be updated. */
> + struct rte_flow_action_meter_mark meter_mark;
> + /** The profile will be updated. */
> + uint32_t profile_valid:1;
> + /** The policy will be updated. */
> + uint32_t policy_valid:1;
> + /** The color mode will be updated. */
> + uint32_t color_mode_valid:1;
> + /** The meter state will be updated. */
> + uint32_t state_valid:1;
> + /** Reserved bits for the future usage. */
> + uint32_t reserved:28;
> +};
> +

Why did you create a new meter_mark structure?

> +/**
> + * @see RTE_FLOW_ACTION_TYPE_METER_MARK
> + * @see RTE_FLOW_ACTION_TYPE_INDIRECT_LIST
> + *
> + * Update flow mutable context.
> + */
> +struct rte_flow_indirect_update_flow_meter_mark {
> + /** Updated init color applied to packet */
> + enum rte_color init_color;
> +};
> +
>  /* Mbuf dynamic field offset for metadata. */
>  extern int32_t rte_flow_dynf_metadata_offs;
> 
> --
> 2.34.1

Best,
Ori
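
To make the split concrete, a hedged sketch of how the two structures are
meant to be used; the update entry point and its exact signature are
assumptions based on the indirect list series this patch depends on:

#include <rte_flow.h>

static void
update_examples(uint16_t port_id, struct rte_flow_action_list_handle *handle,
		struct rte_flow_meter_profile *new_profile,
		struct rte_flow_error *error)
{
	/* Action mutable context: shared by every flow that references the
	 * handle; changed by an explicit handle update. */
	struct rte_flow_indirect_update_action_meter_mark action_upd = {
		.meter_mark = { .profile = new_profile },
		.profile_valid = 1,
	};
	const void *updates[] = { &action_upd }; /* one slot per list action */

	rte_flow_action_list_handle_update(port_id, handle, updates, error);

	/* Flow mutable context: private to one flow; passed through the
	 * per-flow configuration array of an INDIRECT_LIST flow action. */
	struct rte_flow_indirect_update_flow_meter_mark flow_upd = {
		.init_color = RTE_COLOR_YELLOW,
	};
	(void)flow_upd; /* attached via the flow rule's conf array */
}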


RE: [PATCH v2] app/testpmd: set srv6 header without any TLV

2023-05-28 Thread Ori Kam
Hi Rongwei,

> -Original Message-
> From: Rongwei Liu 
> Sent: Friday, May 26, 2023 6:22 AM
> 
> HI @Ori Kam @NBU-Contact-Thomas Monjalon (EXTERNAL) @Andrew
> Rybchenko
>   Can you share some comments on this?
>   Thanks.
> 
> BR
> Rongwei
> 
> > -Original Message-
> > From: Rongwei Liu 
> > Sent: Tuesday, March 28, 2023 20:28
> > To: dev@dpdk.org; Matan Azrad ; Slava Ovsiienko
> > ; Ori Kam ; NBU-Contact-
> > Thomas Monjalon (EXTERNAL) 
> > Cc: Aman Singh ; Yuying Zhang
> > ; Olivier Matz 
> > Subject: [PATCH v2] app/testpmd: set srv6 header without any TLV
> >
> > External email: Use caution opening links or attachments
> >
> >
> > When the type field of the IPv6 routing extension is 4, it indicates a
> > segment routing header (SRH).
> >
> > In this case, set last_entry to segments_left minus 1 if the user
> > doesn't specify the header length explicitly.
> >
> > Signed-off-by: Rongwei Liu 
> >
> > v2: add macro definition for segment routing header.
> > ---
> >  app/test-pmd/cmdline_flow.c | 3 +++
> >  lib/net/rte_ip.h| 3 +++
> >  2 files changed, 6 insertions(+)
> >
> > diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> > index 5fbc450849..09f417b76e 100644
> > --- a/app/test-pmd/cmdline_flow.c
> > +++ b/app/test-pmd/cmdline_flow.c
> > @@ -12817,6 +12817,9 @@ cmd_set_raw_parsed(const struct buffer *in)
> > size = sizeof(struct rte_ipv6_routing_ext) +
> > (ext->hdr.segments_left << 4);
> > ext->hdr.hdr_len = ext->hdr.segments_left 
> > << 1;
> > +   /* Srv6 without TLV. */
> > +   if (ext->hdr.type == RTE_IPV6_SRCRT_TYPE_4)
> > +   ext->hdr.last_entry =
> > + ext->hdr.segments_left - 1;
> > } else {
> > size = sizeof(struct rte_ipv6_routing_ext) +
> > (ext->hdr.hdr_len << 3);
> > diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
> > index 337fad15d7..cfdbfb86ba 100644
> > --- a/lib/net/rte_ip.h
> > +++ b/lib/net/rte_ip.h
> > @@ -540,6 +540,9 @@ struct rte_ipv6_hdr {
> > uint8_t  dst_addr[16];  /**< IP address of destination host(s). */
> > } __rte_packed;
> >
> > +/* IPv6 routing extension type definition. */
> > +#define RTE_IPV6_SRCRT_TYPE_4 4
> > +
> >  /**
> >   * IPv6 Routing Extension Header
> >   */
> > --
> > 2.27.0

Acked-by: Ori Kam 

Best,
Ori
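
For context, the sizing rules the patch relies on, sketched in C (values
illustrative; per RFC 8754, hdr_len is in 8-byte units beyond the first
8 bytes and last_entry is a zero-based index):

#include <stddef.h>
#include <rte_ip.h>

static size_t srh_size_no_tlv(void)
{
	struct rte_ipv6_routing_ext ext = { 0 };

	ext.type = RTE_IPV6_SRCRT_TYPE_4;       /* segment routing header */
	ext.segments_left = 3;
	ext.hdr_len = ext.segments_left << 1;   /* 8-byte units past first 8B */
	ext.last_entry = ext.segments_left - 1; /* no TLVs: last segment index */
	/* 8-byte base header + 16 bytes per segment = 56 bytes here */
	return sizeof(ext) + (ext.segments_left << 4);
}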


[PATCH v6] ethdev: add indirect list flow action

2023-05-28 Thread Gregory Etelson
The indirect API creates a shared flow action with a unique action handle.
Flow rules can access the shared flow action and the resources related to
that action through the indirect action handle.
In addition, the API allows updating an existing shared flow action
configuration. After the update completes, the new action configuration
is available to all flows that reference that shared action.

Indirect action lists expand the indirect action API:
• An indirect action list creates a handle for one or several
  flow actions, while a legacy indirect action handle references
  a single action only.
  Input flow actions are arranged in an END-terminated list.
• A flow rule can provide rule-specific configuration parameters to
  an existing shared handle.
  Updates of the flow-rule-specific configuration will not change the
  base action configuration, which was set during action creation.

Indirect action list handle defines 2 types of resources:
• Mutable handle resource can be changed during handle lifespan.
• Immutable handle resource value is set during handle creation
  and cannot be changed.

There are 2 types of mutable indirect handle contexts:
• Action mutable context is always shared between all flows
  that reference the indirect actions list handle.
  Action mutable context can be changed by explicit invocation
  of indirect handle update function.
• Flow mutable context is private to a flow.
  Flow mutable context can be updated by indirect list handle
  flow rule configuration.

flow 1:
 / indirect handle H conf C1 /
   |   |
   |   |
   |   | flow 2:
   |   | / indirect handle H conf C2 /
   |   |   |  |
   |   |   |  |
   |   |   |  |
=
^  |   |   |  |
|  |   V   |  V
|~~  ~~~
| flow mutableflow mutable
| context 1   context 2
|~~  ~~~
  indirect  |  |   |
  action|  |   |
  context   |  V   V
|   -
| action mutable context
|   -
vaction immutable context
=

Indirect action member types - immutable, action mutable, flow mutable -
are mutually exclusive and depend on the action definition.
For example:
• The indirect METER_MARK policy is an immutable action member, and the
  profile is an action mutable member.
• The indirect METER_MARK flow action defines init_color as a flow
  mutable member.
• The indirect QUOTA flow action does not define flow mutable members.

If an indirect list handle was created from a list of actions
A1 / A2 ... An / END,
the indirect list flow action can update the Ai flow mutable context
through the action configuration parameter.
The indirect list action configuration is an array [C1, C2, .., Cn]
where Ci corresponds to Ai in the action handle source.
Configuration element Ci points to Ai's flow mutable update, or is NULL
if Ai has no flow mutable update.
The indirect list action configuration can be NULL if the action
has no flow mutable updates.
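
As an illustration of this convention, a sketch using the types introduced
by this patch plus the METER_MARK flow mutable structure from the companion
patch; mtr_conf, quota_conf and indir_conf are assumed application-defined:

#include <rte_flow.h>

static void
indirect_list_example(uint16_t port_id, struct rte_flow_error *error)
{
	/* A1 = METER_MARK, A2 = QUOTA, END-terminated source list. */
	const struct rte_flow_action list_actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_METER_MARK, .conf = &mtr_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_QUOTA, .conf = &quota_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_action_list_handle *handle =
		rte_flow_action_list_handle_create(port_id, &indir_conf,
						   list_actions, error);

	/* Per-flow configuration [C1, C2]: C1 updates A1's flow mutable
	 * context; C2 is NULL because QUOTA has no flow mutable members. */
	const struct rte_flow_indirect_update_flow_meter_mark c1 = {
		.init_color = RTE_COLOR_GREEN,
	};
	const void *flow_conf[] = { &c1, NULL };
	const struct rte_flow_action_indirect_list indirect_list = {
		.handle = handle,
		.conf = flow_conf,
	};
	/* indirect_list is then used as the conf of an
	 * RTE_FLOW_ACTION_TYPE_INDIRECT_LIST action in rte_flow_create(). */
	(void)indirect_list;
}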

Template API:

Action template format:

template .. indirect_list handle Htmpl conf Ctmpl ..
mask .. indirect_list handle Hmask conf Cmask ..

1 If Htmpl was masked (Hmask != 0), it will be fixed in that template.
  Otherwise, indirect action value is set in a flow rule.

2 If Htmpl and Ctmpl[i] were masked (Hmask != 0 and Cmask[i] != 0),
  Htmpl's Ai action flow mutable context will be updated to the
  Ctmpl[i] values and will be fixed in that template.

Flow rule format:

actions .. indirect_list handle Hflow conf Cflow ..

3 If Htmpl was not masked in actions template, Hflow references an
  action of the same type as Htmpl.

4 Cflow[i] updates the handle's Ai flow mutable configuration if
  Ci was not masked in the action template.

Signed-off-by: Gregory Etelson 
---
 app/test-pmd/cmdline_flow.c| 207 ++-
 app/test-pmd/config.c  | 163 +++
 app/test-pmd/testpmd.h |   9 +-
 doc/guides/nics/features/default.ini   |   1 +
 doc/guides/prog_guide/rte_flow.rst | 119 +++
 doc/guides/rel_notes/release_23_07.rst |   2 +
 lib/ethdev/ethdev_trace.h  |  88 
 lib/ethdev/ethdev_trace_points.c   |  18 ++
 lib/ethdev/rte_flow.c  

[PATCH] ethdev: add indirect list METER_MARK update structures

2023-05-28 Thread Gregory Etelson
In the indirect list API, update action and update flow contexts
are mutually exclusive.
The patch splits the legacy METER_MARK update structure to support
indirect list API:

`struct rte_flow_indirect_update_action_meter_mark` defines METER_MARK
action context that is shared between all flows that reference a given
indirect list handle.

`struct rte_flow_indirect_update_flow_meter_mark` defines METER_MARK
context private to a specific flow.

Depends-on: patch-127638 ("ethdev: add indirect list flow action")

Signed-off-by: Gregory Etelson 
---
 lib/ethdev/rte_flow.h | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 71727883ad..750df8401d 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3891,6 +3891,17 @@ struct rte_flow_update_meter_mark {
uint32_t reserved:27;
 };
 
+/**
+ * @see RTE_FLOW_ACTION_TYPE_METER_MARK
+ * @see RTE_FLOW_ACTION_TYPE_INDIRECT_LIST
+ *
+ * Update flow mutable context.
+ */
+struct rte_flow_indirect_update_flow_meter_mark {
+   /** Updated init color applied to packet */
+   enum rte_color init_color;
+};
+
 /* Mbuf dynamic field offset for metadata. */
 extern int32_t rte_flow_dynf_metadata_offs;
 
-- 
2.34.1



[PATCH v2] ethdev: add indirect list METER_MARK flow update structure

2023-05-28 Thread Gregory Etelson
Indirect list API defines 2 types of action update:
• Action mutable context is always shared between all flows
  that reference the indirect actions list handle.
  Action mutable context can be changed by explicit invocation
  of indirect handle update function.
• Flow mutable context is private to a flow.
  Flow mutable context can be updated by indirect list handle
  flow rule configuration.

The patch defines `struct rte_flow_indirect_update_flow_meter_mark`
for indirect METER_MARK flow mutable updates.

Depends-on: patch-127638 ("ethdev: add indirect list flow action")

Signed-off-by: Gregory Etelson 
---
 lib/ethdev/rte_flow.h | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 71727883ad..750df8401d 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3891,6 +3891,17 @@ struct rte_flow_update_meter_mark {
uint32_t reserved:27;
 };
 
+/**
+ * @see RTE_FLOW_ACTION_TYPE_METER_MARK
+ * @see RTE_FLOW_ACTION_TYPE_INDIRECT_LIST
+ *
+ * Update flow mutable context.
+ */
+struct rte_flow_indirect_update_flow_meter_mark {
+   /** Updated init color applied to packet */
+   enum rte_color init_color;
+};
+
 /* Mbuf dynamic field offset for metadata. */
 extern int32_t rte_flow_dynf_metadata_offs;
 
-- 
2.34.1



[PATCH v3] ethdev: add indirect list METER_MARK flow update structure

2023-05-28 Thread Gregory Etelson
Indirect list API defines 2 types of action update:
• Action mutable context is always shared between all flows
  that referenced indirect actions list handle.
  Action mutable context can be changed by explicit invocation
  of indirect handle update function.
• Flow mutable context is private to a flow.
  Flow mutable context can be updated by indirect list handle
  flow rule configuration.

The patch defines `struct rte_flow_indirect_update_flow_meter_mark`
for indirect METER_MARK flow mutable updates.

Signed-off-by: Gregory Etelson 
---
Depends-on: patch-127638 ("ethdev: add indirect list flow action")
---
 lib/ethdev/rte_flow.h | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 71727883ad..750df8401d 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3891,6 +3891,17 @@ struct rte_flow_update_meter_mark {
uint32_t reserved:27;
 };
 
+/**
+ * @see RTE_FLOW_ACTION_TYPE_METER_MARK
+ * @see RTE_FLOW_ACTION_TYPE_INDIRECT_LIST
+ *
+ * Update flow mutable context.
+ */
+struct rte_flow_indirect_update_flow_meter_mark {
+   /** Updated init color applied to packet */
+   enum rte_color init_color;
+};
+
 /* Mbuf dynamic field offset for metadata. */
 extern int32_t rte_flow_dynf_metadata_offs;
 
-- 
2.34.1



[PATCH v2 0/4] Replace obsolete test cases.

2023-05-28 Thread Arek Kusztal
This patchset removes obsolete test cases for RSA, MOD EXP, MOD INV.
In doing so, a new way of handling ut_setup and ut_teardown is proposed:
both now behave like a constructor/destructor for the unit tests.
This frees the individual algorithm functions from any responsibility
for freeing resources.
The functionality of the tests was extended, yet the number of lines of
code was reduced by ~600.

v2:
- fixed build problem with non compile-time constant

Arkadiusz Kusztal (4):
  app/test: remove testsuite calls from ut setup
  app/test: refactor mod exp test case
  app/test: refactor mod inv tests
  app/test: add rsa kat and pwct tests

 app/test/test_cryptodev_asym.c | 1610 +++-
 app/test/test_cryptodev_asym_util.h|   28 -
 app/test/test_cryptodev_mod_test_vectors.h |  631 +---
 app/test/test_cryptodev_rsa_test_vectors.h |  600 
 4 files changed, 852 insertions(+), 2017 deletions(-)

-- 
2.25.1



[PATCH v2 1/4] app/test: remove testsuite calls from ut setup

2023-05-28 Thread Arek Kusztal
Unit test setup should be responsible for setting unit test
parameters only; an analogous rule should apply to the ut teardown
function.
Cryptodev start/stop functions should be called only once - during
testsuite setup.
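
In practice the pattern looks roughly like this, using the self/params
helpers introduced in the diff below (bodies are illustrative, not the
exact patch code):

/* Per-test constructor: set up only what one test needs; the device was
 * already started during testsuite setup. */
static int
ut_setup_asym(void)
{
	self->sess = NULL;
	self->op = NULL;
	self->result_op = NULL;
	return TEST_SUCCESS;
}

/* Per-test destructor: unconditional cleanup, so individual test
 * functions never free resources on their error paths. */
static void
ut_teardown_asym(void)
{
	if (self->sess != NULL)
		rte_cryptodev_asym_session_free(params->valid_devs[0],
				self->sess);
	rte_crypto_op_free(self->op);
}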

Signed-off-by: Arek Kusztal 
Acked-by: Ciara Power 
---
 app/test/test_cryptodev_asym.c | 310 ++---
 1 file changed, 130 insertions(+), 180 deletions(-)

diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 9236817650..026fa48c9e 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -41,12 +41,13 @@ struct crypto_testsuite_params_asym {
struct rte_cryptodev_qp_conf qp_conf;
uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
uint8_t valid_dev_count;
-};
+} _testsuite_params, *params = &_testsuite_params;
 
-struct crypto_unittest_params {
+static struct ut_args {
void *sess;
struct rte_crypto_op *op;
-};
+   struct rte_crypto_op *result_op;
+} _args, *self = &_args;
 
 union test_case_structure {
struct modex_test_data modex;
@@ -62,14 +63,11 @@ static struct test_cases_array test_vector = {0, { NULL } };
 
 static uint32_t test_index;
 
-static struct crypto_testsuite_params_asym testsuite_params = { NULL };
-
 static int
 queue_ops_rsa_sign_verify(void *sess)
 {
-   struct crypto_testsuite_params_asym *ts_params = &testsuite_params;
-   struct rte_mempool *op_mpool = ts_params->op_mpool;
-   uint8_t dev_id = ts_params->valid_devs[0];
+   struct rte_mempool *op_mpool = params->op_mpool;
+   uint8_t dev_id = params->valid_devs[0];
struct rte_crypto_op *op, *result_op;
struct rte_crypto_asym_op *asym_op;
uint8_t output_buf[TEST_DATA_SIZE];
@@ -158,9 +156,8 @@ queue_ops_rsa_sign_verify(void *sess)
 static int
 queue_ops_rsa_enc_dec(void *sess)
 {
-   struct crypto_testsuite_params_asym *ts_params = &testsuite_params;
-   struct rte_mempool *op_mpool = ts_params->op_mpool;
-   uint8_t dev_id = ts_params->valid_devs[0];
+   struct rte_mempool *op_mpool = params->op_mpool;
+   uint8_t dev_id = params->valid_devs[0];
struct rte_crypto_op *op, *result_op;
struct rte_crypto_asym_op *asym_op;
uint8_t cipher_buf[TEST_DATA_SIZE] = {0};
@@ -299,7 +296,7 @@ test_cryptodev_asym_ver(struct rte_crypto_op *op,
 }
 
 static int
-test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
+test_cryptodev_asym_op(struct crypto_testsuite_params_asym *params,
union test_case_structure *data_tc,
char *test_msg, int sessionless, enum rte_crypto_asym_op_type type,
enum rte_crypto_rsa_priv_key_type key_type)
@@ -311,7 +308,7 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym 
*ts_params,
void *sess = NULL;
struct rte_cryptodev_asym_capability_idx cap_idx;
const struct rte_cryptodev_asymmetric_xform_capability *capability;
-   uint8_t dev_id = ts_params->valid_devs[0];
+   uint8_t dev_id = params->valid_devs[0];
uint8_t input[TEST_DATA_SIZE] = {0};
uint8_t *result = NULL;
 
@@ -330,7 +327,7 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym 
*ts_params,
}
 
/* Generate crypto op data structure */
-   op = rte_crypto_op_alloc(ts_params->op_mpool,
+   op = rte_crypto_op_alloc(params->op_mpool,
RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
 
if (!op) {
@@ -451,7 +448,7 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym 
*ts_params,
 
if (!sessionless) {
ret = rte_cryptodev_asym_session_create(dev_id, &xform_tc,
-   ts_params->session_mpool, &sess);
+   params->session_mpool, &sess);
if (ret < 0) {
snprintf(test_msg, ASYM_TEST_MSG_LEN,
"line %u "
@@ -524,7 +521,7 @@ test_one_case(const void *test_case, int sessionless)
 
if (tc.modex.xform_type == RTE_CRYPTO_ASYM_XFORM_MODEX
|| tc.modex.xform_type == RTE_CRYPTO_ASYM_XFORM_MODINV) 
{
-   status = test_cryptodev_asym_op(&testsuite_params, &tc, 
test_msg,
+   status = test_cryptodev_asym_op(params, &tc, test_msg,
sessionless, 0, 0);
printf("  %u) TestCase %s %s\n", test_index++,
tc.modex.description, test_msg);
@@ -534,7 +531,7 @@ test_one_case(const void *test_case, int sessionless)
if (tc.rsa_data.op_type_flags & (1 << i)) {
if (tc.rsa_data.key_exp) {
status = test_cryptodev_asym_op(
-   &testsuite_params, &tc,
+   params, &tc,
test_msg, sessionless, 
i,
  

[PATCH v2 2/4] app/test: refactor mod exp test case

2023-05-28 Thread Arek Kusztal
Refactored the modular exponentiation test case.
Added multiple vectors to be checked in KAT (known answer) tests.

Signed-off-by: Arek Kusztal 
Acked-by: Ciara Power 
---
 app/test/test_cryptodev_asym.c | 219 
 app/test/test_cryptodev_asym_util.h|   9 -
 app/test/test_cryptodev_mod_test_vectors.h | 567 +
 3 files changed, 141 insertions(+), 654 deletions(-)
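
With the refactor, each vector can be registered as its own named case
through the test framework's TEST_CASE_NAMED_WITH_DATA macro, with
setup/teardown wrapping every run; roughly (vector name illustrative):

/* Inside the testsuite's unit_test_case array. */
TEST_CASE_NAMED_WITH_DATA("Modex KAT, 128B modulus",
	ut_setup_asym, ut_teardown_asym,
	modular_exponentiation, &modex_test_case_m128_b20_e3),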

diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 026fa48c9e..dd670305ab 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -32,6 +32,7 @@
 #endif
 #define ASYM_TEST_MSG_LEN 256
 #define TEST_VECTOR_SIZE 256
+#define DEQ_TIMEOUT 50
 
 static int gbl_driver_id;
 struct crypto_testsuite_params_asym {
@@ -63,6 +64,38 @@ static struct test_cases_array test_vector = {0, { NULL } };
 
 static uint32_t test_index;
 
+static int send(struct rte_crypto_op **op,
+   struct rte_crypto_op **result_op)
+{
+   int ticks = 0;
+
+   if (rte_cryptodev_enqueue_burst(params->valid_devs[0], 0,
+   op, 1) != 1) {
+   RTE_LOG(ERR, USER1,
+   "line %u FAILED: Error sending packet for operation on 
device %d",
+   __LINE__, params->valid_devs[0]);
+   return TEST_FAILED;
+   }
+   while (rte_cryptodev_dequeue_burst(params->valid_devs[0], 0,
+   result_op, 1) == 0) {
+   rte_delay_ms(1);
+   ticks++;
+   if (ticks >= DEQ_TIMEOUT) {
+   RTE_LOG(ERR, USER1,
+   "line %u FAILED: Cannot dequeue the crypto op 
on device %d",
+   __LINE__, params->valid_devs[0]);
+   return TEST_FAILED;
+   }
+   }
+   TEST_ASSERT_NOT_NULL(*result_op,
+   "line %u FAILED: Failed to process asym crypto op",
+   __LINE__);
+   TEST_ASSERT_SUCCESS((*result_op)->status,
+   "line %u FAILED: Failed to process asym crypto op, 
error status received",
+   __LINE__);
+   return TEST_SUCCESS;
+}
+
 static int
 queue_ops_rsa_sign_verify(void *sess)
 {
@@ -1417,113 +1450,60 @@ test_mod_inv(void)
 }
 
 static int
-test_mod_exp(void)
+modular_exponentiation(const void *test_data)
 {
-   struct rte_mempool *op_mpool = params->op_mpool;
-   struct rte_mempool *sess_mpool = params->session_mpool;
-   uint8_t dev_id = params->valid_devs[0];
-   struct rte_crypto_asym_op *asym_op = NULL;
-   struct rte_crypto_op *op = NULL, *result_op = NULL;
-   void *sess = NULL;
-   int status = TEST_SUCCESS;
+   const struct modex_test_data *vector = test_data;
+   uint8_t input[TEST_DATA_SIZE] = { 0 };
+   uint8_t exponent[TEST_DATA_SIZE] = { 0 };
+   uint8_t modulus[TEST_DATA_SIZE] = { 0 };
+   uint8_t result[TEST_DATA_SIZE] = { 0 };
struct rte_cryptodev_asym_capability_idx cap_idx;
const struct rte_cryptodev_asymmetric_xform_capability *capability;
-   uint8_t input[TEST_DATA_SIZE] = {0};
-   int ret = 0;
-   uint8_t result[sizeof(mod_p)] = { 0 };
+   struct rte_crypto_asym_xform xform = { };
+   const uint8_t dev_id = params->valid_devs[0];
 
-   if (rte_cryptodev_asym_get_xform_enum(&modex_xform.xform_type,
-   "modexp")
-   < 0) {
-   RTE_LOG(ERR, USER1,
-   "Invalid ASYM algorithm specified\n");
-   return -1;
-   }
+   memcpy(input, vector->base.data, vector->base.len);
+   memcpy(exponent, vector->exponent.data, vector->exponent.len);
+   memcpy(modulus, vector->modulus.data, vector->modulus.len);
 
-   /* check for modlen capability */
-   cap_idx.type = modex_xform.xform_type;
-   capability = rte_cryptodev_asym_capability_get(dev_id, &cap_idx);
+   xform.xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX;
+   xform.modex.exponent.data = exponent;
+   xform.modex.exponent.length = vector->exponent.len;
+   xform.modex.modulus.data = modulus;
+   xform.modex.modulus.length = vector->modulus.len;
 
+   cap_idx.type = xform.xform_type;
+   capability = rte_cryptodev_asym_capability_get(dev_id, &cap_idx);
if (capability == NULL) {
RTE_LOG(INFO, USER1,
"Device doesn't support MOD EXP. Test Skipped\n");
return TEST_SKIPPED;
}
-
if (rte_cryptodev_asym_xform_capability_check_modlen(
-   capability, modex_xform.modex.modulus.length)) {
-   RTE_LOG(ERR, USER1,
-   "Invalid MODULUS length specified\n");
-   return TEST_SKIPPED;
-   }
-
-   /* Create op, create session, and process packets. 8< */
-   op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
-   if (!op

[PATCH v2 3/4] app/test: refactor mod inv tests

2023-05-28 Thread Arek Kusztal
Added a new modular multiplicative inverse test function.
It now follows the changes to the generic setup.

Signed-off-by: Arek Kusztal 
Acked-by: Ciara Power 
---
 app/test/test_cryptodev_asym.c | 144 ++---
 app/test/test_cryptodev_asym_util.h|   9 --
 app/test/test_cryptodev_mod_test_vectors.h |  64 +
 3 files changed, 45 insertions(+), 172 deletions(-)

diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index dd670305ab..7a0124d7c7 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -594,17 +594,6 @@ static int
 load_test_vectors(void)
 {
uint32_t i = 0, v_size = 0;
-   /* Load MODINV vector*/
-   v_size = RTE_DIM(modinv_test_case);
-   for (i = 0; i < v_size; i++) {
-   if (test_vector.size >= (TEST_VECTOR_SIZE)) {
-   RTE_LOG(DEBUG, USER1,
-   "TEST_VECTOR_SIZE too small\n");
-   return -1;
-   }
-   test_vector.address[test_vector.size] = &modinv_test_case[i];
-   test_vector.size++;
-   }
/* Load RSA vector*/
v_size = RTE_DIM(rsa_test_case_list);
for (i = 0; i < v_size; i++) {
@@ -1339,32 +1328,26 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
 }
 
 static int
-test_mod_inv(void)
+modular_multiplicative_inverse(const void *test_data)
 {
-   struct rte_mempool *op_mpool = params->op_mpool;
-   struct rte_mempool *sess_mpool = params->session_mpool;
-   uint8_t dev_id = params->valid_devs[0];
-   struct rte_crypto_asym_op *asym_op = NULL;
-   struct rte_crypto_op *op = NULL, *result_op = NULL;
-   void *sess = NULL;
-   int status = TEST_SUCCESS;
+   const struct modinv_test_data *vector = test_data;
+   uint8_t input[TEST_DATA_SIZE] = { 0 };
+   uint8_t modulus[TEST_DATA_SIZE] = { 0 };
+   uint8_t result[TEST_DATA_SIZE] = { 0 };
struct rte_cryptodev_asym_capability_idx cap_idx;
const struct rte_cryptodev_asymmetric_xform_capability *capability;
-   uint8_t input[TEST_DATA_SIZE] = {0};
-   int ret = 0;
-   uint8_t result[sizeof(mod_p)] = { 0 };
+   struct rte_crypto_asym_xform xform = { };
+   const uint8_t dev_id = params->valid_devs[0];
 
-   if (rte_cryptodev_asym_get_xform_enum(
-   &modinv_xform.xform_type, "modinv") < 0) {
-   RTE_LOG(ERR, USER1,
-"Invalid ASYM algorithm specified\n");
-   return -1;
-   }
+   memcpy(input, vector->base.data, vector->base.len);
+   memcpy(modulus, vector->modulus.data, vector->modulus.len);
 
-   cap_idx.type = modinv_xform.xform_type;
+   xform.xform_type = RTE_CRYPTO_ASYM_XFORM_MODINV;
+   xform.modex.modulus.data = modulus;
+   xform.modex.modulus.length = vector->modulus.len;
+   cap_idx.type = xform.xform_type;
capability = rte_cryptodev_asym_capability_get(dev_id,
&cap_idx);
-
if (capability == NULL) {
RTE_LOG(INFO, USER1,
"Device doesn't support MOD INV. Test Skipped\n");
@@ -1372,81 +1355,31 @@ test_mod_inv(void)
}
 
if (rte_cryptodev_asym_xform_capability_check_modlen(
-   capability,
-   modinv_xform.modinv.modulus.length)) {
+   capability,
+   xform.modinv.modulus.length)) {
RTE_LOG(ERR, USER1,
-"Invalid MODULUS length specified\n");
-   return TEST_SKIPPED;
-   }
-
-   ret = rte_cryptodev_asym_session_create(dev_id, &modinv_xform, 
sess_mpool, &sess);
-   if (ret < 0) {
-   RTE_LOG(ERR, USER1, "line %u "
-   "FAILED: %s", __LINE__,
-   "Session creation failed");
-   status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
-   goto error_exit;
-   }
-
-   /* generate crypto op data structure */
-   op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
-   if (!op) {
-   RTE_LOG(ERR, USER1,
-   "line %u FAILED: %s",
-   __LINE__, "Failed to allocate asymmetric crypto "
-   "operation struct");
-   status = TEST_FAILED;
-   goto error_exit;
-   }
-
-   asym_op = op->asym;
-   memcpy(input, base, sizeof(base));
-   asym_op->modinv.base.data = input;
-   asym_op->modinv.base.length = sizeof(base);
-   asym_op->modinv.result.data = result;
-   asym_op->modinv.result.length = sizeof(result);
-
-   /* attach asymmetric crypto session to crypto operations */
-   rte_crypto_op_attach_asym_session(op, sess);
-
-   RTE_LOG(DEBUG, USER1, "Process ASYM operation");
-
-   /* Process crypto operation */

[PATCH v2 4/4] app/test: add rsa kat and pwct tests

2023-05-28 Thread Arek Kusztal
Added RSA KAT (known answer) and PWCT (pairwise consistency) tests.
They now comply with the new setup/teardown logic.

Signed-off-by: Arek Kusztal 
---
 app/test/test_cryptodev_asym.c | 1073 ++--
 app/test/test_cryptodev_asym_util.h|   10 -
 app/test/test_cryptodev_rsa_test_vectors.h |  600 +--
 3 files changed, 604 insertions(+), 1079 deletions(-)

diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 7a0124d7c7..91a3bc6150 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(c) 2018 Cavium Networks
- * Copyright (c) 2019 Intel Corporation
+ * Copyright (c) 2019-2023 Intel Corporation
  */
 
 #include 
@@ -62,741 +62,6 @@ struct test_cases_array {
 };
 static struct test_cases_array test_vector = {0, { NULL } };
 
-static uint32_t test_index;
-
-static int send(struct rte_crypto_op **op,
-   struct rte_crypto_op **result_op)
-{
-   int ticks = 0;
-
-   if (rte_cryptodev_enqueue_burst(params->valid_devs[0], 0,
-   op, 1) != 1) {
-   RTE_LOG(ERR, USER1,
-   "line %u FAILED: Error sending packet for operation on 
device %d",
-   __LINE__, params->valid_devs[0]);
-   return TEST_FAILED;
-   }
-   while (rte_cryptodev_dequeue_burst(params->valid_devs[0], 0,
-   result_op, 1) == 0) {
-   rte_delay_ms(1);
-   ticks++;
-   if (ticks >= DEQ_TIMEOUT) {
-   RTE_LOG(ERR, USER1,
-   "line %u FAILED: Cannot dequeue the crypto op 
on device %d",
-   __LINE__, params->valid_devs[0]);
-   return TEST_FAILED;
-   }
-   }
-   TEST_ASSERT_NOT_NULL(*result_op,
-   "line %u FAILED: Failed to process asym crypto op",
-   __LINE__);
-   TEST_ASSERT_SUCCESS((*result_op)->status,
-   "line %u FAILED: Failed to process asym crypto op, 
error status received",
-   __LINE__);
-   return TEST_SUCCESS;
-}
-
-static int
-queue_ops_rsa_sign_verify(void *sess)
-{
-   struct rte_mempool *op_mpool = params->op_mpool;
-   uint8_t dev_id = params->valid_devs[0];
-   struct rte_crypto_op *op, *result_op;
-   struct rte_crypto_asym_op *asym_op;
-   uint8_t output_buf[TEST_DATA_SIZE];
-   int status = TEST_SUCCESS;
-
-   /* Set up crypto op data structure */
-   op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
-   if (!op) {
-   RTE_LOG(ERR, USER1, "Failed to allocate asymmetric crypto "
-   "operation struct\n");
-   return TEST_FAILED;
-   }
-
-   asym_op = op->asym;
-
-   /* Compute sign on the test vector */
-   asym_op->rsa.op_type = RTE_CRYPTO_ASYM_OP_SIGN;
-
-   asym_op->rsa.message.data = rsaplaintext.data;
-   asym_op->rsa.message.length = rsaplaintext.len;
-   asym_op->rsa.sign.length = 0;
-   asym_op->rsa.sign.data = output_buf;
-   asym_op->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5;
-
-   debug_hexdump(stdout, "message", asym_op->rsa.message.data,
- asym_op->rsa.message.length);
-
-   /* Attach asymmetric crypto session to crypto operations */
-   rte_crypto_op_attach_asym_session(op, sess);
-
-   RTE_LOG(DEBUG, USER1, "Process ASYM operation\n");
-
-   /* Process crypto operation */
-   if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
-   RTE_LOG(ERR, USER1, "Error sending packet for sign\n");
-   status = TEST_FAILED;
-   goto error_exit;
-   }
-
-   while (rte_cryptodev_dequeue_burst(dev_id, 0, &result_op, 1) == 0)
-   rte_pause();
-
-   if (result_op == NULL) {
-   RTE_LOG(ERR, USER1, "Failed to process sign op\n");
-   status = TEST_FAILED;
-   goto error_exit;
-   }
-
-   debug_hexdump(stdout, "signed message", asym_op->rsa.sign.data,
- asym_op->rsa.sign.length);
-   asym_op = result_op->asym;
-
-   /* Verify sign */
-   asym_op->rsa.op_type = RTE_CRYPTO_ASYM_OP_VERIFY;
-   asym_op->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5;
-
-   /* Process crypto operation */
-   if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
-   RTE_LOG(ERR, USER1, "Error sending packet for verify\n");
-   status = TEST_FAILED;
-   goto error_exit;
-   }
-
-   while (rte_cryptodev_dequeue_burst(dev_id, 0, &result_op, 1) == 0)
-   rte_pause();
-
-   if (result_op == NULL) {
-   RTE_LOG(ERR, USER1, "Failed to process verify op\n");
-   status = TEST_FAILED;
-   goto error_exit;
-   }
-
-   status = TEST_SUCCESS

[PATCH v2] crypto/qat: add SM3 HMAC to gen4 devices

2023-05-28 Thread Arek Kusztal
This commit adds SM3 HMAC to Intel QuickAssist Technology PMD
generation 4.

Signed-off-by: Arek Kusztal 
---
v2:
- Fixed problem with chaining operations
- Added implementation of prefix tables

Depends-on: patch-127513 ("cryptodev: support SM3_HMAC,SM4_CFB and SM4_OFB")

 doc/guides/cryptodevs/features/qat.ini   |  1 +
 doc/guides/cryptodevs/qat.rst|  5 ++
 drivers/common/qat/qat_adf/icp_qat_fw_la.h   | 10 
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c |  4 ++
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 12 +
 drivers/crypto/qat/qat_sym_session.c | 57 +---
 drivers/crypto/qat/qat_sym_session.h |  7 +++
 7 files changed, 90 insertions(+), 6 deletions(-)

diff --git a/doc/guides/cryptodevs/features/qat.ini 
b/doc/guides/cryptodevs/features/qat.ini
index 70511a3076..6358a43357 100644
--- a/doc/guides/cryptodevs/features/qat.ini
+++ b/doc/guides/cryptodevs/features/qat.ini
@@ -70,6 +70,7 @@ AES XCBC MAC = Y
 ZUC EIA3 = Y
 AES CMAC (128) = Y
 SM3  = Y
+SM3 HMAC = Y
 
 ;
 ; Supported AEAD algorithms of the 'qat' crypto driver.
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index ef754106a8..a5d5196ed4 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -51,6 +51,9 @@ Cipher algorithms:
 * ``RTE_CRYPTO_CIPHER_AES_DOCSISBPI``
 * ``RTE_CRYPTO_CIPHER_DES_DOCSISBPI``
 * ``RTE_CRYPTO_CIPHER_ZUC_EEA3``
+* ``RTE_CRYPTO_CIPHER_SM4_ECB``
+* ``RTE_CRYPTO_CIPHER_SM4_CBC``
+* ``RTE_CRYPTO_CIPHER_SM4_CTR``
 
 Hash algorithms:
 
@@ -76,6 +79,8 @@ Hash algorithms:
 * ``RTE_CRYPTO_AUTH_AES_GMAC``
 * ``RTE_CRYPTO_AUTH_ZUC_EIA3``
 * ``RTE_CRYPTO_AUTH_AES_CMAC``
+* ``RTE_CRYPTO_AUTH_SM3``
+* ``RTE_CRYPTO_AUTH_SM3_HMAC``
 
 Supported AEAD algorithms:
 
diff --git a/drivers/common/qat/qat_adf/icp_qat_fw_la.h 
b/drivers/common/qat/qat_adf/icp_qat_fw_la.h
index c4901eb869..cd1675d1f2 100644
--- a/drivers/common/qat/qat_adf/icp_qat_fw_la.h
+++ b/drivers/common/qat/qat_adf/icp_qat_fw_la.h
@@ -187,6 +187,16 @@ struct icp_qat_fw_la_bulk_req {
QAT_FIELD_SET(flags, val, QAT_LA_PARTIAL_BITPOS, \
QAT_LA_PARTIAL_MASK)
 
+#define QAT_FW_LA_MODE2 1
+#define QAT_FW_LA_NO_MODE2 0
+#define QAT_FW_LA_MODE2_MASK 0x1
+#define QAT_FW_LA_MODE2_BITPOS 5
+#define ICP_QAT_FW_HASH_FLAG_MODE2_SET(flags, val) \
+QAT_FIELD_SET(flags, \
+   val, \
+   QAT_FW_LA_MODE2_BITPOS, \
+   QAT_FW_LA_MODE2_MASK)
+
 struct icp_qat_fw_cipher_req_hdr_cd_pars {
union {
struct {
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c 
b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
index b219a418ba..a7f50c73df 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
@@ -103,6 +103,10 @@ static struct rte_cryptodev_capabilities 
qat_sym_crypto_caps_gen4[] = {
QAT_SYM_PLAIN_AUTH_CAP(SM3,
CAP_SET(block_size, 64),
CAP_RNG(digest_size, 32, 32, 0)),
+   QAT_SYM_AUTH_CAP(SM3_HMAC,
+   CAP_SET(block_size, 64),
+   CAP_RNG(key_size, 16, 64, 4), CAP_RNG(digest_size, 32, 32, 0),
+   CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h 
b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
index 092265631b..14b3f50d97 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -617,6 +617,12 @@ enqueue_one_auth_job_gen1(struct qat_sym_session *ctx,
rte_memcpy(cipher_param->u.cipher_IV_array, auth_iv->va,
ctx->auth_iv.length);
break;
+   case ICP_QAT_HW_AUTH_ALGO_SM3:
+   if (ctx->auth_mode == ICP_QAT_HW_AUTH_MODE0)
+   auth_param->u1.aad_adr = 0;
+   else
+   auth_param->u1.aad_adr = ctx->prefix_paddr;
+   break;
default:
break;
}
@@ -670,6 +676,12 @@ enqueue_one_chain_job_gen1(struct qat_sym_session *ctx,
case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
break;
+   case ICP_QAT_HW_AUTH_ALGO_SM3:
+   if (ctx->auth_mode == ICP_QAT_HW_AUTH_MODE0)
+   auth_param->u1.aad_adr = 0;
+   else
+   auth_param->u1.aad_adr = ctx->prefix_paddr;
+   break;
default:
break;
}
diff --git a/drivers/crypto/qat/qat_sym_session.c 
b/drivers/crypto/qat/qat_sym_session.c
index 6ad6c7ee3a..cf527c6246 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -561,6 +561,8 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
/* Set context descriptor physical address */
session->cd_paddr = session_paddr +
  

RE: [PATCH v2 1/3] cryptodev: add SM2 asymmetric crypto algorithm

2023-05-28 Thread Kusztal, ArkadiuszX
Hi Gowrishankar,

> -Original Message-
> From: Gowrishankar Muthukrishnan 
> Sent: Friday, May 26, 2023 11:12 AM
> To: dev@dpdk.org
> Cc: ano...@marvell.com; Akhil Goyal ; Fan Zhang
> ; Gowrishankar Muthukrishnan
> 
> Subject: [PATCH v2 1/3] cryptodev: add SM2 asymmetric crypto algorithm
> 
> ShangMi 2 (SM2) is an encryption and digital signature algorithm used in
> the Chinese National Standard.

It is more of a set of public-key cryptography algorithms based on elliptic 
curves.

> 
> Added support for asymmetric SM2 in cryptodev along with prime field curve, as
> referenced in RFC:
> https://datatracker.ietf.org/doc/html/draft-shen-sm2-ecdsa-02
> 
> Signed-off-by: Gowrishankar Muthukrishnan 
> ---
>  doc/guides/cryptodevs/features/default.ini |  1 +
>  doc/guides/rel_notes/release_23_07.rst |  5 ++
>  lib/cryptodev/rte_crypto_asym.h| 77 ++
>  lib/cryptodev/rte_cryptodev.c  |  1 +
>  4 files changed, 84 insertions(+)
> 
> diff --git a/doc/guides/cryptodevs/features/default.ini
> b/doc/guides/cryptodevs/features/default.ini
> index 523da0cfa8..a69967bb9e 100644
> --- a/doc/guides/cryptodevs/features/default.ini
> +++ b/doc/guides/cryptodevs/features/default.ini
> @@ -125,6 +125,7 @@ Diffie-hellman  =
>  ECDSA   =
>  ECPM=
>  ECDH=
> +SM2 =
> 
>  ;
>  ; Supported Operating systems of a default crypto driver.
> diff --git a/doc/guides/rel_notes/release_23_07.rst
> b/doc/guides/rel_notes/release_23_07.rst
> index a9b1293689..8b8e69d619 100644
> --- a/doc/guides/rel_notes/release_23_07.rst
> +++ b/doc/guides/rel_notes/release_23_07.rst
> @@ -55,6 +55,11 @@ New Features
>   Also, make sure to start the actual text at the margin.
>   ===
> 
> +* **Added SM2 asymmetric algorithm in cryptodev.**
> +
> +  Added support for ShangMi 2 (SM2) asymmetric crypto algorithm along
> + with prime field curve support.
> +
> 
>  Removed Items
>  -
> diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
> index 989f38323f..35fa2c0a6d 100644
> --- a/lib/cryptodev/rte_crypto_asym.h
> +++ b/lib/cryptodev/rte_crypto_asym.h
> @@ -119,6 +119,11 @@ enum rte_crypto_asym_xform_type {
>   /**< Elliptic Curve Point Multiplication */
>   RTE_CRYPTO_ASYM_XFORM_ECFPM,
>   /**< Elliptic Curve Fixed Point Multiplication */
> + RTE_CRYPTO_ASYM_XFORM_SM2,
> + /**< ShangMi 2
> +  * Performs Encrypt, Decrypt, Sign and Verify.
> +  * Refer to rte_crypto_asym_op_type.
> +  */
>   RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
>   /**< End of list */
>  };
> @@ -382,6 +387,20 @@ struct rte_crypto_ec_xform {
>   /**< Pre-defined ec groups */
>  };
> 
> +/**
> + * Asymmetric SM2 transform data
> + *
> + * Structure describing SM2 xform params
> + *
> + */
> +struct rte_crypto_sm2_xform {
> + rte_crypto_uint pkey;
> + /**< Private key of the signer for signature generation. */
> +
> + struct rte_crypto_ec_point q;
> + /**< Public key of the signer for verification. */ };
> +
>  /**
>   * Operations params for modular operations:
>   * exponentiation and multiplicative inverse @@ -637,9 +656,66 @@ struct
> rte_crypto_asym_xform {
>   /**< EC xform parameters, used by elliptic curve based
>* operations.
>*/
> +
> + struct rte_crypto_sm2_xform sm2;
> + /**< SM2 xform parameters */
>   };
>  };
> 
> +/**
> + * SM2 operation params
> + */
> +struct rte_crypto_sm2_op_param {

There is no random value 'k', and SM2 also uses it for encryption.
There is no key exchange or point multiplication option in this op; therefore,
I would rather have all SM2 algorithms in separate ops.
We could also finally abandon the '_param' suffix; it does not add clarity but
extends struct tags, which are already very long.

> + enum rte_crypto_asym_op_type op_type;
> + /**< Signature generation or verification */
> +
> + rte_crypto_param message;
> + /**<
> +  * Pointer to input data
> +  * - to be encrypted for SM2 public encrypt.
> +  * - to be signed for SM2 sign generation.
> +  * - to be authenticated for SM2 sign verification.

Is 'message' in the signature case here the plaintext or the hash?
If plaintext, it is inconsistent with the other signature algorithms, and most
likely not all HW devices will support that.
Additionally, either the hash function should be specified as an argument by
the user, or the function used should be defined in the API documentation.
I see the draft does not directly state which function it ought to be,
although most likely SM3 would be the one.

> +  *
> +  * Pointer to output data
> +  * - for SM2 private decrypt.
> +  * In this case the underlying array should have been
> +  * allocated with enough memory to hold plaintext output
> +  * (at least encrypt

Hugepage migration

2023-05-28 Thread Baruch Even
Hi,

We found an issue with newer kernels (5.13+) that are found on newer OSes
(Ubuntu22, Rocky9, Ubuntu20 with kernel 5.15) where a 2M page that was
allocated for DPDK was migrated (moved into another physical page) when a
1G page was allocated.

From our reading of the kernel commits this started with commit
ae37c7ff79f1f030e28ec76c46ee032f8fd07607
mm: make alloc_contig_range handle in-use hugetlb pages

This caused what looked like memory corruptions to us and cases where the
rings were moved from their physical location and communication was no
longer possible.

I wanted to ask if anyone else has hit this issue and what mitigations
are available?

We are currently looking at using a kernel driver to pin the pages but I
expect that this issue will affect others and that a more general approach
is needed.
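
Not a mitigation, but one way to at least detect the migration described
above from user space, assuming IOVA-as-PA mode (a sketch, not a fix):

#include <stdbool.h>
#include <rte_memory.h>

/* Record a buffer's IOVA once at startup, then re-check it: if the kernel
 * migrated the backing hugepage, the IOVA changes while the virtual
 * address stays valid. */
static rte_iova_t boot_iova;

static void
record_iova(const void *addr)
{
	boot_iova = rte_mem_virt2iova(addr);
}

static bool
page_was_migrated(const void *addr)
{
	return rte_mem_virt2iova(addr) != boot_iova;
}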

Thanks,
Baruch

-- 
Baruch Even
Platform Technical Lead,  WEKA
E bar...@weka.io  W www.weka.io



RE: [EXTERNAL] Re: [EXTERNAL] [PATCH] Add checks for the port capabilities

2023-05-28 Thread Ajay Sharma
From 1290db88b8748085c9f09a58b336b8c757442b87 Mon Sep 17 00:00:00 2001
 From: Ajay Sharma 
 Date: Sun, 28 May 2023 18:31:59 -0700
 Subject: [PATCH] Change USHRT_MAX to UINT16_MAX

 ---
  drivers/net/mana/mana.c | 16 
  1 file changed, 8 insertions(+), 8 deletions(-)

 diff --git a/drivers/net/mana/mana.c b/drivers/net/mana/mana.c
 index 3a7e302c86..a39d6798bf 100644
 --- a/drivers/net/mana/mana.c
 +++ b/drivers/net/mana/mana.c
 @@ -292,8 +292,8 @@ mana_dev_info_get(struct rte_eth_dev *dev,
 dev_info->min_rx_bufsize = MIN_RX_BUF_SIZE;
 dev_info->max_rx_pktlen = MAX_FRAME_SIZE;

 -   dev_info->max_rx_queues = RTE_MIN(priv->max_rx_queues, USHRT_MAX);
 -   dev_info->max_tx_queues = RTE_MIN(priv->max_tx_queues, USHRT_MAX);
 +   dev_info->max_rx_queues = RTE_MIN(priv->max_rx_queues, UINT16_MAX);
 +   dev_info->max_tx_queues = RTE_MIN(priv->max_tx_queues, UINT16_MAX);


 dev_info->max_mac_addrs = MANA_MAX_MAC_ADDR;
 @@ -335,17 +335,17 @@ mana_dev_info_get(struct rte_eth_dev *dev,

 /* Buffer limits */
 dev_info->rx_desc_lim.nb_min = MIN_BUFFERS_PER_QUEUE;
 -   dev_info->rx_desc_lim.nb_max = RTE_MIN(priv->max_rx_desc, USHRT_MAX);
 +   dev_info->rx_desc_lim.nb_max = RTE_MIN(priv->max_rx_desc, UINT16_MAX);
 dev_info->rx_desc_lim.nb_align = MIN_BUFFERS_PER_QUEUE;
 -   dev_info->rx_desc_lim.nb_seg_max = RTE_MIN(priv->max_recv_sge, 
USHRT_MAX);
 -   dev_info->rx_desc_lim.nb_mtu_seg_max = RTE_MIN(priv->max_recv_sge, 
USHRT_MAX);
 +   dev_info->rx_desc_lim.nb_seg_max = RTE_MIN(priv->max_recv_sge, 
UINT16_MAX);
 +   dev_info->rx_desc_lim.nb_mtu_seg_max = RTE_MIN(priv->max_recv_sge, 
UINT16_MAX);


 dev_info->tx_desc_lim.nb_min = MIN_BUFFERS_PER_QUEUE;
 -   dev_info->tx_desc_lim.nb_max = RTE_MIN(priv->max_tx_desc, USHRT_MAX);
 +   dev_info->tx_desc_lim.nb_max = RTE_MIN(priv->max_tx_desc, UINT16_MAX);
 dev_info->tx_desc_lim.nb_align = MIN_BUFFERS_PER_QUEUE;
 -   dev_info->tx_desc_lim.nb_seg_max = RTE_MIN(priv->max_send_sge, 
USHRT_MAX);
 -   dev_info->rx_desc_lim.nb_mtu_seg_max = RTE_MIN(priv->max_recv_sge, 
USHRT_MAX);
 +   dev_info->tx_desc_lim.nb_seg_max = RTE_MIN(priv->max_send_sge, 
UINT16_MAX);
 +   dev_info->rx_desc_lim.nb_mtu_seg_max = RTE_MIN(priv->max_recv_sge, 
UINT16_MAX);

 /* Speed */
 dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
 --
 2.25.1

> -Original Message-
> From: Stephen Hemminger 
> Sent: Thursday, May 25, 2023 8:31 PM
> To: Ajay Sharma 
> Cc: Ferruh Yigit ; Andrew Rybchenko
> ; dev@dpdk.org; Long Li
> ; sta...@dpdk.org
> Subject: [EXTERNAL] Re: [EXTERNAL] [PATCH] Add checks for the port
> capabilities
> 
> On Fri, 26 May 2023 00:19:59 +
> Ajay Sharma  wrote:
> 
> >   +   dev_info->max_rx_queues = RTE_MIN(priv->max_rx_queues,
> USHRT_MAX);
> >   +   dev_info->max_tx_queues = RTE_MIN(priv->max_tx_queues,
> USHRT_MAX);
> >   +
> 
> Please use UINT16_MAX instead of USHRT_MAX since that is the type of
> max_rx_queues.
> Both are the same size but best to be consistent.


RE: [PATCH] net/ice: init dvm mode for parser

2023-05-28 Thread Zeng, ZhichaoX



> -Original Message-
> From: Qi Zhang 
> Sent: Saturday, May 27, 2023 3:21 AM
> To: Guo, Junfeng 
> Cc: Yang, Qiming ; dev@dpdk.org; Zhang, Qi Z
> ; sta...@dpdk.org
> Subject: [PATCH] net/ice: init dvm mode for parser
> 
> Double VLAN mode needs to be configured for the parser; otherwise the
> parser result will not be consistent with the hardware.
> 
> Fixes: 531d2555c8a6 ("net/ice: refactor parser usage")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Qi Zhang 
> ---
>  drivers/net/ice/ice_generic_flow.c | 5 +
>  1 file changed, 5 insertions(+)
> 
> diff --git a/drivers/net/ice/ice_generic_flow.c
> b/drivers/net/ice/ice_generic_flow.c
> index 86a32f8cb1..ed3075d555 100644
> --- a/drivers/net/ice/ice_generic_flow.c
> +++ b/drivers/net/ice/ice_generic_flow.c

Verified and passed.

Tested-by: Zhichao Zeng 



[PATCH v4 0/3] Enable iavf Rx Timestamp offload on vector path

2023-05-28 Thread Zhichao Zeng
Enable timestamp offload with the '--enable-rx-timestamp' option;
note that enabling Rx timestamp offload will reduce performance.

---
v4: rework avx2 patch based on offload path
---
v3: logging with driver dedicated macro
---
v2: fix compile warning and SSE path

Zhichao Zeng (3):
  net/iavf: support Rx timestamp offload on AVX512
  net/iavf: support Rx timestamp offload on AVX2
  net/iavf: support Rx timestamp offload on SSE

 drivers/net/iavf/iavf_rxtx.h|   3 +-
 drivers/net/iavf/iavf_rxtx_vec_avx2.c   | 186 +-
 drivers/net/iavf/iavf_rxtx_vec_avx512.c | 203 +++-
 drivers/net/iavf/iavf_rxtx_vec_common.h |   3 -
 drivers/net/iavf/iavf_rxtx_vec_sse.c| 159 ++-
 5 files changed, 538 insertions(+), 16 deletions(-)

-- 
2.34.1
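
For background, all three vector paths in this series implement the same
32-bit rollover reconstruction of the PHC time; a scalar sketch of the
idea (names illustrative):

#include <stdint.h>

/* The Rx descriptor carries only the low 32 bits of PHC time, so the
 * driver keeps the last full 64-bit value and bumps the high half when
 * the low half wraps between packets. */
static inline uint64_t
reconstruct_phc_time(uint64_t last_phc_time, uint32_t ts_low)
{
	uint64_t high = last_phc_time & ~UINT64_C(0xFFFFFFFF);

	if (ts_low < (uint32_t)last_phc_time) /* low half rolled over */
		high += UINT64_C(1) << 32;
	return high | ts_low;
}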



[PATCH v4 1/3] net/iavf: support Rx timestamp offload on AVX512

2023-05-28 Thread Zhichao Zeng
This patch enables Rx timestamp offload on the AVX512 data path.

Enable timestamp offload with the '--enable-rx-timestamp' option;
note that enabling Rx timestamp offload will reduce performance.

Signed-off-by: Wenjun Wu 
Signed-off-by: Zhichao Zeng 

---
v4: rework avx2 patch based on offload path
---
v3: logging with driver dedicated macro
---
v2: fix compile warning
---
 drivers/net/iavf/iavf_rxtx.h|   3 +-
 drivers/net/iavf/iavf_rxtx_vec_avx512.c | 203 +++-
 drivers/net/iavf/iavf_rxtx_vec_common.h |   3 -
 3 files changed, 200 insertions(+), 9 deletions(-)

diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 547b68f441..0345a6a51d 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -47,7 +47,8 @@
RTE_ETH_RX_OFFLOAD_CHECKSUM |\
RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |  \
RTE_ETH_RX_OFFLOAD_VLAN |\
-   RTE_ETH_RX_OFFLOAD_RSS_HASH)
+   RTE_ETH_RX_OFFLOAD_RSS_HASH |\
+   RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 
 /**
  * According to the vlan capabilities returned by the driver and FW, the vlan 
tci
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c 
b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 4fe9b97278..f9961e53b8 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -16,18 +16,20 @@
 /**
  * If user knows a specific offload is not enabled by APP,
  * the macro can be commented to save the effort of fast path.
- * Currently below 2 features are supported in RX path,
+ * Currently below 6 features are supported in RX path,
  * 1, checksum offload
  * 2, VLAN/QINQ stripping
  * 3, RSS hash
  * 4, packet type analysis
  * 5, flow director ID report
+ * 6, timestamp offload
  
**/
 #define IAVF_RX_CSUM_OFFLOAD
 #define IAVF_RX_VLAN_OFFLOAD
 #define IAVF_RX_RSS_OFFLOAD
 #define IAVF_RX_PTYPE_OFFLOAD
 #define IAVF_RX_FDIR_OFFLOAD
+#define IAVF_RX_TS_OFFLOAD
 
 static __rte_always_inline void
 iavf_rxq_rearm(struct iavf_rx_queue *rxq)
@@ -587,9 +589,9 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct 
iavf_rx_queue *rxq,
bool offload)
 {
struct iavf_adapter *adapter = rxq->vsi->adapter;
-
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
uint64_t offloads = adapter->dev_data->dev_conf.rxmode.offloads;
-
+#endif
 #ifdef IAVF_RX_PTYPE_OFFLOAD
const uint32_t *type_table = adapter->ptype_tbl;
 #endif
@@ -618,6 +620,25 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct 
iavf_rx_queue *rxq,
  rte_cpu_to_le_32(1 << IAVF_RX_FLEX_DESC_STATUS0_DD_S)))
return 0;
 
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+#ifdef IAVF_RX_TS_OFFLOAD
+   uint8_t inflection_point = 0;
+   bool is_tsinit = false;
+   __m256i hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, 
(uint32_t)rxq->phc_time);
+
+   if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
+   uint64_t sw_cur_time = rte_get_timer_cycles() / 
(rte_get_timer_hz() / 1000);
+
+   if (unlikely(sw_cur_time - rxq->hw_time_update > 4)) {
+   hw_low_last = _mm256_setzero_si256();
+   is_tsinit = 1;
+   } else {
+   hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, 
(uint32_t)rxq->phc_time);
+   }
+   }
+#endif
+#endif
+
/* constants used in processing loop */
const __m512i crc_adjust =
_mm512_set_epi32
@@ -1081,12 +1102,13 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct 
iavf_rx_queue *rxq,
 
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
if (offload) {
-#ifdef IAVF_RX_RSS_OFFLOAD
+#if defined(IAVF_RX_RSS_OFFLOAD) || defined(IAVF_RX_TS_OFFLOAD)
/**
 * needs to load 2nd 16B of each desc for RSS hash 
parsing,
 * will cause performance drop to get into this context.
 */
if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH ||
+   offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP ||
rxq->rx_flags & 
IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
@@ -1138,6 +1160,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct 
iavf_rx_queue *rxq,

(_mm256_castsi128_si256(raw_desc_bh0),
 raw_desc_bh1, 1);
 
+#ifdef IAVF_RX_RSS_OFFLOAD
if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
/**

[PATCH v4 2/3] net/iavf: support Rx timestamp offload on AVX2

2023-05-28 Thread Zhichao Zeng
This patch enables Rx timestamp offload on the AVX2 data path.

Enable timestamp offload with the '--enable-rx-timestamp' option;
note that enabling Rx timestamp offload will reduce performance.

Signed-off-by: Zhichao Zeng 

---
v4: rework avx2 patch based on offload path
---
v3: logging with driver dedicated macro
---
v2: fix compile warning
---
 drivers/net/iavf/iavf_rxtx_vec_avx2.c | 186 +-
 1 file changed, 182 insertions(+), 4 deletions(-)

diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c 
b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 22d4d3a90f..86290c4bbb 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -532,7 +532,9 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue 
*rxq,
 
struct iavf_adapter *adapter = rxq->vsi->adapter;
 
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
uint64_t offloads = adapter->dev_data->dev_conf.rxmode.offloads;
+#endif
const uint32_t *type_table = adapter->ptype_tbl;
 
const __m256i mbuf_init = _mm256_set_epi64x(0, 0,
@@ -558,6 +560,21 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue 
*rxq,
if (!(rxdp->wb.status_error0 &
rte_cpu_to_le_32(1 << IAVF_RX_FLEX_DESC_STATUS0_DD_S)))
return 0;
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+   bool is_tsinit = false;
+   uint8_t inflection_point = 0;
+   __m256i hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, 
rxq->phc_time);
+   if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
+   uint64_t sw_cur_time = rte_get_timer_cycles() / 
(rte_get_timer_hz() / 1000);
+
+   if (unlikely(sw_cur_time - rxq->hw_time_update > 4)) {
+   hw_low_last = _mm256_setzero_si256();
+   is_tsinit = 1;
+   } else {
+   hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, 
rxq->phc_time);
+   }
+   }
+#endif
 
/* constants used in processing loop */
const __m256i crc_adjust =
@@ -967,10 +984,11 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct 
iavf_rx_queue *rxq,
if (offload) {
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
/**
-* needs to load 2nd 16B of each desc for RSS hash 
parsing,
+* needs to load 2nd 16B of each desc,
 * will cause performance drop to get into this context.
 */
if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH ||
+   offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP ||
rxq->rx_flags & 
IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
@@ -1053,7 +1071,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct 
iavf_rx_queue *rxq,
mb4_5 = _mm256_or_si256(mb4_5, 
rss_hash4_5);
mb2_3 = _mm256_or_si256(mb2_3, 
rss_hash2_3);
mb0_1 = _mm256_or_si256(mb0_1, 
rss_hash0_1);
-   }
+   } /* if() on RSS hash parsing */
 
if (rxq->rx_flags & 
IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
/* merge the status/error-1 bits into 
one register */
@@ -1132,8 +1150,121 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct 
iavf_rx_queue *rxq,
mb4_5 = _mm256_or_si256(mb4_5, 
vlan_tci4_5);
mb2_3 = _mm256_or_si256(mb2_3, 
vlan_tci2_3);
mb0_1 = _mm256_or_si256(mb0_1, 
vlan_tci0_1);
-   }
-   } /* if() on RSS hash parsing */
+   } /* if() on Vlan parsing */
+
+   if (offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
+   uint32_t mask = 0x;
+   __m256i ts;
+   __m256i ts_low = _mm256_setzero_si256();
+   __m256i ts_low1;
+   __m256i ts_low2;
+   __m256i max_ret;
+   __m256i cmp_ret;
+   uint8_t ret = 0;
+   uint8_t shift = 8;
+   __m256i ts_desp_mask = 
_mm256_set_epi32(mask, 0, 0, 0, mask, 0, 0, 0);
+   __m256i cmp_mask = 
_mm256_set1_epi32(mask);
+   __m256i ts_permute_mask = 
_mm256_set_epi32(7, 3, 6, 2, 5, 1, 4, 0);
+
+   ts = _m
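
For reference, consuming this offload on the application side goes through
the generic ethdev/mbuf dynamic-field API rather than anything iavf-specific.
A minimal, hedged sketch (illustrative only, not part of this patch):

    #include <rte_ethdev.h>
    #include <rte_mbuf_dyn.h>

    static int hwts_field_offset = -1;
    static uint64_t hwts_rx_flag;

    /* Register the dynamic Rx timestamp field/flag and request the
     * offload before configuring the port. */
    static int
    enable_rx_timestamp(uint16_t port_id, struct rte_eth_conf *conf)
    {
        int ret = rte_mbuf_dyn_rx_timestamp_register(&hwts_field_offset,
                                                     &hwts_rx_flag);
        if (ret != 0)
            return ret;
        conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
        return rte_eth_dev_configure(port_id, 1, 1, conf);
    }

    /* Read the HW timestamp from a received mbuf; 0 if the PMD did
     * not set one. */
    static inline uint64_t
    mbuf_rx_timestamp(const struct rte_mbuf *m)
    {
        if ((m->ol_flags & hwts_rx_flag) == 0)
            return 0;
        return *RTE_MBUF_DYNFIELD(m, hwts_field_offset,
                                  rte_mbuf_timestamp_t *);
    }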

[PATCH v4 3/3] net/iavf: support Rx timestamp offload on SSE

2023-05-28 Thread Zhichao Zeng
This patch enables Rx timestamp offload on the SSE data path.

Enable timestamp offload with the command '--enable-rx-timestamp';
note that enabling the Rx timestamp offload will degrade performance.

Signed-off-by: Zhichao Zeng 

---
v4: rework avx2 patch based on offload path
---
v3: logging with driver dedicated macro
---
v2: fix compile warning and timestamp error
---
 drivers/net/iavf/iavf_rxtx_vec_sse.c | 159 ++-
 1 file changed, 156 insertions(+), 3 deletions(-)

diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c 
b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 3f30be01aa..b754122c51 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -392,6 +392,11 @@ flex_desc_to_olflags_v(struct iavf_rx_queue *rxq, __m128i 
descs[4],
_mm_extract_epi32(fdir_id0_3, 3);
} /* if() on fdir_enabled */
 
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+   if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+   flags = _mm_or_si128(flags, 
_mm_set1_epi32(iavf_timestamp_dynflag));
+#endif
+
/**
 * At this point, we have the 4 sets of flags in the low 16-bits
 * of each 32-bit value in flags.
@@ -723,7 +728,9 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
int pos;
uint64_t var;
struct iavf_adapter *adapter = rxq->vsi->adapter;
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
uint64_t offloads = adapter->dev_data->dev_conf.rxmode.offloads;
+#endif
const uint32_t *ptype_tbl = adapter->ptype_tbl;
__m128i crc_adjust = _mm_set_epi16
(0, 0, 0,   /* ignore non-length fields */
@@ -793,6 +800,24 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
  rte_cpu_to_le_32(1 << IAVF_RX_FLEX_DESC_STATUS0_DD_S)))
return 0;
 
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+   uint8_t inflection_point = 0;
+   bool is_tsinit = false;
+   __m128i hw_low_last = _mm_set_epi32(0, 0, 0, (uint32_t)rxq->phc_time);
+
+   if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
+   uint64_t sw_cur_time = rte_get_timer_cycles() / 
(rte_get_timer_hz() / 1000);
+
+   if (unlikely(sw_cur_time - rxq->hw_time_update > 4)) {
+   hw_low_last = _mm_setzero_si128();
+   is_tsinit = 1;
+   } else {
+   hw_low_last = _mm_set_epi32(0, 0, 0, 
(uint32_t)rxq->phc_time);
+   }
+   }
+
+#endif
+
/**
 * Compile-time verify the shuffle mask
 * NOTE: some field positions already verified above, but duplicated
@@ -825,7 +850,7 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
 rxdp += IAVF_VPMD_DESCS_PER_LOOP) {
__m128i descs[IAVF_VPMD_DESCS_PER_LOOP];
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
-   __m128i descs_bh[IAVF_VPMD_DESCS_PER_LOOP];
+   __m128i descs_bh[IAVF_VPMD_DESCS_PER_LOOP] = 
{_mm_setzero_si128()};
 #endif
__m128i pkt_mb0, pkt_mb1, pkt_mb2, pkt_mb3;
__m128i staterr, sterr_tmp1, sterr_tmp2;
@@ -895,10 +920,11 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
 
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
/**
-* needs to load 2nd 16B of each desc for RSS hash parsing,
+* needs to load 2nd 16B of each desc,
 * will cause performance drop to get into this context.
 */
if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH ||
+   offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP ||
rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
/* load bottom half of every 32B desc */
descs_bh[3] = _mm_load_si128
@@ -964,7 +990,94 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
pkt_mb2 = _mm_or_si128(pkt_mb2, vlan_tci2);
pkt_mb1 = _mm_or_si128(pkt_mb1, vlan_tci1);
pkt_mb0 = _mm_or_si128(pkt_mb0, vlan_tci0);
-   }
+   } /* if() on Vlan parsing */
+
+   if (offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
+   uint32_t mask = 0x;
+   __m128i ts;
+   __m128i ts_low = _mm_setzero_si128();
+   __m128i ts_low1;
+   __m128i max_ret;
+   __m128i cmp_ret;
+   uint8_t ret = 0;
+   uint8_t shift = 4;
+   __m128i ts_desp_mask = _mm_set_epi32(mask, 0, 0, 0);
+   __m128i cmp_mask = _mm_set1_epi32(mask);
+
+   ts = _mm_and_si128(descs_bh[0], ts_desp_mask);
+   ts_low = _mm_or_si128(ts_low, _mm_srli_si128(ts, 3 * 
4));
+   ts = _mm_and_si128(descs_bh[1], ts_desp_mask);
+ 
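
Both the AVX2 and SSE paths in this series perform the same per-burst
staleness check before parsing timestamps. A scalar rendering of that check,
derived from the diff above, may make the vector code easier to follow; the
interpretation that it guards the cached low 32 bits of the PHC time against
going stale between bursts is our reading, not a statement from the patch:

    #include <stdbool.h>
    #include <stdint.h>
    #include <rte_cycles.h>

    /* Scalar equivalent of the check above: convert TSC to milliseconds
     * and force a timestamp re-initialization when more than 4 ms have
     * passed since the queue's last HW time update. */
    static bool
    rx_timestamp_needs_reinit(uint64_t hw_time_update_ms)
    {
        uint64_t sw_cur_time_ms = rte_get_timer_cycles() /
                                  (rte_get_timer_hz() / 1000);

        return sw_cur_time_ms - hw_time_update_ms > 4;
    }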

[PATCH v3] app/test-pmd: fix not polling all queues without deferred starting

2023-05-28 Thread Jie Hai
Each stream has a read-only "disabled" field that controls whether
the stream should be used for forwarding. This field depends on the
states of the Rx/Tx queues; please see
commit 3c4426db54fc ("app/testpmd: do not poll stopped queues").

Currently, testpmd and the DPDK framework maintain queue state
separately. The state in the primary process of testpmd is set by
deferred_start in the queue configuration, while the state in the
framework (dev->data->rx_queue_state or dev->data->tx_queue_state)
is set when the driver enables/disables the queue, and it is
shared between the primary and secondary processes.

If the deferred_start is set, the queue is disabled and the
corresponding queue state in the framework changes to stopped.
However, the queue state in the framework does not only come from
this. If the primary/secondary process stops a queue, the related
queue state will change, too. However, the primary process of
testpmd does not know the change brought by this operation.
Therefore, setting the queue state in the primary testpmd based only
on deferred_start is unsafe.

For example, Rx/Tx queues that are stopped before a port stop/start
sequence cannot forward packets after these operations in the primary
process.

Therefore, the primary process should get the queue state from
the framework, as the secondary process does; please see commit
e065c9aa3e05 ("app/testpmd: fix secondary process packet forwarding").

Fixes: 3c4426db54fc ("app/testpmd: do not poll stopped queues")
Cc: sta...@dpdk.org

Signed-off-by: Jie Hai 
---
v1->v2:
1. Fix misspelled word 'deferred'.
2. Fix incorrect format of reference to commits.

v2->v3
1. Fix incorrect format of reference to commits.
---
 app/test-pmd/testpmd.c | 19 +++
 1 file changed, 3 insertions(+), 16 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 5cb6f9252395..a07a67a2639e 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2502,8 +2502,7 @@ start_packet_forwarding(int with_tx_first)
return;
 
if (stream_init != NULL) {
-   if (rte_eal_process_type() == RTE_PROC_SECONDARY)
-   update_queue_state();
+   update_queue_state();
for (i = 0; i < cur_fwd_config.nb_fwd_streams; i++)
stream_init(fwd_streams[i]);
}
@@ -2860,9 +2859,6 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
socket_id, rx_conf, mp);
}
 
-   ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start ?
-   RTE_ETH_QUEUE_STATE_STOPPED :
-   RTE_ETH_QUEUE_STATE_STARTED;
return ret;
 }
 
@@ -3129,9 +3125,6 @@ start_port(portid_t pid)
port->need_reconfig_queues = 0;
/* setup tx queues */
for (qi = 0; qi < nb_txq; qi++) {
-   struct rte_eth_txconf *conf =
-   &port->txq[qi].conf;
-
if ((numa_support) &&
(txring_numa[pi] != NUMA_NO_CONFIG))
diag = rte_eth_tx_queue_setup(pi, qi,
@@ -3144,13 +3137,8 @@ start_port(portid_t pid)
port->socket_id,
&(port->txq[qi].conf));
 
-   if (diag == 0) {
-   port->txq[qi].state =
-   conf->tx_deferred_start ?
-   RTE_ETH_QUEUE_STATE_STOPPED :
-   RTE_ETH_QUEUE_STATE_STARTED;
+   if (diag == 0)
continue;
-   }
 
/* Fail to setup tx queue, return */
if (port->port_status == RTE_PORT_HANDLING)
@@ -3266,8 +3254,7 @@ start_port(portid_t pid)
pl[cfg_pi++] = pi;
}
 
-   if (rte_eal_process_type() == RTE_PROC_SECONDARY)
-   update_queue_state();
+   update_queue_state();
 
if (at_least_one_port_successfully_started && !no_link_check)
check_all_ports_link_status(RTE_PORT_ALL);
-- 
2.33.0
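
For reference, the framework's shared queue state that this patch relies on
can be queried with the ethdev queue-info helpers. A minimal sketch of such a
lookup (illustrative only; not the exact update_queue_state() implementation
in testpmd):

    #include <rte_ethdev.h>

    /* Fetch the framework's view of an Rx queue state, which is shared
     * between the primary and secondary processes. */
    static uint8_t
    get_rx_queue_state(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_eth_rxq_info qinfo;

        if (rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo) != 0)
            return RTE_ETH_QUEUE_STATE_STOPPED; /* conservative on error */

        return qinfo.queue_state; /* RTE_ETH_QUEUE_STATE_STARTED/STOPPED */
    }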



RE: [PATCH] net/ice: init dvm mode for parser

2023-05-28 Thread Zhang, Qi Z



> -Original Message-
> From: Zeng, ZhichaoX 
> Sent: Monday, May 29, 2023 10:11 AM
> To: Zhang, Qi Z ; Guo, Junfeng
> 
> Cc: Yang, Qiming ; dev@dpdk.org; Zhang, Qi Z
> ; sta...@dpdk.org
> Subject: RE: [PATCH] net/ice: init dvm mode for parser
> 
> 
> 
> > -Original Message-
> > From: Qi Zhang 
> > Sent: Saturday, May 27, 2023 3:21 AM
> > To: Guo, Junfeng 
> > Cc: Yang, Qiming ; dev@dpdk.org; Zhang, Qi Z
> > ; sta...@dpdk.org
> > Subject: [PATCH] net/ice: init dvm mode for parser
> >
> > Double VLAN mode needs to be configured for the parser. Otherwise, the
> > parser result will not be consistent with hardware.
> >
> > Fixes: 531d2555c8a6 ("net/ice: refactor parser usage")
> > Cc: sta...@dpdk.org
> >
> > Signed-off-by: Qi Zhang 
> > ---
> >  drivers/net/ice/ice_generic_flow.c | 5 +
> >  1 file changed, 5 insertions(+)
> >
> > diff --git a/drivers/net/ice/ice_generic_flow.c
> > b/drivers/net/ice/ice_generic_flow.c
> > index 86a32f8cb1..ed3075d555 100644
> > --- a/drivers/net/ice/ice_generic_flow.c
> > +++ b/drivers/net/ice/ice_generic_flow.c
> 
> Verified and passed.
> 
> Tested-by: Zhichao Zeng 

Applied to dpdk-next-net-intel.

Thanks
Qi



[PATCH v4 1/2] cryptodev: support SM3_HMAC,SM4_CFB and SM4_OFB

2023-05-28 Thread Sunyang Wu
Add SM3_HMAC/SM4_CFB/SM4_OFB support in DPDK.

Signed-off-by: Sunyang Wu 
---
 doc/guides/cryptodevs/features/default.ini | 3 +++
 doc/guides/rel_notes/release_23_07.rst | 5 +
 lib/cryptodev/rte_crypto_sym.h | 8 +++-
 lib/cryptodev/rte_cryptodev.c  | 5 -
 4 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/doc/guides/cryptodevs/features/default.ini 
b/doc/guides/cryptodevs/features/default.ini
index 523da0cfa8..8f54d4a2a5 100644
--- a/doc/guides/cryptodevs/features/default.ini
+++ b/doc/guides/cryptodevs/features/default.ini
@@ -64,6 +64,8 @@ ZUC EEA3   =
 SM4 ECB=
 SM4 CBC=
 SM4 CTR=
+SM4 CFB=
+SM4 OFB=
 
 ;
 ; Supported authentication algorithms of a default crypto driver.
@@ -99,6 +101,7 @@ SHA3_384 HMAC   =
 SHA3_512=
 SHA3_512 HMAC   =
 SM3 =
+SM3 HMAC=
 SHAKE_128   =
 SHAKE_256   =
 
diff --git a/doc/guides/rel_notes/release_23_07.rst 
b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..405b34c6d2 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -55,6 +55,11 @@ New Features
  Also, make sure to start the actual text at the margin.
  ===
 
+* **Added new algorithms to cryptodev.**
+
+  * Added symmetric hash algorithm SM3-HMAC.
+  * Added symmetric cipher algorithm ShangMi 4 (SM4) in CFB and OFB modes.
+
 
 Removed Items
 -
diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
index b43174dbec..152d40623f 100644
--- a/lib/cryptodev/rte_crypto_sym.h
+++ b/lib/cryptodev/rte_crypto_sym.h
@@ -172,8 +172,12 @@ enum rte_crypto_cipher_algorithm {
/**< ShangMi 4 (SM4) algorithm in ECB mode */
RTE_CRYPTO_CIPHER_SM4_CBC,
/**< ShangMi 4 (SM4) algorithm in CBC mode */
-   RTE_CRYPTO_CIPHER_SM4_CTR
+   RTE_CRYPTO_CIPHER_SM4_CTR,
/**< ShangMi 4 (SM4) algorithm in CTR mode */
+   RTE_CRYPTO_CIPHER_SM4_OFB,
+   /**< ShangMi 4 (SM4) algorithm in OFB mode */
+   RTE_CRYPTO_CIPHER_SM4_CFB
+   /**< ShangMi 4 (SM4) algorithm in CFB mode */
 };
 
 /** Cipher algorithm name strings */
@@ -381,6 +385,8 @@ enum rte_crypto_auth_algorithm {
/**< 128 bit SHAKE algorithm. */
RTE_CRYPTO_AUTH_SHAKE_256,
/**< 256 bit SHAKE algorithm. */
+   RTE_CRYPTO_AUTH_SM3_HMAC,
+   /** < HMAC using ShangMi 3 (SM3) algorithm */
 };
 
 /** Authentication algorithm name strings */
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index a96114b2da..4ff7046e97 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -127,7 +127,9 @@ crypto_cipher_algorithm_strings[] = {
[RTE_CRYPTO_CIPHER_ZUC_EEA3]= "zuc-eea3",
[RTE_CRYPTO_CIPHER_SM4_ECB] = "sm4-ecb",
[RTE_CRYPTO_CIPHER_SM4_CBC] = "sm4-cbc",
-   [RTE_CRYPTO_CIPHER_SM4_CTR] = "sm4-ctr"
+   [RTE_CRYPTO_CIPHER_SM4_CTR] = "sm4-ctr",
+   [RTE_CRYPTO_CIPHER_SM4_CFB] = "sm4-cfb",
+   [RTE_CRYPTO_CIPHER_SM4_OFB] = "sm4-ofb"
 };
 
 /**
@@ -227,6 +229,7 @@ crypto_auth_algorithm_strings[] = {
[RTE_CRYPTO_AUTH_SNOW3G_UIA2]   = "snow3g-uia2",
[RTE_CRYPTO_AUTH_ZUC_EIA3]  = "zuc-eia3",
[RTE_CRYPTO_AUTH_SM3]   = "sm3",
+   [RTE_CRYPTO_AUTH_SM3_HMAC]  = "sm3-hmac",
 
[RTE_CRYPTO_AUTH_SHAKE_128]  = "shake-128",
[RTE_CRYPTO_AUTH_SHAKE_256]  = "shake-256",
-- 
2.19.0.rc0.windows.1
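
For context, a hedged sketch of how an application would select the newly
added SM4-CFB mode in a symmetric cipher transform. Capability checks and
session creation are omitted, and the 16-byte key/IV sizes are SM4's usual
parameters rather than something this patch specifies:

    #include <rte_crypto_sym.h>

    /* Fill a cipher transform for SM4 in CFB mode. */
    static void
    fill_sm4_cfb_xform(struct rte_crypto_sym_xform *xform,
                       const uint8_t *key, uint16_t key_len)
    {
        xform->next = NULL;
        xform->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
        xform->cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
        xform->cipher.algo = RTE_CRYPTO_CIPHER_SM4_CFB;
        xform->cipher.key.data = key;
        xform->cipher.key.length = key_len;  /* 16 bytes for SM4 */
        xform->cipher.iv.offset = 0;  /* IV offset within the op, app-defined */
        xform->cipher.iv.length = 16;
    }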



RE: [EXT] [PATCH v3 1/2] cryptodev: support SM3_HMAC,SM4_CFB and SM4_OFB

2023-05-28 Thread Sunyang Wu
Hi Akhil,
Thank you very much for your patient guidance, the patches have been 
resubmitted.

Best wishes
Sunyang

> Add SM3_HMAC/SM4_CFB/SM4_OFB support in DPDK.
> 
> Signed-off-by: Sunyang Wu 
> ---
>  doc/guides/cryptodevs/features/default.ini | 3 +++
>  doc/guides/rel_notes/release_23_07.rst | 5 +
>  lib/cryptodev/rte_crypto_sym.h | 8 +++-
>  lib/cryptodev/rte_cryptodev.c  | 5 -
>  4 files changed, 19 insertions(+), 2 deletions(-)
> 
> diff --git a/doc/guides/cryptodevs/features/default.ini
> b/doc/guides/cryptodevs/features/default.ini
> index 523da0cfa8..8f54d4a2a5 100644
> --- a/doc/guides/cryptodevs/features/default.ini
> +++ b/doc/guides/cryptodevs/features/default.ini
> @@ -64,6 +64,8 @@ ZUC EEA3   =
>  SM4 ECB=
>  SM4 CBC=
>  SM4 CTR=
> +SM4 CFB=
> +SM4 OFB=
> 
>  ;
>  ; Supported authentication algorithms of a default crypto driver.
> @@ -99,6 +101,7 @@ SHA3_384 HMAC   =
>  SHA3_512=
>  SHA3_512 HMAC   =
>  SM3 =
> +SM3 HMAC=
>  SHAKE_128   =
>  SHAKE_256   =
> 
> diff --git a/doc/guides/rel_notes/release_23_07.rst
> b/doc/guides/rel_notes/release_23_07.rst
> index a9b1293689..405b34c6d2 100644
> --- a/doc/guides/rel_notes/release_23_07.rst
> +++ b/doc/guides/rel_notes/release_23_07.rst
> @@ -55,6 +55,11 @@ New Features
>   Also, make sure to start the actual text at the margin.
>   ===
> 
> +* **Added new algorithms to cryptodev.**
> +
> +  * Added symmetric hash algorithm SM3-HMAC.
> +  * Added symmetric cipher algorithm ShangMi 4 (SM4) in CFB and OFB modes.
> +
> 
>  Removed Items
>  -
> diff --git a/lib/cryptodev/rte_crypto_sym.h 
> b/lib/cryptodev/rte_crypto_sym.h index b43174dbec..428603d06e 100644
> --- a/lib/cryptodev/rte_crypto_sym.h
> +++ b/lib/cryptodev/rte_crypto_sym.h
> @@ -172,8 +172,12 @@ enum rte_crypto_cipher_algorithm {
>   /**< ShangMi 4 (SM4) algorithm in ECB mode */
>   RTE_CRYPTO_CIPHER_SM4_CBC,
>   /**< ShangMi 4 (SM4) algorithm in CBC mode */
> - RTE_CRYPTO_CIPHER_SM4_CTR
> + RTE_CRYPTO_CIPHER_SM4_CTR,
>   /**< ShangMi 4 (SM4) algorithm in CTR mode */
> + RTE_CRYPTO_CIPHER_SM4_OFB,
> + /**< ShangMi 4 (SM4) algorithm in OFB mode */
> + RTE_CRYPTO_CIPHER_SM4_CFB
> + /**< ShangMi 4 (SM4) algorithm in CFB mode */
>  };
> 
>  /** Cipher algorithm name strings */
> @@ -376,6 +380,8 @@ enum rte_crypto_auth_algorithm {
>   /**< HMAC using 512 bit SHA3 algorithm. */
>   RTE_CRYPTO_AUTH_SM3,
>   /**< ShangMi 3 (SM3) algorithm */
> + RTE_CRYPTO_AUTH_SM3_HMAC,
> + /** < HMAC using ShangMi 3 (SM3) algorithm */

You cannot insert in the middle of the enum.
This will result in an ABI break.
http://mails.dpdk.org/archives/test-report/2023-May/400475.html
Please move this change to the end of the enum for this release.

You can submit a patch for the next release (which is an ABI-breaking release)
to move it back.


> 
>   RTE_CRYPTO_AUTH_SHAKE_128,
>   /**< 128 bit SHAKE algorithm. */
> diff --git a/lib/cryptodev/rte_cryptodev.c 
> b/lib/cryptodev/rte_cryptodev.c index a96114b2da..4ff7046e97 100644
> --- a/lib/cryptodev/rte_cryptodev.c
> +++ b/lib/cryptodev/rte_cryptodev.c
> @@ -127,7 +127,9 @@ crypto_cipher_algorithm_strings[] = {
>   [RTE_CRYPTO_CIPHER_ZUC_EEA3]= "zuc-eea3",
>   [RTE_CRYPTO_CIPHER_SM4_ECB] = "sm4-ecb",
>   [RTE_CRYPTO_CIPHER_SM4_CBC] = "sm4-cbc",
> - [RTE_CRYPTO_CIPHER_SM4_CTR] = "sm4-ctr"
> + [RTE_CRYPTO_CIPHER_SM4_CTR] = "sm4-ctr",
> + [RTE_CRYPTO_CIPHER_SM4_CFB] = "sm4-cfb",
> + [RTE_CRYPTO_CIPHER_SM4_OFB] = "sm4-ofb"
>  };
> 
>  /**
> @@ -227,6 +229,7 @@ crypto_auth_algorithm_strings[] = {
>   [RTE_CRYPTO_AUTH_SNOW3G_UIA2]   = "snow3g-uia2",
>   [RTE_CRYPTO_AUTH_ZUC_EIA3]  = "zuc-eia3",
>   [RTE_CRYPTO_AUTH_SM3]   = "sm3",
> + [RTE_CRYPTO_AUTH_SM3_HMAC]  = "sm3-hmac",
> 
>   [RTE_CRYPTO_AUTH_SHAKE_128]  = "shake-128",
>   [RTE_CRYPTO_AUTH_SHAKE_256]  = "shake-256",
> --
> 2.19.0.rc0.windows.1



RE: [PATCH v5 2/2] ethdev: add indirect list METER_MARK update structures

2023-05-28 Thread Gregory Etelson
Hello Ori,

[snip]

> > +/**
> > + * @see RTE_FLOW_ACTION_TYPE_METER_MARK
> > + * @see RTE_FLOW_ACTION_TYPE_INDIRECT_LIST
> > + *
> > + * Update action mutable context.
> > + */
> > +struct rte_flow_indirect_update_action_meter_mark {
> > +   /** New meter_mark parameters to be updated. */
> > +   struct rte_flow_action_meter_mark meter_mark;
> > +   /** The profile will be updated. */
> > +   uint32_t profile_valid:1;
> > +   /** The policy will be updated. */
> > +   uint32_t policy_valid:1;
> > +   /** The color mode will be updated. */
> > +   uint32_t color_mode_valid:1;
> > +   /** The meter state will be updated. */
> > +   uint32_t state_valid:1;
> > +   /** Reserved bits for the future usage. */
> > +   uint32_t reserved:28;
> > +};
> > +
> 
> Why did you create new meter_mark structure?
> 


Fixed.

> > +/**
> > + * @see RTE_FLOW_ACTION_TYPE_METER_MARK
> > + * @see RTE_FLOW_ACTION_TYPE_INDIRECT_LIST
> > + *
> > + * Update flow mutable context.
> > + */
> > +struct rte_flow_indirect_update_flow_meter_mark {
> > +   /** Updated init color applied to packet */
> > +   enum rte_color init_color;
> > +};
> > +
> >  /* Mbuf dynamic field offset for metadata. */
> >  extern int32_t rte_flow_dynf_metadata_offs;
> >
> > --
> > 2.34.1
> 
> Best,
> Ori


[PATCH 1/2] config/arm: update config for Neoverse N2

2023-05-28 Thread Ruifeng Wang
Updated the maximum number of lcores and NUMA nodes to support
platforms with multiple NUMA nodes.
Added the -mcpu compiler option.

Signed-off-by: Ruifeng Wang 
Reviewed-by: Feifei Wang 
---
 config/arm/meson.build | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/config/arm/meson.build b/config/arm/meson.build
index 5213434ca4..9e55e9f2a4 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -91,11 +91,12 @@ part_number_config_arm = {
 '0xd49': {
 'march': 'armv8.5-a',
 'march_features': ['sve2'],
+   'compiler_options':  ['-mcpu=neoverse-n2'],
 'flags': [
 ['RTE_MACHINE', '"neoverse-n2"'],
 ['RTE_ARM_FEATURE_ATOMICS', true],
-['RTE_MAX_LCORE', 64],
-['RTE_MAX_NUMA_NODES', 1]
+['RTE_MAX_LCORE', 128],
+['RTE_MAX_NUMA_NODES', 2]
 ]
 }
 }
-- 
2.25.1



[PATCH 2/2] build: fix warning when running external command

2023-05-28 Thread Ruifeng Wang
Meson gives warnings on calls to run_command when the "check"
parameter is missing. Most of the occurrences have been fixed;
this change fixes the remaining one.

Fixes: ecb904cc4596 ("build: fix warnings when running external commands")
Cc: bruce.richard...@intel.com
Cc: sta...@dpdk.org

Signed-off-by: Ruifeng Wang 
---
 config/meson.build | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/config/meson.build b/config/meson.build
index fa730a1b14..65087ce090 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -366,7 +366,7 @@ if max_numa_nodes == 'detect'
 error('Discovery of max_numa_nodes not supported for 
cross-compilation.')
 endif
 # overwrite the default value with discovered values
-max_numa_nodes = run_command(get_numa_count_cmd).stdout().to_int()
+max_numa_nodes = run_command(get_numa_count_cmd, check: 
true).stdout().to_int()
 message('Found @0@ numa nodes'.format(max_numa_nodes))
 dpdk_conf.set('RTE_MAX_NUMA_NODES', max_numa_nodes)
 elif max_numa_nodes != 'default'
-- 
2.25.1



RE: [PATCH v3] ethdev: add indirect list METER_MARK flow update structure

2023-05-28 Thread Ori Kam
Hi Gregory

> -Original Message-
> From: Gregory Etelson 
> Sent: Sunday, May 28, 2023 7:21 PM
> To: dev@dpdk.org
> 
> Indirect list API defines 2 types of action update:
> • Action mutable context is always shared between all flows
>   that referenced indirect actions list handle.
>   Action mutable context can be changed by explicit invocation
>   of indirect handle update function.
> • Flow mutable context is private to a flow.
>   Flow mutable context can be updated by indirect list handle
>   flow rule configuration.
> 
> The patch defines `struct rte_flow_indirect_update_flow_meter_mark`
> for indirect METER_MARK flow mutable updates.
> 
> Signed-off-by: Gregory Etelson 
> ---
> Depends-on: patch-127638 ("ethdev: add indirect list flow action")
> ---
>  lib/ethdev/rte_flow.h | 11 +++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index 71727883ad..750df8401d 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -3891,6 +3891,17 @@ struct rte_flow_update_meter_mark {
>   uint32_t reserved:27;
>  };
> 
> +/**
> + * @see RTE_FLOW_ACTION_TYPE_METER_MARK
> + * @see RTE_FLOW_ACTION_TYPE_INDIRECT_LIST
> + *
> + * Update flow mutable context.
> + */
> +struct rte_flow_indirect_update_flow_meter_mark {
> + /** Updated init color applied to packet */
> + enum rte_color init_color;
> +};
> +
>  /* Mbuf dynamic field offset for metadata. */
>  extern int32_t rte_flow_dynf_metadata_offs;
> 
> --
> 2.34.1

Acked-by: Ori Kam 
Best,
Ori
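
A hedged usage sketch of the new structure, with the struct and field names
of the indirect list action taken from the series this patch depends on:
overriding the initial color for one flow that references a shared METER_MARK
handle, without touching the shared action state:

    #include <rte_flow.h>

    static struct rte_flow *
    create_flow_with_init_color(uint16_t port_id,
                                struct rte_flow_action_list_handle *handle,
                                const struct rte_flow_attr *attr,
                                const struct rte_flow_item *pattern,
                                struct rte_flow_error *error)
    {
        static struct rte_flow_indirect_update_flow_meter_mark update = {
            .init_color = RTE_COLOR_YELLOW,
        };
        /* One conf entry per action in the handle's source list. */
        const void *conf[] = { &update };
        struct rte_flow_action_indirect_list indlist = {
            .handle = handle,
            .conf = conf,
        };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_INDIRECT_LIST,
              .conf = &indlist },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, attr, pattern, actions, error);
    }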


RE: [PATCH] ethdev: fix indirect action convert

2023-05-28 Thread Ori Kam
Hi Suanming,

> -Original Message-
> From: Suanming Mou 
> Sent: Friday, May 26, 2023 6:18 AM
> 
> As the indirect action conf holds the indirect action handle, when
> converting an indirect action the action conf (the action handle) should
> be copied from the original indirect action conf instead of duplicating
> the memory the handle points to.
> 
> Fixes: 4b61b8774be9 ("ethdev: introduce indirect flow action")
> 
> Signed-off-by: Suanming Mou 
> ---
>  lib/ethdev/rte_flow.c | 10 +-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> index 69e6e749f7..ff740f19a4 100644
> --- a/lib/ethdev/rte_flow.c
> +++ b/lib/ethdev/rte_flow.c
> @@ -889,7 +889,15 @@ rte_flow_conv_actions(struct rte_flow_action *dst,
>   src -= num;
>   dst -= num;
>   do {
> - if (src->conf) {
> + if (src->type == RTE_FLOW_ACTION_TYPE_INDIRECT) {
> + /*
> +  * Indirect action conf fills the indirect action
> +  * handler. Copy the action handle directly instead
> +  * of duplicating the pointer memory.
> +  */
> + if (size)
> + dst->conf = src->conf;
> + } else if (src->conf) {
>   off = RTE_ALIGN_CEIL(off, sizeof(double));
>   ret = rte_flow_conv_action_conf
>   ((void *)(data + off),
> --
> 2.25.1

Acked-by: Ori Kam 
Best,
Ori
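
To make the rationale concrete: for an indirect action, the conf pointer is
the shared action handle itself rather than a private structure, so a deep
copy would duplicate the wrong thing. A minimal sketch ('handle' is assumed
to come from rte_flow_action_handle_create()):

    #include <rte_flow.h>

    /* conf carries the handle pointer itself; conversion must copy
     * the pointer value, not the memory it points to. */
    static void
    fill_indirect_action(struct rte_flow_action *act,
                         struct rte_flow_action_handle *handle)
    {
        act->type = RTE_FLOW_ACTION_TYPE_INDIRECT;
        act->conf = handle;
    }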


RE: [EXT] [PATCH v3 1/4] bus/pci: introduce an internal representation of PCI device

2023-05-28 Thread Sunil Kumar Kori
> -Original Message-
> From: Miao Li 
> Sent: Thursday, May 25, 2023 10:01 PM
> To: dev@dpdk.org
> Cc: Sunil Kumar Kori ; tho...@monjalon.net;
> david.march...@redhat.com; ferruh.yi...@amd.com;
> chenbo@intel.com; yahui@intel.com
> Subject: [EXT] [PATCH v3 1/4] bus/pci: introduce an internal representation
> of PCI device
> 
> External Email
> 
> --
> From: Chenbo Xia 
> 
> This patch introduces an internal representation of the PCI device which will
> be used to store the internal information that doesn't have to be exposed to
> drivers, e.g., the VFIO region sizes/offsets.
> 
> In this patch, the internal structure is simply a wrapper of the 
> rte_pci_device
> structure. More fields will be added.
> 
> Signed-off-by: Chenbo Xia 
> ---
>  drivers/bus/pci/bsd/pci.c | 13 -
>  drivers/bus/pci/linux/pci.c   | 28 
>  drivers/bus/pci/pci_common.c  | 12 ++--
>  drivers/bus/pci/private.h | 14 +-
>  drivers/bus/pci/windows/pci.c | 14 +-
>  5 files changed, 52 insertions(+), 29 deletions(-)
> 

Acked-by: Sunil Kumar Kori 

...
[snipped]
...
> 2.25.1



RE: [PATCH v6] ethdev: add indirect list flow action

2023-05-28 Thread Ori Kam
Hi gregory,

> -Original Message-
> From: Gregory Etelson 
> Sent: Sunday, May 28, 2023 6:44 PM
> 
> Indirect API creates a shared flow action with a unique action handle.
> Flow rules can access the shared flow action and resources related to
> that action through the indirect action handle.
> In addition, the API allows updating an existing shared flow action
> configuration. After the update completes, the new action configuration
> is available to all flows that reference that shared action.
> 
> Indirect actions list expands the indirect action API:
> • Indirect action list creates a handle for one or several
>   flow actions, while legacy indirect action handle references
>   single action only.
>   Input flow actions arranged in END terminated list.
> • Flow rule can provide rule specific configuration parameters to
>   existing shared handle.
>   Updates of flow rule specific configuration will not change the base
>   action configuration.
>   Base action configuration was set during the action creation.
> 
> Indirect action list handle defines 2 types of resources:
> • Mutable handle resource can be changed during handle lifespan.
> • Immutable handle resource value is set during handle creation
>   and cannot be changed.
> 
> There are 2 types of mutable indirect handle contexts:
> • Action mutable context is always shared between all flows
>   that referenced indirect actions list handle.
>   Action mutable context can be changed by explicit invocation
>   of indirect handle update function.
> • Flow mutable context is private to a flow.
>   Flow mutable context can be updated by indirect list handle
>   flow rule configuration.
> 
> flow 1:
>  / indirect handle H conf C1 /
>|   |
>|   |
>|   | flow 2:
>|   | / indirect handle H conf C2 /
>|   |   |  |
>|   |   |  |
>|   |   |  |
> =
> ^  |   |   |  |
> |  |   V   |  V
> |~~  ~~~
> | flow mutableflow mutable
> | context 1   context 2
> |~~  ~~~
>   indirect  |  |   |
>   action|  |   |
>   context   |  V   V
> |   -
> | action mutable context
> |   -
> vaction immutable context
> =
> 
> Indirect action types - immutable, action / flow mutable, are mutually
> exclusive and depend on the action definition.
> For example:
> • Indirect METER_MARK policy is immutable action member and profile is
>   action mutable action member.
> • Indirect METER_MARK flow action defines init_color as flow mutable
>   member.
> • Indirect QUOTA flow action does not define flow mutable members.
> 
> If indirect list handle was created from a list of actions
> A1 / A2 ... An / END
> indirect list flow action can update Ai flow mutable context in the
> action configuration parameter.
> Indirect list action configuration is an array [C1, C2, .., Cn]
> where Ci corresponds to Ai in the action handle source.
> Ci configuration element points to Ai's flow mutable update, or is NULL
> if Ai has no flow mutable update.
> Indirect list action configuration can be NULL if the action
> has no flow mutable updates.
> 
> Template API:
> 
> Action template format:
> 
>   template .. indirect_list handle Htmpl conf Ctmpl ..
>   mask .. indirect_list handle Hmask conf Cmask ..
> 
> 1 If Htmpl was masked (Hmask != 0), it will be fixed in that template.
>   Otherwise, indirect action value is set in a flow rule.
> 
> 2 If Htmpl and Ctmpl[i] were masked (Hmask !=0 and Cmask[i] != 0),
>   Htmpl's Ai action flow mutable context will be updated to
>   Ctmpl[i] values and will be fixed in that template.
> 
> Flow rule format:
> 
>   actions .. indirect_list handle Hflow conf Cflow ..
> 
> 3 If Htmpl was not masked in actions template, Hflow references an
>   action of the same type as Htmpl.
> 
> 4 Cflow[i] updates handle's Ai flow mutable configuration if
>   the Ci was not masked in action template.
> 
> Signed-off-by: Gregory Etelson 
> ---

Acked-by: Ori Kam 
Best,
Ori
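
A hedged usage sketch of the API described above; the function and struct
names are taken from this series, and the COUNT/JUMP action list is only an
example:

    #include <rte_flow.h>

    /* Create one shared handle for a two-action list; flows then
     * reference it through RTE_FLOW_ACTION_TYPE_INDIRECT_LIST. */
    static struct rte_flow_action_list_handle *
    create_shared_action_list(uint16_t port_id, struct rte_flow_error *err)
    {
        const struct rte_flow_indir_action_conf indir_conf = {
            .ingress = 1,
        };
        const struct rte_flow_action_jump jump = { .group = 1 };
        const struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_COUNT },
            { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_action_list_handle_create(port_id, &indir_conf,
                                                  actions, err);
    }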


RE: [EXT] [PATCH v3 2/4] bus/pci: avoid depending on private value in kernel source

2023-05-28 Thread Sunil Kumar Kori
> -Original Message-
> From: Miao Li 
> Sent: Thursday, May 25, 2023 10:01 PM
> To: dev@dpdk.org
> Cc: Sunil Kumar Kori ; tho...@monjalon.net;
> david.march...@redhat.com; ferruh.yi...@amd.com;
> chenbo@intel.com; yahui@intel.com; Anatoly Burakov
> 
> Subject: [EXT] [PATCH v3 2/4] bus/pci: avoid depending on private value in
> kernel source
> 
> External Email
> 
> --
> From: Chenbo Xia 
> 
> The value 40 used in VFIO_GET_REGION_ADDR() is a private value
> (VFIO_PCI_OFFSET_SHIFT) defined in Linux kernel source [1]. It is not part of
> VFIO API, and we should not depend on it.
> 
> [1] https://github.com/torvalds/linux/blob/v6.2/include/linux/vfio_pci_core.h
> 
> Signed-off-by: Chenbo Xia 
> ---
>  drivers/bus/pci/linux/pci.c  |   4 +-
>  drivers/bus/pci/linux/pci_init.h |   4 +-
>  drivers/bus/pci/linux/pci_vfio.c | 197 +++
>  drivers/bus/pci/private.h|   9 ++
>  lib/eal/include/rte_vfio.h   |   1 -
>  5 files changed, 159 insertions(+), 56 deletions(-)
> 
Acked-by: Sunil Kumar Kori 

...
[snipped]
...

> 2.25.1



RE: [EXT] [PATCH v3 3/4] bus/pci: introduce helper for MMIO read and write

2023-05-28 Thread Sunil Kumar Kori
> -Original Message-
> From: Miao Li 
> Sent: Thursday, May 25, 2023 10:01 PM
> To: dev@dpdk.org
> Cc: Sunil Kumar Kori ; tho...@monjalon.net;
> david.march...@redhat.com; ferruh.yi...@amd.com;
> chenbo@intel.com; yahui@intel.com; Anatoly Burakov
> 
> Subject: [EXT] [PATCH v3 3/4] bus/pci: introduce helper for MMIO read and
> write
> 
> External Email
> 
> --
> From: Chenbo Xia 
> 
> The MMIO regions may not be mmap-able for VFIO-PCI devices.
> In this case, the driver should explicitly do read and write to access these
> regions.
> 
> Signed-off-by: Chenbo Xia 
> ---
>  drivers/bus/pci/bsd/pci.c| 22 +++
>  drivers/bus/pci/linux/pci.c  | 46 ++
>  drivers/bus/pci/linux/pci_init.h | 10 +++
> drivers/bus/pci/linux/pci_uio.c  | 22 +++
> drivers/bus/pci/linux/pci_vfio.c | 36 
>  drivers/bus/pci/rte_bus_pci.h| 48 
>  drivers/bus/pci/version.map  |  3 ++
>  7 files changed, 187 insertions(+)
> 
Acked-by: Sunil Kumar Kori 

...
[snipped]
...

> 2.25.1
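
A hedged sketch of the helpers added by this patch; the signature is assumed
from this series and the return-value conventions should be checked against
the final header:

    #include <rte_bus_pci.h>

    /* Read a 32-bit register from BAR 0 even when the BAR could not
     * be mmap-ed. */
    static int
    read_bar0_reg(const struct rte_pci_device *dev, off_t reg_off,
                  uint32_t *val)
    {
        if (rte_pci_mmio_read(dev, 0, val, sizeof(*val), reg_off) < 0)
            return -1;
        return 0;
    }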



RE: [EXT] [PATCH v3 4/4] bus/pci: add VFIO sparse mmap support

2023-05-28 Thread Sunil Kumar Kori
> -Original Message-
> From: Miao Li 
> Sent: Thursday, May 25, 2023 10:01 PM
> To: dev@dpdk.org
> Cc: Sunil Kumar Kori ; tho...@monjalon.net;
> david.march...@redhat.com; ferruh.yi...@amd.com;
> chenbo@intel.com; yahui@intel.com; Anatoly Burakov
> 
> Subject: [EXT] [PATCH v3 4/4] bus/pci: add VFIO sparse mmap support
> 
> External Email
> 
> --
> This patch adds sparse mmap support in PCI bus. Sparse mmap is a capability
> defined in VFIO which allows multiple mmap areas in one VFIO region.
> 
> In this patch, the sparse mmap regions are mapped to one continuous virtual
> address region that follows device-specific BAR layout. So, driver can still
> access all mapped sparse mmap regions by using 'bar_base_address +
> bar_offset'.
> 
> Signed-off-by: Miao Li 
> Signed-off-by: Chenbo Xia 
> ---
>  drivers/bus/pci/linux/pci_vfio.c | 104 +++
>  drivers/bus/pci/private.h|   2 +
>  2 files changed, 94 insertions(+), 12 deletions(-)
> 

Acked-by: Sunil Kumar Kori 
...
[snipped]
...

> 2.25.1



RE: [PATCH v2 3/3] examples/l3fwd-graph: add IPv6 lookup and rewrite support

2023-05-28 Thread Sunil Kumar Kori
Hi Amit,

Did you get time to check the build error mentioned in the previous mail?

Regards
Sunil Kumar Kori

> -Original Message-
> From: Sunil Kumar Kori 
> Sent: Monday, May 22, 2023 11:20 AM
> To: Amit Prakash Shukla ; Jerin Jacob
> Kollanukkaran ; Kiran Kumar Kokkilagadda
> ; Nithin Kumar Dabilpuram
> 
> Cc: dev@dpdk.org; Amit Prakash Shukla 
> Subject: [EXT] RE: [PATCH v2 3/3] examples/l3fwd-graph: add IPv6 lookup
> and rewrite support
> 
> External Email
> 
> --
> Hi Amit,
> 
> Please look into build failure.
> http://mails.dpdk.org/archives/test-report/2023-May/396241.html
> 
> Thanks & Regards
> Sunil Kumar Kori
> 
> > -Original Message-
> > From: Amit Prakash Shukla 
> > Sent: Thursday, May 18, 2023 9:27 PM
> > To: Jerin Jacob Kollanukkaran ; Kiran Kumar
> > Kokkilagadda ; Nithin Kumar Dabilpuram
> > 
> > Cc: dev@dpdk.org; Sunil Kumar Kori ; Amit Prakash
> > Shukla 
> > Subject: [PATCH v2 3/3] examples/l3fwd-graph: add IPv6 lookup and
> > rewrite support
> >
> > From: Sunil Kumar Kori 
> >
> > Similar to ipv4, to support IPv6 lookup and rewrite node routes and
> > rewrite data needs to be added.
> >
> > Patch adds routes for ipv6 to validate ip6_lookup node and  rewrite
> > data to validate ip6_rewrite node.
> >
> > Signed-off-by: Sunil Kumar Kori 
> > Signed-off-by: Amit Prakash Shukla 
> > ---
> > v2:
> > - Performance related changes
> >
> >  doc/guides/sample_app_ug/l3_forward_graph.rst | 40 ++
> >  examples/l3fwd-graph/main.c   | 77 ++-
> >  2 files changed, 98 insertions(+), 19 deletions(-)
> >
> 
> [snipped]
> 
> > 2.25.1



Re: [PATCH v3 1/4] bus/pci: introduce an internal representation of PCI device

2023-05-28 Thread Cao, Yahui



On 5/26/2023 12:31 AM, Miao Li wrote:

From: Chenbo Xia 

This patch introduces an internal representation of the PCI device
which will be used to store the internal information that doesn't have
to be exposed to drivers, e.g., the VFIO region sizes/offsets.

In this patch, the internal structure is simply a wrapper of the
rte_pci_device structure. More fields will be added.

Signed-off-by: Chenbo Xia 
---
  drivers/bus/pci/bsd/pci.c | 13 -
  drivers/bus/pci/linux/pci.c   | 28 
  drivers/bus/pci/pci_common.c  | 12 ++--
  drivers/bus/pci/private.h | 14 +-
  drivers/bus/pci/windows/pci.c | 14 +-
  5 files changed, 52 insertions(+), 29 deletions(-)


Acked-by: Yahui Cao 


Re: [PATCH v3 2/4] bus/pci: avoid depending on private value in kernel source

2023-05-28 Thread Cao, Yahui



On 5/26/2023 12:31 AM, Miao Li wrote:

From: Chenbo Xia 

The value 40 used in VFIO_GET_REGION_ADDR() is a private value
(VFIO_PCI_OFFSET_SHIFT) defined in Linux kernel source [1]. It
is not part of VFIO API, and we should not depend on it.

[1] https://github.com/torvalds/linux/blob/v6.2/include/linux/vfio_pci_core.h

Signed-off-by: Chenbo Xia 
---
  drivers/bus/pci/linux/pci.c  |   4 +-
  drivers/bus/pci/linux/pci_init.h |   4 +-
  drivers/bus/pci/linux/pci_vfio.c | 197 +++
  drivers/bus/pci/private.h|   9 ++
  lib/eal/include/rte_vfio.h   |   1 -
  5 files changed, 159 insertions(+), 56 deletions(-)


Acked-by: Yahui Cao 


Re: [PATCH v3 3/4] bus/pci: introduce helper for MMIO read and write

2023-05-28 Thread Cao, Yahui



On 5/26/2023 12:31 AM, Miao Li wrote:

From: Chenbo Xia 

The MMIO regions may not be mmap-able for VFIO-PCI devices.
In this case, the driver should explicitly do read and write
to access these regions.

Signed-off-by: Chenbo Xia 
---
  drivers/bus/pci/bsd/pci.c| 22 +++
  drivers/bus/pci/linux/pci.c  | 46 ++
  drivers/bus/pci/linux/pci_init.h | 10 +++
  drivers/bus/pci/linux/pci_uio.c  | 22 +++
  drivers/bus/pci/linux/pci_vfio.c | 36 
  drivers/bus/pci/rte_bus_pci.h| 48 
  drivers/bus/pci/version.map  |  3 ++
  7 files changed, 187 insertions(+)


Acked-by: Yahui Cao 


RE: [PATCH v3 07/28] vhost: change to single IOTLB cache per device

2023-05-28 Thread Xia, Chenbo
> -Original Message-
> From: Maxime Coquelin 
> Sent: Friday, May 26, 2023 12:26 AM
> To: dev@dpdk.org; Xia, Chenbo ;
> david.march...@redhat.com; m...@redhat.com; f...@redhat.com;
> jasow...@redhat.com; Liang, Cunming ; Xie, Yongji
> ; echau...@redhat.com; epere...@redhat.com;
> amore...@redhat.com; l...@redhat.com
> Cc: Maxime Coquelin 
> Subject: [PATCH v3 07/28] vhost: change to single IOTLB cache per device
> 
> This patch simplifies IOTLB implementation and improves
> IOTLB memory consumption by having a single IOTLB cache
> per device, instead of having one per queue.
> 
> In order to not impact performance, it keeps an IOTLB lock
> per virtqueue, so that there is no contention between
> multiple queues trying to acquire it.
> 
> Signed-off-by: Maxime Coquelin 
> ---
>  lib/vhost/iotlb.c  | 212 +++--
>  lib/vhost/iotlb.h  |  43 ++---
>  lib/vhost/vhost.c  |  18 ++--
>  lib/vhost/vhost.h  |  16 ++--
>  lib/vhost/vhost_user.c |  23 +++--
>  5 files changed, 159 insertions(+), 153 deletions(-)
> 
> --
> 2.40.1

Reviewed-by: Chenbo Xia  


[PATCH v5] crypto/qat: support to enable insecure algorithms

2023-05-28 Thread Vikash Poddar
All the insecure algorithms are disabled by default in
cryptodev Gen 1, 2, 3 and 4.
Use qat_legacy_capa to enable all the legacy
algorithms.
These changes affect both sym and asym insecure crypto
algorithms.

Signed-off-by: Vikash Poddar 
---
Depends-on: patch-28182 ("[v2] common/qat: fix qat_dev_cmd_param
corruption")
v5:
Resolving apply patch warning
v4:
Resolved rebase conflict.
v3:
Rebased the patch.
v2:
Extend the support to enable the insecure algorithm in
QAT Gen 1,3 and 4 for sym as well as asym.
---
 app/test/test_cryptodev_asym.c   |  28 +++--
 doc/guides/cryptodevs/qat.rst|  14 +++
 drivers/common/qat/qat_device.c  |   1 +
 drivers/common/qat/qat_device.h  |   3 +-
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c |  88 ---
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 113 +++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c |  64 ++-
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c|  90 ---
 drivers/crypto/qat/qat_asym.c|  16 ++-
 drivers/crypto/qat/qat_crypto.h  |   1 +
 drivers/crypto/qat/qat_sym.c |   3 +
 11 files changed, 256 insertions(+), 165 deletions(-)

diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 9236817650..bb32d81e57 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -453,11 +453,14 @@ test_cryptodev_asym_op(struct 
crypto_testsuite_params_asym *ts_params,
ret = rte_cryptodev_asym_session_create(dev_id, &xform_tc,
ts_params->session_mpool, &sess);
if (ret < 0) {
-   snprintf(test_msg, ASYM_TEST_MSG_LEN,
-   "line %u "
-   "FAILED: %s", __LINE__,
-   "Session creation failed");
status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
+   if (status == TEST_SKIPPED)
+   snprintf(test_msg, ASYM_TEST_MSG_LEN, 
"SKIPPED");
+   else
+   snprintf(test_msg, ASYM_TEST_MSG_LEN,
+   "line %u "
+   "FAILED: %s", __LINE__,
+   "Session creation failed");
goto error_exit;
}
 
@@ -489,6 +492,11 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym 
*ts_params,
}
 
if (test_cryptodev_asym_ver(op, &xform_tc, data_tc, result_op) != 
TEST_SUCCESS) {
+   if (result_op->status == RTE_CRYPTO_OP_STATUS_INVALID_ARGS) {
+   snprintf(test_msg, ASYM_TEST_MSG_LEN, "SESSIONLESS 
SKIPPED");
+   status = TEST_SKIPPED;
+   goto error_exit;
+   }
snprintf(test_msg, ASYM_TEST_MSG_LEN,
"line %u FAILED: %s",
__LINE__, "Verification failed ");
@@ -619,13 +627,19 @@ test_one_by_one(void)
/* Go through all test cases */
test_index = 0;
for (i = 0; i < test_vector.size; i++) {
-   if (test_one_case(test_vector.address[i], 0) != TEST_SUCCESS)
+   status = test_one_case(test_vector.address[i], 0);
+   if (status == TEST_SUCCESS || status == TEST_SKIPPED)
+   status = TEST_SUCCESS;
+   else
status = TEST_FAILED;
}
+
if (sessionless) {
for (i = 0; i < test_vector.size; i++) {
-   if (test_one_case(test_vector.address[i], 1)
-   != TEST_SUCCESS)
+   status = test_one_case(test_vector.address[i], 1);
+   if (status == TEST_SUCCESS || status == TEST_SKIPPED)
+   status = TEST_SUCCESS;
+   else
status = TEST_FAILED;
}
}
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index a4a25711ed..9360a6a9e5 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -272,6 +272,20 @@ allocated while for GEN1 devices, 12 buffers are 
allocated, plus 1472 bytes over
larger than the input size).
 
 
+Running QAT PMD with insecure crypto algorithms
+~~~
+
+A few insecure crypto algorithms are deprecated from QAT drivers. This needs 
to be reflected in DPDK QAT PMD.
+DPDK QAT PMD has by default disabled all the insecure crypto algorithms from 
Gen 1,2,3 and 4.
+A PMD parameter is used to enable the capability.
+
+- qat_legacy_capa
+
+To use this feature the user must set the parameter on process start as a 
device additional parameter::
+
+  -a b1:01.2,qat_legac

RE: [PATCH v3 09/28] vhost: add page size info to IOTLB entry

2023-05-28 Thread Xia, Chenbo
> -Original Message-
> From: Maxime Coquelin 
> Sent: Friday, May 26, 2023 12:26 AM
> To: dev@dpdk.org; Xia, Chenbo ;
> david.march...@redhat.com; m...@redhat.com; f...@redhat.com;
> jasow...@redhat.com; Liang, Cunming ; Xie, Yongji
> ; echau...@redhat.com; epere...@redhat.com;
> amore...@redhat.com; l...@redhat.com
> Cc: Maxime Coquelin 
> Subject: [PATCH v3 09/28] vhost: add page size info to IOTLB entry
> 
> VDUSE will close the file descriptor after having mapped
> the shared memory, so it will not be possible to get the
> page size afterwards.
> 
> This patch adds an new page_shift field to the IOTLB entry,
> so that the information will be passed at IOTLB cache
> insertion time. The information is stored as a bit shift
> value so that IOTLB entry keeps fitting in a single
> cacheline.
> 
> Signed-off-by: Maxime Coquelin 
> ---
>  lib/vhost/iotlb.c  | 46 --
>  lib/vhost/iotlb.h  |  2 +-
>  lib/vhost/vhost.h  |  1 -
>  lib/vhost/vhost_user.c |  8 +---
>  4 files changed, 28 insertions(+), 29 deletions(-)
> 
> diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
> index 14d143366b..a23008909f 100644
> --- a/lib/vhost/iotlb.c
> +++ b/lib/vhost/iotlb.c
> @@ -19,14 +19,14 @@ struct vhost_iotlb_entry {
>   uint64_t uaddr;
>   uint64_t uoffset;
>   uint64_t size;
> + uint8_t page_shift;
>   uint8_t perm;
>  };
> 
>  #define IOTLB_CACHE_SIZE 2048
> 
>  static bool
> -vhost_user_iotlb_share_page(struct vhost_iotlb_entry *a, struct
> vhost_iotlb_entry *b,
> - uint64_t align)
> +vhost_user_iotlb_share_page(struct vhost_iotlb_entry *a, struct
> vhost_iotlb_entry *b)
>  {
>   uint64_t a_start, a_end, b_start;
> 
> @@ -38,44 +38,41 @@ vhost_user_iotlb_share_page(struct vhost_iotlb_entry
> *a, struct vhost_iotlb_entr
> 
>   /* Assumes entry a lower than entry b */
>   RTE_ASSERT(a_start < b_start);
> - a_end = RTE_ALIGN_CEIL(a_start + a->size, align);
> - b_start = RTE_ALIGN_FLOOR(b_start, align);
> + a_end = RTE_ALIGN_CEIL(a_start + a->size, RTE_BIT64(a->page_shift));
> + b_start = RTE_ALIGN_FLOOR(b_start, RTE_BIT64(b->page_shift));
> 
>   return a_end > b_start;
>  }
> 
>  static void
> -vhost_user_iotlb_set_dump(struct virtio_net *dev, struct
> vhost_iotlb_entry *node)
> +vhost_user_iotlb_set_dump(struct vhost_iotlb_entry *node)
>  {
> - uint64_t align, start;
> + uint64_t start;
> 
>   start = node->uaddr + node->uoffset;
> - align = hua_to_alignment(dev->mem, (void *)(uintptr_t)start);
> -
> - mem_set_dump((void *)(uintptr_t)start, node->size, true, align);
> + mem_set_dump((void *)(uintptr_t)start, node->size, true,
> RTE_BIT64(node->page_shift));
>  }
> 
>  static void
> -vhost_user_iotlb_clear_dump(struct virtio_net *dev, struct
> vhost_iotlb_entry *node,
> +vhost_user_iotlb_clear_dump(struct vhost_iotlb_entry *node,
>   struct vhost_iotlb_entry *prev, struct vhost_iotlb_entry *next)
>  {
> - uint64_t align, start, end;
> + uint64_t start, end;
> 
>   start = node->uaddr + node->uoffset;
>   end = start + node->size;
> 
> - align = hua_to_alignment(dev->mem, (void *)(uintptr_t)start);
> -
>   /* Skip first page if shared with previous entry. */
> - if (vhost_user_iotlb_share_page(prev, node, align))
> - start = RTE_ALIGN_CEIL(start, align);
> + if (vhost_user_iotlb_share_page(prev, node))
> + start = RTE_ALIGN_CEIL(start, RTE_BIT64(node->page_shift));
> 
>   /* Skip last page if shared with next entry. */
> - if (vhost_user_iotlb_share_page(node, next, align))
> - end = RTE_ALIGN_FLOOR(end, align);
> + if (vhost_user_iotlb_share_page(node, next))
> + end = RTE_ALIGN_FLOOR(end, RTE_BIT64(node->page_shift));
> 
>   if (end > start)
> - mem_set_dump((void *)(uintptr_t)start, end - start, false,
> align);
> + mem_set_dump((void *)(uintptr_t)start, end - start, false,
> + RTE_BIT64(node->page_shift));
>  }
> 
>  static struct vhost_iotlb_entry *
> @@ -198,7 +195,7 @@ vhost_user_iotlb_cache_remove_all(struct virtio_net
> *dev)
>   vhost_user_iotlb_wr_lock_all(dev);
> 
>   RTE_TAILQ_FOREACH_SAFE(node, &dev->iotlb_list, next, temp_node) {
> - vhost_user_iotlb_set_dump(dev, node);
> + vhost_user_iotlb_set_dump(node);
> 
>   TAILQ_REMOVE(&dev->iotlb_list, node, next);
>   vhost_user_iotlb_pool_put(dev, node);
> @@ -223,7 +220,7 @@ vhost_user_iotlb_cache_random_evict(struct virtio_net
> *dev)
>   if (!entry_idx) {
>   struct vhost_iotlb_entry *next_node =
> RTE_TAILQ_NEXT(node, next);
> 
> - vhost_user_iotlb_clear_dump(dev, node, prev_node,
> next_node);
> + vhost_user_iotlb_clear_dump(node, prev_node, next_node);
> 
>   TAILQ_REMOVE(&dev->iotlb_list, node, next);
>

Re: [PATCH v3 4/4] bus/pci: add VFIO sparse mmap support

2023-05-28 Thread Cao, Yahui



On 5/26/2023 12:31 AM, Miao Li wrote:

This patch adds sparse mmap support in PCI bus. Sparse mmap is a
capability defined in VFIO which allows multiple mmap areas in one
VFIO region.

In this patch, the sparse mmap regions are mapped to one continuous
virtual address region that follows device-specific BAR layout. So,
driver can still access all mapped sparse mmap regions by using
'bar_base_address + bar_offset'.

Signed-off-by: Miao Li 
Signed-off-by: Chenbo Xia 
---
  drivers/bus/pci/linux/pci_vfio.c | 104 +++
  drivers/bus/pci/private.h|   2 +
  2 files changed, 94 insertions(+), 12 deletions(-)


Acked-by: Yahui Cao 


RE: [PATCH v3 12/28] vhost: add IOTLB cache entry removal callback

2023-05-28 Thread Xia, Chenbo
> -Original Message-
> From: Maxime Coquelin 
> Sent: Friday, May 26, 2023 12:26 AM
> To: dev@dpdk.org; Xia, Chenbo ;
> david.march...@redhat.com; m...@redhat.com; f...@redhat.com;
> jasow...@redhat.com; Liang, Cunming ; Xie, Yongji
> ; echau...@redhat.com; epere...@redhat.com;
> amore...@redhat.com; l...@redhat.com
> Cc: Maxime Coquelin 
> Subject: [PATCH v3 12/28] vhost: add IOTLB cache entry removal callback
> 
> VDUSE will need to munmap() the IOTLB entry on removal
> from the cache, as it performs mmap() before insertion.
> 
> This patch introduces a callback that VDUSE layer will
> implement to achieve this.
> 
> Signed-off-by: Maxime Coquelin 
> ---
>  lib/vhost/iotlb.c | 12 
>  lib/vhost/vhost.h |  3 +++
>  2 files changed, 15 insertions(+)
> 
> diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
> index a23008909f..6dca0ba7d0 100644
> --- a/lib/vhost/iotlb.c
> +++ b/lib/vhost/iotlb.c
> @@ -25,6 +25,15 @@ struct vhost_iotlb_entry {
> 
>  #define IOTLB_CACHE_SIZE 2048
> 
> +static void
> +vhost_user_iotlb_remove_notify(struct virtio_net *dev, struct
> vhost_iotlb_entry *entry)
> +{
> + if (dev->backend_ops->iotlb_remove_notify == NULL)
> + return;
> +
> + dev->backend_ops->iotlb_remove_notify(entry->uaddr, entry->uoffset,
> entry->size);
> +}
> +
>  static bool
>  vhost_user_iotlb_share_page(struct vhost_iotlb_entry *a, struct
> vhost_iotlb_entry *b)
>  {
> @@ -198,6 +207,7 @@ vhost_user_iotlb_cache_remove_all(struct virtio_net
> *dev)
>   vhost_user_iotlb_set_dump(node);
> 
>   TAILQ_REMOVE(&dev->iotlb_list, node, next);
> + vhost_user_iotlb_remove_notify(dev, node);
>   vhost_user_iotlb_pool_put(dev, node);
>   }
> 
> @@ -223,6 +233,7 @@ vhost_user_iotlb_cache_random_evict(struct virtio_net
> *dev)
>   vhost_user_iotlb_clear_dump(node, prev_node, next_node);
> 
>   TAILQ_REMOVE(&dev->iotlb_list, node, next);
> + vhost_user_iotlb_remove_notify(dev, node);
>   vhost_user_iotlb_pool_put(dev, node);
>   dev->iotlb_cache_nr--;
>   break;
> @@ -314,6 +325,7 @@ vhost_user_iotlb_cache_remove(struct virtio_net *dev,
> uint64_t iova, uint64_t si
>   vhost_user_iotlb_clear_dump(node, prev_node, next_node);
> 
>   TAILQ_REMOVE(&dev->iotlb_list, node, next);
> + vhost_user_iotlb_remove_notify(dev, node);
>   vhost_user_iotlb_pool_put(dev, node);
>   dev->iotlb_cache_nr--;
>   } else {
> diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> index cc5c707205..f37e0b83b8 100644
> --- a/lib/vhost/vhost.h
> +++ b/lib/vhost/vhost.h
> @@ -89,10 +89,13 @@
>   for (iter = val; iter < num; iter++)
>  #endif
> 
> +typedef void (*vhost_iotlb_remove_notify)(uint64_t addr, uint64_t off,
> uint64_t size);
> +
>  /**
>   * Structure that contains backend-specific ops.
>   */
>  struct vhost_backend_ops {
> + vhost_iotlb_remove_notify iotlb_remove_notify;
>  };
> 
>  /**
> --
> 2.40.1

Reviewed-by: Chenbo Xia  


RE: "Thread safety" in rte_flow

2023-05-28 Thread Ori Kam
Hi David,

Best,
Ori

> -Original Message-
> From: David Marchand 
> Sent: Friday, May 26, 2023 3:33 PM
> To: Ori Kam ; NBU-Contact-Thomas Monjalon
> (EXTERNAL) ; Ferruh Yigit ;
> Andrew Rybchenko 
> Cc: dev 
> Subject: "Thread safety" in rte_flow
> 
> Hello Ori, ethdev maintainers,
> 
> I am a bit puzzled at the RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE checks in
> rte_flow.c.
> 
> - The rte_flow.h does not hint at what is being protected.
> 
> All I can see is a somewhat vague, in lib/ethdev/rte_ethdev.h:
> /** PMD supports thread-safe flow operations */
> #define RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE  RTE_BIT32(0)
> 
> It would be great to have a more detailed description of what this
> thread safety means.
> 

Some history,
When the rte_flow was first created it was part of the control path.
This meant that functions should be called from a single thread or locked by
the application.

As more and more applications started to add rules from the datapath, and we
saw that such global locks are too expensive, we added the THREAD_SAFE flag,
which allows a PMD to declare that those functions are thread safe and that
no locking by the application is needed.

To create a unified API from the application point of view, we moved the lock
into the rte_flow functions. This means that all such functions are now
thread safe; if a PMD declares itself as thread_safe, the locking is done in
the PMD instead.

The idea is that the PMD is able to lock only part of a long-running function
and not the whole function.

To handle the fact that applications are adding rules in the datapath, we
introduced the queue-based API (template/async API). In this API there
shouldn't be any locks, and it is the application's responsibility to call
each function with the queue that corresponds to the working thread.

I hope this answers your questions.
> 
> - I don't think many functions of the rte_flow API look at this flag.
> It seems it is never checked in newly added symbols (this is an
> impression, I did not enter the details).
> 
> Could you have a look?
> 

Will have a look, but from my explanation above, the only functions that
should have it are the functions that don't use queues or are assumed to be
called from one thread only (control functions).

> 
> Thanks.
> 
> --
> David Marchand

Best,
Ori
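
For application writers following this thread, a hedged sketch of checking
the flag (assuming dev_flags stays a pointer member of struct
rte_eth_dev_info):

    #include <stdbool.h>
    #include <rte_ethdev.h>

    /* Decide whether the application must serialize rte_flow calls
     * itself for this port. */
    static bool
    flow_ops_need_app_lock(uint16_t port_id)
    {
        struct rte_eth_dev_info info;

        if (rte_eth_dev_info_get(port_id, &info) != 0)
            return true; /* be conservative on error */

        return !(*info.dev_flags & RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE);
    }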



RE: [PATCH v3 13/28] vhost: add helper for IOTLB misses

2023-05-28 Thread Xia, Chenbo
> -Original Message-
> From: Maxime Coquelin 
> Sent: Friday, May 26, 2023 12:26 AM
> To: dev@dpdk.org; Xia, Chenbo ;
> david.march...@redhat.com; m...@redhat.com; f...@redhat.com;
> jasow...@redhat.com; Liang, Cunming ; Xie, Yongji
> ; echau...@redhat.com; epere...@redhat.com;
> amore...@redhat.com; l...@redhat.com
> Cc: Maxime Coquelin 
> Subject: [PATCH v3 13/28] vhost: add helper for IOTLB misses
> 
> This patch adds a helper for sending IOTLB misses as VDUSE
> will use an ioctl while Vhost-user use a dedicated
> Vhost-user backend request.
> 
> Signed-off-by: Maxime Coquelin 
> ---
>  lib/vhost/vhost.c  | 13 -
>  lib/vhost/vhost.h  |  4 
>  lib/vhost/vhost_user.c |  6 --
>  lib/vhost/vhost_user.h |  1 -
>  4 files changed, 20 insertions(+), 4 deletions(-)
> 
> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> index 41f212315e..790eb06b28 100644
> --- a/lib/vhost/vhost.c
> +++ b/lib/vhost/vhost.c
> @@ -52,6 +52,12 @@ static const struct vhost_vq_stats_name_off
> vhost_vq_stat_strings[] = {
> 
>  #define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
> 
> +static int
> +vhost_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm)
> +{
> + return dev->backend_ops->iotlb_miss(dev, iova, perm);
> +}
> +
>  uint64_t
>  __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
>   uint64_t iova, uint64_t *size, uint8_t perm)
> @@ -86,7 +92,7 @@ __vhost_iova_to_vva(struct virtio_net *dev, struct
> vhost_virtqueue *vq,
>   vhost_user_iotlb_rd_unlock(vq);
> 
>   vhost_user_iotlb_pending_insert(dev, iova, perm);
> - if (vhost_user_iotlb_miss(dev, iova, perm)) {
> + if (vhost_iotlb_miss(dev, iova, perm)) {
>   VHOST_LOG_DATA(dev->ifname, ERR,
>   "IOTLB miss req failed for IOVA 0x%" PRIx64 
> "\n",
>   iova);
> @@ -686,6 +692,11 @@ vhost_new_device(struct vhost_backend_ops *ops)
>   return -1;
>   }
> 
> + if (ops->iotlb_miss == NULL) {
> + VHOST_LOG_CONFIG("device", ERR, "missing IOTLB miss backend
> op.\n");
> + return -1;
> + }
> +
>   pthread_mutex_lock(&vhost_dev_lock);
>   for (i = 0; i < RTE_MAX_VHOST_DEVICE; i++) {
>   if (vhost_devices[i] == NULL)
> diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> index f37e0b83b8..ee7640e901 100644
> --- a/lib/vhost/vhost.h
> +++ b/lib/vhost/vhost.h
> @@ -89,13 +89,17 @@
>   for (iter = val; iter < num; iter++)
>  #endif
> 
> +struct virtio_net;
>  typedef void (*vhost_iotlb_remove_notify)(uint64_t addr, uint64_t off,
> uint64_t size);
> 
> +typedef int (*vhost_iotlb_miss_cb)(struct virtio_net *dev, uint64_t iova,
> uint8_t perm);
> +
>  /**
>   * Structure that contains backend-specific ops.
>   */
>  struct vhost_backend_ops {
>   vhost_iotlb_remove_notify iotlb_remove_notify;
> + vhost_iotlb_miss_cb iotlb_miss;
>  };
> 
>  /**
> diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
> index 7655082c4b..972559a2b5 100644
> --- a/lib/vhost/vhost_user.c
> +++ b/lib/vhost/vhost_user.c
> @@ -3305,7 +3305,7 @@ vhost_user_msg_handler(int vid, int fd)
>   return ret;
>  }
> 
> -int
> +static int
>  vhost_user_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm)
>  {
>   int ret;
> @@ -3465,7 +3465,9 @@ int rte_vhost_host_notifier_ctrl(int vid, uint16_t
> qid, bool enable)
>   return ret;
>  }
> 
> -static struct vhost_backend_ops vhost_user_backend_ops;
> +static struct vhost_backend_ops vhost_user_backend_ops = {
> + .iotlb_miss = vhost_user_iotlb_miss,
> +};
> 
>  int
>  vhost_user_new_device(void)
> diff --git a/lib/vhost/vhost_user.h b/lib/vhost/vhost_user.h
> index 61456049c8..1ffeca92f3 100644
> --- a/lib/vhost/vhost_user.h
> +++ b/lib/vhost/vhost_user.h
> @@ -179,7 +179,6 @@ struct __rte_packed vhu_msg_context {
> 
>  /* vhost_user.c */
>  int vhost_user_msg_handler(int vid, int fd);
> -int vhost_user_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t
> perm);
> 
>  /* socket.c */
>  int read_fd_message(char *ifname, int sockfd, char *buf, int buflen, int
> *fds, int max_fds,
> --
> 2.40.1

Reviewed-by: Chenbo Xia  


RE: [PATCH v3 17/28] vhost: add control virtqueue support

2023-05-28 Thread Xia, Chenbo
> -Original Message-
> From: Maxime Coquelin 
> Sent: Friday, May 26, 2023 12:26 AM
> To: dev@dpdk.org; Xia, Chenbo ;
> david.march...@redhat.com; m...@redhat.com; f...@redhat.com;
> jasow...@redhat.com; Liang, Cunming ; Xie, Yongji
> ; echau...@redhat.com; epere...@redhat.com;
> amore...@redhat.com; l...@redhat.com
> Cc: Maxime Coquelin 
> Subject: [PATCH v3 17/28] vhost: add control virtqueue support
> 
> In order to support multi-queue with VDUSE, having
> control queue support is required.
> 
> This patch adds control queue implementation, it will be
> used later when adding VDUSE support. Only split ring
> layout is supported for now, packed ring support will be
> added later.
> 
> Signed-off-by: Maxime Coquelin 
> ---
>  lib/vhost/meson.build   |   1 +
>  lib/vhost/vhost.h   |   2 +
>  lib/vhost/virtio_net_ctrl.c | 286 
>  lib/vhost/virtio_net_ctrl.h |  10 ++
>  4 files changed, 299 insertions(+)
>  create mode 100644 lib/vhost/virtio_net_ctrl.c
>  create mode 100644 lib/vhost/virtio_net_ctrl.h
> 
> diff --git a/lib/vhost/meson.build b/lib/vhost/meson.build
> index 0d1abf6283..83c8482c9e 100644
> --- a/lib/vhost/meson.build
> +++ b/lib/vhost/meson.build
> @@ -27,6 +27,7 @@ sources = files(
>  'vhost_crypto.c',
>  'vhost_user.c',
>  'virtio_net.c',
> +'virtio_net_ctrl.c',
>  )
>  headers = files(
>  'rte_vdpa.h',
> diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> index 8f0875b4e2..76663aed24 100644
> --- a/lib/vhost/vhost.h
> +++ b/lib/vhost/vhost.h
> @@ -525,6 +525,8 @@ struct virtio_net {
>   int postcopy_ufd;
>   int postcopy_listening;
> 
> + struct vhost_virtqueue  *cvq;
> +
>   struct rte_vdpa_device *vdpa_dev;
> 
>   /* context data for the external message handlers */
> diff --git a/lib/vhost/virtio_net_ctrl.c b/lib/vhost/virtio_net_ctrl.c
> new file mode 100644
> index 00..f4b8d5f7cc
> --- /dev/null
> +++ b/lib/vhost/virtio_net_ctrl.c
> @@ -0,0 +1,286 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright (c) 2023 Red Hat, Inc.
> + */
> +
> +#include 
> +#include 
> +#include 
> +
> +#include "iotlb.h"
> +#include "vhost.h"
> +#include "virtio_net_ctrl.h"
> +
> +struct virtio_net_ctrl {
> + uint8_t class;
> + uint8_t command;
> + uint8_t command_data[];
> +};
> +
> +struct virtio_net_ctrl_elem {
> + struct virtio_net_ctrl *ctrl_req;
> + uint16_t head_idx;
> + uint16_t n_descs;
> + uint8_t *desc_ack;
> +};
> +
> +static int
> +virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq,
> + struct virtio_net_ctrl_elem *ctrl_elem)
> + __rte_shared_locks_required(&cvq->iotlb_lock)
> +{
> + uint16_t avail_idx, desc_idx, n_descs = 0;
> + uint64_t desc_len, desc_addr, desc_iova, data_len = 0;
> + uint8_t *ctrl_req;
> + struct vring_desc *descs;
> +
> + avail_idx = __atomic_load_n(&cvq->avail->idx, __ATOMIC_ACQUIRE);
> + if (avail_idx == cvq->last_avail_idx) {
> + VHOST_LOG_CONFIG(dev->ifname, DEBUG, "Control queue empty\n");
> + return 0;
> + }
> +
> + desc_idx = cvq->avail->ring[cvq->last_avail_idx];
> + if (desc_idx >= cvq->size) {
> +		VHOST_LOG_CONFIG(dev->ifname, ERR, "Out of range desc index, dropping\n");
> + goto err;
> + }
> +
> + ctrl_elem->head_idx = desc_idx;
> +
> + if (cvq->desc[desc_idx].flags & VRING_DESC_F_INDIRECT) {
> + desc_len = cvq->desc[desc_idx].len;
> + desc_iova = cvq->desc[desc_idx].addr;
> +
> +		descs = (struct vring_desc *)(uintptr_t)vhost_iova_to_vva(dev, cvq,
> + desc_iova, &desc_len, VHOST_ACCESS_RO);
> + if (!descs || desc_len != cvq->desc[desc_idx].len) {
> +			VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to map ctrl indirect descs\n");
> + goto err;
> + }
> +
> + desc_idx = 0;
> + } else {
> + descs = cvq->desc;
> + }
> +
> + while (1) {
> + desc_len = descs[desc_idx].len;
> + desc_iova = descs[desc_idx].addr;
> +
> + n_descs++;
> +
> + if (descs[desc_idx].flags & VRING_DESC_F_WRITE) {
> + if (ctrl_elem->desc_ack) {
> + VHOST_LOG_CONFIG(dev->ifname, ERR,
> +					"Unexpected ctrl chain layout\n");
> + goto err;
> + }
> +
> + if (desc_len != sizeof(uint8_t)) {
> + VHOST_LOG_CONFIG(dev->ifname, ERR,
> +					"Invalid ack size for ctrl req, dropping\n");
> + goto err;
> + }
> +
> + ctrl_elem->desc_ack = (uint8_t
> *)(uintptr_t)vhost_iova_t
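
To make the descriptor chain layout that virtio_net_ctrl_pop() expects
concrete: the header struct below mirrors struct virtio_net_ctrl from the
patch, while the buffer names and the payload values are illustrative
guesses, not taken from the code.

    #include <stdint.h>

    struct ctrl_hdr {          /* mirrors class + command above */
    	uint8_t class;
    	uint8_t command;
    };

    struct ctrl_hdr hdr = { .class = 4, .command = 0 }; /* e.g. MQ */
    uint16_t vq_pairs = 4;     /* illustrative command payload */
    uint8_t ack = 0xff;        /* written back by the device */

    /*
     * desc[0]: &hdr,      sizeof(hdr)       device-readable
     * desc[1]: &vq_pairs, sizeof(vq_pairs)  device-readable
     * desc[2]: &ack,      sizeof(ack)       VRING_DESC_F_WRITE
     *
     * The pop routine concatenates the readable buffers into one
     * request and requires exactly one trailing 1-byte writable
     * descriptor for the ack, matching the checks quoted above.
     */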

RE: [PATCH v3 20/28] vhost: add VDUSE callback for IOTLB entry removal

2023-05-28 Thread Xia, Chenbo
> -Original Message-
> From: Maxime Coquelin 
> Sent: Friday, May 26, 2023 12:26 AM
> To: dev@dpdk.org; Xia, Chenbo ;
> david.march...@redhat.com; m...@redhat.com; f...@redhat.com;
> jasow...@redhat.com; Liang, Cunming ; Xie, Yongji
> ; echau...@redhat.com; epere...@redhat.com;
> amore...@redhat.com; l...@redhat.com
> Cc: Maxime Coquelin 
> Subject: [PATCH v3 20/28] vhost: add VDUSE callback for IOTLB entry
> removal
> 
> This patch implements the VDUSE callback for IOTLB entry
> removal, which unmaps the pages backing the invalidated
> IOTLB entry.
> 
> Signed-off-by: Maxime Coquelin 
> ---
>  lib/vhost/vduse.c | 7 +++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/lib/vhost/vduse.c b/lib/vhost/vduse.c
> index f72c7bf6ab..58c1b384a8 100644
> --- a/lib/vhost/vduse.c
> +++ b/lib/vhost/vduse.c
> @@ -42,6 +42,12 @@
>   (1ULL << VIRTIO_F_IN_ORDER) | \
>   (1ULL << VIRTIO_F_IOMMU_PLATFORM))
> 
> +static void
> +vduse_iotlb_remove_notify(uint64_t addr, uint64_t offset, uint64_t size)
> +{
> + munmap((void *)(uintptr_t)addr, offset + size);
> +}
> +
>  static int
>  vduse_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm __rte_unused)
>  {
> @@ -99,6 +105,7 @@ vduse_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm __rte_unused)
> 
>  static struct vhost_backend_ops vduse_backend_ops = {
>   .iotlb_miss = vduse_iotlb_miss,
> + .iotlb_remove_notify = vduse_iotlb_remove_notify,
>  };
> 
>  int
> --
> 2.40.1

Reviewed-by: Chenbo Xia  
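
A short worked example of the unmap length, assuming the usual VDUSE
pattern where the miss handler mmap()s from the start of the region and
the IOTLB entry's data begins 'offset' bytes into it (the numbers are
illustrative):

    /* addr   = base returned by mmap() in the miss handler
     * offset = start of the IOTLB entry within that mapping
     * size   = length of the entry
     *
     * e.g. offset = 0x1000 and size = 0x2000: the mapping spans
     * offset + size = 0x3000 bytes, so the callback unmaps it whole. */
    munmap((void *)(uintptr_t)addr, offset + size);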


RE: [PATCH v3 25/28] vhost: add support for VDUSE IOTLB update event

2023-05-28 Thread Xia, Chenbo
> -Original Message-
> From: Maxime Coquelin 
> Sent: Friday, May 26, 2023 12:26 AM
> To: dev@dpdk.org; Xia, Chenbo ;
> david.march...@redhat.com; m...@redhat.com; f...@redhat.com;
> jasow...@redhat.com; Liang, Cunming ; Xie, Yongji
> ; echau...@redhat.com; epere...@redhat.com;
> amore...@redhat.com; l...@redhat.com
> Cc: Maxime Coquelin 
> Subject: [PATCH v3 25/28] vhost: add support for VDUSE IOTLB update event
> 
> This patch adds support for VDUSE_UPDATE_IOTLB event
> handling, which consists in invalidating IOTLB entries for
> the range specified in the request.
> 
> Signed-off-by: Maxime Coquelin 
> ---
>  lib/vhost/vduse.c | 7 +++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/lib/vhost/vduse.c b/lib/vhost/vduse.c
> index 3bf65d4b8b..110654ec68 100644
> --- a/lib/vhost/vduse.c
> +++ b/lib/vhost/vduse.c
> @@ -179,6 +179,13 @@ vduse_events_handler(int fd, void *arg, int *remove __rte_unused)
>   dev->status = req.s.status;
>   resp.result = VDUSE_REQ_RESULT_OK;
>   break;
> + case VDUSE_UPDATE_IOTLB:
> +		VHOST_LOG_CONFIG(dev->ifname, INFO, "\tIOVA range: %" PRIx64 " - %" PRIx64 "\n",
> +				(uint64_t)req.iova.start, (uint64_t)req.iova.last);
> + vhost_user_iotlb_cache_remove(dev, req.iova.start,
> + req.iova.last - req.iova.start + 1);
> + resp.result = VDUSE_REQ_RESULT_OK;
> + break;
>   default:
>   resp.result = VDUSE_REQ_RESULT_FAILED;
>   break;
> --
> 2.40.1

Reviewed-by: Chenbo Xia  
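
One detail worth spelling out: the VDUSE request carries an inclusive
[start, last] IOVA range, so the length handed to the cache-removal
helper is last - start + 1. A quick illustration with made-up values:

    uint64_t start = 0x1000, last = 0x1fff;  /* inclusive range */
    uint64_t len = last - start + 1;         /* 0x1000, one 4 KiB page */

    vhost_user_iotlb_cache_remove(dev, start, len);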


RE: [PATCH v3 28/28] vhost: add VDUSE device stop

2023-05-28 Thread Xia, Chenbo
Hi Maxime,

> -Original Message-
> From: Maxime Coquelin 
> Sent: Friday, May 26, 2023 12:26 AM
> To: dev@dpdk.org; Xia, Chenbo ;
> david.march...@redhat.com; m...@redhat.com; f...@redhat.com;
> jasow...@redhat.com; Liang, Cunming ; Xie, Yongji
> ; echau...@redhat.com; epere...@redhat.com;
> amore...@redhat.com; l...@redhat.com
> Cc: Maxime Coquelin 
> Subject: [PATCH v3 28/28] vhost: add VDUSE device stop
> 
> This patch adds VDUSE device stop and cleanup of its
> virtqueues.
> 
> Signed-off-by: Maxime Coquelin 
> ---
>  doc/guides/rel_notes/release_23_07.rst |  6 +++
>  lib/vhost/vduse.c  | 72 +++---
>  2 files changed, 70 insertions(+), 8 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
> index fa889a5ee7..66ba9e25dd 100644
> --- a/doc/guides/rel_notes/release_23_07.rst
> +++ b/doc/guides/rel_notes/release_23_07.rst
> @@ -60,6 +60,12 @@ New Features
>    Introduced ``rte_vhost_driver_set_max_queue_num()`` to be able to limit the
>    maximum number of supported queue pairs, required for VDUSE support.
> 
> +* **Added VDUSE support into Vhost library

Missing '.**' at the end like patch 15

Thanks,
Chenbo

> +
> +  VDUSE aims at implementing vDPA devices in userspace. It can be used as an
> +  alternative to Vhost-user when using Vhost-vDPA, but it also enables
> +  providing a virtio-net netdev to the host when using the Virtio-vDPA driver.
> +
> 
>  Removed Items
>  -
> diff --git a/lib/vhost/vduse.c b/lib/vhost/vduse.c
> index 699cfed9e3..f421b1cf4c 100644
> --- a/lib/vhost/vduse.c
> +++ b/lib/vhost/vduse.c
> @@ -252,6 +252,44 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index)
>   }
>  }
> 
> +static void
> +vduse_vring_cleanup(struct virtio_net *dev, unsigned int index)
> +{
> + struct vhost_virtqueue *vq = dev->virtqueue[index];
> + struct vduse_vq_eventfd vq_efd;
> + int ret;
> +
> + if (vq == dev->cvq && vq->kickfd >= 0) {
> + fdset_del(&vduse.fdset, vq->kickfd);
> + fdset_pipe_notify(&vduse.fdset);
> + }
> +
> + vq_efd.index = index;
> + vq_efd.fd = VDUSE_EVENTFD_DEASSIGN;
> +
> + ret = ioctl(dev->vduse_dev_fd, VDUSE_VQ_SETUP_KICKFD, &vq_efd);
> + if (ret)
> +		VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to cleanup kickfd for VQ %u: %s\n",
> + index, strerror(errno));
> +
> + close(vq->kickfd);
> + vq->kickfd = VIRTIO_UNINITIALIZED_EVENTFD;
> +
> + vring_invalidate(dev, vq);
> +
> + rte_free(vq->batch_copy_elems);
> + vq->batch_copy_elems = NULL;
> +
> + rte_free(vq->shadow_used_split);
> + vq->shadow_used_split = NULL;
> +
> + vq->enabled = false;
> + vq->ready = false;
> + vq->size = 0;
> + vq->last_used_idx = 0;
> + vq->last_avail_idx = 0;
> +}
> +
>  static void
>  vduse_device_start(struct virtio_net *dev)
>  {
> @@ -304,6 +342,23 @@ vduse_device_start(struct virtio_net *dev)
>   }
>  }
> 
> +static void
> +vduse_device_stop(struct virtio_net *dev)
> +{
> + unsigned int i;
> +
> + VHOST_LOG_CONFIG(dev->ifname, INFO, "Stopping device...\n");
> +
> + vhost_destroy_device_notify(dev);
> +
> + dev->flags &= ~VIRTIO_DEV_READY;
> +
> + for (i = 0; i < dev->nr_vring; i++)
> + vduse_vring_cleanup(dev, i);
> +
> + vhost_user_iotlb_flush_all(dev);
> +}
> +
>  static void
>  vduse_events_handler(int fd, void *arg, int *remove __rte_unused)
>  {
> @@ -311,6 +366,7 @@ vduse_events_handler(int fd, void *arg, int *remove __rte_unused)
>   struct vduse_dev_request req;
>   struct vduse_dev_response resp;
>   struct vhost_virtqueue *vq;
> + uint8_t old_status;
>   int ret;
> 
>   memset(&resp, 0, sizeof(resp));
> @@ -339,10 +395,15 @@ vduse_events_handler(int fd, void *arg, int *remove __rte_unused)
>   case VDUSE_SET_STATUS:
>   VHOST_LOG_CONFIG(dev->ifname, INFO, "\tnew status: 0x%08x\n",
>   req.s.status);
> + old_status = dev->status;
>   dev->status = req.s.status;
> 
> - if (dev->status & VIRTIO_DEVICE_STATUS_DRIVER_OK)
> - vduse_device_start(dev);
> +		if ((old_status ^ dev->status) & VIRTIO_DEVICE_STATUS_DRIVER_OK) {
> + if (dev->status & VIRTIO_DEVICE_STATUS_DRIVER_OK)
> + vduse_device_start(dev);
> + else
> + vduse_device_stop(dev);
> + }
> 
>   resp.result = VDUSE_REQ_RESULT_OK;
>   break;
> @@ -560,12 +621,7 @@ vduse_device_destroy(const char *path)
>   if (vid == RTE_MAX_VHOST_DEVICE)
>   return -1;
> 
> - if (dev->cvq && dev->cvq->kickfd >= 0) {
> - fdset_del(&vduse.fdset, dev->cvq->kickfd);
> - fdset_pipe_notify(&vduse.fdset);
> - close(dev->cvq->
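
The status handling above deserves a note: XOR-ing the old and new status
isolates the bits that actually changed, so the device is only started or
stopped on a real DRIVER_OK transition rather than on every SET_STATUS
request:

    /* old DRIVER_OK | new DRIVER_OK | (old ^ new) & DRIVER_OK | action
     * --------------+---------------+-------------------------+-------
     *       0       |       0       |            0            | none
     *       0       |       1       |        DRIVER_OK        | start
     *       1       |       0       |        DRIVER_OK        | stop
     *       1       |       1       |            0            | none
     */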