[dpdk-dev] [PATCH v1] net/ice: revert removing IPID from default hash field

2021-09-07 Thread Wenjun Wu
We tried to refine the default RSS configuration for IP fragment
packets. However, that change leads to more serious errors: the case
where the new hash characteristics overlap or conflict with the
existing ones is not supported, so non-fragment packets and fragment
packets cannot share the same hash fields; otherwise all related
profiles are removed.

Therefore, the IPID field is necessary for fragment packets.

Fixes: cf37e1e5e9d2 ("net/ice: fix default RSS hash for IP fragment packets")

Signed-off-by: Wenjun Wu 
---
 drivers/net/ice/ice_ethdev.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 8d62b84805..0683296584 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -2981,7 +2981,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 
if (rss_hf & ETH_RSS_FRAG_IPV4) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV4 | 
ICE_FLOW_SEG_HDR_IPV_FRAG;
-   cfg.hash_flds = ICE_FLOW_HASH_IPV4;
+   cfg.hash_flds = ICE_FLOW_HASH_IPV4 | 
BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_ID);
ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
if (ret)
PMD_DRV_LOG(ERR, "%s IPV4_FRAG rss flow fail %d",
@@ -2990,7 +2990,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 
if (rss_hf & ETH_RSS_FRAG_IPV6) {
cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV6 | 
ICE_FLOW_SEG_HDR_IPV_FRAG;
-   cfg.hash_flds = ICE_FLOW_HASH_IPV6;
+   cfg.hash_flds = ICE_FLOW_HASH_IPV6 | 
BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_ID);
ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
if (ret)
PMD_DRV_LOG(ERR, "%s IPV6_FRAG rss flow fail %d",
-- 
2.25.1



Re: [dpdk-dev] [PATCH v1] net/ice: revert removing IPID from default hash field

2021-09-07 Thread David Marchand
On Tue, Sep 7, 2021 at 9:05 AM Wenjun Wu  wrote:
>
> We try to refine default RSS for IP fragment packets. However,
> the change will lead to more serious errors. The scenario that
> there is overlap/conflict between the new characteristics and the
> existing ones has not been supported, so non-fragment packets
> and fragment packets cannot share the same hash fields, or
> all related profiles will be removed.
>
> Therefore, IPID field is necessary for fragment packets.
>
> Fixes: cf37e1e5e9d2 ("net/ice: fix default RSS hash for IP fragment packets")
>
> Signed-off-by: Wenjun Wu 

- If this is a revert of cf37e1e5e9d2, maybe it is simpler to drop the
original change in next-net before it gets pulled in the main repo.
- A similar change has been applied to net/iavf? Is it still relevant?


-- 
David Marchand



Re: [dpdk-dev] [PATCH v1] net/ice: revert removing IPID from default hash field

2021-09-07 Thread Wu, Wenjun1
Default RSS for the outer src/dst IP address fields was not supported
in iavf before, so it does not cause any error. However, if the
original change can be dropped, I suggest doing so. It seems safer to
add the IPID field here.

> -Original Message-
> From: David Marchand 
> Sent: Tuesday, September 7, 2021 3:10 PM
> To: Wu, Wenjun1 ; Yigit, Ferruh
> 
> Cc: dev ; Yang, Qiming ; Zhang, Qi
> Z 
> Subject: Re: [dpdk-dev] [PATCH v1] net/ice: revert removing IPID from
> default hash field
> 
> On Tue, Sep 7, 2021 at 9:05 AM Wenjun Wu  wrote:
> >
> > We try to refine default RSS for IP fragment packets. However, the
> > change will lead to more serious errors. The scenario that there is
> > overlap/conflict between the new characteristics and the existing ones
> > has not been supported, so non-fragment packets and fragment packets
> > cannot share the same hash fields, or all related profiles will be
> > removed.
> >
> > Therefore, IPID field is necessary for fragment packets.
> >
> > Fixes: cf37e1e5e9d2 ("net/ice: fix default RSS hash for IP fragment
> > packets")
> >
> > Signed-off-by: Wenjun Wu 
> 
> - If this is a revert of cf37e1e5e9d2, maybe it is simpler to drop the 
> original
> change in next-net before it gets pulled in the main repo.
> - A similar change has been applied to net/iavf? Is it still relevant?
> 
> 
> --
> David Marchand



[dpdk-dev] [RFC PATCH v5 0/5] Add PIE support for HQoS library

2021-09-07 Thread Liguzinski, WojciechX
The DPDK sched library is equipped with a mechanism that protects it
from the bufferbloat problem, a situation in which excess buffering in
the network causes high latency and latency variation. Currently it
supports RED for active queue management, which is designed to control
the queue length but does not control latency directly and is now
being obsoleted. However, more advanced queue management is required
to address this problem and provide the desired quality of service to
users.

This RFC proposes the use of a new algorithm called PIE (Proportional
Integral controller Enhanced) that can effectively and directly
control queuing latency to address the bufferbloat problem.

Implementing this functionality involves modifying existing data
structures and adding a new set of data structures to the library,
along with PIE-related APIs. This affects structures in the public
API/ABI, which is why a deprecation notice is going to be prepared
and sent.

Liguzinski, WojciechX (5):
  sched: add PIE based congestion management
  example/qos_sched: add PIE support
  example/ip_pipeline: add PIE support
  doc/guides/prog_guide: added PIE
  app/test: add tests for PIE

 app/test/autotest_data.py|   18 +
 app/test/meson.build |4 +
 app/test/test_pie.c  | 1076 ++
 config/rte_config.h  |1 -
 doc/guides/prog_guide/glossary.rst   |3 +
 doc/guides/prog_guide/qos_framework.rst  |   60 +-
 doc/guides/prog_guide/traffic_management.rst |   13 +-
 drivers/net/softnic/rte_eth_softnic_tm.c |6 +-
 examples/ip_pipeline/tmgr.c  |6 +-
 examples/qos_sched/app_thread.c  |1 -
 examples/qos_sched/cfg_file.c|   82 +-
 examples/qos_sched/init.c|7 +-
 examples/qos_sched/profile.cfg   |  196 ++--
 lib/sched/meson.build|   10 +-
 lib/sched/rte_pie.c  |   86 ++
 lib/sched/rte_pie.h  |  398 +++
 lib/sched/rte_sched.c|  228 ++--
 lib/sched/rte_sched.h|   53 +-
 lib/sched/version.map|3 +
 19 files changed, 2061 insertions(+), 190 deletions(-)
 create mode 100644 app/test/test_pie.c
 create mode 100644 lib/sched/rte_pie.c
 create mode 100644 lib/sched/rte_pie.h

-- 
2.25.1



[dpdk-dev] [RFC PATCH v5 1/5] sched: add PIE based congestion management

2021-09-07 Thread Liguzinski, WojciechX
Implement PIE-based congestion management based on RFC 8033.

Signed-off-by: Liguzinski, WojciechX 
---
 drivers/net/softnic/rte_eth_softnic_tm.c |   6 +-
 lib/sched/meson.build|  10 +-
 lib/sched/rte_pie.c  |  82 +
 lib/sched/rte_pie.h  | 393 +++
 lib/sched/rte_sched.c| 228 +
 lib/sched/rte_sched.h|  53 ++-
 lib/sched/version.map|   3 +
 7 files changed, 685 insertions(+), 90 deletions(-)
 create mode 100644 lib/sched/rte_pie.c
 create mode 100644 lib/sched/rte_pie.h

diff --git a/drivers/net/softnic/rte_eth_softnic_tm.c 
b/drivers/net/softnic/rte_eth_softnic_tm.c
index 90baba15ce..5b6c4e6d4b 100644
--- a/drivers/net/softnic/rte_eth_softnic_tm.c
+++ b/drivers/net/softnic/rte_eth_softnic_tm.c
@@ -420,7 +420,7 @@ pmd_tm_node_type_get(struct rte_eth_dev *dev,
return 0;
 }
 
-#ifdef RTE_SCHED_RED
+#ifdef RTE_SCHED_AQM
 #define WRED_SUPPORTED 1
 #else
 #define WRED_SUPPORTED 0
@@ -2306,7 +2306,7 @@ tm_tc_wred_profile_get(struct rte_eth_dev *dev, uint32_t 
tc_id)
return NULL;
 }
 
-#ifdef RTE_SCHED_RED
+#ifdef RTE_SCHED_AQM
 
 static void
 wred_profiles_set(struct rte_eth_dev *dev, uint32_t subport_id)
@@ -2321,7 +2321,7 @@ wred_profiles_set(struct rte_eth_dev *dev, uint32_t 
subport_id)
for (tc_id = 0; tc_id < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; tc_id++)
for (color = RTE_COLOR_GREEN; color < RTE_COLORS; color++) {
struct rte_red_params *dst =
-   &pp->red_params[tc_id][color];
+   &pp->wred_params[tc_id][color];
struct tm_wred_profile *src_wp =
tm_tc_wred_profile_get(dev, tc_id);
struct rte_tm_red_params *src =
diff --git a/lib/sched/meson.build b/lib/sched/meson.build
index b24f7b8775..e7ae9bcf19 100644
--- a/lib/sched/meson.build
+++ b/lib/sched/meson.build
@@ -1,11 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
-sources = files('rte_sched.c', 'rte_red.c', 'rte_approx.c')
-headers = files(
-'rte_approx.h',
-'rte_red.h',
-'rte_sched.h',
-'rte_sched_common.h',
-)
+sources = files('rte_sched.c', 'rte_red.c', 'rte_approx.c', 'rte_pie.c')
+headers = files('rte_sched.h', 'rte_sched_common.h',
+   'rte_red.h', 'rte_approx.h', 'rte_pie.h')
 deps += ['mbuf', 'meter']
diff --git a/lib/sched/rte_pie.c b/lib/sched/rte_pie.c
new file mode 100644
index 00..2fcecb2db4
--- /dev/null
+++ b/lib/sched/rte_pie.c
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include 
+
+#include "rte_pie.h"
+#include 
+#include 
+#include 
+
+#ifdef __INTEL_COMPILER
+#pragma warning(disable:2259) /* conversion may lose significant bits */
+#endif
+
+void
+rte_pie_rt_data_init(struct rte_pie *pie)
+{
+   if (pie == NULL) {
+   /* Allocate memory to use the PIE data structure */
+   pie = rte_malloc(NULL, sizeof(struct rte_pie), 0);
+
+   if (pie == NULL)
+   RTE_LOG(ERR, SCHED, "%s: Memory allocation fails\n", 
__func__);
+   }
+
+   pie->active = 0;
+   pie->in_measurement = 0;
+   pie->departed_bytes_count = 0;
+   pie->start_measurement = 0;
+   pie->last_measurement = 0;
+   pie->qlen = 0;
+   pie->avg_dq_time = 0;
+   pie->burst_allowance = 0;
+   pie->qdelay_old = 0;
+   pie->drop_prob = 0;
+   pie->accu_prob = 0;
+}
+
+int
+rte_pie_config_init(struct rte_pie_config *pie_cfg,
+   const uint16_t qdelay_ref,
+   const uint16_t dp_update_interval,
+   const uint16_t max_burst,
+   const uint16_t tailq_th)
+{
+   uint64_t tsc_hz = rte_get_tsc_hz();
+
+   if (pie_cfg == NULL)
+   return -1;
+
+   if (qdelay_ref <= 0) {
+   RTE_LOG(ERR, SCHED,
+   "%s: Incorrect value for qdelay_ref\n", __func__);
+   return -EINVAL;
+   }
+
+   if (dp_update_interval <= 0) {
+   RTE_LOG(ERR, SCHED,
+   "%s: Incorrect value for dp_update_interval\n", 
__func__);
+   return -EINVAL;
+   }
+
+   if (max_burst <= 0) {
+   RTE_LOG(ERR, SCHED,
+   "%s: Incorrect value for max_burst\n", __func__);
+   return -EINVAL;
+   }
+
+   if (tailq_th <= 0) {
+   RTE_LOG(ERR, SCHED,
+   "%s: Incorrect value for tailq_th\n", __func__);
+   return -EINVAL;
+   }
+
+   pie_cfg->qdelay_ref = (tsc_hz * qdelay_ref) / 1000;
+   pie_cfg->dp_update_interval = (tsc_hz * dp_update_interval) / 1000;
+   pie_cfg->max_burst = (tsc_hz * max_burst) / 1000;
+

[dpdk-dev] [RFC PATCH v5 2/5] example/qos_sched: add pie support

2021-09-07 Thread Liguzinski, WojciechX
This patch adds support for enabling PIE or RED by parsing the
config file.

Signed-off-by: Liguzinski, WojciechX 
---
 config/rte_config.h |   1 -
 examples/qos_sched/app_thread.c |   1 -
 examples/qos_sched/cfg_file.c   |  82 ++---
 examples/qos_sched/init.c   |   7 +-
 examples/qos_sched/profile.cfg  | 196 +---
 5 files changed, 200 insertions(+), 87 deletions(-)

diff --git a/config/rte_config.h b/config/rte_config.h
index 590903c07d..48132f27df 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -89,7 +89,6 @@
 #define RTE_MAX_LCORE_FREQS 64
 
 /* rte_sched defines */
-#undef RTE_SCHED_RED
 #undef RTE_SCHED_COLLECT_STATS
 #undef RTE_SCHED_SUBPORT_TC_OV
 #define RTE_SCHED_PORT_N_GRINDERS 8
diff --git a/examples/qos_sched/app_thread.c b/examples/qos_sched/app_thread.c
index dbc878b553..895c0d3592 100644
--- a/examples/qos_sched/app_thread.c
+++ b/examples/qos_sched/app_thread.c
@@ -205,7 +205,6 @@ app_worker_thread(struct thread_conf **confs)
if (likely(nb_pkt)) {
int nb_sent = rte_sched_port_enqueue(conf->sched_port, 
mbufs,
nb_pkt);
-
APP_STATS_ADD(conf->stat.nb_drop, nb_pkt - nb_sent);
APP_STATS_ADD(conf->stat.nb_rx, nb_pkt);
}
diff --git a/examples/qos_sched/cfg_file.c b/examples/qos_sched/cfg_file.c
index cd167bd8e6..657763ca90 100644
--- a/examples/qos_sched/cfg_file.c
+++ b/examples/qos_sched/cfg_file.c
@@ -242,20 +242,20 @@ cfg_load_subport(struct rte_cfgfile *cfg, struct 
rte_sched_subport_params *subpo
memset(active_queues, 0, sizeof(active_queues));
n_active_queues = 0;
 
-#ifdef RTE_SCHED_RED
-   char sec_name[CFG_NAME_LEN];
-   struct rte_red_params 
red_params[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE][RTE_COLORS];
+#ifdef RTE_SCHED_AQM
+   enum rte_sched_aqm_mode aqm_mode;
 
-   snprintf(sec_name, sizeof(sec_name), "red");
+   struct rte_red_params 
red_params[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE][RTE_COLORS];
 
-   if (rte_cfgfile_has_section(cfg, sec_name)) {
+   if (rte_cfgfile_has_section(cfg, "red")) {
+   aqm_mode = RTE_SCHED_AQM_WRED;
 
for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) {
char str[32];
 
/* Parse WRED min thresholds */
snprintf(str, sizeof(str), "tc %d wred min", i);
-   entry = rte_cfgfile_get_entry(cfg, sec_name, str);
+   entry = rte_cfgfile_get_entry(cfg, "red", str);
if (entry) {
char *next;
/* for each packet colour (green, yellow, red) 
*/
@@ -315,7 +315,42 @@ cfg_load_subport(struct rte_cfgfile *cfg, struct 
rte_sched_subport_params *subpo
}
}
}
-#endif /* RTE_SCHED_RED */
+
+   struct rte_pie_params pie_params[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
+
+   if (rte_cfgfile_has_section(cfg, "pie")) {
+   aqm_mode = RTE_SCHED_AQM_PIE;
+
+   for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) {
+   char str[32];
+
+   /* Parse Queue Delay Ref value */
+   snprintf(str, sizeof(str), "tc %d qdelay ref", i);
+   entry = rte_cfgfile_get_entry(cfg, "pie", str);
+   if (entry)
+   pie_params[i].qdelay_ref = (uint16_t) 
atoi(entry);
+
+   /* Parse Max Burst value */
+   snprintf(str, sizeof(str), "tc %d max burst", i);
+   entry = rte_cfgfile_get_entry(cfg, "pie", str);
+   if (entry)
+   pie_params[i].max_burst = (uint16_t) 
atoi(entry);
+
+   /* Parse Update Interval Value */
+   snprintf(str, sizeof(str), "tc %d update interval", i);
+   entry = rte_cfgfile_get_entry(cfg, "pie", str);
+   if (entry)
+   pie_params[i].dp_update_interval = (uint16_t) 
atoi(entry);
+
+   /* Parse Tailq Threshold Value */
+   snprintf(str, sizeof(str), "tc %d tailq th", i);
+   entry = rte_cfgfile_get_entry(cfg, "pie", str);
+   if (entry)
+   pie_params[i].tailq_th = (uint16_t) atoi(entry);
+
+   }
+   }
+#endif /* RTE_SCHED_AQM */
 
for (i = 0; i < MAX_SCHED_SUBPORTS; i++) {
char sec_name[CFG_NAME_LEN];
@@ -393,17 +428,30 @@ cfg_load_subport(struct rte_cfgfile *cfg, struct 
rte_sched_subport_params *subpo
}
}
}
-#ifdef RTE_SCHED_RED
+#ifdef RTE_SCHED_AQM
+ 

[dpdk-dev] [RFC PATCH v5 3/5] example/ip_pipeline: add PIE support

2021-09-07 Thread Liguzinski, WojciechX
Add PIE support for IP Pipeline.

Signed-off-by: Liguzinski, WojciechX 
---
 examples/ip_pipeline/tmgr.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/examples/ip_pipeline/tmgr.c b/examples/ip_pipeline/tmgr.c
index e4e364cbc0..73da2da870 100644
--- a/examples/ip_pipeline/tmgr.c
+++ b/examples/ip_pipeline/tmgr.c
@@ -25,8 +25,8 @@ static const struct rte_sched_subport_params 
subport_params_default = {
.pipe_profiles = pipe_profile,
.n_pipe_profiles = 0, /* filled at run time */
.n_max_pipe_profiles = RTE_DIM(pipe_profile),
-#ifdef RTE_SCHED_RED
-.red_params = {
+#ifdef RTE_SCHED_AQM
+.wred_params = {
/* Traffic Class 0 Colors Green / Yellow / Red */
[0][0] = {.min_th = 48, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9},
[0][1] = {.min_th = 40, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9},
@@ -92,7 +92,7 @@ static const struct rte_sched_subport_params 
subport_params_default = {
[12][1] = {.min_th = 40, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9},
[12][2] = {.min_th = 32, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9},
},
-#endif /* RTE_SCHED_RED */
+#endif /* RTE_SCHED_AQM */
 };
 
 static struct tmgr_port_list tmgr_port_list;
-- 
2.25.1



[dpdk-dev] [RFC PATCH v5 4/5] doc/guides/prog_guide: added PIE

2021-09-07 Thread Liguzinski, WojciechX
Added PIE related information to documentation.

Signed-off-by: Liguzinski, WojciechX 
---
 doc/guides/prog_guide/glossary.rst   |  3 +
 doc/guides/prog_guide/qos_framework.rst  | 60 +---
 doc/guides/prog_guide/traffic_management.rst | 13 -
 3 files changed, 66 insertions(+), 10 deletions(-)

diff --git a/doc/guides/prog_guide/glossary.rst 
b/doc/guides/prog_guide/glossary.rst
index 7044a7df2a..fb0910ba5b 100644
--- a/doc/guides/prog_guide/glossary.rst
+++ b/doc/guides/prog_guide/glossary.rst
@@ -158,6 +158,9 @@ PCI
 PHY
An abbreviation for the physical layer of the OSI model.
 
+PIE
+   Proportional Integral Controller Enhanced (RFC8033)
+
 pktmbuf
An *mbuf* carrying a network packet.
 
diff --git a/doc/guides/prog_guide/qos_framework.rst 
b/doc/guides/prog_guide/qos_framework.rst
index 3b8a1184b0..7c8450181d 100644
--- a/doc/guides/prog_guide/qos_framework.rst
+++ b/doc/guides/prog_guide/qos_framework.rst
@@ -56,7 +56,8 @@ A functional description of each block is provided in the 
following table.
|   ||  
  |

+---+++
| 7 | Dropper| Congestion management using the Random Early 
Detection (RED) algorithm |
-   |   || (specified by the Sally Floyd - Van Jacobson 
paper) or Weighted RED (WRED).|
+   |   || (specified by the Sally Floyd - Van Jacobson 
paper) or Weighted RED (WRED) |
+   |   || or Proportional Integral Controller Enhanced 
(PIE).|
|   || Drop packets based on the current scheduler 
queue load level and packet|
|   || priority. When congestion is experienced, 
lower priority packets are dropped   |
|   || first.   
  |
@@ -421,7 +422,7 @@ No input packet can be part of more than one pipeline stage 
at a given time.
 The congestion management scheme implemented by the enqueue pipeline described 
above is very basic:
 packets are enqueued until a specific queue becomes full,
 then all the packets destined to the same queue are dropped until packets are 
consumed (by the dequeue operation).
-This can be improved by enabling RED/WRED as part of the enqueue pipeline 
which looks at the queue occupancy and
+This can be improved by enabling RED/WRED or PIE as part of the enqueue 
pipeline which looks at the queue occupancy and
 packet priority in order to yield the enqueue/drop decision for a specific 
packet
 (as opposed to enqueuing all packets / dropping all packets indiscriminately).
 
@@ -1155,13 +1156,13 @@ If the number of queues is small,
 then the performance of the port scheduler for the same level of active 
traffic is expected to be worse than
 the performance of a small set of message passing queues.
 
-.. _Dropper:
+.. _Droppers:
 
-Dropper
+Droppers
 ---
 
 The purpose of the DPDK dropper is to drop packets arriving at a packet 
scheduler to avoid congestion.
-The dropper supports the Random Early Detection (RED),
+The dropper supports the Proportional Integral Controller Enhanced (PIE), 
Random Early Detection (RED),
 Weighted Random Early Detection (WRED) and tail drop algorithms.
 :numref:`figure_blk_diag_dropper` illustrates how the dropper integrates with 
the scheduler.
 The DPDK currently does not support congestion management
@@ -1174,9 +1175,13 @@ so the dropper provides the only method for congestion 
avoidance.
High-level Block Diagram of the DPDK Dropper
 
 
-The dropper uses the Random Early Detection (RED) congestion avoidance 
algorithm as documented in the reference publication.
-The purpose of the RED algorithm is to monitor a packet queue,
+The dropper uses one of two congestion avoidance algorithms:
+   - the Random Early Detection (RED) as documented in the reference 
publication.
+   - the Proportional Integral Controller Enhanced (PIE) as documented in 
RFC8033 publication.
+
+The purpose of the RED/PIE algorithm is to monitor a packet queue,
 determine the current congestion level in the queue and decide whether an 
arriving packet should be enqueued or dropped.
+
 The RED algorithm uses an Exponential Weighted Moving Average (EWMA) filter to 
compute average queue size which
 gives an indication of the current congestion level in the queue.
 
@@ -1192,7 +1197,7 @@ This occurs when a packet queue has reached maximum 
capacity and cannot store an
 In this situation, all arriving packets are dropped.
 
 The flow through the dropper is illustrated in 
:numref:`figure_flow_tru_droppper`.
-The RED/WRED algorithm is exercised first and tail drop second.
+The RED/WRED/PIE algorithm is exercised first and tail drop second.

[dpdk-dev] [RFC PATCH v5 5/5] app/test: add tests for PIE

2021-09-07 Thread Liguzinski, WojciechX
Add tests for the PIE code to the test application.

Signed-off-by: Liguzinski, WojciechX 
---
 app/test/autotest_data.py |   18 +
 app/test/meson.build  |4 +
 app/test/test_pie.c   | 1076 +
 lib/sched/rte_pie.c   |6 +-
 lib/sched/rte_pie.h   |9 +-
 lib/sched/rte_sched.c |2 +-
 6 files changed,  insertions(+), 4 deletions(-)
 create mode 100644 app/test/test_pie.c

diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 302d6374c1..1d4418b6a3 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -279,6 +279,12 @@
 "Func":default_autotest,
 "Report":  None,
 },
+{
+"Name":"Pie autotest",
+"Command": "pie_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
 {
 "Name":"PMD ring autotest",
 "Command": "ring_pmd_autotest",
@@ -525,6 +531,12 @@
 "Func":default_autotest,
 "Report":  None,
 },
+{
+"Name":"Pie all",
+"Command": "red_all",
+"Func":default_autotest,
+"Report":  None,
+},
 {
"Name":"Fbarray autotest",
"Command": "fbarray_autotest",
@@ -731,6 +743,12 @@
 "Func":default_autotest,
 "Report":  None,
 },
+{
+"Name":"Pie_perf",
+"Command": "pie_perf",
+"Func":default_autotest,
+"Report":  None,
+},
 {
 "Name":"Lpm6 perf autotest",
 "Command": "lpm6_perf_autotest",
diff --git a/app/test/meson.build b/app/test/meson.build
index a7611686ad..f224b0c17e 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -111,6 +111,7 @@ test_sources = files(
 'test_reciprocal_division.c',
 'test_reciprocal_division_perf.c',
 'test_red.c',
+'test_pie.c',
 'test_reorder.c',
 'test_rib.c',
 'test_rib6.c',
@@ -241,6 +242,7 @@ fast_tests = [
 ['prefetch_autotest', true],
 ['rcu_qsbr_autotest', true],
 ['red_autotest', true],
+['pie_autotest', true],
 ['rib_autotest', true],
 ['rib6_autotest', true],
 ['ring_autotest', true],
@@ -292,6 +294,7 @@ perf_test_names = [
 'fib_slow_autotest',
 'fib_perf_autotest',
 'red_all',
+'pie_all',
 'barrier_autotest',
 'hash_multiwriter_autotest',
 'timer_racecond_autotest',
@@ -305,6 +308,7 @@ perf_test_names = [
 'fib6_perf_autotest',
 'rcu_qsbr_perf_autotest',
 'red_perf',
+'pie_perf',
 'distributor_perf_autotest',
 'pmd_perf_autotest',
 'stack_perf_autotest',
diff --git a/app/test/test_pie.c b/app/test/test_pie.c
new file mode 100644
index 00..ef4004b559
--- /dev/null
+++ b/app/test/test_pie.c
@@ -0,0 +1,1076 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "test.h"
+
+#include 
+
+#ifdef __INTEL_COMPILER
+#pragma warning(disable:2259)   /* conversion may lose significant bits */
+#pragma warning(disable:181)/* Arg incompatible with format string */
+#endif
+
+/**< structures for testing rte_pie performance and function */
+struct test_rte_pie_config {/**< Test structure for RTE_PIE config */
+   struct rte_pie_config *pconfig; /**< RTE_PIE configuration parameters */
+   uint8_t num_cfg;/**< Number of RTE_PIE configs to test 
*/
+   uint16_t qdelay_ref;   /**< Latency Target (milliseconds) */
+   uint16_t *dp_update_interval;   /**< Update interval for drop 
probability
+   
(milliseconds) */
+   uint16_t *max_burst;/**< Max Burst Allowance (milliseconds) 
*/
+   uint16_t tailq_th; /**< Tailq drop threshold (packet 
counts) */
+};
+
+struct test_queue { /**< Test structure for RTE_PIE Queues */
+   struct rte_pie *pdata_in;   /**< RTE_PIE runtime data input */
+   struct rte_pie *pdata_out;  /**< RTE_PIE runtime data 
output*/
+   uint32_t num_queues;/**< Number of RTE_PIE queues to test */
+   uint32_t *qlen; /**< Queue size */
+   uint32_t q_ramp_up; /**< Num of enqueues to ramp
+   
up the queue */
+   double drop_tolerance;  /**< Drop tolerance of packets
+   
not enqueued */
+};
+
+struct test_var {   /**< Test variables used for testi

[dpdk-dev] [PATCH v3 0/4] iavf base code update

2021-09-07 Thread Haiyue Wang
v3: adjust the commit title.
v2: update the commit message.

Alvin Zhang (1):
  common/iavf: enable hash calculation based on L4 checksum

Haiyue Wang (2):
  common/iavf: remove the FDIR query opcode
  common/iavf: update the driver version

Junfeng Guo (1):
  common/iavf: add QFI fields for GTPU UL and DL

 drivers/common/iavf/README |  2 +-
 drivers/common/iavf/virtchnl.h | 46 ++
 2 files changed, 9 insertions(+), 39 deletions(-)

-- 
2.33.0



[dpdk-dev] [PATCH v3 1/4] common/iavf: add QFI fields for GTPU UL and DL

2021-09-07 Thread Haiyue Wang
From: Junfeng Guo 

The QFI is a 6-bit "QoS Flow Identifier" within the GTPU Extension Header.
Add virtchnl QFI fields for GTPU UL/DL to support the AVF FDIR.

Signed-off-by: Junfeng Guo 
Signed-off-by: Haiyue Wang 
---
 drivers/common/iavf/virtchnl.h | 5 +
 1 file changed, 5 insertions(+)

diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index 1cf0866124..9fa5e3e891 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -1642,6 +1642,11 @@ enum virtchnl_proto_hdr_field {
/* IPv6 Extension Fragment */
VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG_PKID =
PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG),
+   /* GTPU_DWN/UP */
+   VIRTCHNL_PROTO_HDR_GTPU_DWN_QFI =
+   PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_GTPU_EH_PDU_DWN),
+   VIRTCHNL_PROTO_HDR_GTPU_UP_QFI =
+   PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_GTPU_EH_PDU_UP),
 };
 
 struct virtchnl_proto_hdr {
-- 
2.33.0



[dpdk-dev] [PATCH v3 2/4] common/iavf: enable hash calculation based on L4 checksum

2021-09-07 Thread Haiyue Wang
From: Alvin Zhang 

Add TCP/UDP/SCTP header checksum field selectors; they can be used
to create FDIR or RSS rules that match on the TCP/UDP/SCTP header
checksum.

Signed-off-by: Alvin Zhang 
Signed-off-by: Haiyue Wang 
---
 drivers/common/iavf/virtchnl.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index 9fa5e3e891..c56c668cff 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -1598,14 +1598,17 @@ enum virtchnl_proto_hdr_field {
VIRTCHNL_PROTO_HDR_TCP_SRC_PORT =
PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_TCP),
VIRTCHNL_PROTO_HDR_TCP_DST_PORT,
+   VIRTCHNL_PROTO_HDR_TCP_CHKSUM,
/* UDP */
VIRTCHNL_PROTO_HDR_UDP_SRC_PORT =
PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_UDP),
VIRTCHNL_PROTO_HDR_UDP_DST_PORT,
+   VIRTCHNL_PROTO_HDR_UDP_CHKSUM,
/* SCTP */
VIRTCHNL_PROTO_HDR_SCTP_SRC_PORT =
PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_SCTP),
VIRTCHNL_PROTO_HDR_SCTP_DST_PORT,
+   VIRTCHNL_PROTO_HDR_SCTP_CHKSUM,
/* GTPU_IP */
VIRTCHNL_PROTO_HDR_GTPU_IP_TEID =
PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_GTPU_IP),
-- 
2.33.0



[dpdk-dev] [PATCH v3 3/4] common/iavf: remove the FDIR query opcode

2021-09-07 Thread Haiyue Wang
The VIRTCHNL_OP_QUERY_FDIR_FILTER opcode is not used, so remove it.

Signed-off-by: Haiyue Wang 
Acked-by: Qi Zhang 
---
 drivers/common/iavf/virtchnl.h | 38 --
 1 file changed, 38 deletions(-)

diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index c56c668cff..83f51d889f 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -146,7 +146,6 @@ enum virtchnl_ops {
VIRTCHNL_OP_DEL_RSS_CFG = 46,
VIRTCHNL_OP_ADD_FDIR_FILTER = 47,
VIRTCHNL_OP_DEL_FDIR_FILTER = 48,
-   VIRTCHNL_OP_QUERY_FDIR_FILTER = 49,
VIRTCHNL_OP_GET_MAX_RSS_QREGION = 50,
VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS = 51,
VIRTCHNL_OP_ADD_VLAN_V2 = 52,
@@ -244,8 +243,6 @@ static inline const char *virtchnl_op_str(enum virtchnl_ops 
v_opcode)
return "VIRTCHNL_OP_ADD_FDIR_FILTER";
case VIRTCHNL_OP_DEL_FDIR_FILTER:
return "VIRTCHNL_OP_DEL_FDIR_FILTER";
-   case VIRTCHNL_OP_QUERY_FDIR_FILTER:
-   return "VIRTCHNL_OP_QUERY_FDIR_FILTER";
case VIRTCHNL_OP_GET_MAX_RSS_QREGION:
return "VIRTCHNL_OP_GET_MAX_RSS_QREGION";
case VIRTCHNL_OP_ENABLE_QUEUES_V2:
@@ -1733,20 +1730,6 @@ struct virtchnl_fdir_rule {
 
 VIRTCHNL_CHECK_STRUCT_LEN(2604, virtchnl_fdir_rule);
 
-/* query information to retrieve fdir rule counters.
- * PF will fill out this structure to reset counter.
- */
-struct virtchnl_fdir_query_info {
-   u32 match_packets_valid:1;
-   u32 match_bytes_valid:1;
-   u32 reserved:30;  /* Reserved, must be zero. */
-   u32 pad;
-   u64 matched_packets; /* Number of packets for this rule. */
-   u64 matched_bytes;   /* Number of bytes through this rule. */
-};
-
-VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_fdir_query_info);
-
 /* Status returned to VF after VF requests FDIR commands
  * VIRTCHNL_FDIR_SUCCESS
  * VF FDIR related request is successfully done by PF
@@ -1879,24 +1862,6 @@ struct virtchnl_queue_tc_mapping {
 
 VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_tc_mapping);
 
-/* VIRTCHNL_OP_QUERY_FDIR_FILTER
- * VF sends this request to PF by filling out vsi_id,
- * flow_id and reset_counter. PF will return query_info
- * and query_status to VF.
- */
-struct virtchnl_fdir_query {
-   u16 vsi_id;   /* INPUT */
-   u16 pad1[3];
-   u32 flow_id;  /* INPUT */
-   u32 reset_counter:1; /* INPUT */
-   struct virtchnl_fdir_query_info query_info; /* OUTPUT */
-
-   /* see enum virtchnl_fdir_prgm_status; OUTPUT */
-   s32 status;
-   u32 pad2;
-};
-
-VIRTCHNL_CHECK_STRUCT_LEN(48, virtchnl_fdir_query);
 
 /* TX and RX queue types are valid in legacy as well as split queue models.
  * With Split Queue model, 2 additional types are introduced - TX_COMPLETION
@@ -2254,9 +2219,6 @@ virtchnl_vc_validate_vf_msg(struct virtchnl_version_info 
*ver, u32 v_opcode,
case VIRTCHNL_OP_DEL_FDIR_FILTER:
valid_len = sizeof(struct virtchnl_fdir_del);
break;
-   case VIRTCHNL_OP_QUERY_FDIR_FILTER:
-   valid_len = sizeof(struct virtchnl_fdir_query);
-   break;
case VIRTCHNL_OP_GET_QOS_CAPS:
break;
case VIRTCHNL_OP_CONFIG_QUEUE_TC_MAP:
-- 
2.33.0



[dpdk-dev] [PATCH v3 4/4] common/iavf: update the driver version

2021-09-07 Thread Haiyue Wang
Update the driver version to track the change.

Signed-off-by: Haiyue Wang 
Acked-by: Qi Zhang 
---
 drivers/common/iavf/README | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/common/iavf/README b/drivers/common/iavf/README
index 611fdcea94..89bdbc827e 100644
--- a/drivers/common/iavf/README
+++ b/drivers/common/iavf/README
@@ -6,7 +6,7 @@ Intel® IAVF driver
 =
 
 This directory contains source code of FreeBSD IAVF driver of version
-cid-avf.2021.04.29.tar.gz released by the team which develops
+cid-avf.2021.08.16.tar.gz released by the team which develops
 basic drivers for any IAVF NIC. The directory of base/ contains the
 original source package.
 
-- 
2.33.0



[dpdk-dev] [PATCH v2 00/15] crypto: add raw vector support in DPAAx

2021-09-07 Thread Hemant Agrawal
This patch series adds support for the raw vector API in the
dpaax_sec drivers. It also enhances the raw vector APIs to support
out-of-place (OOP) processing and security protocols.

v2: fix aesni compilation and add release notes.

Gagandeep Singh (11):
  crypto: add total raw buffer length
  crypto: fix raw process for multi-seg case
  crypto/dpaa2_sec: support raw datapath APIs
  crypto/dpaa2_sec: support AUTH only with raw buffer APIs
  crypto/dpaa2_sec: support AUTHENC with raw buffer APIs
  crypto/dpaa2_sec: support AEAD with raw buffer APIs
  crypto/dpaa2_sec: support OOP with raw buffer API
  crypto/dpaa2_sec: enhance error checks with raw buffer APIs
  crypto/dpaa_sec: support raw datapath APIs
  crypto/dpaa_sec: support authonly and chain with raw APIs
  crypto/dpaa_sec: support AEAD and proto with raw APIs

Hemant Agrawal (4):
  crypto: change sgl to src_sgl in vector
  crypto: add dest_sgl in raw vector APIs
  test/crypto: add raw API test for dpaax
  test/crypto: add raw API support in 5G algos

 app/test/test_cryptodev.c   |  179 +++-
 doc/guides/rel_notes/release_21_11.rst  |8 +
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c|   12 +-
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c  |6 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c |   13 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |   82 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 1045 ++
 drivers/crypto/dpaa2_sec/meson.build|3 +-
 drivers/crypto/dpaa_sec/dpaa_sec.c  |   23 +-
 drivers/crypto/dpaa_sec/dpaa_sec.h  |   40 +-
 drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c   | 1052 +++
 drivers/crypto/dpaa_sec/meson.build |4 +-
 drivers/crypto/qat/qat_sym_hw_dp.c  |   27 +-
 lib/cryptodev/rte_crypto_sym.h  |   13 +-
 lib/ipsec/misc.h|4 +-
 15 files changed, 2407 insertions(+), 104 deletions(-)
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
 create mode 100644 drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c

-- 
2.17.1



[dpdk-dev] [PATCH v2 01/15] crypto: change sgl to src_sgl in vector

2021-09-07 Thread Hemant Agrawal
This patch renames sgl to src_sgl to help differentiate
between the source and destination SGLs.

Signed-off-by: Hemant Agrawal 
Acked-by: Akhil Goyal 
---
 app/test/test_cryptodev.c  |  6 ++---
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c   | 12 +-
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c |  6 ++---
 drivers/crypto/qat/qat_sym_hw_dp.c | 27 +-
 lib/cryptodev/rte_crypto_sym.h |  2 +-
 lib/ipsec/misc.h   |  4 ++--
 6 files changed, 31 insertions(+), 26 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 843d07ba37..ed63524edc 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -221,7 +221,7 @@ process_sym_raw_dp_op(uint8_t dev_id, uint16_t qp_id,
digest.va = NULL;
sgl.vec = data_vec;
vec.num = 1;
-   vec.sgl = &sgl;
+   vec.src_sgl = &sgl;
vec.iv = &cipher_iv;
vec.digest = &digest;
vec.aad = &aad_auth_iv;
@@ -385,7 +385,7 @@ process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op 
*op)
 
sgl.vec = vec;
sgl.num = n;
-   symvec.sgl = &sgl;
+   symvec.src_sgl = &sgl;
symvec.iv = &iv_ptr;
symvec.digest = &digest_ptr;
symvec.aad = &aad_ptr;
@@ -431,7 +431,7 @@ process_cpu_crypt_auth_op(uint8_t dev_id, struct 
rte_crypto_op *op)
 
sgl.vec = vec;
sgl.num = n;
-   symvec.sgl = &sgl;
+   symvec.src_sgl = &sgl;
symvec.iv = &iv_ptr;
symvec.digest = &digest_ptr;
symvec.status = &st;
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c 
b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 886e2a5aaa..5fbb9b79f8 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -535,7 +535,7 @@ aesni_gcm_sgl_encrypt(struct aesni_gcm_session *s,
processed = 0;
for (i = 0; i < vec->num; ++i) {
aesni_gcm_process_gcm_sgl_op(s, gdata_ctx,
-   &vec->sgl[i], vec->iv[i].va,
+   &vec->src_sgl[i], vec->iv[i].va,
vec->aad[i].va);
vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s,
gdata_ctx, vec->digest[i].va);
@@ -554,7 +554,7 @@ aesni_gcm_sgl_decrypt(struct aesni_gcm_session *s,
processed = 0;
for (i = 0; i < vec->num; ++i) {
aesni_gcm_process_gcm_sgl_op(s, gdata_ctx,
-   &vec->sgl[i], vec->iv[i].va,
+   &vec->src_sgl[i], vec->iv[i].va,
vec->aad[i].va);
 vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s,
gdata_ctx, vec->digest[i].va);
@@ -572,13 +572,13 @@ aesni_gmac_sgl_generate(struct aesni_gcm_session *s,
 
processed = 0;
for (i = 0; i < vec->num; ++i) {
-   if (vec->sgl[i].num != 1) {
+   if (vec->src_sgl[i].num != 1) {
vec->status[i] = ENOTSUP;
continue;
}
 
aesni_gcm_process_gmac_sgl_op(s, gdata_ctx,
-   &vec->sgl[i], vec->iv[i].va);
+   &vec->src_sgl[i], vec->iv[i].va);
vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s,
gdata_ctx, vec->digest[i].va);
processed += (vec->status[i] == 0);
@@ -595,13 +595,13 @@ aesni_gmac_sgl_verify(struct aesni_gcm_session *s,
 
processed = 0;
for (i = 0; i < vec->num; ++i) {
-   if (vec->sgl[i].num != 1) {
+   if (vec->src_sgl[i].num != 1) {
vec->status[i] = ENOTSUP;
continue;
}
 
aesni_gcm_process_gmac_sgl_op(s, gdata_ctx,
-   &vec->sgl[i], vec->iv[i].va);
+   &vec->src_sgl[i], vec->iv[i].va);
vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s,
gdata_ctx, vec->digest[i].va);
processed += (vec->status[i] == 0);
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c 
b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index a01c826a3c..1b05099446 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -2002,14 +2002,14 @@ aesni_mb_cpu_crypto_process_bulk(struct rte_cryptodev 
*dev,
for (i = 0, j = 0, k = 0; i != vec->num; i++) {
 
 
-   ret = check_crypto_sgl(sofs, vec->sgl + i);
+   ret = check_crypto_sgl(sofs, vec->src_sgl + i);
if (ret != 0) {
vec->status[i] = ret;
continue;
}
 
-   buf = vec->sgl[i].vec[0].base;
-   len = vec->sgl[i].vec[0].len;
+   buf = vec->src_sgl[i].vec[0].base;
+   len = vec->src_sgl[i].vec[0].len;
 
job = IMB_

[dpdk-dev] [PATCH v2 02/15] crypto: add total raw buffer length

2021-09-07 Thread Hemant Agrawal
From: Gagandeep Singh 

The current crypto raw data vector is extended to support
rte_security use cases, where the total data length is needed to
know how much additional memory space is available in the buffer
beyond the data length, so that the driver/HW can write expanded-size
data after encryption.
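The point of carrying both lengths can be sketched as a simple tailroom computation: the slack a driver can use for ciphertext expansion (an appended ICV, padding) is the difference between the buffer's total length and its current data length. This is an illustrative sketch, not the DPDK implementation; the real fields live in struct rte_crypto_vec.

```c
#include <assert.h>
#include <stdint.h>

/* Hedged sketch: tailroom available for ciphertext expansion.
 * The helper name is illustrative; tot_len mirrors mb->buf_len and
 * len mirrors the valid data length, with tot_len >= len assumed. */
static uint32_t vec_tailroom(uint32_t len, uint32_t tot_len)
{
	/* bytes after the data that the driver/HW may write into */
	return tot_len - len;
}
```

For example, a 2048-byte buffer holding 1400 bytes of data leaves 648 bytes of expansion room.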

Signed-off-by: Gagandeep Singh 
Acked-by: Akhil Goyal 
---
 lib/cryptodev/rte_crypto_sym.h | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
index dcc0bd5933..e5cef1fb72 100644
--- a/lib/cryptodev/rte_crypto_sym.h
+++ b/lib/cryptodev/rte_crypto_sym.h
@@ -37,6 +37,8 @@ struct rte_crypto_vec {
rte_iova_t iova;
/** length of the data buffer */
uint32_t len;
+   /** total buffer length*/
+   uint32_t tot_len;
 };
 
 /**
@@ -980,12 +982,14 @@ rte_crypto_mbuf_to_vec(const struct rte_mbuf *mb, 
uint32_t ofs, uint32_t len,
seglen = mb->data_len - ofs;
if (len <= seglen) {
vec[0].len = len;
+   vec[0].tot_len = mb->buf_len;
return 1;
}
 
/* data spread across segments */
vec[0].len = seglen;
left = len - seglen;
+   vec[0].tot_len = mb->buf_len;
for (i = 1, nseg = mb->next; nseg != NULL; nseg = nseg->next, i++) {
 
vec[i].base = rte_pktmbuf_mtod(nseg, void *);
@@ -995,6 +999,7 @@ rte_crypto_mbuf_to_vec(const struct rte_mbuf *mb, uint32_t 
ofs, uint32_t len,
if (left <= seglen) {
/* whole requested data is completed */
vec[i].len = left;
+   vec[i].tot_len = mb->buf_len;
left = 0;
break;
}
@@ -1002,6 +1007,7 @@ rte_crypto_mbuf_to_vec(const struct rte_mbuf *mb, 
uint32_t ofs, uint32_t len,
/* use whole segment */
vec[i].len = seglen;
left -= seglen;
+   vec[i].tot_len = mb->buf_len;
}
 
RTE_ASSERT(left == 0);
-- 
2.17.1



[dpdk-dev] [PATCH v2 03/15] crypto: add dest_sgl in raw vector APIs

2021-09-07 Thread Hemant Agrawal
The structure rte_crypto_sym_vec is updated to
add dest_sgl to support out-of-place processing.
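The in-place/out-of-place selection can be sketched as follows: a NULL dest_sgl means the output overwrites the source SGL, while a non-NULL dest_sgl gives the output SGL for out-of-place (OOP) operation. The structs below are simplified stand-ins for the DPDK definitions, illustrative only.

```c
#include <assert.h>
#include <stddef.h>

/* Hedged, simplified stand-ins for rte_crypto_sgl / rte_crypto_sym_vec. */
struct sgl { int num; };

struct sym_vec {
	struct sgl *src_sgl;
	struct sgl *dest_sgl; /* NULL => write results over the source */
};

/* Which SGL a driver should write its output to. */
static const struct sgl *output_sgl(const struct sym_vec *v)
{
	return v->dest_sgl != NULL ? v->dest_sgl : v->src_sgl;
}

static struct sgl src = { 2 }, dst = { 2 };
static struct sym_vec inplace = { &src, NULL };
static struct sym_vec oop = { &src, &dst };
```

Keeping NULL as the in-place marker avoids adding a separate mode flag to the vector structure.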

Signed-off-by: Hemant Agrawal 
Acked-by: Akhil Goyal 
---
 lib/cryptodev/rte_crypto_sym.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
index e5cef1fb72..978708845f 100644
--- a/lib/cryptodev/rte_crypto_sym.h
+++ b/lib/cryptodev/rte_crypto_sym.h
@@ -72,6 +72,8 @@ struct rte_crypto_sym_vec {
uint32_t num;
/** array of SGL vectors */
struct rte_crypto_sgl *src_sgl;
+   /** array of SGL vectors for OOP, keep it NULL for inplace*/
+   struct rte_crypto_sgl *dest_sgl;
/** array of pointers to cipher IV */
struct rte_crypto_va_iova_ptr *iv;
/** array of pointers to digest */
-- 
2.17.1



[dpdk-dev] [PATCH v2 04/15] crypto: fix raw process for multi-seg case

2021-09-07 Thread Hemant Agrawal
From: Gagandeep Singh 

If no next segment is available, the "for" loop exits, and the
function still returns i + 1 (i.e. 2), which is wrong because it has
filled only 1 buffer.
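The off-by-one can be reproduced with a simplified model of the segment walk: fill vec_len[] with up to num segment lengths covering len bytes drawn from seg_len[]/nseg, and return the number of entries filled. The names and fixed-size arrays are illustrative, not the DPDK implementation; the fix (increment before break, return i) is the one this patch applies.

```c
#include <assert.h>

/* Hedged, simplified model of the walk in rte_crypto_mbuf_to_vec(). */
static int mbuf_to_vec_fixed(const int *seg_len, int nseg,
			     int len, int *vec_len, int num)
{
	int i, left;

	if (num == 0)
		return -1;

	/* whole request fits in the first segment */
	if (len <= seg_len[0]) {
		vec_len[0] = len;
		return 1;
	}

	vec_len[0] = seg_len[0];
	left = len - seg_len[0];
	for (i = 1; i < nseg && left > 0; i++) {
		if (i == num)
			return -1; /* out of vector entries */
		if (left <= seg_len[i]) {
			vec_len[i] = left; /* requested data completed */
			left = 0;
			i++;               /* count this entry before leaving */
			break;
		}
		vec_len[i] = seg_len[i]; /* use whole segment */
		left -= seg_len[i];
	}
	/* Fixed return: i is already the count of filled entries. The
	 * old "return i + 1" over-counted whenever the chain ran out of
	 * segments, e.g. a 1-segment chain returned 2. */
	return i;
}

static int scratch[4]; /* output array for the examples */
```

With the fix, a request ending exactly at a segment boundary and a request that exhausts the chain both report the true number of filled entries.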

Fixes: 7adf992fb9bf ("cryptodev: introduce CPU crypto API")
Cc: marcinx.smoczyn...@intel.com
Cc: sta...@dpdk.org

Signed-off-by: Gagandeep Singh 
---
 lib/cryptodev/rte_crypto_sym.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
index 978708845f..a48228a646 100644
--- a/lib/cryptodev/rte_crypto_sym.h
+++ b/lib/cryptodev/rte_crypto_sym.h
@@ -1003,6 +1003,7 @@ rte_crypto_mbuf_to_vec(const struct rte_mbuf *mb, 
uint32_t ofs, uint32_t len,
vec[i].len = left;
vec[i].tot_len = mb->buf_len;
left = 0;
+   i++;
break;
}
 
@@ -1013,7 +1014,7 @@ rte_crypto_mbuf_to_vec(const struct rte_mbuf *mb, 
uint32_t ofs, uint32_t len,
}
 
RTE_ASSERT(left == 0);
-   return i + 1;
+   return i;
 }
 
 
-- 
2.17.1



[dpdk-dev] [PATCH v2 05/15] crypto/dpaa2_sec: support raw datapath APIs

2021-09-07 Thread Hemant Agrawal
From: Gagandeep Singh 

This patch adds the framework for raw API support.
The initial patch covers only the cipher-only case.
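The dispatch idea behind the framework can be sketched as follows: the session stores a "build frame descriptor" callback chosen at session-configure time, so the enqueue hot path simply invokes it without re-checking the algorithm per packet. All names and the int return values here are illustrative stand-ins for the dpaa2_sec_session fields added in this patch.

```c
#include <assert.h>
#include <stddef.h>

/* Hedged sketch of per-session FD-builder dispatch. */
typedef int (*build_fd_t)(void *fd);

static int build_cipher_fd(void *fd) { (void)fd; return 0; }
static int build_auth_fd(void *fd)   { (void)fd; return 1; }

struct session {
	int is_cipher;
	build_fd_t build_raw_dp_fd; /* set once, used per packet */
};

/* Select the builder the way the session setup would, then use it. */
static int configure_and_build(int is_cipher)
{
	struct session s = { is_cipher, NULL };

	s.build_raw_dp_fd = is_cipher ? build_cipher_fd : build_auth_fd;
	return s.build_raw_dp_fd(NULL);
}
```

Selecting the builder once per session keeps algorithm branching out of the datapath.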

Signed-off-by: Hemant Agrawal 
Signed-off-by: Gagandeep Singh 
---
 doc/guides/rel_notes/release_21_11.rst  |   4 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c |  13 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  60 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 595 
 drivers/crypto/dpaa2_sec/meson.build|   3 +-
 5 files changed, 646 insertions(+), 29 deletions(-)
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c

diff --git a/doc/guides/rel_notes/release_21_11.rst 
b/doc/guides/rel_notes/release_21_11.rst
index b55900936d..9cbe960dbe 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -72,6 +72,10 @@ New Features
 
   * Added event crypto adapter OP_FORWARD mode support.
 
+* **Updated NXP dpaa2_sec crypto PMD.**
+
+  * Added raw vector datapath API support
+
 
 Removed Items
 -
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c 
b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 1ccead3641..fe90d9d2d8 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -49,15 +49,8 @@
 #define FSL_MC_DPSECI_DEVID 3
 
 #define NO_PREFETCH 0
-/* FLE_POOL_NUM_BUFS is set as per the ipsec-secgw application */
-#define FLE_POOL_NUM_BUFS  32000
-#define FLE_POOL_BUF_SIZE  256
-#define FLE_POOL_CACHE_SIZE512
-#define FLE_SG_MEM_SIZE(num)   (FLE_POOL_BUF_SIZE + ((num) * 32))
-#define SEC_FLC_DHR_OUTBOUND   -114
-#define SEC_FLC_DHR_INBOUND0
 
-static uint8_t cryptodev_driver_id;
+uint8_t cryptodev_driver_id;
 
 #ifdef RTE_LIB_SECURITY
 static inline int
@@ -3805,6 +3798,9 @@ static struct rte_cryptodev_ops crypto_ops = {
.sym_session_get_size = dpaa2_sec_sym_session_get_size,
.sym_session_configure= dpaa2_sec_sym_session_configure,
.sym_session_clear= dpaa2_sec_sym_session_clear,
+   /* Raw data-path API related operations */
+   .sym_get_raw_dp_ctx_size = dpaa2_sec_get_dp_ctx_size,
+   .sym_configure_raw_dp_ctx = dpaa2_sec_configure_raw_dp_ctx,
 };
 
 #ifdef RTE_LIB_SECURITY
@@ -3887,6 +3883,7 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
RTE_CRYPTODEV_FF_HW_ACCELERATED |
RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
RTE_CRYPTODEV_FF_SECURITY |
+   RTE_CRYPTODEV_FF_SYM_RAW_DP |
RTE_CRYPTODEV_FF_IN_PLACE_SGL |
RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h 
b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 7dbc69f6cb..860c9b6520 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -15,6 +15,16 @@
 #define CRYPTODEV_NAME_DPAA2_SEC_PMD   crypto_dpaa2_sec
 /**< NXP DPAA2 - SEC PMD device name */
 
+extern uint8_t cryptodev_driver_id;
+
+/* FLE_POOL_NUM_BUFS is set as per the ipsec-secgw application */
+#define FLE_POOL_NUM_BUFS  32000
+#define FLE_POOL_BUF_SIZE  256
+#define FLE_POOL_CACHE_SIZE512
+#define FLE_SG_MEM_SIZE(num)   (FLE_POOL_BUF_SIZE + ((num) * 32))
+#define SEC_FLC_DHR_OUTBOUND   -114
+#define SEC_FLC_DHR_INBOUND0
+
 #define MAX_QUEUES 64
 #define MAX_DESC_SIZE  64
 /** private data structure for each DPAA2_SEC device */
@@ -158,6 +168,24 @@ struct dpaa2_pdcp_ctxt {
uint32_t hfn_threshold; /*!< HFN Threashold for key renegotiation */
 };
 #endif
+
+typedef int (*dpaa2_sec_build_fd_t)(
+   void *qp, uint8_t *drv_ctx, struct rte_crypto_vec *data_vec,
+   uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
+   struct rte_crypto_va_iova_ptr *iv,
+   struct rte_crypto_va_iova_ptr *digest,
+   struct rte_crypto_va_iova_ptr *aad_or_auth_iv,
+   void *user_data);
+
+typedef int (*dpaa2_sec_build_raw_dp_fd_t)(uint8_t *drv_ctx,
+  struct rte_crypto_sgl *sgl,
+  struct rte_crypto_va_iova_ptr *iv,
+  struct rte_crypto_va_iova_ptr *digest,
+  struct rte_crypto_va_iova_ptr *auth_iv,
+  union rte_crypto_sym_ofs ofs,
+  void *userdata,
+  struct qbman_fd *fd);
+
 typedef struct dpaa2_sec_session_entry {
void *ctxt;
uint8_t ctxt_type;
@@ -165,6 +193,8 @@ typedef struct dpaa2_sec_session_entry {
enum rte_crypto_cipher_algorithm cipher_alg; /*!< Cipher Algorithm*/
enum rte_crypto_auth_algorithm auth_alg; /*!< Authentication Algorithm*/
enum rte_crypto_aead_algorithm aead_alg; /*!< AEAD Algorithm*/
+   dpaa2_sec_build_fd_t build_fd;
+   dpaa2_sec_build_raw_dp_fd_t build_raw_dp_fd;
union {
struct {
 

[dpdk-dev] [PATCH v2 06/15] crypto/dpaa2_sec: support AUTH only with raw buffer APIs

2021-09-07 Thread Hemant Agrawal
From: Gagandeep Singh 

This patch adds support for auth-only operations with the raw buffer APIs.
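The offset math in build_raw_dp_auth_fd() can be sketched briefly: the authenticated region is the total SGL length minus the head and tail skips, and for SNOW3G UIA2 / ZUC EIA3 the bit-granular length must be a whole number of bytes before converting with a right shift by 3. The helper names below are illustrative, not the driver's.

```c
#include <assert.h>

/* Hedged sketch: bytes actually authenticated, after head/tail skips. */
static int auth_region_len(int total_len, int head, int tail)
{
	return total_len - head - tail;
}

/* Returns the byte length, or -1 when the bit length is not
 * byte-aligned (the driver rejects that case with -ENOTSUP). */
static int snow_zuc_len_bytes(int bit_len)
{
	if (bit_len & 7)
		return -1;
	return bit_len >> 3;
}
```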

Signed-off-by: Gagandeep Singh 
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  21 
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 114 ++--
 2 files changed, 108 insertions(+), 27 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h 
b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 860c9b6520..f6507855e3 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -231,27 +231,6 @@ typedef struct dpaa2_sec_session_entry {
 
 static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
/* Symmetric capabilities */
-   {   /* NULL (AUTH) */
-   .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
-   {.sym = {
-   .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
-   {.auth = {
-   .algo = RTE_CRYPTO_AUTH_NULL,
-   .block_size = 1,
-   .key_size = {
-   .min = 0,
-   .max = 0,
-   .increment = 0
-   },
-   .digest_size = {
-   .min = 0,
-   .max = 0,
-   .increment = 0
-   },
-   .iv_size = { 0 }
-   }, },
-   }, },
-   },
{   /* MD5 */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
{.sym = {
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c 
b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
index 32abf5a431..af052202d9 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
@@ -11,6 +11,8 @@
 #include "dpaa2_sec_priv.h"
 #include "dpaa2_sec_logs.h"
 
+#include 
+
 struct dpaa2_sec_raw_dp_ctx {
dpaa2_sec_session *session;
uint32_t tail;
@@ -73,14 +75,114 @@ build_raw_dp_auth_fd(uint8_t *drv_ctx,
   void *userdata,
   struct qbman_fd *fd)
 {
-   RTE_SET_USED(drv_ctx);
-   RTE_SET_USED(sgl);
RTE_SET_USED(iv);
-   RTE_SET_USED(digest);
RTE_SET_USED(auth_iv);
-   RTE_SET_USED(ofs);
-   RTE_SET_USED(userdata);
-   RTE_SET_USED(fd);
+
+   dpaa2_sec_session *sess =
+   ((struct dpaa2_sec_raw_dp_ctx *)drv_ctx)->session;
+   struct qbman_fle *fle, *sge, *ip_fle, *op_fle;
+   struct sec_flow_context *flc;
+   int total_len = 0, data_len = 0, data_offset;
+   uint8_t *old_digest;
+   struct ctxt_priv *priv = sess->ctxt;
+   unsigned int i;
+
+   for (i = 0; i < sgl->num; i++)
+   total_len += sgl->vec[i].len;
+
+   data_len = total_len - ofs.ofs.auth.head - ofs.ofs.auth.tail;
+   data_offset = ofs.ofs.auth.head;
+
+   if (sess->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
+   sess->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+   if ((data_len & 7) || (data_offset & 7)) {
+   DPAA2_SEC_ERR("AUTH: len/offset must be full bytes");
+   return -ENOTSUP;
+   }
+
+   data_len = data_len >> 3;
+   data_offset = data_offset >> 3;
+   }
+   fle = (struct qbman_fle *)rte_malloc(NULL,
+   FLE_SG_MEM_SIZE(2 * sgl->num),
+   RTE_CACHE_LINE_SIZE);
+   if (unlikely(!fle)) {
+   DPAA2_SEC_ERR("AUTH SG: Memory alloc failed for SGE");
+   return -ENOMEM;
+   }
+   memset(fle, 0, FLE_SG_MEM_SIZE(2*sgl->num));
+   /* first FLE entry used to store mbuf and session ctxt */
+   DPAA2_SET_FLE_ADDR(fle, (size_t)userdata);
+   DPAA2_FLE_SAVE_CTXT(fle, (ptrdiff_t)priv);
+   op_fle = fle + 1;
+   ip_fle = fle + 2;
+   sge = fle + 3;
+
+   flc = &priv->flc_desc[DESC_INITFINAL].flc;
+
+   /* sg FD */
+   DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+   DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(op_fle));
+   DPAA2_SET_FD_COMPOUND_FMT(fd);
+
+   /* o/p fle */
+   DPAA2_SET_FLE_ADDR(op_fle,
+   DPAA2_VADDR_TO_IOVA(digest->va));
+   op_fle->length = sess->digest_length;
+
+   /* i/p fle */
+   DPAA2_SET_FLE_SG_EXT(ip_fle);
+   DPAA2_SET_FLE_ADDR(ip_fle, DPAA2_VADDR_TO_IOVA(sge));
+   ip_fle->length = data_len;
+
+   if (sess->iv.length) {
+   uint8_t *iv_ptr;
+
+   iv_ptr = rte_crypto_op_ctod_offset(userdata, uint8_t *,
+   sess->iv.offset);
+
+   if (sess->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2) {
+   iv_ptr = conv_to_snow_f9_iv(iv_ptr);
+   sg

[dpdk-dev] [PATCH v2 07/15] crypto/dpaa2_sec: support AUTHENC with raw buffer APIs

2021-09-07 Thread Hemant Agrawal
From: Gagandeep Singh 

This patch supports AUTHENC with the raw buffer APIs.

Signed-off-by: Gagandeep Singh 
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 128 ++--
 1 file changed, 121 insertions(+), 7 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c 
b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
index af052202d9..505431fc23 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
@@ -31,14 +31,128 @@ build_raw_dp_chain_fd(uint8_t *drv_ctx,
   void *userdata,
   struct qbman_fd *fd)
 {
-   RTE_SET_USED(drv_ctx);
-   RTE_SET_USED(sgl);
-   RTE_SET_USED(iv);
-   RTE_SET_USED(digest);
RTE_SET_USED(auth_iv);
-   RTE_SET_USED(ofs);
-   RTE_SET_USED(userdata);
-   RTE_SET_USED(fd);
+
+   dpaa2_sec_session *sess =
+   ((struct dpaa2_sec_raw_dp_ctx *)drv_ctx)->session;
+   struct ctxt_priv *priv = sess->ctxt;
+   struct qbman_fle *fle, *sge, *ip_fle, *op_fle;
+   struct sec_flow_context *flc;
+   int data_len = 0, auth_len = 0, cipher_len = 0;
+   unsigned int i = 0;
+   uint16_t auth_hdr_len = ofs.ofs.cipher.head -
+   ofs.ofs.auth.head;
+
+   uint16_t auth_tail_len = ofs.ofs.auth.tail;
+   uint32_t auth_only_len = (auth_tail_len << 16) | auth_hdr_len;
+   int icv_len = sess->digest_length;
+   uint8_t *old_icv;
+   uint8_t *iv_ptr = iv->va;
+
+   for (i = 0; i < sgl->num; i++)
+   data_len += sgl->vec[i].len;
+
+   cipher_len = data_len - ofs.ofs.cipher.head - ofs.ofs.cipher.tail;
+   auth_len = data_len - ofs.ofs.auth.head - ofs.ofs.auth.tail;
+   /* first FLE entry used to store session ctxt */
+   fle = (struct qbman_fle *)rte_malloc(NULL,
+   FLE_SG_MEM_SIZE(2 * sgl->num),
+   RTE_CACHE_LINE_SIZE);
+   if (unlikely(!fle)) {
+   DPAA2_SEC_ERR("AUTHENC SG: Memory alloc failed for SGE");
+   return -ENOMEM;
+   }
+   memset(fle, 0, FLE_SG_MEM_SIZE(2 * sgl->num));
+   DPAA2_SET_FLE_ADDR(fle, (size_t)userdata);
+   DPAA2_FLE_SAVE_CTXT(fle, (ptrdiff_t)priv);
+
+   op_fle = fle + 1;
+   ip_fle = fle + 2;
+   sge = fle + 3;
+
+   /* Save the shared descriptor */
+   flc = &priv->flc_desc[0].flc;
+
+   /* Configure FD as a FRAME LIST */
+   DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(op_fle));
+   DPAA2_SET_FD_COMPOUND_FMT(fd);
+   DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+   /* Configure Output FLE with Scatter/Gather Entry */
+   DPAA2_SET_FLE_SG_EXT(op_fle);
+   DPAA2_SET_FLE_ADDR(op_fle, DPAA2_VADDR_TO_IOVA(sge));
+
+   if (auth_only_len)
+   DPAA2_SET_FLE_INTERNAL_JD(op_fle, auth_only_len);
+
+   op_fle->length = (sess->dir == DIR_ENC) ?
+   (cipher_len + icv_len) :
+   cipher_len;
+
+   /* Configure Output SGE for Encap/Decap */
+   DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
+   DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.auth.head);
+   sge->length = sgl->vec[0].len - ofs.ofs.auth.head;
+
+   /* o/p segs */
+   for (i = 1; i < sgl->num; i++) {
+   sge++;
+   DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+   DPAA2_SET_FLE_OFFSET(sge, 0);
+   sge->length = sgl->vec[i].len;
+   }
+
+   if (sess->dir == DIR_ENC) {
+   sge++;
+   DPAA2_SET_FLE_ADDR(sge,
+   digest->iova);
+   sge->length = icv_len;
+   }
+   DPAA2_SET_FLE_FIN(sge);
+
+   sge++;
+
+   /* Configure Input FLE with Scatter/Gather Entry */
+   DPAA2_SET_FLE_ADDR(ip_fle, DPAA2_VADDR_TO_IOVA(sge));
+   DPAA2_SET_FLE_SG_EXT(ip_fle);
+   DPAA2_SET_FLE_FIN(ip_fle);
+
+   ip_fle->length = (sess->dir == DIR_ENC) ?
+   (auth_len + sess->iv.length) :
+   (auth_len + sess->iv.length +
+   icv_len);
+
+   /* Configure Input SGE for Encap/Decap */
+   DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
+   sge->length = sess->iv.length;
+
+   sge++;
+   DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
+   DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.auth.head);
+   sge->length = sgl->vec[0].len - ofs.ofs.auth.head;
+
+   for (i = 1; i < sgl->num; i++) {
+   sge++;
+   DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+   DPAA2_SET_FLE_OFFSET(sge, 0);
+   sge->length = sgl->vec[i].len;
+   }
+
+   if (sess->dir == DIR_DEC) {
+   sge++;
+   old_icv = (uint8_t *)(sge + 1);
+   memcpy(old_icv, digest->va,
+   icv_len);
+   DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_icv));
+   sge->length = icv_len;
+   }
+
+   DPAA2_SET_FLE_FI

[dpdk-dev] [PATCH v2 08/15] crypto/dpaa2_sec: support AEAD with raw buffer APIs

2021-09-07 Thread Hemant Agrawal
From: Gagandeep Singh 

Add raw vector API support for AEAD algorithms.

Signed-off-by: Gagandeep Singh 
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 249 +---
 1 file changed, 214 insertions(+), 35 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c 
b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
index 505431fc23..5191e5381c 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
@@ -167,14 +167,126 @@ build_raw_dp_aead_fd(uint8_t *drv_ctx,
   void *userdata,
   struct qbman_fd *fd)
 {
-   RTE_SET_USED(drv_ctx);
-   RTE_SET_USED(sgl);
-   RTE_SET_USED(iv);
-   RTE_SET_USED(digest);
-   RTE_SET_USED(auth_iv);
-   RTE_SET_USED(ofs);
-   RTE_SET_USED(userdata);
-   RTE_SET_USED(fd);
+   dpaa2_sec_session *sess =
+   ((struct dpaa2_sec_raw_dp_ctx *)drv_ctx)->session;
+   struct ctxt_priv *priv = sess->ctxt;
+   struct qbman_fle *fle, *sge, *ip_fle, *op_fle;
+   struct sec_flow_context *flc;
+   uint32_t auth_only_len = sess->ext_params.aead_ctxt.auth_only_len;
+   int icv_len = sess->digest_length;
+   uint8_t *old_icv;
+   uint8_t *IV_ptr = iv->va;
+   unsigned int i = 0;
+   int data_len = 0, aead_len = 0;
+
+   for (i = 0; i < sgl->num; i++)
+   data_len += sgl->vec[i].len;
+
+   aead_len = data_len - ofs.ofs.cipher.head - ofs.ofs.cipher.tail;
+
+   /* first FLE entry used to store mbuf and session ctxt */
+   fle = (struct qbman_fle *)rte_malloc(NULL,
+   FLE_SG_MEM_SIZE(2 * sgl->num),
+   RTE_CACHE_LINE_SIZE);
+   if (unlikely(!fle)) {
+   DPAA2_SEC_ERR("GCM SG: Memory alloc failed for SGE");
+   return -ENOMEM;
+   }
+   memset(fle, 0, FLE_SG_MEM_SIZE(2 * sgl->num));
+   DPAA2_SET_FLE_ADDR(fle, (size_t)userdata);
+   DPAA2_FLE_SAVE_CTXT(fle, (ptrdiff_t)priv);
+
+   op_fle = fle + 1;
+   ip_fle = fle + 2;
+   sge = fle + 3;
+
+   /* Save the shared descriptor */
+   flc = &priv->flc_desc[0].flc;
+
+   /* Configure FD as a FRAME LIST */
+   DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(op_fle));
+   DPAA2_SET_FD_COMPOUND_FMT(fd);
+   DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+   /* Configure Output FLE with Scatter/Gather Entry */
+   DPAA2_SET_FLE_SG_EXT(op_fle);
+   DPAA2_SET_FLE_ADDR(op_fle, DPAA2_VADDR_TO_IOVA(sge));
+
+   if (auth_only_len)
+   DPAA2_SET_FLE_INTERNAL_JD(op_fle, auth_only_len);
+
+   op_fle->length = (sess->dir == DIR_ENC) ?
+   (aead_len + icv_len) :
+   aead_len;
+
+   /* Configure Output SGE for Encap/Decap */
+   DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
+   DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+   sge->length = sgl->vec[0].len - ofs.ofs.cipher.head;
+
+   /* o/p segs */
+   for (i = 1; i < sgl->num; i++) {
+   sge++;
+   DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+   DPAA2_SET_FLE_OFFSET(sge, 0);
+   sge->length = sgl->vec[i].len;
+   }
+
+   if (sess->dir == DIR_ENC) {
+   sge++;
+   DPAA2_SET_FLE_ADDR(sge, digest->iova);
+   sge->length = icv_len;
+   }
+   DPAA2_SET_FLE_FIN(sge);
+
+   sge++;
+
+   /* Configure Input FLE with Scatter/Gather Entry */
+   DPAA2_SET_FLE_ADDR(ip_fle, DPAA2_VADDR_TO_IOVA(sge));
+   DPAA2_SET_FLE_SG_EXT(ip_fle);
+   DPAA2_SET_FLE_FIN(ip_fle);
+   ip_fle->length = (sess->dir == DIR_ENC) ?
+   (aead_len + sess->iv.length + auth_only_len) :
+   (aead_len + sess->iv.length + auth_only_len +
+   icv_len);
+
+   /* Configure Input SGE for Encap/Decap */
+   DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(IV_ptr));
+   sge->length = sess->iv.length;
+
+   sge++;
+   if (auth_only_len) {
+   DPAA2_SET_FLE_ADDR(sge, auth_iv->iova);
+   sge->length = auth_only_len;
+   sge++;
+   }
+
+   DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
+   DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+   sge->length = sgl->vec[0].len - ofs.ofs.cipher.head;
+
+   /* i/p segs */
+   for (i = 1; i < sgl->num; i++) {
+   sge++;
+   DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+   DPAA2_SET_FLE_OFFSET(sge, 0);
+   sge->length = sgl->vec[i].len;
+   }
+
+   if (sess->dir == DIR_DEC) {
+   sge++;
+   old_icv = (uint8_t *)(sge + 1);
+   memcpy(old_icv,  digest->va, icv_len);
+   DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_icv));
+   sge->length = icv_len;
+   }
+
+   DPAA2_SET_FLE_FIN(sge);
+   if (auth_only_len) {
+   DPAA2_SET_FLE_INTERNAL_JD(ip_fle, aut

[dpdk-dev] [PATCH v2 09/15] crypto/dpaa2_sec: support OOP with raw buffer API

2021-09-07 Thread Hemant Agrawal
From: Gagandeep Singh 

Add support for out-of-place processing with the raw vector APIs.

Signed-off-by: Gagandeep Singh 
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |   1 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 156 +++-
 2 files changed, 116 insertions(+), 41 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h 
b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index f6507855e3..db72c11a5f 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -179,6 +179,7 @@ typedef int (*dpaa2_sec_build_fd_t)(
 
 typedef int (*dpaa2_sec_build_raw_dp_fd_t)(uint8_t *drv_ctx,
   struct rte_crypto_sgl *sgl,
+  struct rte_crypto_sgl *dest_sgl,
   struct rte_crypto_va_iova_ptr *iv,
   struct rte_crypto_va_iova_ptr *digest,
   struct rte_crypto_va_iova_ptr *auth_iv,
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c 
b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
index 5191e5381c..51e316cc00 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
@@ -24,6 +24,7 @@ struct dpaa2_sec_raw_dp_ctx {
 static int
 build_raw_dp_chain_fd(uint8_t *drv_ctx,
   struct rte_crypto_sgl *sgl,
+  struct rte_crypto_sgl *dest_sgl,
   struct rte_crypto_va_iova_ptr *iv,
   struct rte_crypto_va_iova_ptr *digest,
   struct rte_crypto_va_iova_ptr *auth_iv,
@@ -89,17 +90,33 @@ build_raw_dp_chain_fd(uint8_t *drv_ctx,
(cipher_len + icv_len) :
cipher_len;
 
-   /* Configure Output SGE for Encap/Decap */
-   DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-   DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.auth.head);
-   sge->length = sgl->vec[0].len - ofs.ofs.auth.head;
+   /* OOP */
+   if (dest_sgl) {
+   /* Configure Output SGE for Encap/Decap */
+   DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova);
+   DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+   sge->length = dest_sgl->vec[0].len - ofs.ofs.cipher.head;
 
-   /* o/p segs */
-   for (i = 1; i < sgl->num; i++) {
-   sge++;
-   DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-   DPAA2_SET_FLE_OFFSET(sge, 0);
-   sge->length = sgl->vec[i].len;
+   /* o/p segs */
+   for (i = 1; i < dest_sgl->num; i++) {
+   sge++;
+   DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[i].iova);
+   DPAA2_SET_FLE_OFFSET(sge, 0);
+   sge->length = dest_sgl->vec[i].len;
+   }
+   } else {
+   /* Configure Output SGE for Encap/Decap */
+   DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
+   DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+   sge->length = sgl->vec[0].len - ofs.ofs.cipher.head;
+
+   /* o/p segs */
+   for (i = 1; i < sgl->num; i++) {
+   sge++;
+   DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+   DPAA2_SET_FLE_OFFSET(sge, 0);
+   sge->length = sgl->vec[i].len;
+   }
}
 
if (sess->dir == DIR_ENC) {
@@ -160,6 +177,7 @@ build_raw_dp_chain_fd(uint8_t *drv_ctx,
 static int
 build_raw_dp_aead_fd(uint8_t *drv_ctx,
   struct rte_crypto_sgl *sgl,
+  struct rte_crypto_sgl *dest_sgl,
   struct rte_crypto_va_iova_ptr *iv,
   struct rte_crypto_va_iova_ptr *digest,
   struct rte_crypto_va_iova_ptr *auth_iv,
@@ -219,17 +237,33 @@ build_raw_dp_aead_fd(uint8_t *drv_ctx,
(aead_len + icv_len) :
aead_len;
 
-   /* Configure Output SGE for Encap/Decap */
-   DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-   DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
-   sge->length = sgl->vec[0].len - ofs.ofs.cipher.head;
+   /* OOP */
+   if (dest_sgl) {
+   /* Configure Output SGE for Encap/Decap */
+   DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova);
+   DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+   sge->length = dest_sgl->vec[0].len - ofs.ofs.cipher.head;
 
-   /* o/p segs */
-   for (i = 1; i < sgl->num; i++) {
-   sge++;
-   DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-   DPAA2_SET_FLE_OFFSET(sge, 0);
-   sge->length = sgl->vec[i].len;
+   /* o/p segs */
+   for (i = 1; i < dest_sgl->num; i++) {
+   sge++;
+   DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[i].iova);
+   DPAA2_SET_FLE_OFFSET(sge, 0);
+  

[dpdk-dev] [PATCH v2 14/15] test/crypto: add raw API test for dpaax

2021-09-07 Thread Hemant Agrawal
This patch adds support for raw API tests on the
dpaa_sec and dpaa2_sec platforms.
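The gating logic added by this patch can be sketched as a feature-flag check: the raw-datapath test paths run only when the device reports the capability in its feature flags, as the diff's RTE_CRYPTODEV_FF_SYM_RAW_DP check does. The bit position below is a stand-in; the real value comes from rte_cryptodev.h.

```c
#include <assert.h>
#include <stdint.h>

/* Hedged sketch: illustrative flag bit, not the real DPDK value. */
#define FF_SYM_RAW_DP (UINT64_C(1) << 24)

static int supports_raw_dp(uint64_t feat_flags)
{
	return (feat_flags & FF_SYM_RAW_DP) != 0;
}
```

Tests that need the raw datapath skip with -ENOTSUP when the flag is absent, rather than failing.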

Signed-off-by: Gagandeep Singh 
Signed-off-by: Hemant Agrawal 
---
 app/test/test_cryptodev.c | 116 +++---
 1 file changed, 109 insertions(+), 7 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index ed63524edc..de4fb0f3d1 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -175,11 +175,11 @@ process_sym_raw_dp_op(uint8_t dev_id, uint16_t qp_id,
 {
struct rte_crypto_sym_op *sop = op->sym;
struct rte_crypto_op *ret_op = NULL;
-   struct rte_crypto_vec data_vec[UINT8_MAX];
+   struct rte_crypto_vec data_vec[UINT8_MAX], dest_data_vec[UINT8_MAX];
struct rte_crypto_va_iova_ptr cipher_iv, digest, aad_auth_iv;
union rte_crypto_sym_ofs ofs;
struct rte_crypto_sym_vec vec;
-   struct rte_crypto_sgl sgl;
+   struct rte_crypto_sgl sgl, dest_sgl;
uint32_t max_len;
union rte_cryptodev_session_ctx sess;
uint32_t count = 0;
@@ -315,6 +315,19 @@ process_sym_raw_dp_op(uint8_t dev_id, uint16_t qp_id,
}
 
sgl.num = n;
+   /* Out of place */
+   if (sop->m_dst != NULL) {
+   dest_sgl.vec = dest_data_vec;
+   vec.dest_sgl = &dest_sgl;
+   n = rte_crypto_mbuf_to_vec(sop->m_dst, 0, max_len,
+   dest_data_vec, RTE_DIM(dest_data_vec));
+   if (n < 0 || n > sop->m_dst->nb_segs) {
+   op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+   goto exit;
+   }
+   dest_sgl.num = n;
+   } else
+   vec.dest_sgl = NULL;
 
if (rte_cryptodev_raw_enqueue_burst(ctx, &vec, ofs, (void **)&op,
&enqueue_status) < 1) {
@@ -8305,10 +8318,21 @@ test_pdcp_proto_SGL(int i, int oop,
int to_trn_tbl[16];
int segs = 1;
unsigned int trn_data = 0;
+   struct rte_cryptodev_info dev_info;
+   uint64_t feat_flags;
struct rte_security_ctx *ctx = (struct rte_security_ctx *)
rte_cryptodev_get_sec_ctx(
ts_params->valid_devs[0]);
+   struct rte_mbuf *temp_mbuf;
+
+   rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+   feat_flags = dev_info.feature_flags;
 
+   if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+   (!(feat_flags & RTE_CRYPTODEV_FF_SYM_RAW_DP))) {
+   printf("Device does not support RAW data-path APIs.\n");
+   return -ENOTSUP;
+   }
/* Verify the capabilities */
struct rte_security_capability_idx sec_cap_idx;
 
@@ -8492,8 +8516,23 @@ test_pdcp_proto_SGL(int i, int oop,
ut_params->op->sym->m_dst = ut_params->obuf;
 
/* Process crypto operation */
-   if (process_crypto_request(ts_params->valid_devs[0], ut_params->op)
-   == NULL) {
+   temp_mbuf = ut_params->op->sym->m_src;
+   if (global_api_test_type == CRYPTODEV_RAW_API_TEST) {
+   /* filling lengths */
+   while (temp_mbuf) {
+   ut_params->op->sym->cipher.data.length
+   += temp_mbuf->pkt_len;
+   ut_params->op->sym->auth.data.length
+   += temp_mbuf->pkt_len;
+   temp_mbuf = temp_mbuf->next;
+   }
+   process_sym_raw_dp_op(ts_params->valid_devs[0], 0,
+   ut_params->op, 1, 1, 0, 0);
+   } else {
+   ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+   ut_params->op);
+   }
+   if (ut_params->op == NULL) {
printf("TestCase %s()-%d line %d failed %s: ",
__func__, i, __LINE__,
"failed to process sym crypto op");
@@ -9934,6 +9973,7 @@ test_authenticated_encryption_oop(const struct 
aead_test_data *tdata)
int retval;
uint8_t *ciphertext, *auth_tag;
uint16_t plaintext_pad_len;
+   struct rte_cryptodev_info dev_info;
 
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -9943,7 +9983,11 @@ test_authenticated_encryption_oop(const struct 
aead_test_data *tdata)
&cap_idx) == NULL)
return TEST_SKIPPED;
 
-   if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+   rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+   uint64_t feat_flags = dev_info.feature_flags;
+
+   if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+   (!(feat_flags & RTE_CRYPTODEV_FF_SYM_RAW_DP)))
return TEST_SKIPPED;
 
/* not supported with CPU crypto */
@@ -9980,7 +10024,11 @@ test_authenticated_encryption_oop(const struct 
aead_tes

[dpdk-dev] [PATCH v2 15/15] test/crypto: add raw API support in 5G algos

2021-09-07 Thread Hemant Agrawal
This patch adds support for RAW API testing with the ZUC
and SNOW 3G test cases.

Signed-off-by: Gagandeep Singh 
Signed-off-by: Hemant Agrawal 
---
 app/test/test_cryptodev.c | 57 ++-
 1 file changed, 51 insertions(+), 6 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index de4fb0f3d1..0ee603b1b5 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -368,6 +368,7 @@ process_sym_raw_dp_op(uint8_t dev_id, uint16_t qp_id,
}
 
op->status = (count == MAX_RAW_DEQUEUE_COUNT + 1 || ret_op != op ||
+   ret_op->status == RTE_CRYPTO_OP_STATUS_ERROR ||
n_success < 1) ? RTE_CRYPTO_OP_STATUS_ERROR :
RTE_CRYPTO_OP_STATUS_SUCCESS;
 
@@ -4152,6 +4153,16 @@ test_snow3g_encryption_oop(const struct snow3g_test_data 
*tdata)
int retval;
unsigned plaintext_pad_len;
unsigned plaintext_len;
+   struct rte_cryptodev_info dev_info;
+
+   rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+   uint64_t feat_flags = dev_info.feature_flags;
+
+   if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+   (!(feat_flags & RTE_CRYPTODEV_FF_SYM_RAW_DP))) {
+   printf("Device does not support RAW data-path APIs.\n");
+   return -ENOTSUP;
+   }
 
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -4207,7 +4218,11 @@ test_snow3g_encryption_oop(const struct snow3g_test_data 
*tdata)
if (retval < 0)
return retval;
 
-   ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+   if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+   process_sym_raw_dp_op(ts_params->valid_devs[0], 0,
+   ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+   else
+   ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
 
@@ -4267,6 +4282,12 @@ test_snow3g_encryption_oop_sgl(const struct 
snow3g_test_data *tdata)
return TEST_SKIPPED;
}
 
+   if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+   (!(feat_flags & RTE_CRYPTODEV_FF_SYM_RAW_DP))) {
+   printf("Device does not support RAW data-path APIs.\n");
+   return -ENOTSUP;
+   }
+
/* Create SNOW 3G session */
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
@@ -4301,7 +4322,11 @@ test_snow3g_encryption_oop_sgl(const struct 
snow3g_test_data *tdata)
if (retval < 0)
return retval;
 
-   ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+   if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+   process_sym_raw_dp_op(ts_params->valid_devs[0], 0,
+   ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+   else
+   ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
 
@@ -4428,7 +4453,11 @@ test_snow3g_encryption_offset_oop(const struct 
snow3g_test_data *tdata)
if (retval < 0)
return retval;
 
-   ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+   if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+   process_sym_raw_dp_op(ts_params->valid_devs[0], 0,
+   ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+   else
+   ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
 
@@ -4559,7 +4588,16 @@ static int test_snow3g_decryption_oop(const struct 
snow3g_test_data *tdata)
uint8_t *plaintext, *ciphertext;
unsigned ciphertext_pad_len;
unsigned ciphertext_len;
+   struct rte_cryptodev_info dev_info;
+
+   rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+   uint64_t feat_flags = dev_info.feature_flags;
 
+   if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+   (!(feat_flags & RTE_CRYPTODEV_FF_SYM_RAW_DP))) {
+   printf("Device does not support RAW data-path APIs.\n");
+   return -ENOTSUP;
+   }
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
@@ -4617,7 +4655,11 @@ static int test_snow3g_decryption_oop(const struct 
snow3g_test_data *tdata)
if (retval < 0)
return retval;
 
-   ut_par
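Aside: the dequeue status change in the 5G-algos test hunk above folds several failure conditions (retry exhaustion, mismatched op pointer, hardware error status, zero successes) into a single result. A standalone restatement, with invented names and an illustrative retry limit rather than the actual test-suite code (0 = success, 1 = error):

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_RAW_DEQUEUE_COUNT 65535 /* illustrative retry cap */

/* Any one failure condition marks the whole raw DP operation failed. */
static int raw_dp_op_status(int count, bool op_matches, bool hw_status_error,
                            int n_success)
{
    return (count == MAX_RAW_DEQUEUE_COUNT + 1 || !op_matches ||
            hw_status_error || n_success < 1) ? 1 : 0;
}
```

The point of the patch is the third condition: a frame the hardware flagged as bad must not be reported as success merely because it was dequeued.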

[dpdk-dev] [PATCH v2 13/15] crypto/dpaa_sec: support AEAD and proto with raw APIs

2021-09-07 Thread Hemant Agrawal
From: Gagandeep Singh 

This adds support for AEAD and protocol offload with raw APIs
in the dpaa_sec driver.

Signed-off-by: Gagandeep Singh 
---
 drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c | 293 ++
 1 file changed, 293 insertions(+)

diff --git a/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c 
b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
index 4e34629f18..b0c22a7c26 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
@@ -218,6 +218,163 @@ build_dpaa_raw_dp_auth_fd(uint8_t *drv_ctx,
return cf;
 }
 
+static inline struct dpaa_sec_job *
+build_raw_cipher_auth_gcm_sg(uint8_t *drv_ctx,
+   struct rte_crypto_sgl *sgl,
+   struct rte_crypto_sgl *dest_sgl,
+   struct rte_crypto_va_iova_ptr *iv,
+   struct rte_crypto_va_iova_ptr *digest,
+   struct rte_crypto_va_iova_ptr *auth_iv,
+   union rte_crypto_sym_ofs ofs,
+   void *userdata,
+   struct qm_fd *fd)
+{
+   dpaa_sec_session *ses =
+   ((struct dpaa_sec_raw_dp_ctx *)drv_ctx)->session;
+   struct dpaa_sec_job *cf;
+   struct dpaa_sec_op_ctx *ctx;
+   struct qm_sg_entry *sg, *out_sg, *in_sg;
+   uint8_t extra_req_segs;
+   uint8_t *IV_ptr = iv->va;
+   int data_len = 0, aead_len = 0;
+   unsigned int i;
+
+   for (i = 0; i < sgl->num; i++)
+   data_len += sgl->vec[i].len;
+
+   extra_req_segs = 4;
+   aead_len = data_len - ofs.ofs.cipher.head - ofs.ofs.cipher.tail;
+
+   if (ses->auth_only_len)
+   extra_req_segs++;
+
+   if (sgl->num > MAX_SG_ENTRIES) {
+   DPAA_SEC_DP_ERR("AEAD: Max sec segs supported is %d",
+   MAX_SG_ENTRIES);
+   return NULL;
+   }
+
+   ctx = dpaa_sec_alloc_raw_ctx(ses,  sgl->num * 2 + extra_req_segs);
+   if (!ctx)
+   return NULL;
+
+   cf = &ctx->job;
+   ctx->userdata = (void *)userdata;
+
+   rte_prefetch0(cf->sg);
+
+   /* output */
+   out_sg = &cf->sg[0];
+   out_sg->extension = 1;
+   if (is_encode(ses))
+   out_sg->length = aead_len + ses->digest_length;
+   else
+   out_sg->length = aead_len;
+
+   /* output sg entries */
+   sg = &cf->sg[2];
+   qm_sg_entry_set64(out_sg, rte_dpaa_mem_vtop(sg));
+   cpu_to_hw_sg(out_sg);
+
+   if (dest_sgl) {
+   /* 1st seg */
+   qm_sg_entry_set64(sg, dest_sgl->vec[0].iova);
+   sg->length = dest_sgl->vec[0].len - ofs.ofs.cipher.head;
+   sg->offset = ofs.ofs.cipher.head;
+
+   /* Successive segs */
+   for (i = 1; i < dest_sgl->num; i++) {
+   cpu_to_hw_sg(sg);
+   sg++;
+   qm_sg_entry_set64(sg, dest_sgl->vec[i].iova);
+   sg->length = dest_sgl->vec[i].len;
+   }
+   } else {
+   /* 1st seg */
+   qm_sg_entry_set64(sg, sgl->vec[0].iova);
+   sg->length = sgl->vec[0].len - ofs.ofs.cipher.head;
+   sg->offset = ofs.ofs.cipher.head;
+
+   /* Successive segs */
+   for (i = 1; i < sgl->num; i++) {
+   cpu_to_hw_sg(sg);
+   sg++;
+   qm_sg_entry_set64(sg, sgl->vec[i].iova);
+   sg->length = sgl->vec[i].len;
+   }
+
+   }
+
+   if (is_encode(ses)) {
+   cpu_to_hw_sg(sg);
+   /* set auth output */
+   sg++;
+   qm_sg_entry_set64(sg, digest->iova);
+   sg->length = ses->digest_length;
+   }
+   sg->final = 1;
+   cpu_to_hw_sg(sg);
+
+   /* input */
+   in_sg = &cf->sg[1];
+   in_sg->extension = 1;
+   in_sg->final = 1;
+   if (is_encode(ses))
+   in_sg->length = ses->iv.length + aead_len
+   + ses->auth_only_len;
+   else
+   in_sg->length = ses->iv.length + aead_len
+   + ses->auth_only_len + ses->digest_length;
+
+   /* input sg entries */
+   sg++;
+   qm_sg_entry_set64(in_sg, rte_dpaa_mem_vtop(sg));
+   cpu_to_hw_sg(in_sg);
+
+   /* 1st seg IV */
+   qm_sg_entry_set64(sg, rte_dpaa_mem_vtop(IV_ptr));
+   sg->length = ses->iv.length;
+   cpu_to_hw_sg(sg);
+
+   /* 2 seg auth only */
+   if (ses->auth_only_len) {
+   sg++;
+   qm_sg_entry_set64(sg, auth_iv->iova);
+   sg->length = ses->auth_only_len;
+   cpu_to_hw_sg(sg);
+   }
+
+   /* 3rd seg */
+   sg++;
+   qm_sg_entry_set64(sg, sgl->vec[0].iova);
+   sg->length = sgl->vec[0].len - ofs.ofs.cipher.head;
+   sg->offset = ofs.ofs.cipher.head;
+
+   /* Suc

[dpdk-dev] [PATCH v2 12/15] crypto/dpaa_sec: support authonly and chain with raw APIs

2021-09-07 Thread Hemant Agrawal
From: Gagandeep Singh 

This patch improves the raw vector support in the dpaa_sec driver
for the auth-only and chain use cases.

Signed-off-by: Gagandeep Singh 
---
 drivers/crypto/dpaa_sec/dpaa_sec.h|   3 +-
 drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c | 296 +-
 2 files changed, 287 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h 
b/drivers/crypto/dpaa_sec/dpaa_sec.h
index f6e83d46e7..2e0ab93ff0 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -135,7 +135,8 @@ typedef struct dpaa_sec_job* 
(*dpaa_sec_build_raw_dp_fd_t)(uint8_t *drv_ctx,
struct rte_crypto_va_iova_ptr *digest,
struct rte_crypto_va_iova_ptr *auth_iv,
union rte_crypto_sym_ofs ofs,
-   void *userdata);
+   void *userdata,
+   struct qm_fd *fd);
 
 typedef struct dpaa_sec_session_entry {
struct sec_cdb cdb; /**< cmd block associated with qp */
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c 
b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
index ee0ca2e0d5..4e34629f18 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
@@ -12,6 +12,7 @@
 #endif
 
 /* RTA header files */
+#include 
 #include 
 
 #include 
@@ -26,6 +27,17 @@ struct dpaa_sec_raw_dp_ctx {
uint16_t cached_dequeue;
 };
 
+static inline int
+is_encode(dpaa_sec_session *ses)
+{
+   return ses->dir == DIR_ENC;
+}
+
+static inline int is_decode(dpaa_sec_session *ses)
+{
+   return ses->dir == DIR_DEC;
+}
+
 static __rte_always_inline int
 dpaa_sec_raw_enqueue_done(void *qp_data, uint8_t *drv_ctx, uint32_t n)
 {
@@ -82,18 +94,276 @@ build_dpaa_raw_dp_auth_fd(uint8_t *drv_ctx,
struct rte_crypto_va_iova_ptr *digest,
struct rte_crypto_va_iova_ptr *auth_iv,
union rte_crypto_sym_ofs ofs,
-   void *userdata)
+   void *userdata,
+   struct qm_fd *fd)
 {
-   RTE_SET_USED(drv_ctx);
-   RTE_SET_USED(sgl);
RTE_SET_USED(dest_sgl);
RTE_SET_USED(iv);
-   RTE_SET_USED(digest);
RTE_SET_USED(auth_iv);
-   RTE_SET_USED(ofs);
-   RTE_SET_USED(userdata);
+   RTE_SET_USED(fd);
 
-   return NULL;
+   dpaa_sec_session *ses =
+   ((struct dpaa_sec_raw_dp_ctx *)drv_ctx)->session;
+   struct dpaa_sec_job *cf;
+   struct dpaa_sec_op_ctx *ctx;
+   struct qm_sg_entry *sg, *out_sg, *in_sg;
+   phys_addr_t start_addr;
+   uint8_t *old_digest, extra_segs;
+   int data_len, data_offset, total_len = 0;
+   unsigned int i;
+
+   for (i = 0; i < sgl->num; i++)
+   total_len += sgl->vec[i].len;
+
+   data_len = total_len - ofs.ofs.auth.head - ofs.ofs.auth.tail;
+   data_offset =  ofs.ofs.auth.head;
+
+   /* Support only length in bits for SNOW3G and ZUC */
+
+   if (is_decode(ses))
+   extra_segs = 3;
+   else
+   extra_segs = 2;
+
+   if (sgl->num > MAX_SG_ENTRIES) {
+   DPAA_SEC_DP_ERR("Auth: Max sec segs supported is %d",
+   MAX_SG_ENTRIES);
+   return NULL;
+   }
+   ctx = dpaa_sec_alloc_raw_ctx(ses, sgl->num * 2 + extra_segs);
+   if (!ctx)
+   return NULL;
+
+   cf = &ctx->job;
+   ctx->userdata = (void *)userdata;
+   old_digest = ctx->digest;
+
+   /* output */
+   out_sg = &cf->sg[0];
+   qm_sg_entry_set64(out_sg, digest->iova);
+   out_sg->length = ses->digest_length;
+   cpu_to_hw_sg(out_sg);
+
+   /* input */
+   in_sg = &cf->sg[1];
+   /* need to extend the input to a compound frame */
+   in_sg->extension = 1;
+   in_sg->final = 1;
+   in_sg->length = data_len;
+   qm_sg_entry_set64(in_sg, rte_dpaa_mem_vtop(&cf->sg[2]));
+
+   /* 1st seg */
+   sg = in_sg + 1;
+
+   if (ses->iv.length) {
+   uint8_t *iv_ptr;
+
+   iv_ptr = rte_crypto_op_ctod_offset(userdata, uint8_t *,
+  ses->iv.offset);
+
+   if (ses->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2) {
+   iv_ptr = conv_to_snow_f9_iv(iv_ptr);
+   sg->length = 12;
+   } else if (ses->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+   iv_ptr = conv_to_zuc_eia_iv(iv_ptr);
+   sg->length = 8;
+   } else {
+   sg->length = ses->iv.length;
+   }
+   qm_sg_entry_set64(sg, rte_dpaa_mem_vtop(iv_ptr));
+   in_sg->length += sg->length;
+   cpu_to_hw_sg(sg);
+   sg++;
+   }
+
+   qm_sg_entry_set64(sg, sgl->vec[0].iova);
+   sg->offset = data_offset;
+
+   if (data_len <= (int)(
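The fd builders in this patch repeatedly follow the same pattern: sum the scatter-gather vector lengths, then subtract the head/tail offsets to get the payload length. A self-contained sketch of that computation, with invented struct names (not the real `rte_crypto_sgl` layout):

```c
#include <assert.h>
#include <stdint.h>

struct demo_vec { uint32_t len; };
struct demo_sgl { struct demo_vec vec[8]; uint32_t num; };

/* Total bytes across all segments, minus the head/tail not covered
 * by the crypto operation. */
static int sgl_data_len(const struct demo_sgl *s, uint32_t head, uint32_t tail)
{
    int total = 0;
    uint32_t i;

    for (i = 0; i < s->num; i++)
        total += (int)s->vec[i].len;
    return total - (int)head - (int)tail;
}

/* Example: three segments of 64 + 64 + 32 bytes, 8-byte head, 4-byte tail. */
static int sgl_example(void)
{
    struct demo_sgl s = { { {64}, {64}, {32} }, 3 };

    return sgl_data_len(&s, 8, 4);
}
```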

Re: [dpdk-dev] [PATCH] app/testpmd: fix random number of Tx segments

2021-09-07 Thread Li, Xiaoyun



> -Original Message-
> From: Zhang, AlvinX 
> Sent: Tuesday, September 7, 2021 10:25
> To: Li, Xiaoyun ; Ananyev, Konstantin
> 
> Cc: dev@dpdk.org; sta...@dpdk.org
> Subject: RE: [PATCH] app/testpmd: fix random number of Tx segments
> 
> > -Original Message-
> > From: Li, Xiaoyun 
> > Sent: Monday, September 6, 2021 6:55 PM
> > To: Zhang, AlvinX ; Ananyev, Konstantin
> > 
> > Cc: dev@dpdk.org; sta...@dpdk.org
> > Subject: RE: [PATCH] app/testpmd: fix random number of Tx segments
> >
> >
> >
> > > -Original Message-
> > > From: Zhang, AlvinX 
> > > Sent: Monday, September 6, 2021 18:04
> > > To: Li, Xiaoyun ; Ananyev, Konstantin
> > > 
> > > Cc: dev@dpdk.org; sta...@dpdk.org
> > > Subject: RE: [PATCH] app/testpmd: fix random number of Tx segments
> > >
> > > > -Original Message-
> > > > From: Li, Xiaoyun 
> > > > Sent: Monday, September 6, 2021 4:59 PM
> > > > To: Zhang, AlvinX ; Ananyev, Konstantin
> > > > 
> > > > Cc: dev@dpdk.org; sta...@dpdk.org
> > > > Subject: RE: [PATCH] app/testpmd: fix random number of Tx segments
> > > >
> > > > Hi
> > > >
> > > > > -Original Message-
> > > > > From: Zhang, AlvinX 
> > > > > Sent: Thursday, September 2, 2021 16:20
> > > > > To: Li, Xiaoyun ; Ananyev, Konstantin
> > > > > 
> > > > > Cc: dev@dpdk.org; Zhang, AlvinX ;
> > > > > sta...@dpdk.org
> > > > > Subject: [PATCH] app/testpmd: fix random number of Tx segments
> > > > >
> > > > > When random number of segments in Tx packets is enabled, the
> > > > > total data space length of all segments must be greater or equal
> > > > > than the size of an Eth/IP/UDP/timestamp packet, that's total 14
> > > > > + 20 +
> > > > > 8 +
> > > > > 16 bytes. Otherwise the Tx engine may cause the application to crash.
> > > > >
> > > > > Bugzilla ID: 797
> > > > > Fixes: 79bec05b32b7 ("app/testpmd: add ability to split outgoing
> > > > > packets")
> > > > > Cc: sta...@dpdk.org
> > > > >
> > > > > Signed-off-by: Alvin Zhang 
> > > > > ---
> > > > >  app/test-pmd/config.c  | 16 +++-
> > > > > app/test-pmd/testpmd.c
> > > > > |  5
> > > > > +  app/test-pmd/testpmd.h |  5 +  app/test-pmd/txonly.c
> > > > > + |
> > > > > + 7
> > > > > + +--
> > > > >  4 files changed, 26 insertions(+), 7 deletions(-)
> > > > >
> > > > > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index
> > > > > 31d8ba1..5105b3b 100644
> > > > > --- a/app/test-pmd/config.c
> > > > > +++ b/app/test-pmd/config.c
> > > > > @@ -3837,10 +3837,11 @@ struct igb_ring_desc_16_bytes {
> > > > >* Check that each segment length is greater or equal than
> > > > >* the mbuf data size.
> > > > >* Check also that the total packet length is greater or equal 
> > > > > than the
> > > > > -  * size of an empty UDP/IP packet (sizeof(struct rte_ether_hdr) 
> > > > > +
> > > > > -  * 20 + 8).
> > > > > +  * size of an Eth/IP/UDP + timestamp packet
> > > > > +  * (sizeof(struct rte_ether_hdr) + 20 + 8 + 16).
> > > >
> > > > I don't really agree on this. Most of the time, txonly generate
> > > > packets with Eth/IP/UDP. It's not fair to limit the hdr length to
> > > > include
> > > timestamp in all cases.
> > > > And to be honest, I don't see why you need to add
> > > > "tx_pkt_nb_min_segs". It's only used in txonly when
> > > > "TX_PKT_SPLIT_RND". So this issue is because when
> > > > "TX_PKT_SPLIT_RND", the
> > > random nb_segs is not enough for the hdr.
> > > >
> > > > But if you read txonly carefully, if "TX_PKT_SPLIT_RND", the first
> > > > segment length should be equal or greater than 42 (14+20+8).
> > > > Because when "TX_PKT_SPLIT_RND", update_pkt_header() should be
> > > > called. And that function doesn't deal with the header in multi-segment
> > > > packets. I think there's a bug here.
> > > >
> > > > So I think you should only add a check in pkt_burst_prepare() in 
> > > > txonly().
> > > > if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND) ||
> > > > txonly_multi_flow)
> > > > +   if (tx_pkt_seg_lengths[0] < 42) {
> > > > +   err_log;
> > > > +   return false;
> > > > +   }
> > > > update_pkt_header(pkt, pkt_len);
> 
> 
> As above, if the user has the configuration below, not a single packet will
> be sent out, and there will be lots and lots of repeated, annoying logs.
> testpmd> set fwd txonly
> Set txonly packet forwarding mode
> testpmd> set txpkts 40,64
> testpmd> set txsplit rand
> testpmd> start
> txonly packet forwarding - ports=1 - cores=1 - streams=4 - NUMA support
> enabled, MP allocation mode: native Logical Core 2 (socket 0) forwards packets
> on 4 streams:
>   RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
>   RX P=0/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
>   RX P=0/Q=2 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
>   RX P=0/Q=3 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
> 
>   txonly packet for
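The 42-byte minimum debated in the thread above (14 + 20 + 8 for Eth/IPv4/UDP, plus 16 more when txonly's timestamp is appended) can be sketched as a standalone first-segment check. The constants mirror the header sizes discussed; the helper name is illustrative, not testpmd code — the rationale is that `update_pkt_header()` only writes into the first segment, so a shorter first segment cannot hold the full header:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define ETH_HDR_LEN 14 /* sizeof(struct rte_ether_hdr) */
#define IP_HDR_LEN  20 /* IPv4, no options */
#define UDP_HDR_LEN 8
#define TS_LEN      16 /* txonly timestamp payload */

/* Return true if the first Tx segment can hold the complete header. */
static bool first_seg_holds_header(uint16_t first_seg_len, bool with_timestamp)
{
    uint16_t need = ETH_HDR_LEN + IP_HDR_LEN + UDP_HDR_LEN;

    if (with_timestamp)
        need += TS_LEN;
    return first_seg_len >= need;
}
```

A configuration like `set txpkts 40,64` fails this check, which is why it can crash or silently misbehave with `set txsplit rand`.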

[dpdk-dev] [PATCH v2 10/15] crypto/dpaa2_sec: enhance error checks with raw buffer APIs

2021-09-07 Thread Hemant Agrawal
From: Gagandeep Singh 

This patch improves error-condition handling and the support of
wireless algorithms with raw buffers.

Signed-off-by: Gagandeep Singh 
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 31 -
 1 file changed, 6 insertions(+), 25 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c 
b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
index 51e316cc00..25364454c9 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
@@ -355,16 +355,7 @@ build_raw_dp_auth_fd(uint8_t *drv_ctx,
data_len = total_len - ofs.ofs.auth.head - ofs.ofs.auth.tail;
data_offset = ofs.ofs.auth.head;
 
-   if (sess->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
-   sess->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
-   if ((data_len & 7) || (data_offset & 7)) {
-   DPAA2_SEC_ERR("AUTH: len/offset must be full bytes");
-   return -ENOTSUP;
-   }
-
-   data_len = data_len >> 3;
-   data_offset = data_offset >> 3;
-   }
+   /* For SNOW3G and ZUC, lengths in bits only supported */
fle = (struct qbman_fle *)rte_malloc(NULL,
FLE_SG_MEM_SIZE(2 * sgl->num),
RTE_CACHE_LINE_SIZE);
@@ -609,17 +600,7 @@ build_raw_dp_cipher_fd(uint8_t *drv_ctx,
data_len = total_len - ofs.ofs.cipher.head - ofs.ofs.cipher.tail;
data_offset = ofs.ofs.cipher.head;
 
-   if (sess->cipher_alg == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
-   sess->cipher_alg == RTE_CRYPTO_CIPHER_ZUC_EEA3) {
-   if ((data_len & 7) || (data_offset & 7)) {
-   DPAA2_SEC_ERR("CIPHER: len/offset must be full bytes");
-   return -ENOTSUP;
-   }
-
-   data_len = data_len >> 3;
-   data_offset = data_offset >> 3;
-   }
-
+   /* For SNOW3G and ZUC, lengths in bits only supported */
/* first FLE entry used to store mbuf and session ctxt */
fle = (struct qbman_fle *)rte_malloc(NULL,
FLE_SG_MEM_SIZE(2*sgl->num),
@@ -878,7 +859,7 @@ dpaa2_sec_raw_dequeue_burst(void *qp_data, uint8_t *drv_ctx,
struct qbman_result *dq_storage;
uint32_t fqid = dpaa2_qp->rx_vq.fqid;
int ret, num_rx = 0;
-   uint8_t is_last = 0, status;
+   uint8_t is_last = 0, status, is_success = 0;
struct qbman_swp *swp;
const struct qbman_fd *fd;
struct qbman_pull_desc pulldesc;
@@ -957,11 +938,11 @@ dpaa2_sec_raw_dequeue_burst(void *qp_data, uint8_t 
*drv_ctx,
/* TODO Parse SEC errors */
DPAA2_SEC_ERR("SEC returned Error - %x",
  fd->simple.frc);
-   status = RTE_CRYPTO_OP_STATUS_ERROR;
+   is_success = false;
} else {
-   status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+   is_success = true;
}
-   post_dequeue(user_data, num_rx, status);
+   post_dequeue(user_data, num_rx, is_success);
 
num_rx++;
dq_storage++;
-- 
2.17.1
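Background on the code the dpaa2_sec patch above removes: when an API passes lengths in bits but the hardware works on whole bytes, the driver must reject lengths and offsets that are not byte-aligned before converting. A standalone sketch of that (now-dropped) check — illustrative helper, not driver code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Convert bit lengths/offsets to bytes, rejecting partial bytes. */
static bool bits_to_bytes(uint32_t len_bits, uint32_t off_bits,
                          uint32_t *len_bytes, uint32_t *off_bytes)
{
    if ((len_bits & 7) || (off_bits & 7))
        return false; /* partial bytes unsupported */
    *len_bytes = len_bits >> 3;
    *off_bytes = off_bits >> 3;
    return true;
}

/* Example: 64 bits at offset 8 bits -> 8 bytes at offset 1 (encoded 801). */
static int bits_example(void)
{
    uint32_t len, off;

    if (!bits_to_bytes(64, 8, &len, &off))
        return -1;
    return (int)(len * 100 + off);
}

/* Example: 65 bits is not byte-aligned and must be rejected. */
static bool bits_reject_partial(void)
{
    uint32_t len, off;

    return !bits_to_bytes(65, 0, &len, &off);
}
```

The patch replaces this with a comment because, for SNOW 3G and ZUC, the raw path now keeps lengths in bits end to end.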



[dpdk-dev] [PATCH v2 11/15] crypto/dpaa_sec: support raw datapath APIs

2021-09-07 Thread Hemant Agrawal
From: Gagandeep Singh 

This patch adds a raw vector API framework for the dpaa_sec driver.

Signed-off-by: Gagandeep Singh 
---
 doc/guides/rel_notes/release_21_11.rst|   4 +
 drivers/crypto/dpaa_sec/dpaa_sec.c|  23 +-
 drivers/crypto/dpaa_sec/dpaa_sec.h|  39 +-
 drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c | 485 ++
 drivers/crypto/dpaa_sec/meson.build   |   4 +-
 5 files changed, 541 insertions(+), 14 deletions(-)
 create mode 100644 drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c

diff --git a/doc/guides/rel_notes/release_21_11.rst 
b/doc/guides/rel_notes/release_21_11.rst
index 9cbe960dbe..0afd21812f 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -76,6 +76,10 @@ New Features
 
   * Added raw vector datapath API support
 
+* **Updated NXP dpaa_sec crypto PMD.**
+
+  * Added raw vector datapath API support
+
 
 Removed Items
 -
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c 
b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 19d4684e24..7534f80195 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -45,10 +45,7 @@
 #include 
 #include 
 
-static uint8_t cryptodev_driver_id;
-
-static int
-dpaa_sec_attach_sess_q(struct dpaa_sec_qp *qp, dpaa_sec_session *sess);
+uint8_t dpaa_cryptodev_driver_id;
 
 static inline void
 dpaa_sec_op_ending(struct dpaa_sec_op_ctx *ctx)
@@ -1745,8 +1742,8 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op 
**ops,
case RTE_CRYPTO_OP_WITH_SESSION:
ses = (dpaa_sec_session *)
get_sym_session_private_data(
-   op->sym->session,
-   cryptodev_driver_id);
+   op->sym->session,
+   dpaa_cryptodev_driver_id);
break;
 #ifdef RTE_LIB_SECURITY
case RTE_CRYPTO_OP_SECURITY_SESSION:
@@ -2307,7 +2304,7 @@ dpaa_sec_detach_rxq(struct dpaa_sec_dev_private *qi, 
struct qman_fq *fq)
return -1;
 }
 
-static int
+int
 dpaa_sec_attach_sess_q(struct dpaa_sec_qp *qp, dpaa_sec_session *sess)
 {
int ret;
@@ -3115,7 +3112,7 @@ dpaa_sec_dev_infos_get(struct rte_cryptodev *dev,
info->feature_flags = dev->feature_flags;
info->capabilities = dpaa_sec_capabilities;
info->sym.max_nb_sessions = internals->max_nb_sessions;
-   info->driver_id = cryptodev_driver_id;
+   info->driver_id = dpaa_cryptodev_driver_id;
}
 }
 
@@ -3311,7 +3308,10 @@ static struct rte_cryptodev_ops crypto_ops = {
.queue_pair_release   = dpaa_sec_queue_pair_release,
.sym_session_get_size = dpaa_sec_sym_session_get_size,
.sym_session_configure= dpaa_sec_sym_session_configure,
-   .sym_session_clear= dpaa_sec_sym_session_clear
+   .sym_session_clear= dpaa_sec_sym_session_clear,
+   /* Raw data-path API related operations */
+   .sym_get_raw_dp_ctx_size = dpaa_sec_get_dp_ctx_size,
+   .sym_configure_raw_dp_ctx = dpaa_sec_configure_raw_dp_ctx,
 };
 
 #ifdef RTE_LIB_SECURITY
@@ -3362,7 +3362,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
 
PMD_INIT_FUNC_TRACE();
 
-   cryptodev->driver_id = cryptodev_driver_id;
+   cryptodev->driver_id = dpaa_cryptodev_driver_id;
cryptodev->dev_ops = &crypto_ops;
 
cryptodev->enqueue_burst = dpaa_sec_enqueue_burst;
@@ -3371,6 +3371,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
RTE_CRYPTODEV_FF_HW_ACCELERATED |
RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
RTE_CRYPTODEV_FF_SECURITY |
+   RTE_CRYPTODEV_FF_SYM_RAW_DP |
RTE_CRYPTODEV_FF_IN_PLACE_SGL |
RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
@@ -3536,5 +3537,5 @@ static struct cryptodev_driver dpaa_sec_crypto_drv;
 
 RTE_PMD_REGISTER_DPAA(CRYPTODEV_NAME_DPAA_SEC_PMD, rte_dpaa_sec_driver);
 RTE_PMD_REGISTER_CRYPTO_DRIVER(dpaa_sec_crypto_drv, rte_dpaa_sec_driver.driver,
-   cryptodev_driver_id);
+   dpaa_cryptodev_driver_id);
 RTE_LOG_REGISTER(dpaa_logtype_sec, pmd.crypto.dpaa, NOTICE);
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h 
b/drivers/crypto/dpaa_sec/dpaa_sec.h
index 368699678b..f6e83d46e7 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -19,6 +19,8 @@
 #define AES_CTR_IV_LEN 16
 #define AES_GCM_IV_LEN 12
 
+extern uint8_t dpaa_cryptodev_driver_id;
+
 #define DPAA_IPv6_DEFAULT_VTC_FLOW 0x6000
 
 /* Minimum job descriptor consists of a oneword job descriptor HEADER and
@@ -117,6 +119,24 @@ struct sec_p
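For readers skimming the diff above: wiring raw data-path support into a PMD amounts to adding two function pointers to the device ops table. A minimal, self-contained sketch of that pattern — the field names mimic the real `crypto_ops` additions, but the types and dummy implementations are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>

typedef int (*get_ctx_size_t)(void);
typedef int (*configure_ctx_t)(void *ctx);

/* Simplified ops table: only the two raw data-path hooks. */
struct demo_crypto_dev_ops {
    get_ctx_size_t  sym_get_raw_dp_ctx_size;
    configure_ctx_t sym_configure_raw_dp_ctx;
};

static int demo_ctx_size(void) { return 64; }
static int demo_configure(void *ctx) { (void)ctx; return 0; }

static const struct demo_crypto_dev_ops demo_ops = {
    .sym_get_raw_dp_ctx_size = demo_ctx_size,
    .sym_configure_raw_dp_ctx = demo_configure,
};
```

The framework then dispatches through these pointers, so a PMD that leaves them NULL simply does not advertise `RTE_CRYPTODEV_FF_SYM_RAW_DP`.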

Re: [dpdk-dev] [PATCH v3 1/3] eventdev: add rx queue info get api

2021-09-07 Thread Jerin Jacob

On Tue, Sep 7, 2021 at 12:15 PM Ganapati Kundapura
 wrote:
>
> Added the rte_event_eth_rx_adapter_queue_info_get() API to get Rx queue
> information - event queue identifier, flags for handling received packets,
> scheduler type, event priority, polling frequency of the receive queue
> and flow identifier - in the rte_event_eth_rx_adapter_queue_info structure
>
> Signed-off-by: Ganapati Kundapura 
>
> ---
> v3:
> * Split the single patch into separate implementation, test and
>   documentation patches

Please squash 1/3 and 3/3.

>
> v2:
> * Fixed build issue due to missing entry in version.map
>
> v1:
> * Initial patch with implementation, test and doc together
> ---
>  lib/eventdev/eventdev_pmd.h | 31 ++
>  lib/eventdev/rte_event_eth_rx_adapter.c | 76 
> +
>  lib/eventdev/rte_event_eth_rx_adapter.h | 71 ++
>  lib/eventdev/version.map|  1 +
>  4 files changed, 179 insertions(+)
>
> diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
> index 0f724ac..20cc0a7 100644
> --- a/lib/eventdev/eventdev_pmd.h
> +++ b/lib/eventdev/eventdev_pmd.h
> @@ -561,6 +561,35 @@ typedef int (*eventdev_eth_rx_adapter_queue_del_t)
> const struct rte_eth_dev *eth_dev,
> int32_t rx_queue_id);
>
> +struct rte_event_eth_rx_adapter_queue_info;
> +
> +/**
> + * Retrieve information about Rx queue. This callback is invoked if
> + * the caps returned from the eventdev_eth_rx_adapter_caps_get(, eth_port_id)
> + * has RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT set.

It will be useful for the !RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT case too.



> + *
> + * @param dev
> + *  Event device pointer
> + *
> + * @param eth_dev
> + *  Ethernet device pointer
> + *
> + * @param rx_queue_id
> + *  Ethernet device receive queue index.
> + *
> + * @param[out] info
> + *  Pointer to rte_event_eth_rx_adapter_queue_info structure
> + *
> + * @return
> + *  - 0: Success
> + *  - <0: Error code on failure.
> + */
> +typedef int (*eventdev_eth_rx_adapter_queue_info_get_t)
> +   (const struct rte_eventdev *dev,
> +   const struct rte_eth_dev *eth_dev,
> +   uint16_t rx_queue_id,
> +   struct rte_event_eth_rx_adapter_queue_info *info);
> +
>  /**
>   * Start ethernet Rx adapter. This callback is invoked if
>   * the caps returned from eventdev_eth_rx_adapter_caps_get(.., eth_port_id)
> @@ -1107,6 +1136,8 @@ struct rte_eventdev_ops {
> /**< Add Rx queues to ethernet Rx adapter */
> eventdev_eth_rx_adapter_queue_del_t eth_rx_adapter_queue_del;
> /**< Delete Rx queues from ethernet Rx adapter */
> +   eventdev_eth_rx_adapter_queue_info_get_t 
> eth_rx_adapter_queue_info_get;
> +   /**< Get Rx adapter queue info */
> eventdev_eth_rx_adapter_start_t eth_rx_adapter_start;
> /**< Start ethernet Rx adapter */
> eventdev_eth_rx_adapter_stop_t eth_rx_adapter_stop;
> diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c 
> b/lib/eventdev/rte_event_eth_rx_adapter.c
> index 7c94c73..98184fb 100644
> --- a/lib/eventdev/rte_event_eth_rx_adapter.c
> +++ b/lib/eventdev/rte_event_eth_rx_adapter.c
> @@ -2811,3 +2811,79 @@ rte_event_eth_rx_adapter_cb_register(uint8_t id,
>
> return 0;
>  }
> +
> +int
> +rte_event_eth_rx_adapter_queue_info_get(uint8_t id, uint16_t eth_dev_id,
> +   uint16_t rx_queue_id,
> +   struct rte_event_eth_rx_adapter_queue_info *info)
> +{
> +   struct rte_eventdev *dev;
> +   struct eth_device_info *dev_info;
> +   struct rte_event_eth_rx_adapter *rx_adapter;
> +   struct eth_rx_queue_info *queue_info;
> +   struct rte_event *qi_ev;
> +   int ret;
> +   uint32_t cap;
> +
> +   RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
> +   RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
> +
> +   if (rx_queue_id >= rte_eth_devices[eth_dev_id].data->nb_rx_queues) {
> +   RTE_EDEV_LOG_ERR("Invalid rx queue_id %u", rx_queue_id);
> +   return -EINVAL;
> +   }
> +
> +   if (info == NULL) {
> +   RTE_EDEV_LOG_ERR("Rx queue info cannot be NULL");
> +   return -EINVAL;
> +   }
> +
> +   rx_adapter = rxa_id_to_adapter(id);
> +   if (rx_adapter == NULL)
> +   return -EINVAL;
> +
> +   dev = &rte_eventdevs[rx_adapter->eventdev_id];
> +   ret = rte_event_eth_rx_adapter_caps_get(rx_adapter->eventdev_id,
> +   eth_dev_id,
> +   &cap);
> +   if (ret) {
> +   RTE_EDEV_LOG_ERR("Failed to get adapter caps edev %" PRIu8
> +"eth port %" PRIu16, id, eth_dev_id);
> +   return ret;
> +   }
> +
> +   if (cap & RTE_EVENT_ETH_R
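The new API in the patch above follows the usual eventdev pattern of validating every identifier and out-pointer before touching adapter state. A minimal standalone illustration of that argument-checking shape (not the library code; the helper name and parameters are invented):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Reject out-of-range queue ids and NULL out-pointers up front,
 * returning -EINVAL as the eventdev APIs do. */
static int queue_info_get(int queue_id, int nb_queues, void *info)
{
    if (queue_id < 0 || queue_id >= nb_queues)
        return -EINVAL;
    if (info == NULL)
        return -EINVAL;
    return 0; /* would fill *info here */
}
```

Validating before any lookup is what lets the real function report precise errors ("Invalid rx queue_id", "Rx queue info cannot be NULL") without partially mutating state.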

Re: [dpdk-dev] [PATCH] net/octeontx2: configure MTU value correctly

2021-09-07 Thread Jerin Jacob
On Tue, Aug 10, 2021 at 12:52 PM Hanumanth Reddy Pothula
 wrote:
>
> Update MTU value based on PTP enable status and reserve eight
> bytes in TX path to accommodate VLAN tags.
>
> If PTP is enabled maximum allowed MTU is 9200 otherwise it's 9208.
>
> Signed-off-by: Hanumanth Reddy Pothula 


Updated as

[for-next-net]dell[dpdk-next-net-mrvl] $ git show
commit b6b92bb4bf28c39e55a538741cf408041a28f412 (HEAD -> for-next-net)
Author: Hanumanth Reddy Pothula 
Date:   Tue Aug 10 12:51:00 2021 +0530

net/octeontx2: fix MTU value

Update MTU value based on PTP enable status and reserve eight
bytes in TX path to accommodate VLAN tags.

If PTP is enabled maximum allowed MTU is 9200 otherwise it's 9208.

Fixes: b5dc3140448e ("net/octeontx2: support base PTP")
Cc: sta...@dpdk.org

Signed-off-by: Hanumanth Reddy Pothula 
Acked-by: Jerin Jacob 


Applied to dpdk-next-net-mrvl/for-next-net. Thanks





> ---
>  drivers/net/octeontx2/otx2_ethdev_ops.c | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c 
> b/drivers/net/octeontx2/otx2_ethdev_ops.c
> index 5a4501208e..552e6bd43d 100644
> --- a/drivers/net/octeontx2/otx2_ethdev_ops.c
> +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
> @@ -17,7 +17,8 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> struct nix_frs_cfg *req;
> int rc;
>
> -   frame_size += NIX_TIMESYNC_RX_OFFSET * otx2_ethdev_is_ptp_en(dev);
> +   if (dev->configured && otx2_ethdev_is_ptp_en(dev))
> +   frame_size += NIX_TIMESYNC_RX_OFFSET;
>
> /* Check if MTU is within the allowed range */
> if (frame_size < NIX_MIN_FRS || frame_size > NIX_MAX_FRS)
> @@ -547,6 +548,11 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct 
> rte_eth_dev_info *devinfo)
> devinfo->max_vfs = pci_dev->max_vfs;
> devinfo->max_mtu = devinfo->max_rx_pktlen - NIX_L2_OVERHEAD;
> devinfo->min_mtu = devinfo->min_rx_bufsize - NIX_L2_OVERHEAD;
> +   if (dev->configured && otx2_ethdev_is_ptp_en(dev)) {
> +   devinfo->max_mtu -=  NIX_TIMESYNC_RX_OFFSET;
> +   devinfo->min_mtu -=  NIX_TIMESYNC_RX_OFFSET;
> +   devinfo->max_rx_pktlen -= NIX_TIMESYNC_RX_OFFSET;
> +   }
>
> devinfo->rx_offload_capa = dev->rx_offload_capa;
> devinfo->tx_offload_capa = dev->tx_offload_capa;
> --
> 2.25.1
>


Re: [dpdk-dev] [PATCH v13] eventdev: simplify Rx adapter event vector config

2021-09-07 Thread Jerin Jacob
On Fri, Aug 20, 2021 at 1:04 PM Naga Harish K, S V
 wrote:
>
>
>
> -Original Message-
> From: Jayatheerthan, Jay 
> Sent: Wednesday, August 18, 2021 1:53 PM
> To: pbhagavat...@marvell.com; jer...@marvell.com; Ray Kinsella 
> ; Shijith Thotton ; Naga Harish K, S V 
> 
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v13] eventdev: simplify Rx adapter event 
> vector config
>
> HI Harish,
> Could you review this patch ?
>
> -Jay
>
>
> > -Original Message-
> > From: pbhagavat...@marvell.com 
> > Sent: Wednesday, August 18, 2021 12:27 PM
> > To: jer...@marvell.com; Ray Kinsella ; Pavan Nikhilesh
> > ; Shijith Thotton ;
> > Jayatheerthan, Jay 
> > Cc: dev@dpdk.org
> > Subject: [dpdk-dev] [PATCH v13] eventdev: simplify Rx adapter event
> > vector config
> >
> > From: Pavan Nikhilesh 
> >
> > Include vector configuration into the structure
> > ``rte_event_eth_rx_adapter_queue_conf`` that is used to configure Rx
> > adapter ethernet device Rx queue parameters.
> > This simplifies event vector configuration as it avoids splitting
> > configuration per Rx queue.
> >
> > Signed-off-by: Pavan Nikhilesh 
> > Acked-by: Jay Jayatheerthan 
> > ---
> >  v13 Changes:
> >  - Fix cnxk driver compilation.
> >  v12 Changes:
> >  - Remove deprecation notice.
> >  - Remove unnecessary change Id.
> >
> >  app/test-eventdev/test_pipeline_common.c |  16 +-
> >  doc/guides/rel_notes/deprecation.rst |   9 --
> >  drivers/event/cnxk/cn10k_eventdev.c  |  77 --
> >  drivers/event/cnxk/cnxk_eventdev_adptr.c |  41 ++
> >  lib/eventdev/eventdev_pmd.h  |  29 
> >  lib/eventdev/rte_event_eth_rx_adapter.c  | 179 ---
> >  lib/eventdev/rte_event_eth_rx_adapter.h  |  30 
> >  lib/eventdev/version.map |   1 -
> >  8 files changed, 104 insertions(+), 278 deletions(-)
> >
> > diff --git a/app/test-eventdev/test_pipeline_common.c
> > b/app/test-eventdev/test_pipeline_common.c
> > index 6ee530d4cd..2697547641 100644
> > --- a/app/test-eventdev/test_pipeline_common.c
> > +++ b/app/test-eventdev/test_pipeline_common.c
> > @@ -332,7 +332,6 @@ pipeline_event_rx_adapter_setup(struct evt_options 
> > *opt, uint8_t stride,
> >   uint16_t prod;
> >   struct rte_mempool *vector_pool = NULL;
> >   struct rte_event_eth_rx_adapter_queue_conf queue_conf;
> > - struct rte_event_eth_rx_adapter_event_vector_config vec_conf;
> >
> >   memset(&queue_conf, 0,
> >   sizeof(struct rte_event_eth_rx_adapter_queue_conf));
> > @@ -398,8 +397,12 @@ pipeline_event_rx_adapter_setup(struct evt_options 
> > *opt, uint8_t stride,
> >   }
> >
> >   if (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR) {
> > + queue_conf.vector_sz = opt->vector_size;
> > + queue_conf.vector_timeout_ns =
> > + opt->vector_tmo_nsec;
> >   queue_conf.rx_queue_flags |=
> >   RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR;
> > + queue_conf.vector_mp = vector_pool;
> >   } else {
> >   evt_err("Rx adapter doesn't support event 
> > vector");
> >   return -EINVAL;
> > @@ -419,17 +422,6 @@ pipeline_event_rx_adapter_setup(struct evt_options 
> > *opt, uint8_t stride,
> >   return ret;
> >   }
> >
> > - if (opt->ena_vector) {
> > - vec_conf.vector_sz = opt->vector_size;
> > - vec_conf.vector_timeout_ns = opt->vector_tmo_nsec;
> > - vec_conf.vector_mp = vector_pool;
> > - if 
> > (rte_event_eth_rx_adapter_queue_event_vector_config(
> > - prod, prod, -1, &vec_conf) < 0) {
> > - evt_err("Failed to configure event 
> > vectorization for Rx adapter");
> > - return -EINVAL;
> > - }
> > - }
> > -
> >   if (!(cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) {
> >   uint32_t service_id = -1U;
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst
> > b/doc/guides/rel_notes/deprecation.rst
> > index 76a4abfd6b..2c37d7222c 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -257,15 +257,6 @@ Deprecation Notices
> >An 8-byte reserved field will be added to the structure 
> > ``rte_event_timer`` to
> >support future extensions.
> >
> > -* eventdev: The structure ``rte_event_eth_rx_adapter_queue_conf``
> > will be
> > -  extended to include
> > ``rte_event_eth_rx_adapter_event_vector_config`` elements
> > -  and the function
> > ``rte_event_eth_rx_adapter_queue_event_vector_config`` will
> > -  be removed in DPDK 21.11.
> > -
> > -  An application can enable event vector

Re: [dpdk-dev] [PATCH v3] eventdev: update crypto adapter metadata structures

2021-09-07 Thread Jerin Jacob
On Tue, Aug 31, 2021 at 1:27 PM Shijith Thotton  wrote:
>
> In the crypto adapter metadata, the reserved bytes in the request info
> structure are a placeholder for the response info. This enforces an
> ordering of operations when the structures are updated using memcpy,
> to avoid overwriting the response info. It is logical to move the
> reserved space out of the request info, which also solves the ordering
> issue mentioned above.
>
> This patch removes the reserved field from the request info and changes
> the event crypto metadata type from a union to a structure, to make
> room for the response info.
>
> App and drivers are updated as per metadata change.
>
> Signed-off-by: Shijith Thotton 
> Acked-by: Anoob Joseph 


@Gujjar, Abhinandan S  @Akhil Goyal

Could you review the crypto adapter-related change?



> ---
> v3:
> * Updated ABI section of release notes.
>
> v2:
> * Updated deprecation notice.
>
> v1:
> * Rebased.
>
>  app/test/test_event_crypto_adapter.c  | 14 +++---
>  doc/guides/rel_notes/deprecation.rst  |  6 --
>  doc/guides/rel_notes/release_21_11.rst|  2 ++
>  drivers/crypto/octeontx/otx_cryptodev_ops.c   |  8 
>  drivers/crypto/octeontx2/otx2_cryptodev_ops.c |  4 ++--
>  .../event/octeontx2/otx2_evdev_crypto_adptr_tx.h  |  4 ++--
>  lib/eventdev/rte_event_crypto_adapter.c   |  8 
>  lib/eventdev/rte_event_crypto_adapter.h   | 15 +--
>  8 files changed, 26 insertions(+), 35 deletions(-)
>
> diff --git a/app/test/test_event_crypto_adapter.c 
> b/app/test/test_event_crypto_adapter.c
> index 3ad20921e2..0d73694d3a 100644
> --- a/app/test/test_event_crypto_adapter.c
> +++ b/app/test/test_event_crypto_adapter.c
> @@ -168,7 +168,7 @@ test_op_forward_mode(uint8_t session_less)
>  {
> struct rte_crypto_sym_xform cipher_xform;
> struct rte_cryptodev_sym_session *sess;
> -   union rte_event_crypto_metadata m_data;
> +   struct rte_event_crypto_metadata m_data;
> struct rte_crypto_sym_op *sym_op;
> struct rte_crypto_op *op;
> struct rte_mbuf *m;
> @@ -368,7 +368,7 @@ test_op_new_mode(uint8_t session_less)
>  {
> struct rte_crypto_sym_xform cipher_xform;
> struct rte_cryptodev_sym_session *sess;
> -   union rte_event_crypto_metadata m_data;
> +   struct rte_event_crypto_metadata m_data;
> struct rte_crypto_sym_op *sym_op;
> struct rte_crypto_op *op;
> struct rte_mbuf *m;
> @@ -406,7 +406,7 @@ test_op_new_mode(uint8_t session_less)
> if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA) {
> /* Fill in private user data information */
> rte_memcpy(&m_data.response_info, &response_info,
> -  sizeof(m_data));
> +  sizeof(response_info));
> rte_cryptodev_sym_session_set_user_data(sess,
> &m_data, sizeof(m_data));
> }
> @@ -426,7 +426,7 @@ test_op_new_mode(uint8_t session_less)
> op->private_data_offset = len;
> /* Fill in private data information */
> rte_memcpy(&m_data.response_info, &response_info,
> -  sizeof(m_data));
> +  sizeof(response_info));
> rte_memcpy((uint8_t *)op + len, &m_data, sizeof(m_data));
> }
>
> @@ -519,7 +519,7 @@ configure_cryptodev(void)
> DEFAULT_NUM_XFORMS *
> sizeof(struct rte_crypto_sym_xform) +
> MAXIMUM_IV_LENGTH +
> -   sizeof(union rte_event_crypto_metadata),
> +   sizeof(struct rte_event_crypto_metadata),
> rte_socket_id());
> if (params.op_mpool == NULL) {
> RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
> @@ -549,12 +549,12 @@ configure_cryptodev(void)
>  * to include the session headers & private data
>  */
> session_size = 
> rte_cryptodev_sym_get_private_session_size(TEST_CDEV_ID);
> -   session_size += sizeof(union rte_event_crypto_metadata);
> +   session_size += sizeof(struct rte_event_crypto_metadata);
>
> params.session_mpool = rte_cryptodev_sym_session_pool_create(
> "CRYPTO_ADAPTER_SESSION_MP",
> MAX_NB_SESSIONS, 0, 0,
> -   sizeof(union rte_event_crypto_metadata),
> +   sizeof(struct rte_event_crypto_metadata),
> SOCKET_ID_ANY);
> TEST_ASSERT_NOT_NULL(params.session_mpool,
> "session mempool allocation failed\n");
> diff --git a/doc/guides/rel_notes/deprecation.rst 
> b/doc/guides/rel_notes/deprecation.rst
> index 76a4abfd6b..58ee95c020 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -2

Re: [dpdk-dev] [EXT] Re: [PATCH v3] eventdev: update crypto adapter metadata structures

2021-09-07 Thread Akhil Goyal
> On Tue, Aug 31, 2021 at 1:27 PM Shijith Thotton 
> wrote:
> >
> > In the crypto adapter metadata, the reserved bytes in the request info
> > structure are a placeholder for the response info. This enforces an
> > ordering of operations when the structures are updated using memcpy,
> > to avoid overwriting the response info. It is logical to move the
> > reserved space out of the request info, which also solves the ordering
> > issue mentioned above.
> >
> > This patch removes the reserved field from the request info and changes
> > the event crypto metadata type from a union to a structure, to make
> > room for the response info.
> >
> > App and drivers are updated as per metadata change.
> >
> > Signed-off-by: Shijith Thotton 
> > Acked-by: Anoob Joseph 
> 
> 
> @Gujjar, Abhinandan S  @Akhil Goyal
> 
> Could you review the crypto adapter-related change?
> 
Acked-by: Akhil Goyal 


[dpdk-dev] [PATCH v3 01/10] crypto/dpaa_sec: support DES-CBC

2021-09-07 Thread Hemant Agrawal
From: Gagandeep Singh 

Add DES-CBC support and enable the available cipher-only
test cases.

Signed-off-by: Gagandeep Singh 
---
 doc/guides/cryptodevs/features/dpaa_sec.ini |  1 +
 drivers/crypto/dpaa_sec/dpaa_sec.c  | 13 +
 drivers/crypto/dpaa_sec/dpaa_sec.h  | 20 
 3 files changed, 34 insertions(+)

diff --git a/doc/guides/cryptodevs/features/dpaa_sec.ini 
b/doc/guides/cryptodevs/features/dpaa_sec.ini
index 243f3e1d67..5d0d04d601 100644
--- a/doc/guides/cryptodevs/features/dpaa_sec.ini
+++ b/doc/guides/cryptodevs/features/dpaa_sec.ini
@@ -24,6 +24,7 @@ AES CBC (256) = Y
 AES CTR (128) = Y
 AES CTR (192) = Y
 AES CTR (256) = Y
+DES CBC   = Y
 3DES CBC  = Y
 SNOW3G UEA2   = Y
 ZUC EEA3  = Y
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c 
b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 7534f80195..0a58f4e917 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -451,6 +451,7 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
switch (ses->cipher_alg) {
case RTE_CRYPTO_CIPHER_AES_CBC:
case RTE_CRYPTO_CIPHER_3DES_CBC:
+   case RTE_CRYPTO_CIPHER_DES_CBC:
case RTE_CRYPTO_CIPHER_AES_CTR:
case RTE_CRYPTO_CIPHER_3DES_CTR:
shared_desc_len = cnstr_shdsc_blkcipher(
@@ -2040,6 +2041,10 @@ dpaa_sec_cipher_init(struct rte_cryptodev *dev 
__rte_unused,
session->cipher_key.alg = OP_ALG_ALGSEL_AES;
session->cipher_key.algmode = OP_ALG_AAI_CBC;
break;
+   case RTE_CRYPTO_CIPHER_DES_CBC:
+   session->cipher_key.alg = OP_ALG_ALGSEL_DES;
+   session->cipher_key.algmode = OP_ALG_AAI_CBC;
+   break;
case RTE_CRYPTO_CIPHER_3DES_CBC:
session->cipher_key.alg = OP_ALG_ALGSEL_3DES;
session->cipher_key.algmode = OP_ALG_AAI_CBC;
@@ -2215,6 +2220,10 @@ dpaa_sec_chain_init(struct rte_cryptodev *dev 
__rte_unused,
session->cipher_key.alg = OP_ALG_ALGSEL_AES;
session->cipher_key.algmode = OP_ALG_AAI_CBC;
break;
+   case RTE_CRYPTO_CIPHER_DES_CBC:
+   session->cipher_key.alg = OP_ALG_ALGSEL_DES;
+   session->cipher_key.algmode = OP_ALG_AAI_CBC;
+   break;
case RTE_CRYPTO_CIPHER_3DES_CBC:
session->cipher_key.alg = OP_ALG_ALGSEL_3DES;
session->cipher_key.algmode = OP_ALG_AAI_CBC;
@@ -2664,6 +2673,10 @@ dpaa_sec_ipsec_proto_init(struct rte_crypto_cipher_xform 
*cipher_xform,
session->cipher_key.alg = OP_PCL_IPSEC_AES_CBC;
session->cipher_key.algmode = OP_ALG_AAI_CBC;
break;
+   case RTE_CRYPTO_CIPHER_DES_CBC:
+   session->cipher_key.alg = OP_PCL_IPSEC_DES;
+   session->cipher_key.algmode = OP_ALG_AAI_CBC;
+   break;
case RTE_CRYPTO_CIPHER_3DES_CBC:
session->cipher_key.alg = OP_PCL_IPSEC_3DES;
session->cipher_key.algmode = OP_ALG_AAI_CBC;
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h 
b/drivers/crypto/dpaa_sec/dpaa_sec.h
index 2e0ab93ff0..9685010f3f 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -482,6 +482,26 @@ static const struct rte_cryptodev_capabilities 
dpaa_sec_capabilities[] = {
}, }
}, }
},
+   {   /* DES CBC */
+   .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+   {.sym = {
+   .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+   {.cipher = {
+   .algo = RTE_CRYPTO_CIPHER_DES_CBC,
+   .block_size = 8,
+   .key_size = {
+   .min = 8,
+   .max = 8,
+   .increment = 0
+   },
+   .iv_size = {
+   .min = 8,
+   .max = 8,
+   .increment = 0
+   }
+   }, }
+   }, }
+   },
{   /* 3DES CBC */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
{.sym = {
-- 
2.17.1



[dpdk-dev] [PATCH v3 02/10] crypto/dpaa_sec: support non-HMAC auth algos

2021-09-07 Thread Hemant Agrawal
From: Gagandeep Singh 

This patch adds support for the non-HMAC MD5 and SHAx auth algorithms.

Signed-off-by: Gagandeep Singh 
---
 doc/guides/cryptodevs/features/dpaa_sec.ini |   8 +-
 drivers/crypto/dpaa_sec/dpaa_sec.c  |  55 +++--
 drivers/crypto/dpaa_sec/dpaa_sec.h  | 126 
 3 files changed, 180 insertions(+), 9 deletions(-)

diff --git a/doc/guides/cryptodevs/features/dpaa_sec.ini 
b/doc/guides/cryptodevs/features/dpaa_sec.ini
index 5d0d04d601..eab14da96c 100644
--- a/doc/guides/cryptodevs/features/dpaa_sec.ini
+++ b/doc/guides/cryptodevs/features/dpaa_sec.ini
@@ -33,11 +33,17 @@ ZUC EEA3  = Y
 ; Supported authentication algorithms of the 'dpaa_sec' crypto driver.
 ;
 [Auth]
+MD5  = Y
 MD5 HMAC = Y
+SHA1 = Y
 SHA1 HMAC= Y
+SHA224   = Y
 SHA224 HMAC  = Y
+SHA256   = Y
 SHA256 HMAC  = Y
+SHA384   = Y
 SHA384 HMAC  = Y
+SHA512   = Y
 SHA512 HMAC  = Y
 SNOW3G UIA2  = Y
 ZUC EIA3 = Y
@@ -53,4 +59,4 @@ AES GCM (256) = Y
 ;
 ; Supported Asymmetric algorithms of the 'dpaa_sec' crypto driver.
 ;
-[Asymmetric]
\ No newline at end of file
+[Asymmetric]
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c 
b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 0a58f4e917..4f5d9d7f49 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -486,6 +486,18 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
alginfo_a.algtype = ses->auth_key.alg;
alginfo_a.algmode = ses->auth_key.algmode;
switch (ses->auth_alg) {
+   case RTE_CRYPTO_AUTH_MD5:
+   case RTE_CRYPTO_AUTH_SHA1:
+   case RTE_CRYPTO_AUTH_SHA224:
+   case RTE_CRYPTO_AUTH_SHA256:
+   case RTE_CRYPTO_AUTH_SHA384:
+   case RTE_CRYPTO_AUTH_SHA512:
+   shared_desc_len = cnstr_shdsc_hash(
+   cdb->sh_desc, true,
+   swap, SHR_NEVER, &alginfo_a,
+   !ses->dir,
+   ses->digest_length);
+   break;
case RTE_CRYPTO_AUTH_MD5_HMAC:
case RTE_CRYPTO_AUTH_SHA1_HMAC:
case RTE_CRYPTO_AUTH_SHA224_HMAC:
@@ -2077,43 +2089,70 @@ dpaa_sec_auth_init(struct rte_cryptodev *dev 
__rte_unused,
 {
session->ctxt = DPAA_SEC_AUTH;
session->auth_alg = xform->auth.algo;
-   session->auth_key.data = rte_zmalloc(NULL, xform->auth.key.length,
+   session->auth_key.length = xform->auth.key.length;
+   if (xform->auth.key.length) {
+   session->auth_key.data =
+   rte_zmalloc(NULL, xform->auth.key.length,
 RTE_CACHE_LINE_SIZE);
-   if (session->auth_key.data == NULL && xform->auth.key.length > 0) {
-   DPAA_SEC_ERR("No Memory for auth key");
-   return -ENOMEM;
+   if (session->auth_key.data == NULL) {
+   DPAA_SEC_ERR("No Memory for auth key");
+   return -ENOMEM;
+   }
+   memcpy(session->auth_key.data, xform->auth.key.data,
+   xform->auth.key.length);
+
}
-   session->auth_key.length = xform->auth.key.length;
session->digest_length = xform->auth.digest_length;
if (session->cipher_alg == RTE_CRYPTO_CIPHER_NULL) {
session->iv.offset = xform->auth.iv.offset;
session->iv.length = xform->auth.iv.length;
}
 
-   memcpy(session->auth_key.data, xform->auth.key.data,
-  xform->auth.key.length);
-
switch (xform->auth.algo) {
+   case RTE_CRYPTO_AUTH_SHA1:
+   session->auth_key.alg = OP_ALG_ALGSEL_SHA1;
+   session->auth_key.algmode = OP_ALG_AAI_HASH;
+   break;
case RTE_CRYPTO_AUTH_SHA1_HMAC:
session->auth_key.alg = OP_ALG_ALGSEL_SHA1;
session->auth_key.algmode = OP_ALG_AAI_HMAC;
break;
+   case RTE_CRYPTO_AUTH_MD5:
+   session->auth_key.alg = OP_ALG_ALGSEL_MD5;
+   session->auth_key.algmode = OP_ALG_AAI_HASH;
+   break;
case RTE_CRYPTO_AUTH_MD5_HMAC:
session->auth_key.alg = OP_ALG_ALGSEL_MD5;
session->auth_key.algmode = OP_ALG_AAI_HMAC;
break;
+   case RTE_CRYPTO_AUTH_SHA224:
+   session->auth_key.alg = OP_ALG_ALGSEL_SHA224;
+   session->auth_key.algmode = OP_ALG_AAI_HASH;
+   break;
case RTE_CRYPTO_AUTH_SHA224_HMAC:
session->auth_key.alg = OP_ALG_ALGSEL_SHA224;
session->auth_key.algmode = OP_ALG_AAI_HMAC;
break;
+   case RTE_CRYPTO_AUTH_SHA256:
+   session->auth_key.alg = OP_ALG_ALGSEL_SHA256;
+   session

[dpdk-dev] [PATCH v3 03/10] crypto/dpaa_sec: support AES-XCBC-MAC

2021-09-07 Thread Hemant Agrawal
From: Gagandeep Singh 

This patch adds support for AES-XCBC-MAC algo.

Signed-off-by: Gagandeep Singh 
---
 doc/guides/cryptodevs/features/dpaa_sec.ini |  1 +
 drivers/crypto/dpaa_sec/dpaa_sec.c  | 21 -
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/doc/guides/cryptodevs/features/dpaa_sec.ini 
b/doc/guides/cryptodevs/features/dpaa_sec.ini
index eab14da96c..d7bc319373 100644
--- a/doc/guides/cryptodevs/features/dpaa_sec.ini
+++ b/doc/guides/cryptodevs/features/dpaa_sec.ini
@@ -47,6 +47,7 @@ SHA512   = Y
 SHA512 HMAC  = Y
 SNOW3G UIA2  = Y
 ZUC EIA3 = Y
+AES XCBC MAC = Y
 
 ;
 ; Supported AEAD algorithms of the 'dpaa_sec' crypto driver.
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c 
b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 4f5d9d7f49..dab0ad28c0 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -524,6 +524,14 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
!ses->dir,
ses->digest_length);
break;
+   case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+   shared_desc_len = cnstr_shdsc_aes_mac(
+   cdb->sh_desc,
+   true, swap, SHR_NEVER,
+   &alginfo_a,
+   !ses->dir,
+   ses->digest_length);
+   break;
default:
DPAA_SEC_ERR("unsupported auth alg %u", ses->auth_alg);
}
@@ -2165,6 +2173,10 @@ dpaa_sec_auth_init(struct rte_cryptodev *dev 
__rte_unused,
session->auth_key.alg = OP_ALG_ALGSEL_ZUCA;
session->auth_key.algmode = OP_ALG_AAI_F9;
break;
+   case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+   session->auth_key.alg = OP_ALG_ALGSEL_AES;
+   session->auth_key.algmode = OP_ALG_AAI_XCBC_MAC;
+   break;
default:
DPAA_SEC_ERR("Crypto: Unsupported Auth specified %u",
  xform->auth.algo);
@@ -2246,6 +2258,10 @@ dpaa_sec_chain_init(struct rte_cryptodev *dev 
__rte_unused,
session->auth_key.alg = OP_ALG_ALGSEL_SHA512;
session->auth_key.algmode = OP_ALG_AAI_HMAC;
break;
+   case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+   session->auth_key.alg = OP_ALG_ALGSEL_AES;
+   session->auth_key.algmode = OP_ALG_AAI_XCBC_MAC;
+   break;
default:
DPAA_SEC_ERR("Crypto: Unsupported Auth specified %u",
  auth_xform->algo);
@@ -2685,8 +2701,11 @@ dpaa_sec_ipsec_proto_init(struct rte_crypto_cipher_xform 
*cipher_xform,
case RTE_CRYPTO_AUTH_NULL:
session->auth_key.alg = OP_PCL_IPSEC_HMAC_NULL;
break;
-   case RTE_CRYPTO_AUTH_SHA224_HMAC:
case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+   session->auth_key.alg = OP_PCL_IPSEC_AES_XCBC_MAC_96;
+   session->auth_key.algmode = OP_ALG_AAI_XCBC_MAC;
+   break;
+   case RTE_CRYPTO_AUTH_SHA224_HMAC:
case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
case RTE_CRYPTO_AUTH_SHA1:
case RTE_CRYPTO_AUTH_SHA256:
-- 
2.17.1



[dpdk-dev] [PATCH v3 04/10] crypto/dpaa_sec: add support for AES CMAC integrity check

2021-09-07 Thread Hemant Agrawal
From: Gagandeep Singh 

This patch adds support for AES_CMAC integrity in non-security mode.

Signed-off-by: Gagandeep Singh 
---
 doc/guides/cryptodevs/features/dpaa_sec.ini |  1 +
 drivers/crypto/dpaa_sec/dpaa_sec.c  | 10 +
 drivers/crypto/dpaa_sec/dpaa_sec.h  | 43 +
 3 files changed, 54 insertions(+)

diff --git a/doc/guides/cryptodevs/features/dpaa_sec.ini 
b/doc/guides/cryptodevs/features/dpaa_sec.ini
index d7bc319373..6a8f77fb1d 100644
--- a/doc/guides/cryptodevs/features/dpaa_sec.ini
+++ b/doc/guides/cryptodevs/features/dpaa_sec.ini
@@ -48,6 +48,7 @@ SHA512 HMAC  = Y
 SNOW3G UIA2  = Y
 ZUC EIA3 = Y
 AES XCBC MAC = Y
+AES CMAC (128) = Y
 
 ;
 ; Supported AEAD algorithms of the 'dpaa_sec' crypto driver.
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c 
b/drivers/crypto/dpaa_sec/dpaa_sec.c
index dab0ad28c0..7d3f971f3c 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -525,6 +525,7 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
ses->digest_length);
break;
case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+   case RTE_CRYPTO_AUTH_AES_CMAC:
shared_desc_len = cnstr_shdsc_aes_mac(
cdb->sh_desc,
true, swap, SHR_NEVER,
@@ -2177,6 +2178,10 @@ dpaa_sec_auth_init(struct rte_cryptodev *dev 
__rte_unused,
session->auth_key.alg = OP_ALG_ALGSEL_AES;
session->auth_key.algmode = OP_ALG_AAI_XCBC_MAC;
break;
+   case RTE_CRYPTO_AUTH_AES_CMAC:
+   session->auth_key.alg = OP_ALG_ALGSEL_AES;
+   session->auth_key.algmode = OP_ALG_AAI_CMAC;
+   break;
default:
DPAA_SEC_ERR("Crypto: Unsupported Auth specified %u",
  xform->auth.algo);
@@ -2262,6 +2267,10 @@ dpaa_sec_chain_init(struct rte_cryptodev *dev 
__rte_unused,
session->auth_key.alg = OP_ALG_ALGSEL_AES;
session->auth_key.algmode = OP_ALG_AAI_XCBC_MAC;
break;
+   case RTE_CRYPTO_AUTH_AES_CMAC:
+   session->auth_key.alg = OP_ALG_ALGSEL_AES;
+   session->auth_key.algmode = OP_ALG_AAI_CMAC;
+   break;
default:
DPAA_SEC_ERR("Crypto: Unsupported Auth specified %u",
  auth_xform->algo);
@@ -2697,6 +2706,7 @@ dpaa_sec_ipsec_proto_init(struct rte_crypto_cipher_xform 
*cipher_xform,
break;
case RTE_CRYPTO_AUTH_AES_CMAC:
session->auth_key.alg = OP_PCL_IPSEC_AES_CMAC_96;
+   session->auth_key.algmode = OP_ALG_AAI_CMAC;
break;
case RTE_CRYPTO_AUTH_NULL:
session->auth_key.alg = OP_PCL_IPSEC_HMAC_NULL;
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h 
b/drivers/crypto/dpaa_sec/dpaa_sec.h
index 153747c87c..faa740618f 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -738,6 +738,49 @@ static const struct rte_cryptodev_capabilities 
dpaa_sec_capabilities[] = {
}, }
}, }
},
+   {   /* AES CMAC */
+   .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+   {.sym = {
+   .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+   {.auth = {
+   .algo = RTE_CRYPTO_AUTH_AES_CMAC,
+   .block_size = 16,
+   .key_size = {
+   .min = 1,
+   .max = 16,
+   .increment = 1
+   },
+   .digest_size = {
+   .min = 12,
+   .max = 16,
+   .increment = 4
+   },
+   .iv_size = { 0 }
+   }, }
+   }, }
+   },
+   {   /* AES XCBC MAC */
+   .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+   {.sym = {
+   .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+   {.auth = {
+   .algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,
+   .block_size = 16,
+   .key_size = {
+   .min = 1,
+   .max = 16,
+   .increment = 1
+   },
+   .digest_size = {
+   .min = 12,
+   .max = 16,
+   .increment = 4
+  

[dpdk-dev] [PATCH v3 05/10] common/dpaax: caamflib load correct HFN from DESCBUF

2021-09-07 Thread Hemant Agrawal
From: Franck LENORMAND 

The offset of the HFN word and the Bearer/Dir word differs
depending on the type of PDB.

The wrong value was used.

This patch addresses the issue.

Signed-off-by: Franck LENORMAND 
---
 drivers/common/dpaax/caamflib/desc/pdcp.h |  7 +-
 drivers/common/dpaax/caamflib/desc/sdap.h | 96 ++-
 2 files changed, 80 insertions(+), 23 deletions(-)

diff --git a/drivers/common/dpaax/caamflib/desc/pdcp.h 
b/drivers/common/dpaax/caamflib/desc/pdcp.h
index 659e289a45..e97d58cbc1 100644
--- a/drivers/common/dpaax/caamflib/desc/pdcp.h
+++ b/drivers/common/dpaax/caamflib/desc/pdcp.h
@@ -270,6 +270,9 @@ enum pdb_type_e {
PDCP_PDB_TYPE_INVALID
 };
 
+#define REDUCED_PDB_DESCBUF_HFN_BEARER_DIR_OFFSET 4
+#define FULL_PDB_DESCBUF_HFN_BEARER_DIR_OFFSET 8
+
 /**
  * rta_inline_pdcp_query() - Provide indications if a key can be passed as
  *   immediate data or shall be referenced in a
@@ -2564,11 +2567,11 @@ insert_hfn_ov_op(struct program *p,
return 0;
 
case PDCP_PDB_TYPE_REDUCED_PDB:
-   hfn_pdb_offset = 4;
+   hfn_pdb_offset = REDUCED_PDB_DESCBUF_HFN_BEARER_DIR_OFFSET;
break;
 
case PDCP_PDB_TYPE_FULL_PDB:
-   hfn_pdb_offset = 8;
+   hfn_pdb_offset = FULL_PDB_DESCBUF_HFN_BEARER_DIR_OFFSET;
break;
 
default:
diff --git a/drivers/common/dpaax/caamflib/desc/sdap.h 
b/drivers/common/dpaax/caamflib/desc/sdap.h
index 6523db1733..f1c49ea3e6 100644
--- a/drivers/common/dpaax/caamflib/desc/sdap.h
+++ b/drivers/common/dpaax/caamflib/desc/sdap.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2020 NXP
+ * Copyright 2020-2021 NXP
  */
 
 #ifndef __DESC_SDAP_H__
@@ -109,12 +109,17 @@ static inline int pdcp_sdap_insert_no_int_op(struct 
program *p,
 bool swap __maybe_unused,
 struct alginfo *cipherdata,
 unsigned int dir,
-enum pdcp_sn_size sn_size)
+enum pdcp_sn_size sn_size,
+enum pdb_type_e pdb_type)
 {
int op;
uint32_t sn_mask = 0;
uint32_t length = 0;
uint32_t offset = 0;
+   int hfn_bearer_dir_offset_in_descbuf =
+   (pdb_type == PDCP_PDB_TYPE_FULL_PDB) ?
+   FULL_PDB_DESCBUF_HFN_BEARER_DIR_OFFSET :
+   REDUCED_PDB_DESCBUF_HFN_BEARER_DIR_OFFSET;
 
if (pdcp_sdap_get_sn_parameters(sn_size, swap, &offset, &length,
&sn_mask))
@@ -137,7 +142,8 @@ static inline int pdcp_sdap_insert_no_int_op(struct program 
*p,
SEQSTORE(p, MATH0, offset, length, 0);
 
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
-   MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
+   MOVEB(p, DESCBUF, hfn_bearer_dir_offset_in_descbuf,
+   MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
 
MATHB(p, SEQINSZ, SUB, MATH3, VSEQINSZ, 4, 0);
@@ -190,9 +196,14 @@ pdcp_sdap_insert_enc_only_op(struct program *p, bool swap 
__maybe_unused,
 struct alginfo *cipherdata,
 struct alginfo *authdata __maybe_unused,
 unsigned int dir, enum pdcp_sn_size sn_size,
-unsigned char era_2_sw_hfn_ovrd __maybe_unused)
+unsigned char era_2_sw_hfn_ovrd __maybe_unused,
+enum pdb_type_e pdb_type)
 {
uint32_t offset = 0, length = 0, sn_mask = 0;
+   int hfn_bearer_dir_offset_in_descbuf =
+   (pdb_type == PDCP_PDB_TYPE_FULL_PDB) ?
+   FULL_PDB_DESCBUF_HFN_BEARER_DIR_OFFSET :
+   REDUCED_PDB_DESCBUF_HFN_BEARER_DIR_OFFSET;
 
if (pdcp_sdap_get_sn_parameters(sn_size, swap, &offset, &length,
&sn_mask))
@@ -217,7 +228,8 @@ pdcp_sdap_insert_enc_only_op(struct program *p, bool swap 
__maybe_unused,
/* Word (32 bit) swap */
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
/* Load words from PDB: word 02 (HFN) + word 03 (bearer_dir)*/
-   MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
+   MOVEB(p, DESCBUF, hfn_bearer_dir_offset_in_descbuf,
+   MATH2, 0, 8, WAITCOMP | IMMED);
/* Create basic IV */
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
 
@@ -309,13 +321,18 @@ static inline int
 pdcp_sdap_insert_snoop_op(struct program *p, bool swap __maybe_unused,
  struct alginfo *cipherdata, struct alginfo *authdata,
  unsigned int dir, enum pdcp_sn_size sn_size,
- unsigned char era_2_sw_hfn_ovrd __maybe_unused)
+

[dpdk-dev] [PATCH v3 06/10] common/dpaax: caamflib do not clear DPOVRD

2021-09-07 Thread Hemant Agrawal
From: Franck LENORMAND 

For SDAP, we are not using the protocol operation to perform
the 4G/LTE operation, so the DPOVRD option is not used.

Removing it saves some space in the descriptor buffer and
some execution time.

Signed-off-by: Franck LENORMAND 
---
 drivers/common/dpaax/caamflib/desc/pdcp.h | 14 --
 drivers/common/dpaax/caamflib/desc/sdap.h |  2 +-
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/common/dpaax/caamflib/desc/pdcp.h 
b/drivers/common/dpaax/caamflib/desc/pdcp.h
index e97d58cbc1..5b3d846099 100644
--- a/drivers/common/dpaax/caamflib/desc/pdcp.h
+++ b/drivers/common/dpaax/caamflib/desc/pdcp.h
@@ -2546,7 +2546,8 @@ static inline int
 insert_hfn_ov_op(struct program *p,
 uint32_t shift,
 enum pdb_type_e pdb_type,
-unsigned char era_2_sw_hfn_ovrd)
+unsigned char era_2_sw_hfn_ovrd,
+bool clear_dpovrd_at_end)
 {
uint32_t imm = PDCP_DPOVRD_HFN_OV_EN;
uint16_t hfn_pdb_offset;
@@ -2597,13 +2598,14 @@ insert_hfn_ov_op(struct program *p,
MATHB(p, MATH0, SHLD, MATH0, MATH0, 8, 0);
MOVE(p, MATH0, 0, DESCBUF, hfn_pdb_offset, 4, IMMED);
 
-   if (rta_sec_era >= RTA_SEC_ERA_8)
+   if (clear_dpovrd_at_end && (rta_sec_era >= RTA_SEC_ERA_8)) {
/*
 * For ERA8, DPOVRD could be handled by the PROTOCOL command
 * itself. For now, this is not done. Thus, clear DPOVRD here
 * to alleviate any side-effects.
 */
MATHB(p, DPOVRD, AND, ZERO, DPOVRD, 4, STL);
+   }
 
SET_LABEL(p, keyjump);
PATCH_JUMP(p, pkeyjump, keyjump);
@@ -2989,7 +2991,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
SET_LABEL(p, pdb_end);
 
err = insert_hfn_ov_op(p, sn_size, pdb_type,
-  era_2_sw_hfn_ovrd);
+  era_2_sw_hfn_ovrd, true);
if (err)
return err;
 
@@ -3143,7 +3145,7 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
SET_LABEL(p, pdb_end);
 
err = insert_hfn_ov_op(p, sn_size, pdb_type,
-  era_2_sw_hfn_ovrd);
+  era_2_sw_hfn_ovrd, true);
if (err)
return err;
 
@@ -3319,7 +3321,7 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
}
SET_LABEL(p, pdb_end);
 
-   err = insert_hfn_ov_op(p, sn_size, pdb_type, era_2_sw_hfn_ovrd);
+   err = insert_hfn_ov_op(p, sn_size, pdb_type, era_2_sw_hfn_ovrd, true);
if (err)
return err;
 
@@ -3523,7 +3525,7 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
}
SET_LABEL(p, pdb_end);
 
-   err = insert_hfn_ov_op(p, sn_size, pdb_type, era_2_sw_hfn_ovrd);
+   err = insert_hfn_ov_op(p, sn_size, pdb_type, era_2_sw_hfn_ovrd, true);
if (err)
return err;
 
diff --git a/drivers/common/dpaax/caamflib/desc/sdap.h 
b/drivers/common/dpaax/caamflib/desc/sdap.h
index f1c49ea3e6..d5d5850b4f 100644
--- a/drivers/common/dpaax/caamflib/desc/sdap.h
+++ b/drivers/common/dpaax/caamflib/desc/sdap.h
@@ -990,7 +990,7 @@ cnstr_shdsc_pdcp_sdap_u_plane(uint32_t *descbuf,
SET_LABEL(p, pdb_end);
 
/* Insert the HFN override operation */
-   err = insert_hfn_ov_op(p, sn_size, pdb_type, era_2_sw_hfn_ovrd);
+   err = insert_hfn_ov_op(p, sn_size, pdb_type, era_2_sw_hfn_ovrd, false);
if (err)
return err;
 
-- 
2.17.1



[dpdk-dev] [PATCH v3 07/10] common/dpaax: enhance caamflib with inline keys

2021-09-07 Thread Hemant Agrawal
From: Franck LENORMAND 

Space in the descriptor buffer is scarce, as it is limited to
64 words on all platforms except ERA10 (which has 128).

As the descriptors are processed with QI, some words are added
to the descriptor that is passed.

Some descriptors used for SDAP were using too many words and
reached this limit.

This patch reduces the number of words used by removing the inlining
of some keys (done for performance) so that the descriptors fit.

Signed-off-by: Franck LENORMAND 
---
 drivers/common/dpaax/caamflib/desc/sdap.h   | 61 -
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 28 --
 2 files changed, 81 insertions(+), 8 deletions(-)

diff --git a/drivers/common/dpaax/caamflib/desc/sdap.h 
b/drivers/common/dpaax/caamflib/desc/sdap.h
index d5d5850b4f..b2497a5424 100644
--- a/drivers/common/dpaax/caamflib/desc/sdap.h
+++ b/drivers/common/dpaax/caamflib/desc/sdap.h
@@ -20,6 +20,63 @@
 #define SDAP_BITS_SIZE (SDAP_BYTE_SIZE * 8)
 #endif
 
+/**
+ * rta_inline_pdcp_sdap_query() - Indicate how many keys can be passed as
+ *   immediate data or shall be referenced in a
+ *   shared descriptor.
+ * Return: the number of keys to inline (0, 1 or 2).
+ */
+static inline int
+rta_inline_pdcp_sdap_query(enum auth_type_pdcp auth_alg,
+ enum cipher_type_pdcp cipher_alg,
+ enum pdcp_sn_size sn_size,
+ int8_t hfn_ovd)
+{
+   int nb_key_to_inline = 0;
+
+   if ((cipher_alg != PDCP_CIPHER_TYPE_NULL) &&
+   (auth_alg != PDCP_AUTH_TYPE_NULL))
+   return 2;
+   else
+   return 0;
+
+   /**
+* Shared descriptors for some of the cases do not fit in the
+* MAX_DESC_SIZE of the descriptor
+* The cases which exceed are for RTA_SEC_ERA=8 and HFN override
+* enabled and 12/18 bit uplane and either of following Algo combo.
+* - AES-SNOW
+* - AES-ZUC
+* - SNOW-SNOW
+* - SNOW-ZUC
+* - ZUC-SNOW
+* - ZUC-SNOW
+*
+* We cannot inline keys for all cases, as this will impact performance
+* due to extra memory accesses for the keys.
+*/
+
+   /* Inline only the cipher key */
+   if ((rta_sec_era == RTA_SEC_ERA_8) && hfn_ovd &&
+   ((sn_size == PDCP_SN_SIZE_12) ||
+(sn_size == PDCP_SN_SIZE_18)) &&
+   (cipher_alg != PDCP_CIPHER_TYPE_NULL) &&
+   ((auth_alg == PDCP_AUTH_TYPE_SNOW) ||
+(auth_alg == PDCP_AUTH_TYPE_ZUC))) {
+
+   nb_key_to_inline++;
+
+   /* Sub case where inlining another key is required */
+   if ((cipher_alg == PDCP_CIPHER_TYPE_AES) &&
+   (auth_alg == PDCP_AUTH_TYPE_SNOW))
+   nb_key_to_inline++;
+   }
+
+   /* Inline both keys */
+
+   return nb_key_to_inline;
+}
+
 static inline void key_loading_opti(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata)
@@ -788,8 +845,8 @@ pdcp_sdap_insert_cplane_null_op(struct program *p,
   unsigned char era_2_sw_hfn_ovrd,
   enum pdb_type_e pdb_type __maybe_unused)
 {
-   return pdcp_insert_cplane_int_only_op(p, swap, cipherdata, authdata,
-   dir, sn_size, era_2_sw_hfn_ovrd);
+   return pdcp_insert_cplane_null_op(p, swap, cipherdata, authdata, dir,
+ sn_size, era_2_sw_hfn_ovrd);
 }
 
 static inline int
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c 
b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index fe90d9d2d8..cca1963da9 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -3254,12 +3254,28 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
goto out;
}
 
-   if (rta_inline_pdcp_query(authdata.algtype,
-   cipherdata.algtype,
-   session->pdcp.sn_size,
-   session->pdcp.hfn_ovd)) {
-   cipherdata.key = DPAA2_VADDR_TO_IOVA(cipherdata.key);
-   cipherdata.key_type = RTA_DATA_PTR;
+   if (pdcp_xform->sdap_enabled) {
+   int nb_keys_to_inline =
+   rta_inline_pdcp_sdap_query(authdata.algtype,
+   cipherdata.algtype,
+   session->pdcp.sn_size,
+   session->pdcp.hfn_ovd);
+   if (nb_keys_to_inline >= 1) {
+   cipherdata.key = DPAA2_VADDR_TO_IOVA(cipherdata.key);
+   cipherdata.key_type = RTA_DATA_PTR;
+   }
+   if (nb_keys_to_inline >= 2) {
+   authdata.key 

[dpdk-dev] [PATCH v3 08/10] common/dpaax: fix IV value for shortMAC-I for SNOW algo

2021-09-07 Thread Hemant Agrawal
From: Gagandeep Singh 

The logic was incorrectly doing a conditional swap. It needs to
be a bit swap always.

Fixes: 73a24060cd70 ("crypto/dpaa2_sec: add sample PDCP descriptor APIs")
Cc: sta...@dpdk.org

Signed-off-by: Gagandeep Singh 
---
 drivers/common/dpaax/caamflib/desc/pdcp.h | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/common/dpaax/caamflib/desc/pdcp.h 
b/drivers/common/dpaax/caamflib/desc/pdcp.h
index 5b3d846099..8e8daf5ba8 100644
--- a/drivers/common/dpaax/caamflib/desc/pdcp.h
+++ b/drivers/common/dpaax/caamflib/desc/pdcp.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
  * Copyright 2008-2013 Freescale Semiconductor, Inc.
- * Copyright 2019-2020 NXP
+ * Copyright 2019-2021 NXP
  */
 
 #ifndef __DESC_PDCP_H__
@@ -3715,9 +3715,10 @@ cnstr_shdsc_pdcp_short_mac(uint32_t *descbuf,
break;
 
case PDCP_AUTH_TYPE_SNOW:
+   /* IV calculation based on 3GPP specs. 36331, section:5.3.7.4 */
iv[0] = 0x;
-   iv[1] = swap ? swab32(0x0400) : 0x0400;
-   iv[2] = swap ? swab32(0xF800) : 0xF800;
+   iv[1] = swab32(0x0400);
+   iv[2] = swab32(0xF800);
 
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
-- 
2.17.1



[dpdk-dev] [PATCH v3 09/10] crypto/dpaa_sec: force inline of the keys to save space

2021-09-07 Thread Hemant Agrawal
From: Gagandeep Singh 

This patch improves storage usage and performance by forcing
inlining of the keys.

Signed-off-by: Franck LENORMAND 
Signed-off-by: Gagandeep Singh 
---
 drivers/crypto/dpaa_sec/dpaa_sec.c | 35 ++
 1 file changed, 26 insertions(+), 9 deletions(-)

diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c 
b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 7d3f971f3c..74f30bc5a4 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2017-2019 NXP
+ *   Copyright 2017-2021 NXP
  *
  */
 
@@ -260,14 +260,31 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
p_authdata = &authdata;
}
 
-   if (rta_inline_pdcp_query(authdata.algtype,
-   cipherdata.algtype,
-   ses->pdcp.sn_size,
-   ses->pdcp.hfn_ovd)) {
-   cipherdata.key =
-   (size_t)rte_dpaa_mem_vtop((void *)
-   (size_t)cipherdata.key);
-   cipherdata.key_type = RTA_DATA_PTR;
+   if (ses->pdcp.sdap_enabled) {
+   int nb_keys_to_inline =
+   rta_inline_pdcp_sdap_query(authdata.algtype,
+   cipherdata.algtype,
+   ses->pdcp.sn_size,
+   ses->pdcp.hfn_ovd);
+   if (nb_keys_to_inline >= 1) {
+   cipherdata.key = (size_t)rte_dpaa_mem_vtop((void *)
+   (size_t)cipherdata.key);
+   cipherdata.key_type = RTA_DATA_PTR;
+   }
+   if (nb_keys_to_inline >= 2) {
+   authdata.key = (size_t)rte_dpaa_mem_vtop((void *)
+   (size_t)authdata.key);
+   authdata.key_type = RTA_DATA_PTR;
+   }
+   } else {
+   if (rta_inline_pdcp_query(authdata.algtype,
+   cipherdata.algtype,
+   ses->pdcp.sn_size,
+   ses->pdcp.hfn_ovd)) {
+   cipherdata.key = (size_t)rte_dpaa_mem_vtop((void *)
+   (size_t)cipherdata.key);
+   cipherdata.key_type = RTA_DATA_PTR;
+   }
}
 
if (ses->pdcp.domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
-- 
2.17.1



[dpdk-dev] [PATCH v3 10/10] crypto/dpaa2_sec: add error packet counters

2021-09-07 Thread Hemant Agrawal
This patch adds support for a per-queue error packet counter.
It also enhances a few related debug prints.

Signed-off-by: Hemant Agrawal 
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c 
b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index cca1963da9..c000da18dc 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1702,8 +1702,9 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op 
**ops,
 
if (unlikely(fd->simple.frc)) {
/* TODO Parse SEC errors */
-   DPAA2_SEC_ERR("SEC returned Error - %x",
+   DPAA2_SEC_DP_ERR("SEC returned Error - %x\n",
  fd->simple.frc);
+   dpaa2_qp->rx_vq.err_pkts += 1;
ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_ERROR;
} else {
ops[num_rx]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
@@ -1715,7 +1716,8 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op 
**ops,
 
dpaa2_qp->rx_vq.rx_pkts += num_rx;
 
-   DPAA2_SEC_DP_DEBUG("SEC Received %d Packets\n", num_rx);
+   DPAA2_SEC_DP_DEBUG("SEC RX pkts %d err pkts %" PRIu64 "\n", num_rx,
+   dpaa2_qp->rx_vq.err_pkts);
/*Return the total number of packets received to DPAA2 app*/
return num_rx;
 }
-- 
2.17.1



Re: [dpdk-dev] [PATCH] RFC: ethdev: add reassembly offload

2021-09-07 Thread Ferruh Yigit
On 8/23/2021 11:02 AM, Akhil Goyal wrote:
> Reassembly is a costly operation if it is done in
> software, however, if it is offloaded to HW, it can
> considerably save application cycles.
> The operation becomes even costlier if IP fragments
> are encrypted.
> 
> To resolve above two issues, a new offload
> DEV_RX_OFFLOAD_REASSEMBLY is introduced in ethdev for
> devices which can attempt reassembly of packets in hardware.
> rte_eth_dev_info is added with the reassembly capabilities
> which a device can support.
> Now, if IP fragments are encrypted, reassembly can also be
> attempted while doing inline IPsec processing.
> This is controlled by a flag in rte_security_ipsec_sa_options
> to enable reassembly of encrypted IP fragments in the inline
> path.
> 
> The resulting reassembled packet would be a typical
> segmented mbuf in case of success.
> 
> And if reassembly of fragments failed or is incomplete (if
> fragments do not come before the reass_timeout), the mbuf is
> updated with an ol_flag PKT_RX_REASSEMBLY_INCOMPLETE and
> mbuf is returned as is. Now application may decide the fate
> of the packet to wait more for fragments to come or drop.
> 
> Signed-off-by: Akhil Goyal 
> ---
>  lib/ethdev/rte_ethdev.c |  1 +
>  lib/ethdev/rte_ethdev.h | 18 +-
>  lib/mbuf/rte_mbuf_core.h|  3 ++-
>  lib/security/rte_security.h | 10 ++
>  4 files changed, 30 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 9d95cd11e1..1ab3a093cf 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -119,6 +119,7 @@ static const struct {
>   RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
>   RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
>   RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
> + RTE_RX_OFFLOAD_BIT2STR(REASSEMBLY),
>   RTE_RX_OFFLOAD_BIT2STR(SCATTER),
>   RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
>   RTE_RX_OFFLOAD_BIT2STR(SECURITY),
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index d2b27c351f..e89a4dc1eb 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1360,6 +1360,7 @@ struct rte_eth_conf {
>  #define DEV_RX_OFFLOAD_VLAN_FILTER   0x0200
>  #define DEV_RX_OFFLOAD_VLAN_EXTEND   0x0400
>  #define DEV_RX_OFFLOAD_JUMBO_FRAME   0x0800
> +#define DEV_RX_OFFLOAD_REASSEMBLY0x1000

previous '0x1000' was 'DEV_RX_OFFLOAD_CRC_STRIP'; it has been a long time since
that offload was removed, but I am not sure if re-using it causes any problem.

>  #define DEV_RX_OFFLOAD_SCATTER   0x2000
>  /**
>   * Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
> @@ -1477,6 +1478,20 @@ struct rte_eth_dev_portconf {
>   */
>  #define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID (UINT16_MAX)
>  
> +/**
> + * Reassembly capabilities that a device can support.
> + * The device which can support reassembly offload should set
> + * DEV_RX_OFFLOAD_REASSEMBLY
> + */
> +struct rte_eth_reass_capa {
> + /** Maximum time in ns that a fragment can wait for further fragments */
> + uint64_t reass_timeout;
> + /** Maximum number of fragments that device can reassemble */
> + uint16_t max_frags;
> + /** Reserved for future capabilities */
> + uint16_t reserved[3];
> +};
> +

I wonder if there is any other hardware around that supports reassembly offload;
it would be good to get more feedback on the capabilities list.

>  /**
>   * Ethernet device associated switch information
>   */
> @@ -1582,8 +1597,9 @@ struct rte_eth_dev_info {
>* embedded managed interconnect/switch.
>*/
>   struct rte_eth_switch_info switch_info;
> + /* Reassembly capabilities of a device for reassembly offload */
> + struct rte_eth_reass_capa reass_capa;
>  
> - uint64_t reserved_64s[2]; /**< Reserved for future fields */

Reserved fields were added to be able to update the struct without breaking the
ABI, so that a critical change doesn't have to wait until next ABI break 
release.
Since this is ABI break release, we can keep the reserved field and add the new
struct. Or this can be an opportunity to get rid of the reserved field.

Personally I have no objection to get rid of the reserved field, but better to
agree on this explicitly.

>   void *reserved_ptrs[2];   /**< Reserved for future fields */
>  };
>  
> diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
> index bb38d7f581..cea25c87f7 100644
> --- a/lib/mbuf/rte_mbuf_core.h
> +++ b/lib/mbuf/rte_mbuf_core.h
> @@ -200,10 +200,11 @@ extern "C" {
>  #define PKT_RX_OUTER_L4_CKSUM_BAD(1ULL << 21)
>  #define PKT_RX_OUTER_L4_CKSUM_GOOD   (1ULL << 22)
>  #define PKT_RX_OUTER_L4_CKSUM_INVALID((1ULL << 21) | (1ULL << 22))
> +#define PKT_RX_REASSEMBLY_INCOMPLETE (1ULL << 23)
>  

Similar comment with Andrew's, what is the expectation from application if this
flag exists? Can we drop it to simplify the logic in the application?

>  /* add new RX flags here, don't for

[dpdk-dev] [PATCH v4 1/2] eventdev: add rx queue info get api

2021-09-07 Thread Ganapati Kundapura
Added rte_event_eth_rx_adapter_queue_info_get() API to get rx queue
information - event queue identifier, flags for handling received packets,
scheduler type, event priority, polling frequency of the receive queue
and flow identifier in the rte_event_eth_rx_adapter_queue_info structure.

Signed-off-by: Ganapati Kundapura 

---
v4:
* squashed 1/3 and 3/3

v3:
* Split single patch into implementation, test and documentation update
  patches separately

v2:
* Fixed build issue due to missing entry in version.map

v1:
* Initial patch with implementation, test and doc together

Signed-off-by: Ganapati Kundapura 
---
 .../prog_guide/event_ethernet_rx_adapter.rst   |  8 +++
 lib/eventdev/eventdev_pmd.h| 31 +
 lib/eventdev/rte_event_eth_rx_adapter.c| 76 ++
 lib/eventdev/rte_event_eth_rx_adapter.h| 71 
 lib/eventdev/version.map   |  1 +
 5 files changed, 187 insertions(+)

diff --git a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst 
b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
index c01e5a9..9897985 100644
--- a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
+++ b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
@@ -146,6 +146,14 @@ if the callback is supported, and the counts maintained by 
the service function,
 if one exists. The service function also maintains a count of cycles for which
 it was not able to enqueue to the event device.
 
+Getting Adapter queue info
+~~
+
+The ``rte_event_eth_rx_adapter_queue_info_get()`` function reports
+flags for handling received packets, event queue identifier, scheduler type,
+event priority, polling frequency of the receive queue and flow identifier
+in struct ``rte_event_eth_rx_adapter_queue_info``.
+
 Interrupt Based Rx Queues
 ~~
 
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 0f724ac..20cc0a7 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -561,6 +561,35 @@ typedef int (*eventdev_eth_rx_adapter_queue_del_t)
const struct rte_eth_dev *eth_dev,
int32_t rx_queue_id);
 
+struct rte_event_eth_rx_adapter_queue_info;
+
+/**
+ * Retrieve information about Rx queue. This callback is invoked if
+ * the caps returned from the eventdev_eth_rx_adapter_caps_get(, eth_port_id)
+ * has RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT set.
+ *
+ * @param dev
+ *  Event device pointer
+ *
+ * @param eth_dev
+ *  Ethernet device pointer
+ *
+ * @param rx_queue_id
+ *  Ethernet device receive queue index.
+ *
+ * @param[out] info
+ *  Pointer to rte_event_eth_rx_adapter_queue_info structure
+ *
+ * @return
+ *  - 0: Success
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_rx_adapter_queue_info_get_t)
+   (const struct rte_eventdev *dev,
+   const struct rte_eth_dev *eth_dev,
+   uint16_t rx_queue_id,
+   struct rte_event_eth_rx_adapter_queue_info *info);
+
 /**
  * Start ethernet Rx adapter. This callback is invoked if
  * the caps returned from eventdev_eth_rx_adapter_caps_get(.., eth_port_id)
@@ -1107,6 +1136,8 @@ struct rte_eventdev_ops {
/**< Add Rx queues to ethernet Rx adapter */
eventdev_eth_rx_adapter_queue_del_t eth_rx_adapter_queue_del;
/**< Delete Rx queues from ethernet Rx adapter */
+   eventdev_eth_rx_adapter_queue_info_get_t eth_rx_adapter_queue_info_get;
+   /**< Get Rx adapter queue info */
eventdev_eth_rx_adapter_start_t eth_rx_adapter_start;
/**< Start ethernet Rx adapter */
eventdev_eth_rx_adapter_stop_t eth_rx_adapter_stop;
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c 
b/lib/eventdev/rte_event_eth_rx_adapter.c
index 7c94c73..98184fb 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -2811,3 +2811,79 @@ rte_event_eth_rx_adapter_cb_register(uint8_t id,
 
return 0;
 }
+
+int
+rte_event_eth_rx_adapter_queue_info_get(uint8_t id, uint16_t eth_dev_id,
+   uint16_t rx_queue_id,
+   struct rte_event_eth_rx_adapter_queue_info *info)
+{
+   struct rte_eventdev *dev;
+   struct eth_device_info *dev_info;
+   struct rte_event_eth_rx_adapter *rx_adapter;
+   struct eth_rx_queue_info *queue_info;
+   struct rte_event *qi_ev;
+   int ret;
+   uint32_t cap;
+
+   RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+   RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
+
+   if (rx_queue_id >= rte_eth_devices[eth_dev_id].data->nb_rx_queues) {
+   RTE_EDEV_LOG_ERR("Invalid rx queue_id %u", rx_queue_id);
+   return -EINVAL;
+   }
+
+   if (info == NULL) {
+   RTE_EDEV_LOG_ERR("Rx queue info cannot be NULL");
+   re

[dpdk-dev] [PATCH v4 2/2] test/event: Add rx queue info get test in rx adapter autotest

2021-09-07 Thread Ganapati Kundapura
Add unit tests for rte_event_eth_rx_adapter_queue_info_get()
in rx adapter autotest

Signed-off-by: Ganapati Kundapura 
---
 app/test/test_event_eth_rx_adapter.c | 22 ++
 1 file changed, 22 insertions(+)

diff --git a/app/test/test_event_eth_rx_adapter.c 
b/app/test/test_event_eth_rx_adapter.c
index 9198767..c642e1b 100644
--- a/app/test/test_event_eth_rx_adapter.c
+++ b/app/test/test_event_eth_rx_adapter.c
@@ -750,6 +750,27 @@ adapter_stats(void)
return TEST_SUCCESS;
 }
 
+static int
+adapter_queue_info(void)
+{
+   int err;
+   struct rte_event_eth_rx_adapter_queue_info queue_info;
+
+   err = rte_event_eth_rx_adapter_queue_info_get(TEST_INST_ID, TEST_DEV_ID,
+ 0, &queue_info);
+   TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+   err = rte_event_eth_rx_adapter_queue_info_get(TEST_INST_ID, TEST_DEV_ID,
+ -1, &queue_info);
+   TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+   err = rte_event_eth_rx_adapter_queue_info_get(TEST_INST_ID, TEST_DEV_ID,
+ 0, NULL);
+   TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+   return TEST_SUCCESS;
+}
+
 static struct unit_test_suite event_eth_rx_tests = {
.suite_name = "rx event eth adapter test suite",
.setup = testsuite_setup,
@@ -762,6 +783,7 @@ static struct unit_test_suite event_eth_rx_tests = {
adapter_multi_eth_add_del),
TEST_CASE_ST(adapter_create, adapter_free, adapter_start_stop),
TEST_CASE_ST(adapter_create, adapter_free, adapter_stats),
+   TEST_CASE_ST(adapter_create, adapter_free, adapter_queue_info),
TEST_CASES_END() /**< NULL terminate unit test array */
}
 };
-- 
2.6.4



Re: [dpdk-dev] [PATCH v3 1/3] eventdev: add rx queue info get api

2021-09-07 Thread Kundapura, Ganapati


> -Original Message-
> From: Jerin Jacob 
> Sent: 07 September 2021 13:42
> To: Kundapura, Ganapati 
> Cc: Jayatheerthan, Jay ; dpdk-dev
> ; Pavan Nikhilesh 
> Subject: Re: [PATCH v3 1/3] eventdev: add rx queue info get api
> 
>  in
> 
> On Tue, Sep 7, 2021 at 12:15 PM Ganapati Kundapura
>  wrote:
> >
> > Added rte_event_eth_rx_adapter_queue_info_get() API to get rx queue
> > information - event queue identifier, flags for handling received
> > packets, schedular type, event priority, polling frequency of the
> > receive queue and flow identifier in
> > rte_event_eth_rx_adapter_queue_info structure
> >
> > Signed-off-by: Ganapati Kundapura 
> >
> > ---
> > v3:
> > * Split single patch into implementaion, test and document updation
> >   patches separately
> 
> Please squash 1/3 and 3/3.
> 
Squashed 1/3 and 3/3
> >
> > v2:
> > * Fixed build issue due to missing entry in version.map
> >
> > v1:
> > * Initial patch with implementaion, test and doc together
> > ---
> >  lib/eventdev/eventdev_pmd.h | 31 ++
> >  lib/eventdev/rte_event_eth_rx_adapter.c | 76
> > +
> lib/eventdev/rte_event_eth_rx_adapter.h | 71
> ++
> >  lib/eventdev/version.map|  1 +
> >  4 files changed, 179 insertions(+)
> >
> > diff --git a/lib/eventdev/eventdev_pmd.h
> b/lib/eventdev/eventdev_pmd.h
> > index 0f724ac..20cc0a7 100644
> > --- a/lib/eventdev/eventdev_pmd.h
> > +++ b/lib/eventdev/eventdev_pmd.h
> > @@ -561,6 +561,35 @@ typedef int
> (*eventdev_eth_rx_adapter_queue_del_t)
> > const struct rte_eth_dev *eth_dev,
> > int32_t rx_queue_id);
> >
> > +struct rte_event_eth_rx_adapter_queue_info;
> > +
> > +/**
> > + * Retrieve information about Rx queue. This callback is invoked if
> > + * the caps returned from the eventdev_eth_rx_adapter_caps_get(,
> > +eth_port_id)
> > + * has RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT set.
> 
> It will useful for !RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT case
> too.
> 
> 
> 
> > + *
> > + * @param dev
> > + *  Event device pointer
> > + *
> > + * @param eth_dev
> > + *  Ethernet device pointer
> > + *
> > + * @param rx_queue_id
> > + *  Ethernet device receive queue index.
> > + *
> > + * @param[out] info
> > + *  Pointer to rte_event_eth_rx_adapter_queue_info structure
> > + *
> > + * @return
> > + *  - 0: Success
> > + *  - <0: Error code on failure.
> > + */
> > +typedef int (*eventdev_eth_rx_adapter_queue_info_get_t)
> > +   (const struct rte_eventdev *dev,
> > +   const struct rte_eth_dev *eth_dev,
> > +   uint16_t rx_queue_id,
> > +   struct rte_event_eth_rx_adapter_queue_info
> > +*info);
> > +
> >  /**
> >   * Start ethernet Rx adapter. This callback is invoked if
> >   * the caps returned from eventdev_eth_rx_adapter_caps_get(..,
> > eth_port_id) @@ -1107,6 +1136,8 @@ struct rte_eventdev_ops {
> > /**< Add Rx queues to ethernet Rx adapter */
> > eventdev_eth_rx_adapter_queue_del_t eth_rx_adapter_queue_del;
> > /**< Delete Rx queues from ethernet Rx adapter */
> > +   eventdev_eth_rx_adapter_queue_info_get_t
> eth_rx_adapter_queue_info_get;
> > +   /**< Get Rx adapter queue info */
> > eventdev_eth_rx_adapter_start_t eth_rx_adapter_start;
> > /**< Start ethernet Rx adapter */
> > eventdev_eth_rx_adapter_stop_t eth_rx_adapter_stop; diff --git
> > a/lib/eventdev/rte_event_eth_rx_adapter.c
> > b/lib/eventdev/rte_event_eth_rx_adapter.c
> > index 7c94c73..98184fb 100644
> > --- a/lib/eventdev/rte_event_eth_rx_adapter.c
> > +++ b/lib/eventdev/rte_event_eth_rx_adapter.c
> > @@ -2811,3 +2811,79 @@ rte_event_eth_rx_adapter_cb_register(uint8_t
> > id,
> >
> > return 0;
> >  }
> > +
> > +int
> > +rte_event_eth_rx_adapter_queue_info_get(uint8_t id, uint16_t
> eth_dev_id,
> > +   uint16_t rx_queue_id,
> > +   struct rte_event_eth_rx_adapter_queue_info
> > +*info) {
> > +   struct rte_eventdev *dev;
> > +   struct eth_device_info *dev_info;
> > +   struct rte_event_eth_rx_adapter *rx_adapter;
> > +   struct eth_rx_queue_info *queue_info;
> > +   struct rte_event *qi_ev;
> > +   int ret;
> > +   uint32_t cap;
> > +
> > +   RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
> > +   RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
> > +
> > +   if (rx_queue_id >= rte_eth_devices[eth_dev_id].data-
> >nb_rx_queues) {
> > +   RTE_EDEV_LOG_ERR("Invalid rx queue_id %u", rx_queue_id);
> > +   return -EINVAL;
> > +   }
> > +
> > +   if (info == NULL) {
> > +   RTE_EDEV_LOG_ERR("Rx queue info cannot be NULL");
> > +   return -EINVAL;
> > +   }
> > +
> > +   rx_adapter = rxa_id_to_adapter(id);
> > +   if (rx_adapter == NULL)
> > +   

Re: [dpdk-dev] [RFC V1] examples/l3fwd-power: fix memory leak for rte_pci_device

2021-09-07 Thread Thomas Monjalon
07/09/2021 05:41, Huisong Li:
> Calling rte_eth_dev_close() will release resources of eth device and close
> it. But the rte_pci_device struct isn't released when the app exits, which
> leads to a memory leak.

That's a PMD issue.
When the last port of a PCI device is closed, the device should be freed.

> + /* Retrieve device address in eth device before closing it. */
> + eth_dev = &rte_eth_devices[portid];

You should not access this array, considered internal.

> + rte_dev = eth_dev->device;
>   rte_eth_dev_close(portid);
> + ret = rte_dev_remove(rte_dev);





[dpdk-dev] [PATCH v2 0/5] support of MAC-I

2021-09-07 Thread Gagandeep Singh
This series adds support for Message Authentication Code
- Integrity (MAC-I) on DPAAX platforms.

v2-change-log:
* update commit message
* merged an existing patch with this series:
https://patches.dpdk.org/project/dpdk/patch/20210825081837.23830-1-hemant.agra...@nxp.com/mbox/

Gagandeep Singh (4):
  common/dpaax: fix IV value for shortMAC-I for SNOW algo
  test/crypto: add pdcp security short MAC-I support
  crypto/dpaa2_sec: add PDCP short MAC-I support
  crypto/dpaa_sec: add pdcp short MAC-I support

Hemant Agrawal (1):
  security: support PDCP short MAC-I

 app/test-crypto-perf/cperf_options_parsing.c  |   8 +-
 app/test/test_cryptodev.c |  48 
 ...est_cryptodev_security_pdcp_test_vectors.h | 105 +-
 doc/guides/prog_guide/rte_security.rst|  11 +-
 doc/guides/tools/cryptoperf.rst   |   4 +-
 drivers/common/dpaax/caamflib/desc/pdcp.h |   7 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |  29 +++--
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h |   9 ++
 drivers/crypto/dpaa_sec/dpaa_sec.c|   3 +
 drivers/crypto/dpaa_sec/dpaa_sec.h|  11 +-
 lib/security/rte_security.h   |   1 +
 11 files changed, 215 insertions(+), 21 deletions(-)

-- 
2.25.1



[dpdk-dev] [PATCH v2 1/5] common/dpaax: fix IV value for shortMAC-I for SNOW algo

2021-09-07 Thread Gagandeep Singh
The logic was incorrectly doing a conditional swap. It needs to
be a bit swap always.

Fixes: 73a24060cd70 ("crypto/dpaa2_sec: add sample PDCP descriptor APIs")
Cc: sta...@dpdk.org

Signed-off-by: Gagandeep Singh 
---
 drivers/common/dpaax/caamflib/desc/pdcp.h | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/common/dpaax/caamflib/desc/pdcp.h 
b/drivers/common/dpaax/caamflib/desc/pdcp.h
index 659e289a45..041c66cfba 100644
--- a/drivers/common/dpaax/caamflib/desc/pdcp.h
+++ b/drivers/common/dpaax/caamflib/desc/pdcp.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
  * Copyright 2008-2013 Freescale Semiconductor, Inc.
- * Copyright 2019-2020 NXP
+ * Copyright 2019-2021 NXP
  */
 
 #ifndef __DESC_PDCP_H__
@@ -3710,9 +3710,10 @@ cnstr_shdsc_pdcp_short_mac(uint32_t *descbuf,
break;
 
case PDCP_AUTH_TYPE_SNOW:
+   /* IV calculation based on 3GPP specs. 36331, section:5.3.7.4 */
iv[0] = 0x;
-   iv[1] = swap ? swab32(0x0400) : 0x0400;
-   iv[2] = swap ? swab32(0xF800) : 0xF800;
+   iv[1] = swab32(0x0400);
+   iv[2] = swab32(0xF800);
 
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
-- 
2.25.1



[dpdk-dev] [PATCH v2 2/5] security: support PDCP short MAC-I

2021-09-07 Thread Gagandeep Singh
From: Hemant Agrawal 

This patch adds support to handle the PDCP short MAC-I domain
along with the standard control and data domains, as it has to
be treated as a special case with PDCP protocol offload support.

ShortMAC-I is the 16 least significant bits of the calculated MAC-I. Usually,
when an RRC message is exchanged between UE and eNodeB, it is integrity and
cipher protected.

MAC-I = f(key, varShortMAC-I, count, bearer, direction).
Here varShortMAC-I is prepared by using (current cellId, PCI of the source
cell and C-RNTI of the old cell). Other parameters like count, bearer and
direction are set to all 1s.

Signed-off-by: Gagandeep Singh 
Signed-off-by: Hemant Agrawal 
---
 app/test-crypto-perf/cperf_options_parsing.c |  8 ++-
 doc/guides/prog_guide/rte_security.rst   | 11 -
 doc/guides/tools/cryptoperf.rst  |  4 ++--
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c  | 25 ++--
 lib/security/rte_security.h  |  1 +
 5 files changed, 33 insertions(+), 16 deletions(-)

diff --git a/app/test-crypto-perf/cperf_options_parsing.c 
b/app/test-crypto-perf/cperf_options_parsing.c
index e84f56cfaa..0348972c85 100644
--- a/app/test-crypto-perf/cperf_options_parsing.c
+++ b/app/test-crypto-perf/cperf_options_parsing.c
@@ -662,7 +662,8 @@ parse_pdcp_sn_sz(struct cperf_options *opts, const char 
*arg)
 
 const char *cperf_pdcp_domain_strs[] = {
[RTE_SECURITY_PDCP_MODE_CONTROL] = "control",
-   [RTE_SECURITY_PDCP_MODE_DATA] = "data"
+   [RTE_SECURITY_PDCP_MODE_DATA] = "data",
+   [RTE_SECURITY_PDCP_MODE_SHORT_MAC] = "short_mac"
 };
 
 static int
@@ -677,6 +678,11 @@ parse_pdcp_domain(struct cperf_options *opts, const char 
*arg)
cperf_pdcp_domain_strs
[RTE_SECURITY_PDCP_MODE_DATA],
RTE_SECURITY_PDCP_MODE_DATA
+   },
+   {
+   cperf_pdcp_domain_strs
+   [RTE_SECURITY_PDCP_MODE_SHORT_MAC],
+   RTE_SECURITY_PDCP_MODE_SHORT_MAC
}
};
 
diff --git a/doc/guides/prog_guide/rte_security.rst 
b/doc/guides/prog_guide/rte_security.rst
index f72bc8a78f..ad92c16868 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -1,5 +1,5 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
-Copyright 2017,2020 NXP
+Copyright 2017,2020-2021 NXP
 
 
 
@@ -408,6 +408,15 @@ PMD which supports the IPsec and PDCP protocol.
 },
 .crypto_capabilities = pmd_capabilities
 },
+   { /* PDCP Lookaside Protocol offload short MAC-I */
+.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+.protocol = RTE_SECURITY_PROTOCOL_PDCP,
+.pdcp = {
+.domain = RTE_SECURITY_PDCP_MODE_SHORT_MAC,
+.capa_flags = 0
+},
+.crypto_capabilities = pmd_capabilities
+},
 {
 .action = RTE_SECURITY_ACTION_TYPE_NONE
 }
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index be3109054d..d3963f23e3 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -316,9 +316,9 @@ The following are the application command-line options:
 Set PDCP sequence number size(n) in bits. Valid values of n will
 be 5/7/12/15/18.
 
-* ``--pdcp-domain ``
+* ``--pdcp-domain ``
 
-Set PDCP domain to specify Control/user plane.
+Set PDCP domain to specify short_mac/control/user plane.
 
 * ``--docsis-hdr-sz ``
 
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c 
b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 1ccead3641..4438486a8b 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -3102,7 +3102,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
struct rte_security_pdcp_xform *pdcp_xform = &conf->pdcp;
struct rte_crypto_sym_xform *xform = conf->crypto_xform;
struct rte_crypto_auth_xform *auth_xform = NULL;
-   struct rte_crypto_cipher_xform *cipher_xform;
+   struct rte_crypto_cipher_xform *cipher_xform = NULL;
dpaa2_sec_session *session = (dpaa2_sec_session *)sess;
struct ctxt_priv *priv;
struct dpaa2_sec_dev_private *dev_priv = dev->data->dev_private;
@@ -3134,18 +3134,18 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
flc = &priv->flc_desc[0].flc;
 
/* find xfrm types */
-   if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER && xform->next == NULL) {
-   cipher_xform = &xform->cipher;
-   } else if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
-  xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
-   session->ext_params.aead_ctxt.auth_cipher_text = true;
+   if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
cipher_xform 

[dpdk-dev] [PATCH v2 3/5] test/crypto: add pdcp security short MAC-I support

2021-09-07 Thread Gagandeep Singh
This patch adds support to test PDCP short MAC
packet handling in crypto.

Signed-off-by: Gagandeep Singh 
---
 app/test/test_cryptodev.c |  48 
 ...est_cryptodev_security_pdcp_test_vectors.h | 105 +-
 2 files changed, 152 insertions(+), 1 deletion(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 9ad0b37473..86809de90b 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -8768,6 +8768,50 @@ test_PDCP_SDAP_PROTO_encap_all(void)
return (all_err == TEST_SUCCESS) ? TEST_SUCCESS : TEST_FAILED;
 }
 
+static int
+test_PDCP_PROTO_short_mac(void)
+{
+   int i = 0, size = 0;
+   int err, all_err = TEST_SUCCESS;
+   const struct pdcp_short_mac_test *cur_test;
+
+   size = RTE_DIM(list_pdcp_smac_tests);
+
+   for (i = 0; i < size; i++) {
+   cur_test = &list_pdcp_smac_tests[i];
+   err = test_pdcp_proto(
+   i, 0, RTE_CRYPTO_CIPHER_OP_ENCRYPT,
+   RTE_CRYPTO_AUTH_OP_GENERATE, cur_test->data_in,
+   cur_test->in_len, cur_test->data_out,
+   cur_test->in_len + ((cur_test->auth_key) ? 4 : 0),
+   RTE_CRYPTO_CIPHER_NULL, NULL,
+   0, cur_test->param.auth_alg,
+   cur_test->auth_key, cur_test->param.auth_key_len,
+   0, cur_test->param.domain, 0, 0,
+   0, 0, 0);
+   if (err) {
+   printf("\t%d) %s: Short MAC test failed\n",
+   cur_test->test_idx,
+   cur_test->param.name);
+   err = TEST_FAILED;
+   } else {
+   printf("\t%d) %s: Short MAC test PASS\n",
+   cur_test->test_idx,
+   cur_test->param.name);
+   rte_hexdump(stdout, "MAC I",
+   cur_test->data_out + cur_test->in_len + 2,
+   2);
+   err = TEST_SUCCESS;
+   }
+   all_err += err;
+   }
+
+   printf("Success: %d, Failure: %d\n", size + all_err, -all_err);
+
+   return (all_err == TEST_SUCCESS) ? TEST_SUCCESS : TEST_FAILED;
+
+}
+
 static int
 test_PDCP_SDAP_PROTO_decap_all(void)
 {
@@ -14039,6 +14083,8 @@ static struct unit_test_suite 
cryptodev_snow3g_testsuite  = {
TEST_CASE_ST(ut_setup, ut_teardown,
test_snow3g_encryption_test_case_5),
 
+   TEST_CASE_ST(ut_setup, ut_teardown,
+   test_PDCP_PROTO_short_mac),
TEST_CASE_ST(ut_setup, ut_teardown,
test_snow3g_encryption_test_case_1_oop),
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -14279,6 +14325,8 @@ static struct unit_test_suite 
cryptodev_kasumi_testsuite  = {
TEST_CASE_ST(ut_setup, ut_teardown,
test_kasumi_decryption_test_case_1_oop),
 
+   TEST_CASE_ST(ut_setup, ut_teardown,
+   test_PDCP_PROTO_short_mac),
TEST_CASE_ST(ut_setup, ut_teardown,
test_kasumi_cipher_auth_test_case_1),
 
diff --git a/app/test/test_cryptodev_security_pdcp_test_vectors.h 
b/app/test/test_cryptodev_security_pdcp_test_vectors.h
index 703076479d..81fd6e606b 100644
--- a/app/test/test_cryptodev_security_pdcp_test_vectors.h
+++ b/app/test/test_cryptodev_security_pdcp_test_vectors.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (C) 2015-2016 Freescale Semiconductor,Inc.
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2021 NXP
  */
 
 #ifndef SECURITY_PDCP_TEST_VECTOR_H_
@@ -28,6 +28,109 @@ struct pdcp_test_param {
const char *name;
 };
 
+struct pdcp_short_mac_test {
+   uint32_t test_idx;
+   struct pdcp_short_mac_test_param {
+   enum rte_security_pdcp_domain domain;
+   enum rte_crypto_auth_algorithm auth_alg;
+   uint8_t auth_key_len;
+   const char *name;
+   } param;
+   const uint8_t *auth_key;
+   const uint8_t *data_in;
+   uint32_t in_len;
+   const uint8_t *data_out;
+};
+
+static const struct pdcp_short_mac_test list_pdcp_smac_tests[] = {
+   {
+   .test_idx = 1,
+   .param = {.name = "PDCP-SMAC SNOW3G UIA2",
+   .auth_alg = RTE_CRYPTO_AUTH_SNOW3G_UIA2,
+   .domain = RTE_SECURITY_PDCP_MODE_SHORT_MAC,
+   .auth_key_len = 16,
+   },
+   .auth_key = (uint8_t[]){ 0x2b, 0xd6, 0x45, 0x9f, 0x82, 0xc5,
+0xb3, 0x00, 0x95, 0x2c, 0x49, 0x10,
+0x48, 0x81, 0xff, 0x48 },
+   .data_in = (uint8_t[]){ 0x33, 0x32, 0x34, 0x

[dpdk-dev] [PATCH v2 4/5] crypto/dpaa2_sec: add PDCP short MAC-I support

2021-09-07 Thread Gagandeep Singh
This patch adds PDCP short MAC support to the dpaa2_sec driver.

Signed-off-by: Gagandeep Singh 
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 4 
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   | 9 +
 2 files changed, 13 insertions(+)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c 
b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 4438486a8b..0d3a7989cd 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -3291,6 +3291,10 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
pdcp_xform->hfn_threshold,
&cipherdata, &authdata,
0);
+
+   } else if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_SHORT_MAC) {
+   bufsize = cnstr_shdsc_pdcp_short_mac(priv->flc_desc[0].desc,
+1, swap, &authdata);
} else {
if (session->dir == DIR_ENC) {
if (pdcp_xform->sdap_enabled)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h 
b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 7dbc69f6cb..8dee0a4bda 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -941,6 +941,15 @@ static const struct rte_security_capability 
dpaa2_sec_security_cap[] = {
},
.crypto_capabilities = dpaa2_pdcp_capabilities
},
+   { /* PDCP Lookaside Protocol offload Short MAC */
+   .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+   .protocol = RTE_SECURITY_PROTOCOL_PDCP,
+   .pdcp = {
+   .domain = RTE_SECURITY_PDCP_MODE_SHORT_MAC,
+   .capa_flags = 0
+   },
+   .crypto_capabilities = dpaa2_pdcp_capabilities
+   },
{
.action = RTE_SECURITY_ACTION_TYPE_NONE
}
-- 
2.25.1



[dpdk-dev] [PATCH v2 5/5] crypto/dpaa_sec: add pdcp short MAC-I support

2021-09-07 Thread Gagandeep Singh
This patch adds PDCP security short MAC-I support to the
dpaa_sec driver.

Signed-off-by: Gagandeep Singh 
---
 drivers/crypto/dpaa_sec/dpaa_sec.c |  3 +++
 drivers/crypto/dpaa_sec/dpaa_sec.h | 11 ++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c 
b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 19d4684e24..59ac74f3d8 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -294,6 +294,9 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
ses->pdcp.hfn_threshold,
&cipherdata, &authdata,
0);
+   } else if (ses->pdcp.domain == RTE_SECURITY_PDCP_MODE_SHORT_MAC) {
+   shared_desc_len = cnstr_shdsc_pdcp_short_mac(cdb->sh_desc,
+1, swap, &authdata);
} else {
if (ses->dir == DIR_ENC) {
if (ses->pdcp.sdap_enabled)
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h 
b/drivers/crypto/dpaa_sec/dpaa_sec.h
index 368699678b..2ab9c69bb6 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2016-2020 NXP
+ *   Copyright 2016-2021 NXP
  *
  */
 
@@ -769,6 +769,15 @@ static const struct rte_security_capability 
dpaa_sec_security_cap[] = {
},
.crypto_capabilities = dpaa_pdcp_capabilities
},
+   { /* PDCP Lookaside Protocol offload Short MAC */
+   .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+   .protocol = RTE_SECURITY_PROTOCOL_PDCP,
+   .pdcp = {
+   .domain = RTE_SECURITY_PDCP_MODE_SHORT_MAC,
+   .capa_flags = 0
+   },
+   .crypto_capabilities = dpaa_pdcp_capabilities
+   },
{
.action = RTE_SECURITY_ACTION_TYPE_NONE
}
-- 
2.25.1



Re: [dpdk-dev] [PATCH v3] eventdev: update crypto adapter metadata structures

2021-09-07 Thread Gujjar, Abhinandan S
Hi Shijith,

> -Original Message-
> From: Shijith Thotton 
> Sent: Tuesday, August 31, 2021 1:27 PM
> To: dev@dpdk.org
> Cc: Shijith Thotton ; jer...@marvell.com;
> ano...@marvell.com; pbhagavat...@marvell.com; gak...@marvell.com;
> Gujjar, Abhinandan S ; Ray Kinsella
> ; Ankur Dwivedi 
> Subject: [PATCH v3] eventdev: update crypto adapter metadata structures
> 
> In crypto adapter metadata, reserved bytes in request info structure is a
> space holder for response info. It enforces an order of operation if the
> structures are updated using memcpy to avoid overwriting response info. It
> is logical to move the reserved space out of request info. It also solves the
> ordering issue mentioned before.
I would like to understand what kind of ordering issue you have faced with
the current approach. Could you please give an example/sequence and explain?

> 
> This patch removes the reserve field from request info and makes event
> crypto metadata type to structure from union to make space for response
> info.
> 
> App and drivers are updated as per metadata change.
> 
> Signed-off-by: Shijith Thotton 
> Acked-by: Anoob Joseph 
> ---
> v3:
> * Updated ABI section of release notes.
> 
> v2:
> * Updated deprecation notice.
> 
> v1:
> * Rebased.
> 
>  app/test/test_event_crypto_adapter.c  | 14 +++---
>  doc/guides/rel_notes/deprecation.rst  |  6 --
>  doc/guides/rel_notes/release_21_11.rst|  2 ++
>  drivers/crypto/octeontx/otx_cryptodev_ops.c   |  8 
>  drivers/crypto/octeontx2/otx2_cryptodev_ops.c |  4 ++--
>  .../event/octeontx2/otx2_evdev_crypto_adptr_tx.h  |  4 ++--
>  lib/eventdev/rte_event_crypto_adapter.c   |  8 
>  lib/eventdev/rte_event_crypto_adapter.h   | 15 +--
>  8 files changed, 26 insertions(+), 35 deletions(-)
> 
> diff --git a/app/test/test_event_crypto_adapter.c
> b/app/test/test_event_crypto_adapter.c
> index 3ad20921e2..0d73694d3a 100644
> --- a/app/test/test_event_crypto_adapter.c
> +++ b/app/test/test_event_crypto_adapter.c
> @@ -168,7 +168,7 @@ test_op_forward_mode(uint8_t session_less)  {
>   struct rte_crypto_sym_xform cipher_xform;
>   struct rte_cryptodev_sym_session *sess;
> - union rte_event_crypto_metadata m_data;
> + struct rte_event_crypto_metadata m_data;
>   struct rte_crypto_sym_op *sym_op;
>   struct rte_crypto_op *op;
>   struct rte_mbuf *m;
> @@ -368,7 +368,7 @@ test_op_new_mode(uint8_t session_less)  {
>   struct rte_crypto_sym_xform cipher_xform;
>   struct rte_cryptodev_sym_session *sess;
> - union rte_event_crypto_metadata m_data;
> + struct rte_event_crypto_metadata m_data;
>   struct rte_crypto_sym_op *sym_op;
>   struct rte_crypto_op *op;
>   struct rte_mbuf *m;
> @@ -406,7 +406,7 @@ test_op_new_mode(uint8_t session_less)
>   if (cap &
> RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA) {
>   /* Fill in private user data information */
>   rte_memcpy(&m_data.response_info,
> &response_info,
> -sizeof(m_data));
> +sizeof(response_info));
>   rte_cryptodev_sym_session_set_user_data(sess,
>   &m_data, sizeof(m_data));
>   }
> @@ -426,7 +426,7 @@ test_op_new_mode(uint8_t session_less)
>   op->private_data_offset = len;
>   /* Fill in private data information */
>   rte_memcpy(&m_data.response_info, &response_info,
> -sizeof(m_data));
> +sizeof(response_info));
>   rte_memcpy((uint8_t *)op + len, &m_data, sizeof(m_data));
>   }
> 
> @@ -519,7 +519,7 @@ configure_cryptodev(void)
>   DEFAULT_NUM_XFORMS *
>   sizeof(struct rte_crypto_sym_xform) +
>   MAXIMUM_IV_LENGTH +
> - sizeof(union rte_event_crypto_metadata),
> + sizeof(struct rte_event_crypto_metadata),
>   rte_socket_id());
>   if (params.op_mpool == NULL) {
>   RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
> @@ -549,12 +549,12 @@ configure_cryptodev(void)
>* to include the session headers & private data
>*/
>   session_size =
> rte_cryptodev_sym_get_private_session_size(TEST_CDEV_ID);
> - session_size += sizeof(union rte_event_crypto_metadata);
> + session_size += sizeof(struct rte_event_crypto_metadata);
> 
>   params.session_mpool = rte_cryptodev_sym_session_pool_create(
>   "CRYPTO_ADAPTER_SESSION_MP",
>   MAX_NB_SESSIONS, 0, 0,
> - sizeof(union rte_event_crypto_metadata),
> + sizeof(struct rte_event_crypto_metadata),
>   SOCKET_ID_ANY);
>   TEST_ASSERT_NOT_NULL(params.session_mpool,
>   

[dpdk-dev] [PATCH v4 1/2] eventdev: add rx queue info get api

2021-09-07 Thread Ganapati Kundapura
Added rte_event_eth_rx_adapter_queue_info_get() API to get Rx queue
information - event queue identifier, flags for handling received packets,
scheduler type, event priority, polling frequency of the receive queue
and flow identifier in the rte_event_eth_rx_adapter_queue_info structure.

Signed-off-by: Ganapati Kundapura 

---
v4:
* squashed 1/3 and 3/3

v3:
* Split single patch into implementation, test and documentation update
  patches separately

v2:
* Fixed build issue due to missing entry in version.map

v1:
* Initial patch with implementation, test and doc together
---
 .../prog_guide/event_ethernet_rx_adapter.rst   |  8 +++
 lib/eventdev/eventdev_pmd.h| 31 +
 lib/eventdev/rte_event_eth_rx_adapter.c| 76 ++
 lib/eventdev/rte_event_eth_rx_adapter.h| 71 
 lib/eventdev/version.map   |  1 +
 5 files changed, 187 insertions(+)

diff --git a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst 
b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
index c01e5a9..9897985 100644
--- a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
+++ b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
@@ -146,6 +146,14 @@ if the callback is supported, and the counts maintained by 
the service function,
 if one exists. The service function also maintains a count of cycles for which
 it was not able to enqueue to the event device.
 
+Getting Adapter queue info
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``rte_event_eth_rx_adapter_queue_info_get()`` function reports
+flags for handling received packets, event queue identifier, scheduler type,
+event priority, polling frequency of the receive queue and flow identifier
+in struct ``rte_event_eth_rx_adapter_queue_info``.
+
 Interrupt Based Rx Queues
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 0f724ac..20cc0a7 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -561,6 +561,35 @@ typedef int (*eventdev_eth_rx_adapter_queue_del_t)
const struct rte_eth_dev *eth_dev,
int32_t rx_queue_id);
 
+struct rte_event_eth_rx_adapter_queue_info;
+
+/**
+ * Retrieve information about Rx queue. This callback is invoked if
+ * the caps returned from the eventdev_eth_rx_adapter_caps_get(, eth_port_id)
+ * has RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT set.
+ *
+ * @param dev
+ *  Event device pointer
+ *
+ * @param eth_dev
+ *  Ethernet device pointer
+ *
+ * @param rx_queue_id
+ *  Ethernet device receive queue index.
+ *
+ * @param[out] info
+ *  Pointer to rte_event_eth_rx_adapter_queue_info structure
+ *
+ * @return
+ *  - 0: Success
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_rx_adapter_queue_info_get_t)
+   (const struct rte_eventdev *dev,
+   const struct rte_eth_dev *eth_dev,
+   uint16_t rx_queue_id,
+   struct rte_event_eth_rx_adapter_queue_info *info);
+
 /**
  * Start ethernet Rx adapter. This callback is invoked if
  * the caps returned from eventdev_eth_rx_adapter_caps_get(.., eth_port_id)
@@ -1107,6 +1136,8 @@ struct rte_eventdev_ops {
/**< Add Rx queues to ethernet Rx adapter */
eventdev_eth_rx_adapter_queue_del_t eth_rx_adapter_queue_del;
/**< Delete Rx queues from ethernet Rx adapter */
+   eventdev_eth_rx_adapter_queue_info_get_t eth_rx_adapter_queue_info_get;
+   /**< Get Rx adapter queue info */
eventdev_eth_rx_adapter_start_t eth_rx_adapter_start;
/**< Start ethernet Rx adapter */
eventdev_eth_rx_adapter_stop_t eth_rx_adapter_stop;
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c 
b/lib/eventdev/rte_event_eth_rx_adapter.c
index 7c94c73..98184fb 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -2811,3 +2811,79 @@ rte_event_eth_rx_adapter_cb_register(uint8_t id,
 
return 0;
 }
+
+int
+rte_event_eth_rx_adapter_queue_info_get(uint8_t id, uint16_t eth_dev_id,
+   uint16_t rx_queue_id,
+   struct rte_event_eth_rx_adapter_queue_info *info)
+{
+   struct rte_eventdev *dev;
+   struct eth_device_info *dev_info;
+   struct rte_event_eth_rx_adapter *rx_adapter;
+   struct eth_rx_queue_info *queue_info;
+   struct rte_event *qi_ev;
+   int ret;
+   uint32_t cap;
+
+   RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+   RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
+
+   if (rx_queue_id >= rte_eth_devices[eth_dev_id].data->nb_rx_queues) {
+   RTE_EDEV_LOG_ERR("Invalid rx queue_id %u", rx_queue_id);
+   return -EINVAL;
+   }
+
+   if (info == NULL) {
+   RTE_EDEV_LOG_ERR("Rx queue info cannot be NULL");
+   return -EINVAL;
+   }
+
+   rx

[dpdk-dev] [PATCH v4 2/2] test/event: Add rx queue info get test in rx adapter autotest

2021-09-07 Thread Ganapati Kundapura
Add unit tests for rte_event_eth_rx_adapter_queue_info_get()
in rx adapter autotest

Signed-off-by: Ganapati Kundapura 
---
 app/test/test_event_eth_rx_adapter.c | 22 ++
 1 file changed, 22 insertions(+)

diff --git a/app/test/test_event_eth_rx_adapter.c 
b/app/test/test_event_eth_rx_adapter.c
index 9198767..c642e1b 100644
--- a/app/test/test_event_eth_rx_adapter.c
+++ b/app/test/test_event_eth_rx_adapter.c
@@ -750,6 +750,27 @@ adapter_stats(void)
return TEST_SUCCESS;
 }
 
+static int
+adapter_queue_info(void)
+{
+   int err;
+   struct rte_event_eth_rx_adapter_queue_info queue_info;
+
+   err = rte_event_eth_rx_adapter_queue_info_get(TEST_INST_ID, TEST_DEV_ID,
+ 0, &queue_info);
+   TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+   err = rte_event_eth_rx_adapter_queue_info_get(TEST_INST_ID, TEST_DEV_ID,
+ -1, &queue_info);
+   TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+   err = rte_event_eth_rx_adapter_queue_info_get(TEST_INST_ID, TEST_DEV_ID,
+ 0, NULL);
+   TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+   return TEST_SUCCESS;
+}
+
 static struct unit_test_suite event_eth_rx_tests = {
.suite_name = "rx event eth adapter test suite",
.setup = testsuite_setup,
@@ -762,6 +783,7 @@ static struct unit_test_suite event_eth_rx_tests = {
adapter_multi_eth_add_del),
TEST_CASE_ST(adapter_create, adapter_free, adapter_start_stop),
TEST_CASE_ST(adapter_create, adapter_free, adapter_stats),
+   TEST_CASE_ST(adapter_create, adapter_free, adapter_queue_info),
TEST_CASES_END() /**< NULL terminate unit test array */
}
 };
-- 
2.6.4



[dpdk-dev] [PATCH 2/3] crypto/aesni_mb: add AES CCM 192-bit key support

2021-09-07 Thread Radu Nicolau
Add support for 192-bit keys for the AES CCM algorithm.

Signed-off-by: Declan Doherty 
Signed-off-by: Radu Nicolau 
---
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 6 ++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c | 2 +-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c 
b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index b8ab84e215..6419aed123 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -689,6 +689,12 @@ aesni_mb_set_session_aead_parameters(const MB_MGR *mb_mgr,
sess->cipher.expanded_aes_keys.encode,
sess->cipher.expanded_aes_keys.decode);
break;
+   case AES_192_BYTES:
+   sess->cipher.key_length_in_bytes = AES_192_BYTES;
+   IMB_AES_KEYEXP_192(mb_mgr, xform->aead.key.data,
+   sess->cipher.expanded_aes_keys.encode,
+   sess->cipher.expanded_aes_keys.decode);
+   break;
case AES_256_BYTES:
sess->cipher.key_length_in_bytes = AES_256_BYTES;
IMB_AES_KEYEXP_256(mb_mgr, xform->aead.key.data,
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c 
b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
index ebf75198ae..5b89be04fb 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -402,7 +402,7 @@ static const struct rte_cryptodev_capabilities 
aesni_mb_pmd_capabilities[] = {
.min = 16,
 #if IMB_VERSION(0, 54, 2) <= IMB_VERSION_NUM
.max = 32,
-   .increment = 16
+   .increment = 8
 #else
.max = 16,
.increment = 0
-- 
2.25.1



[dpdk-dev] [PATCH 1/3] crypto/aesni_mb: add NULL/NULL support

2021-09-07 Thread Radu Nicolau
Add support for NULL/NULL xform.

Signed-off-by: Declan Doherty 
Signed-off-by: Radu Nicolau 
---
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c|  3 ++
 .../crypto/aesni_mb/rte_aesni_mb_pmd_ops.c| 38 +++
 2 files changed, 41 insertions(+)

diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c 
b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index a01c826a3c..b8ab84e215 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -462,6 +462,9 @@ aesni_mb_set_session_cipher_parameters(const MB_MGR *mb_mgr,
 
/* Select cipher mode */
switch (xform->cipher.algo) {
+   case RTE_CRYPTO_CIPHER_NULL:
+   sess->cipher.mode = NULL_CIPHER;
+   return 0;
case RTE_CRYPTO_CIPHER_AES_CBC:
sess->cipher.mode = CBC;
is_aes = 1;
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c 
b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
index fc7fdfec8e..ebf75198ae 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -502,6 +502,44 @@ static const struct rte_cryptodev_capabilities 
aesni_mb_pmd_capabilities[] = {
}, }
}, }
},
+   {   /* NULL (AUTH) */
+   .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+   {.sym = {
+   .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+   {.auth = {
+   .algo = RTE_CRYPTO_AUTH_NULL,
+   .block_size = 1,
+   .key_size = {
+   .min = 0,
+   .max = 0,
+   .increment = 0
+   },
+   .digest_size = {
+   .min = 0,
+   .max = 0,
+   .increment = 0
+   },
+   .iv_size = { 0 }
+   }, },
+   }, },
+   },
+   {   /* NULL (CIPHER) */
+   .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+   {.sym = {
+   .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+   {.cipher = {
+   .algo = RTE_CRYPTO_CIPHER_NULL,
+   .block_size = 1,
+   .key_size = {
+   .min = 0,
+   .max = 0,
+   .increment = 0
+   },
+   .iv_size = { 0 }
+   }, },
+   }, }
+   },
+
 #if IMB_VERSION(0, 53, 0) <= IMB_VERSION_NUM
{   /* AES ECB */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
-- 
2.25.1



[dpdk-dev] [PATCH 3/3] crypto/aesni_gcm: add AES CCM support

2021-09-07 Thread Radu Nicolau
Add support for AES CCM.

Signed-off-by: Declan Doherty 
Signed-off-by: Radu Nicolau 
---
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c |  8 +++---
 drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c | 30 
 2 files changed, 34 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c 
b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 886e2a5aaa..ee36b36f42 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -73,13 +73,13 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops 
*gcm_ops,
key = auth_xform->auth.key.data;
sess->req_digest_length = auth_xform->auth.digest_length;
 
-   /* AES-GCM */
+   /* AES-GCM - AES-CCM */
} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
aead_xform = xform;
-
-   if (aead_xform->aead.algo != RTE_CRYPTO_AEAD_AES_GCM) {
+   if ((aead_xform->aead.algo != RTE_CRYPTO_AEAD_AES_GCM) &&
+   (aead_xform->aead.algo != RTE_CRYPTO_AEAD_AES_CCM)) {
AESNI_GCM_LOG(ERR, "The only combined operation "
-   "supported is AES GCM");
+   "supported is AES GCM/CCM");
return -ENOTSUP;
}
 
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c 
b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
index 18dbc4c18c..989d42b4b7 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
@@ -66,6 +66,36 @@ static const struct rte_cryptodev_capabilities 
aesni_gcm_pmd_capabilities[] = {
}, }
}, }
},
+   {   /* AES CCM */
+   .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+   {.sym = {
+   .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+   {.aead = {
+   .algo = RTE_CRYPTO_AEAD_AES_CCM,
+   .block_size = 16,
+   .key_size = {
+   .min = 16,
+   .max = 32,
+   .increment = 8
+   },
+   .digest_size = {
+   .min = 1,
+   .max = 16,
+   .increment = 1
+   },
+   .aad_size = {
+   .min = 0,
+   .max = 65535,
+   .increment = 1
+   },
+   .iv_size = {
+   .min = 12,
+   .max = 12,
+   .increment = 0
+   }
+   }, }
+   }, }
+   },
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
-- 
2.25.1



Re: [dpdk-dev] [PATCH v3 1/3] eventdev: add rx queue info get api

2021-09-07 Thread Jerin Jacob
On Tue, Sep 7, 2021 at 2:20 PM Kundapura, Ganapati
 wrote:
>
>
>
> > -Original Message-
> > From: Jerin Jacob 
> > Sent: 07 September 2021 13:42
> > To: Kundapura, Ganapati 
> > Cc: Jayatheerthan, Jay ; dpdk-dev
> > ; Pavan Nikhilesh 
> > Subject: Re: [PATCH v3 1/3] eventdev: add rx queue info get api
> >
> >  in
> >
> > On Tue, Sep 7, 2021 at 12:15 PM Ganapati Kundapura
> >  wrote:
> > >
> > > Added rte_event_eth_rx_adapter_queue_info_get() API to get rx queue
> > > information - event queue identifier, flags for handling received
> > > packets, schedular type, event priority, polling frequency of the
> > > receive queue and flow identifier in
> > > rte_event_eth_rx_adapter_queue_info structure
> > >
> > > Signed-off-by: Ganapati Kundapura 
> > >
> > > ---
> > > v3:
> > > * Split single patch into implementaion, test and document updation
> > >   patches separately
> >
> > > +struct rte_event_eth_rx_adapter_queue_info;
> > > +
> > > +/**
> > > + * Retrieve information about Rx queue. This callback is invoked if
> > > + * the caps returned from the eventdev_eth_rx_adapter_caps_get(,
> > > +eth_port_id)
> > > + * has RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT set.
> >
> > It will useful for !RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT case
> > too.
> >



Missed this comment in v4
> > > diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h
> > > b/lib/eventdev/rte_event_eth_rx_adapter.h
> > > index 182dd2e..75c0010 100644
> > > --- a/lib/eventdev/rte_event_eth_rx_adapter.h
> > > +++ b/lib/eventdev/rte_event_eth_rx_adapter.h
> > > @@ -33,6 +33,7 @@
> > >   *  - rte_event_eth_rx_adapter_stop()
> > >   *  - rte_event_eth_rx_adapter_stats_get()
> > >   *  - rte_event_eth_rx_adapter_stats_reset()
> > > + *  - rte_event_eth_rx_adapter_queue_info_get()
> > >   *
> > >   * The application creates an ethernet to event adapter using
> > >   * rte_event_eth_rx_adapter_create_ext() or
> > > rte_event_eth_rx_adapter_create() @@ -140,6 +141,56 @@ typedef int
> > (*rte_event_eth_rx_adapter_conf_cb) (uint8_t id, uint8_t dev_id,
> > > void *arg);
> > >
> > >  /**
> > > + * Rx queue info
> > > + */
> > > +struct rte_event_eth_rx_adapter_queue_info {
> >
> > Can we avoid the duplication of this structure and use
> > rte_event_eth_rx_adapter_queue_conf instead.
> >
> > API can be rte_event_eth_rx_adapter_queue_conf_get() to align the
> > structure.
> >
> > Also instead of every driver duplicating this code, How about
> > - common code stores the config in
> > rte_event_eth_rx_adapter_queue_add()
> > - common code stores the config in
> > rte_event_eth_rx_adapter_queue_conf_get()
> > - Addtional PMD level API can be given incase, something needs to
> > overridden by Adapter.


Missed addressing this comment in v4.


Re: [dpdk-dev] [PATCH] telemetry: add support for dicts of dicts

2021-09-07 Thread Nicolau, Radu



On 9/6/2021 5:25 PM, Power, Ciara wrote:

Hi Radu,



-Original Message-
From: Nicolau, Radu 
Sent: Friday 3 September 2021 11:57
To: Power, Ciara 
Cc: dev@dpdk.org; Nicolau, Radu ; Doherty, Declan

Subject: [PATCH] telemetry: add support for dicts of dicts

Add support for dicts of dicts to telemetry library.

Signed-off-by: Declan Doherty 
Signed-off-by: Radu Nicolau 
---
5.1


Thanks for this, it will be a good addition to Telemetry.

I think tests should be added with this feature.
Different combinations of data are tested in the test_telemetry_data.c file;
tests for these nested dicts would be valuable there.

Thanks,
Ciara

Hi Ciara, thanks for reviewing, I will add the tests.


Re: [dpdk-dev] [PATCH v2 01/10] dma/ioat: add device probe and removal functions

2021-09-07 Thread Kevin Laatz

On 03/09/2021 12:17, Conor Walsh wrote:

Add the basic device probe/remove skeleton code and initial documentation
for the new IOAT DMA driver. A maintainers update is also included in this
patch.

Signed-off-by: Conor Walsh 
---
  MAINTAINERS|  6 +++
  doc/guides/dmadevs/index.rst   |  1 +
  doc/guides/dmadevs/ioat.rst| 64 
  doc/guides/rel_notes/release_21_11.rst |  7 +--
  drivers/dma/ioat/ioat_dmadev.c | 69 ++
  drivers/dma/ioat/ioat_hw_defs.h| 35 +
  drivers/dma/ioat/ioat_internal.h   | 20 
  drivers/dma/ioat/meson.build   |  7 +++
  drivers/dma/ioat/version.map   |  3 ++
  drivers/dma/meson.build|  1 +
  10 files changed, 210 insertions(+), 3 deletions(-)
  create mode 100644 doc/guides/dmadevs/ioat.rst
  create mode 100644 drivers/dma/ioat/ioat_dmadev.c
  create mode 100644 drivers/dma/ioat/ioat_hw_defs.h
  create mode 100644 drivers/dma/ioat/ioat_internal.h
  create mode 100644 drivers/dma/ioat/meson.build
  create mode 100644 drivers/dma/ioat/version.map


Reviewed-by: Kevin Laatz 


Re: [dpdk-dev] [PATCH v2 10/10] devbind: move ioat device ID for ICX to dmadev category

2021-09-07 Thread Kevin Laatz

On 03/09/2021 12:17, Conor Walsh wrote:

Move Intel IOAT devices on Ice Lake systems from Misc to DMA devices.

Signed-off-by: Conor Walsh 
---
  usertools/dpdk-devbind.py | 5 ++---
  1 file changed, 2 insertions(+), 3 deletions(-)


Reviewed-by: Kevin Laatz 



Re: [dpdk-dev] [PATCH v2 02/10] dma/ioat: create dmadev instances on PCI probe

2021-09-07 Thread Kevin Laatz

On 03/09/2021 12:17, Conor Walsh wrote:

When a suitable device is found during the PCI probe, create a dmadev
instance for each channel. Internal structures and HW definitions required
for device creation are also included.

Signed-off-by: Conor Walsh 
---
  drivers/dma/ioat/ioat_dmadev.c   | 108 ++-
  drivers/dma/ioat/ioat_hw_defs.h  |  45 +
  drivers/dma/ioat/ioat_internal.h |  24 +++
  3 files changed, 175 insertions(+), 2 deletions(-)


Reviewed-by: Kevin Laatz 



Re: [dpdk-dev] [PATCH v2 03/10] dma/ioat: add datapath structures

2021-09-07 Thread Kevin Laatz

On 03/09/2021 12:17, Conor Walsh wrote:

Add data structures required for the data path of IOAT devices.

Signed-off-by: Conor Walsh 
Signed-off-by: Bruce Richardson 
---
  drivers/dma/ioat/ioat_dmadev.c  |  61 -
  drivers/dma/ioat/ioat_hw_defs.h | 214 
  2 files changed, 274 insertions(+), 1 deletion(-)


Reviewed-by: Kevin Laatz 



Re: [dpdk-dev] [PATCH v2 04/10] dma/ioat: add configuration functions

2021-09-07 Thread Kevin Laatz

On 03/09/2021 12:17, Conor Walsh wrote:

Add functions for device configuration. The info_get and close functions
are included here also. info_get can be useful for checking successful
configuration and close is used by the dmadev api when releasing a
configured device.

Signed-off-by: Conor Walsh 
---
  doc/guides/dmadevs/ioat.rst| 24 ++
  drivers/dma/ioat/ioat_dmadev.c | 88 ++
  2 files changed, 112 insertions(+)


Reviewed-by: Kevin Laatz 



Re: [dpdk-dev] [PATCH v2 05/10] dma/ioat: add start and stop functions

2021-09-07 Thread Kevin Laatz

On 03/09/2021 12:17, Conor Walsh wrote:

Add start, stop and recover functions for IOAT devices.

Signed-off-by: Conor Walsh
Signed-off-by: Bruce Richardson
---
  doc/guides/dmadevs/ioat.rst|  3 ++
  drivers/dma/ioat/ioat_dmadev.c | 87 ++
  2 files changed, 90 insertions(+)

diff --git a/doc/guides/dmadevs/ioat.rst b/doc/guides/dmadevs/ioat.rst
index b6d88fe966..f7742642b5 100644
--- a/doc/guides/dmadevs/ioat.rst
+++ b/doc/guides/dmadevs/ioat.rst
@@ -86,3 +86,6 @@ The following code shows how the device is configured in 
``test_dmadev.c``:
 :start-after: Setup of the dmadev device. 8<
 :end-before: >8 End of setup of the dmadev device.
 :dedent: 1
+
+Once configured, the device can then be made ready for use by calling the
+``rte_dmadev_start()`` API.
diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c
index 94f9139e0d..9f9feecd49 100644
--- a/drivers/dma/ioat/ioat_dmadev.c
+++ b/drivers/dma/ioat/ioat_dmadev.c
@@ -73,6 +73,91 @@ ioat_vchan_setup(struct rte_dmadev *dev, uint16_t vchan 
__rte_unused,
return 0;
  }
  
+/* Recover IOAT device. */

+static inline int
+__ioat_recover(struct ioat_dmadev *ioat)
+{
+   uint32_t chanerr, retry = 0;
+   uint16_t mask = ioat->qcfg.nb_desc - 1;
+
+   /* Clear any channel errors. Reading and writing to chanerr does this. 
*/
+   chanerr = ioat->regs->chanerr;
+   ioat->regs->chanerr = chanerr;
+
+   /* Reset Channel. */
+   ioat->regs->chancmd = IOAT_CHANCMD_RESET;
+
+   /* Write new chain address to trigger state change. */
+   ioat->regs->chainaddr = ioat->desc_ring[(ioat->next_read - 1) & 
mask].next;
+   /* Ensure channel control and status addr are correct. */
+   ioat->regs->chanctrl = IOAT_CHANCTRL_ANY_ERR_ABORT_EN |
+   IOAT_CHANCTRL_ERR_COMPLETION_EN;
+   ioat->regs->chancmp = ioat->status_addr;
+
+   /* Allow HW time to move to the ARMED state. */
+   do {
+   rte_pause();
+   retry++;
+   } while (ioat->regs->chansts != IOAT_CHANSTS_ARMED && retry < 200);
+
+   /* Exit as failure if device is still HALTED. */
+   if (ioat->regs->chansts != IOAT_CHANSTS_ARMED)
+   return -1;
+
+   /* Store next write as offset as recover will move HW and SW ring out of sync. */
+   ioat->offset = ioat->next_read;
+
+   /* Prime status register with previous address. */
+   ioat->status = ioat->desc_ring[(ioat->next_read - 2) & mask].next;
+
+   return 0;
+}
+
+/* Start a configured device. */
+static int
+ioat_dev_start(struct rte_dmadev *dev)
+{
+   struct ioat_dmadev *ioat = dev->dev_private;
+
+   if (ioat->qcfg.nb_desc == 0 || ioat->desc_ring == NULL)
+   return -EBUSY;
+
+   /* Inform hardware of where the descriptor ring is. */
+   ioat->regs->chainaddr = ioat->ring_addr;
+   /* Inform hardware of where to write the status/completions. */
+   ioat->regs->chancmp = ioat->status_addr;
+
+   /* Prime the status register to be set to the last element. */
+   ioat->status = ioat->ring_addr + ((ioat->qcfg.nb_desc - 1) * DESC_SZ);
+
+   printf("IOAT.status: %s [%#lx]\n",
+   chansts_readable[ioat->status & IOAT_CHANSTS_STATUS],
+   ioat->status);
+
+   if ((ioat->regs->chansts & IOAT_CHANSTS_STATUS) == IOAT_CHANSTS_HALTED) {
+   IOAT_PMD_WARN("Device HALTED on start, attempting to recover\n");
+   if (__ioat_recover(ioat) != 0) {
+   IOAT_PMD_ERR("Device couldn't be recovered");
+   return -1;
+   }
+   }
+
+   return 0;
+}
+
+/* Stop a configured device. */
+static int
+ioat_dev_stop(struct rte_dmadev *dev)
+{
+   struct ioat_dmadev *ioat = dev->dev_private;
+
+   ioat->regs->chancmd = IOAT_CHANCMD_SUSPEND;
+   /* Allow the device time to suspend itself. */
+   rte_delay_ms(1);
+
+   return 0;


It would be more beneficial to check whether the device has actually 
suspended before returning. Similar to recover, a timeout could be set by 
which the device is expected to be suspended. If the device is still not 
suspended by then, return an error.


IMO this would be more useful to an application than always returning 0, 
since the device may still be active.



With the above addressed,

Reviewed-by: Kevin Laatz 



Re: [dpdk-dev] [PATCH v2 06/10] dma/ioat: add data path job submission functions

2021-09-07 Thread Kevin Laatz

On 03/09/2021 12:17, Conor Walsh wrote:

Add data path functions for enqueuing and submitting operations to
IOAT devices.

Signed-off-by: Conor Walsh 
---
  doc/guides/dmadevs/ioat.rst| 54 
  drivers/dma/ioat/ioat_dmadev.c | 92 ++
  2 files changed, 146 insertions(+)


Reviewed-by: Kevin Laatz 



Re: [dpdk-dev] [PATCH v2 09/10] dma/ioat: add support for vchan idle function

2021-09-07 Thread Kevin Laatz

On 03/09/2021 12:17, Conor Walsh wrote:

Add support for the rte_dmadev_vchan_idle API call.

Signed-off-by: Conor Walsh 
---
  drivers/dma/ioat/ioat_dmadev.c | 14 ++
  1 file changed, 14 insertions(+)


Reviewed-by: Kevin Laatz 



Re: [dpdk-dev] [PATCH v2 08/10] dma/ioat: add statistics

2021-09-07 Thread Kevin Laatz

On 03/09/2021 12:17, Conor Walsh wrote:

Add statistic tracking for operations in IOAT.

Signed-off-by: Conor Walsh 
---
  doc/guides/dmadevs/ioat.rst| 23 +++
  drivers/dma/ioat/ioat_dmadev.c | 40 ++
  2 files changed, 63 insertions(+)


Reviewed-by: Kevin Laatz 



Re: [dpdk-dev] [PATCH v7 1/2] net: add new ext hdr for gtp psc

2021-09-07 Thread Ferruh Yigit
On 8/23/2021 11:55 AM, Raslan Darawsheh wrote:
> Define new rte header for gtp PDU session container
> based on RFC 38415-g30
> 
> Signed-off-by: Raslan Darawsheh 

Acked-by: Ferruh Yigit 

Patch title can be updated to have abbreviations uppercase, but it can be done
while merging I guess.


@Olivier,

If you are OK with the patch, I can proceed with it and merge to next-net because
of the dependency of the next patch.


Re: [dpdk-dev] [PATCH v7 2/2] ethdev: use ext hdr for gtp psc item

2021-09-07 Thread Ferruh Yigit
On 8/23/2021 11:55 AM, Raslan Darawsheh wrote:
> This updates the gtp_psc item to use the net hdr
> definition of the gtp_psc to be based on RFC 38415-g30
> 
> Signed-off-by: Raslan Darawsheh 

Acked-by: Ferruh Yigit 



Re: [dpdk-dev] [PATCH v2 05/10] dma/ioat: add start and stop functions

2021-09-07 Thread Walsh, Conor


> > +/* Stop a configured device. */
> > +static int
> > +ioat_dev_stop(struct rte_dmadev *dev)
> > +{
> > +   struct ioat_dmadev *ioat = dev->dev_private;
> > +
> > +   ioat->regs->chancmd = IOAT_CHANCMD_SUSPEND;
> > +   /* Allow the device time to suspend itself. */
> > +   rte_delay_ms(1);
> > +
> > +   return 0;
> 
> It would be more beneficial to check whether the device has actually
> suspended before returning. Similar to recover, a timeout could be set by
> which the device is expected to be suspended. If the device is still not
> suspended by then, return an error.
> 
> IMO this would be more useful to an application than always returning 0,
> since the device may still be active.
> 
> 
> With the above addressed,
> 
> Reviewed-by: Kevin Laatz 

Thanks for the review Kevin, I agree this is a better solution than just 
delaying and returning 0.
I will implement this and include it as part of v3.
/Conor.



Re: [dpdk-dev] [PATCH v3] eventdev: update crypto adapter metadata structures

2021-09-07 Thread Shijith Thotton
Hi Abhinandan,

>> In crypto adapter metadata, the reserved bytes in the request info structure
>> are a placeholder for response info. They enforce an order of operations if
>> the structures are updated using memcpy, to avoid overwriting response info.
>> It is logical to move the reserved space out of request info. It also solves
>> the ordering issue mentioned before.
>I would like to understand what kind of ordering issue you have faced with
>the current approach. Could you please give an example/sequence and explain?
>

I have seen this issue with crypto adapter autotest (#n215).

Example:
rte_memcpy(&m_data.response_info, &response_info, sizeof(response_info));
rte_memcpy(&m_data.request_info, &request_info, sizeof(request_info));

Here the response info is getting overwritten by the request info.
The lines above can be reordered to fix the issue, but with this patch the ordering no longer matters.

>>
>> This patch removes the reserve field from request info and makes event
>> crypto metadata type to structure from union to make space for response
>> info.
>>
>> App and drivers are updated as per metadata change.
>>
>> Signed-off-by: Shijith Thotton 
>> Acked-by: Anoob Joseph 
>> ---
>> v3:
>> * Updated ABI section of release notes.
>>
>> v2:
>> * Updated deprecation notice.
>>
>> v1:
>> * Rebased.
>>
>>  app/test/test_event_crypto_adapter.c  | 14 +++---
>>  doc/guides/rel_notes/deprecation.rst  |  6 --
>>  doc/guides/rel_notes/release_21_11.rst|  2 ++
>>  drivers/crypto/octeontx/otx_cryptodev_ops.c   |  8 
>>  drivers/crypto/octeontx2/otx2_cryptodev_ops.c |  4 ++--
>>  .../event/octeontx2/otx2_evdev_crypto_adptr_tx.h  |  4 ++--
>>  lib/eventdev/rte_event_crypto_adapter.c   |  8 
>>  lib/eventdev/rte_event_crypto_adapter.h   | 15 +--
>>  8 files changed, 26 insertions(+), 35 deletions(-)
>>
>> diff --git a/app/test/test_event_crypto_adapter.c
>> b/app/test/test_event_crypto_adapter.c
>> index 3ad20921e2..0d73694d3a 100644
>> --- a/app/test/test_event_crypto_adapter.c
>> +++ b/app/test/test_event_crypto_adapter.c
>> @@ -168,7 +168,7 @@ test_op_forward_mode(uint8_t session_less)  {
>>  struct rte_crypto_sym_xform cipher_xform;
>>  struct rte_cryptodev_sym_session *sess;
>> -union rte_event_crypto_metadata m_data;
>> +struct rte_event_crypto_metadata m_data;
>>  struct rte_crypto_sym_op *sym_op;
>>  struct rte_crypto_op *op;
>>  struct rte_mbuf *m;
>> @@ -368,7 +368,7 @@ test_op_new_mode(uint8_t session_less)  {
>>  struct rte_crypto_sym_xform cipher_xform;
>>  struct rte_cryptodev_sym_session *sess;
>> -union rte_event_crypto_metadata m_data;
>> +struct rte_event_crypto_metadata m_data;
>>  struct rte_crypto_sym_op *sym_op;
>>  struct rte_crypto_op *op;
>>  struct rte_mbuf *m;
>> @@ -406,7 +406,7 @@ test_op_new_mode(uint8_t session_less)
>>  if (cap &
>> RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA) {
>>  /* Fill in private user data information */
>>  rte_memcpy(&m_data.response_info,
>> &response_info,
>> -   sizeof(m_data));
>> +   sizeof(response_info));
>>  rte_cryptodev_sym_session_set_user_data(sess,
>>  &m_data, sizeof(m_data));
>>  }
>> @@ -426,7 +426,7 @@ test_op_new_mode(uint8_t session_less)
>>  op->private_data_offset = len;
>>  /* Fill in private data information */
>>  rte_memcpy(&m_data.response_info, &response_info,
>> -   sizeof(m_data));
>> +   sizeof(response_info));
>>  rte_memcpy((uint8_t *)op + len, &m_data, sizeof(m_data));
>>  }
>>
>> @@ -519,7 +519,7 @@ configure_cryptodev(void)
>>  DEFAULT_NUM_XFORMS *
>>  sizeof(struct rte_crypto_sym_xform) +
>>  MAXIMUM_IV_LENGTH +
>> -sizeof(union rte_event_crypto_metadata),
>> +sizeof(struct rte_event_crypto_metadata),
>>  rte_socket_id());
>>  if (params.op_mpool == NULL) {
>>  RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
>> @@ -549,12 +549,12 @@ configure_cryptodev(void)
>>   * to include the session headers & private data
>>   */
>>  session_size =
>> rte_cryptodev_sym_get_private_session_size(TEST_CDEV_ID);
>> -session_size += sizeof(union rte_event_crypto_metadata);
>> +session_size += sizeof(struct rte_event_crypto_metadata);
>>
>>  params.session_mpool = rte_cryptodev_sym_session_pool_create(
>>  "CRYPTO_ADAPTER_SESSION_MP",
>>  MAX_NB_SESSIONS, 0, 0,
>> -sizeof(union rte_event_crypto_metadata),
>> +sizeof(struct rte_event_crypto_metadata),
>>  SOCKET_ID_ANY);
>>  TEST_ASSER

[dpdk-dev] [PATCH v4] net: fix Intel-specific Prepare the outer IPv4 hdr for checksum

2021-09-07 Thread Mohsin Kazmi
Preparation of the headers for the hardware offload
misses the outer IPv4 checksum offload.
It results in bad checksum computed by hardware NIC.

This patch fixes the issue by setting the outer IPv4
checksum field to 0.

Fixes: 4fb7e803eb1a ("ethdev: add Tx preparation")
Cc: sta...@dpdk.org

Signed-off-by: Mohsin Kazmi 
Acked-by: Qi Zhang 
Acked-by: Olivier Matz 
---
v4:
   * Update the commit message

v3:
   * Update the conditional test with PKT_TX_OUTER_IP_CKSUM.
   * Update the commit title with "Intel-specific".

v2:
   * Update the commit message with Fixes.

 lib/net/rte_net.h | 15 +--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/lib/net/rte_net.h b/lib/net/rte_net.h
index 434435ffa2..42639bc154 100644
--- a/lib/net/rte_net.h
+++ b/lib/net/rte_net.h
@@ -125,11 +125,22 @@ rte_net_intel_cksum_flags_prepare(struct rte_mbuf *m, uint64_t ol_flags)
 * Mainly it is required to avoid fragmented headers check if
 * no offloads are requested.
 */
-   if (!(ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK | PKT_TX_TCP_SEG)))
+   if (!(ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK | PKT_TX_TCP_SEG |
+ PKT_TX_OUTER_IP_CKSUM)))
return 0;
 
-   if (ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6))
+   if (ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6)) {
inner_l3_offset += m->outer_l2_len + m->outer_l3_len;
+   /*
+* prepare outer IPv4 header checksum by setting it to 0,
+* in order to be computed by hardware NICs.
+*/
+   if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+   ipv4_hdr = rte_pktmbuf_mtod_offset(m,
+   struct rte_ipv4_hdr *, m->outer_l2_len);
+   ipv4_hdr->hdr_checksum = 0;
+   }
+   }
 
/*
 * Check if headers are fragmented.
-- 
2.17.1



Re: [dpdk-dev] [PATCH 2/4] cryptodev: promote asym APIs to stable

2021-09-07 Thread Kusztal, ArkadiuszX
> Do you think all the asym APIs are not eligible for promoting it to stable 
> APIs?
> I haven't seen any changes for quite some time and we cannot have it
> experimental Forever.
> The APIs which you think are expected to change, we can leave them as
> experimental And mark the others as stable.
We could potentially make the capability-related functions stable, but for the 
others there are still many uncertainties. Another example: the ECDSA op expects 
'k' to be "in the interval (1, n-1)", yet the OpenSSL PMD will not even have a 
function that accepts 'k' (although optionally the inverse of k, yes), so what 
should the user put here?

This API needs more attention, I believe; I can send patches for it after the 21.11 
release. 
My opinion is that we should push all of this out by another year.

> 
> Can you post a patch for it? I will drop it from my series.
> 
> Also, could you review the other patches in the series as well.
> 
> Regards,
> Akhil
> 
> > Hi Akhil,
> >
> > I am not sure if this API is ready to be stable so I will add few comments 
> > here:
> >
> > RSA:
> > rte_crypto_param message;
> > ...
> >  * - to be signed for RSA sign generation.
> >
> > If this message is plaintext, then in case of:
> > 1) PKCS1_1.5 padding:
> > Standard defines data to be signed as DER encoded struct of
> > digestAlgorithm
> > + digest
> > (few exceptions I am aware of were TLS prior to 1.2 or IKE version 1)
> > - There is no field to specify that, even if PMD would be correctly
> > implemented it still would lack information about hash aglorithm.
> > - Currently what openssl pmd for example is doing is
> > RSA_private_encrypt which omits this step
> (https://www.openssl.org/docs/man1.1.1/man3/RSA_private_encrypt.html  -
> mentions this).
> > 2) PADDING_NONE:
> > I cannot find what user is supposed to do in this case, and I think it
> > may be quite common option for hw due to reliance on strong CSPRNG for
> > PSS or OAEP.
> >
> > DSA:
> > struct rte_crypto_dsa_op_param {
> > ...
> > There is no 'k' parameter? I though I have added it, how hw with no
> > CSRNG should work with DSA?
> >
> > For ECDSA private key is in Op, for DSA is in xform. Where this
> > inconsistency comes from?
> >
> > /**< x: Private key of the signer in octet-string network
> >  * byte order format.
> >  * Used when app has pre-defined private key.
> >  * Valid only when xform chain is DSA ONLY.
> >  * if xform chain is DH private key generate + DSA, then DSA sign
> >  * compute will use internally generated key.
> >
> > And this one I cannot understand, there is DH and DSA in one line plus
> > seems that private dsa key would be generated and used in the same
> operation.
> > We want to create self-signed certificate here on the fly or something?
> >
> > RTE_CRYPTO_ASYM_OP_PRIVATE_KEY_GENERATE,
> > /**< DH Private Key generation operation */
> >
> > This is another interesting part (similar to 'k' in (EC)DSA, PSS, QAEO
> > in RSA), there was no any type of hw random number generation concept
> > for symmetric crypto (i.e. salt, IV, nonce) and here we have
> > standalone Diffie Hellman private key generator.
> > And since it is no crypto computation but random number generation,
> > maybe there should be another module to handle CSRNG or we could
> > register randomness source into cryptodev, like callback? Another
> > option would be to predefine randomness source per device like (i.e.
> > x86 RDRAND, /dev/random) for user to decide.
> >
> > Additionally there is DH op but there is no ECDH (I know there is
> > ECPM, but the same way there is MODEXP which creates another
> inconsistency).
> > Optionally we can extend DH API to work with EC?
> > EDDSA, EDDH needs to be implemented soon too.
> >
> > Regards,
> > Arek


Re: [dpdk-dev] [PATCH 2/4] cryptodev: promote asym APIs to stable

2021-09-07 Thread Akhil Goyal
> > Do you think all the asym APIs are not eligible for promoting it to stable
> APIs?
> > I haven't seen any changes for quite some time and we cannot have it
> > experimental Forever.
> > The APIs which you think are expected to change, we can leave them as
> > experimental And mark the others as stable.
> We could potentially make the capability-related functions stable, but for the
> others there are still many uncertainties. Another example: the ECDSA op
> expects 'k' to be "in the interval (1, n-1)", yet the OpenSSL PMD will not
> even have a function that accepts 'k' (although optionally the inverse of k,
> yes), so what should the user put here?
> 
> This API needs more attention, I believe; I can send patches for it after the
> 21.11 release.
> My opinion is that we should push all of this out by another year.
> 
Ok will drop this patch for now.


[dpdk-dev] [PATCH] crypto/qat: enable aes xts in gen4

2021-09-07 Thread Arek Kusztal
Enable AES-XTS legacy mode in gen4 devices of Intel QuickAssist
Technology PMD.

Signed-off-by: Arek Kusztal 
---
 drivers/crypto/qat/qat_sym_capabilities.h | 20 
 1 file changed, 20 insertions(+)

diff --git a/drivers/crypto/qat/qat_sym_capabilities.h b/drivers/crypto/qat/qat_sym_capabilities.h
index cfb176ca94..636cfc2817 100644
--- a/drivers/crypto/qat/qat_sym_capabilities.h
+++ b/drivers/crypto/qat/qat_sym_capabilities.h
@@ -1199,6 +1199,26 @@
}   \
}, }\
}, }\
+   },  \
+   {   /* AES XTS */   \
+   .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \
+   {.sym = {   \
+   .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,  \
+   {.cipher = {\
+   .algo = RTE_CRYPTO_CIPHER_AES_XTS,  \
+   .block_size = 16,   \
+   .key_size = {   \
+   .min = 32,  \
+   .max = 64,  \
+   .increment = 32 \
+   },  \
+   .iv_size = {\
+   .min = 16,  \
+   .max = 16,  \
+   .increment = 0  \
+   }   \
+   }, }\
+   }, }\
}   \
 
 
-- 
2.30.2



Re: [dpdk-dev] [EXT] [PATCH v2 0/5] support of MAC-I

2021-09-07 Thread Akhil Goyal
> --
> This series add support of Message Authentication Code
> - Integrity on DPAAX platforms.
> 
> v2-change-log:
> * update commit message
> * merged an existing patch with this series:
> https://urldefense.proofpoint.com/v2/url?u=https-
> 3A__patches.dpdk.org_project_dpdk_patch_20210825081837.23830-2D1-
> 2Dhemant.agrawal-
> 40nxp.com_mbox_&d=DwIDAg&c=nKjWec2b6R0mOyPaz7xtfQ&r=DnL7Si2wl
> _PRwpZ9TWey3eu68gBzn7DkPwuqhd6WNyo&m=rsC8q5LCqM75FDZlT9cU21
> Qaf4v__D1_IufHF8an_x0&s=8v25-
> EfDZN2P66hDuhUAZtK5PNPnODDAyGSmtKeQ_oI&e=
> 
> Gagandeep Singh (4):
>   common/dpaax: fix IV value for shortMAC-I for SNOW algo
>   test/crypto: add pdcp security short MAC-I support
>   crypto/dpaa2_sec: add PDCP short MAC-I support
>   crypto/dpaa_sec: add pdcp short MAC-I support
> 
> Hemant Agrawal (1):
>   security: support PDCP short MAC-I
> 
>  app/test-crypto-perf/cperf_options_parsing.c  |   8 +-
>  app/test/test_cryptodev.c |  48 
>  ...est_cryptodev_security_pdcp_test_vectors.h | 105 +-
>  doc/guides/prog_guide/rte_security.rst|  11 +-
>  doc/guides/tools/cryptoperf.rst   |   4 +-
>  drivers/common/dpaax/caamflib/desc/pdcp.h |   7 +-
>  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |  29 +++--
>  drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h |   9 ++
>  drivers/crypto/dpaa_sec/dpaa_sec.c|   3 +
>  drivers/crypto/dpaa_sec/dpaa_sec.h|  11 +-
>  lib/security/rte_security.h   |   1 +
>  11 files changed, 215 insertions(+), 21 deletions(-)

Release notes missing for new feature updates


Re: [dpdk-dev] [EXT] [PATCH v3 01/10] crypto/dpaa_sec: support DES-CBC

2021-09-07 Thread Akhil Goyal


> From: Gagandeep Singh 
> 
> add DES-CBC support and enable available cipher-only
> test cases.
> 
> Signed-off-by: Gagandeep Singh 
> ---
Release notes missing for the new features added in PMDs.
Please update sequentially in each of the patches.


Re: [dpdk-dev] [EXT] [PATCH] crypto/qat: enable aes xts in gen4

2021-09-07 Thread Akhil Goyal
> Enable AES-XTS legacy mode in gen4 devices of Intel QuickAssist
> Technology PMD.
> 
> Signed-off-by: Arek Kusztal 
> ---
>  
Does it need update to .ini file for the algos supported?
Release notes update?



Re: [dpdk-dev] [EXT] [PATCH] crypto/qat: enable aes xts in gen4

2021-09-07 Thread Kusztal, ArkadiuszX



> -Original Message-
> From: Akhil Goyal 
> Sent: Tuesday, September 7, 2021 1:48 PM
> To: Kusztal, ArkadiuszX ; dev@dpdk.org
> Cc: Zhang, Roy Fan 
> Subject: RE: [EXT] [PATCH] crypto/qat: enable aes xts in gen4
> 
> > Enable AES-XTS legacy mode in gen4 devices of Intel QuickAssist
> > Technology PMD.
> >
> > Signed-off-by: Arek Kusztal 
> > ---
> >
> Does it need update to .ini file for the algos supported?
> Release notes update?

In QAT we have combined features for every gen, so theoretically no (if one 
gen supports it, it is there).
But it may need some updating, and I forgot about the release notes; I will send a v2.
Thanks Akhil!


Re: [dpdk-dev] [PATCH v3 1/3] net/thunderx: enable build only on 64-bit Linux

2021-09-07 Thread Ferruh Yigit
On 8/23/2021 8:54 PM, pbhagavat...@marvell.com wrote:
> From: Pavan Nikhilesh 
> 
> Due to Linux kernel dependency, only enable build for 64-bit Linux.
> 
> Signed-off-by: Pavan Nikhilesh 
> Acked-by: Jerin Jacob 

patches look good, but can you please add more details related to the
dependency in the commit log?


[dpdk-dev] [RFC PATCH] ethdev: clarify flow attribute and action port ID semantics

2021-09-07 Thread Ivan Malov
Problems:

1) Existing item PORT_ID and action PORT_ID are ambiguous because
   one can consider the port in question as either an ethdev or
   an embedded switch entity wired to it, as per the use case,
   which is not expressed clearly in code and documentation.

2) Attributes "ingress" and "egress" may not make sense in flows
   with "transfer" attribute for some PMDs. Furthermore, such
   PMDs may face a related problem (see below).

3) A PMD may not be able to handle "transfer" rules on all ethdevs
   it serves. It may have only one (admin) ethdev capable of that.
   Applications should be able to take this into account and
   submit "transfer" rules on that specific ethdev. However,
   meaning of attributes "ingress" and "egress" might be
   skewed in this case as the ethdev used to make flow
   API calls is just a technical entry point.

In order to solve problem (1)¸ one should recognise the existence
of two major application models: traffic consumer / generator and
a vSwitch / forwarder. To the latter, ethdevs used to perform
Rx / Tx burst calls are simply vSwitch ports. Requesting HW
offloads on these ports implies referring to e-switch ports
that correspond to them at the lowest, e-switch, level.
This way, suggested terminology is clear and concise.

The patch suggests using item / action PORT_ID sub-variants to
disambiguate the meaning. In order to avoid breaking existing
behaviour in Open vSwitch DPDK offload, the sub-variant for
e-switch port is declared with enum value of zero.

In order to solve problems (2) and (3), one needs to recognise
the existence of two paradigms of handling "transfer" rules in
PMDs. If the given PMD needs to "proxy" handling of "transfer"
rules via an admin ethdev, this must not be done implicitly
as the application must know the true ethdev responsible
for handling the flows in order to avoid detaching it
before all "transfer" rules get properly dismantled.

The patch suggests to use a generic helper to let applications
discover the paradigm in use and, if need be, communicate
flow rules through the discovered "proxy" ethdev.

Signed-off-by: Ivan Malov 
---
This proposal is an alternative to previously suggested RFCs, [1] and [2].
It covers several related problems and suggests clear API contract to
let vSwitch applications use item PORT_ID and action PORT_ID to refer
to e-switch ports thus preserving existing sense.

[1] https://patches.dpdk.org/project/dpdk/patch/2021060420.5549-1-ivan.ma...@oktetlabs.ru/
[2] https://patches.dpdk.org/project/dpdk/patch/20210903074610.313622-1-andrew.rybche...@oktetlabs.ru/
---
 lib/ethdev/rte_flow.h | 140 --
 1 file changed, 107 insertions(+), 33 deletions(-)

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 70f455d47d..bf2b5e752c 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -82,22 +82,32 @@ struct rte_flow_attr {
uint32_t ingress:1; /**< Rule applies to ingress traffic. */
uint32_t egress:1; /**< Rule applies to egress traffic. */
/**
-* Instead of simply matching the properties of traffic as it would
-* appear on a given DPDK port ID, enabling this attribute transfers
-* a flow rule to the lowest possible level of any device endpoints
-* found in the pattern.
-*
-* When supported, this effectively enables an application to
-* re-route traffic not necessarily intended for it (e.g. coming
-* from or addressed to different physical ports, VFs or
-* applications) at the device level.
-*
-* It complements the behavior of some pattern items such as
-* RTE_FLOW_ITEM_TYPE_PHY_PORT and is meaningless without them.
-*
-* When transferring flow rules, ingress and egress attributes keep
-* their original meaning, as if processing traffic emitted or
-* received by the application.
+* This "transfers" the rule from the ethdev level to the embedded
+* switch (e-switch) level, where it's possible to match traffic
+* not necessarily going to the ethdev where the flow is created
+* and redirect it to endpoints otherwise not necessarily
+* accessible from rules having no such attribute.
+*
+* Applications willing to use attribute "transfer" should detect its
+* paradigm implemented inside the PMD. The paradigms are as follows:
+*
+* - The PMD supports handling "transfer" flow rules on any ethdevs
+*   it serves. With this paradigm, rte_flow_pick_transfer_proxy()
+*   call returns (-ENOTSUP) for all ethdevs backed by the PMD.
+*   Attributes "ingress" and "egress" are valid and preserve
+*   their original meaning, at application standpoint. Also,
+*   these attributes typically set some implicit filtering.
+*
+* - The PMD only supports handling "transfer" flow rules on some
+   

Re: [dpdk-dev] [PATCH v19 1/7] dmadev: introduce DMA device library public APIs

2021-09-07 Thread fengchengwen
Hi Gagandeep,

Based on the following considerations, it was decided not to support "ANY
direction on a channel".

As we previously analyzed [1], many hardware devices (like dpaa2/octeontx2/Kunpeng)
support multiple directions on a hardware channel.

In consideration of the smooth migration of existing drivers, we settled on the
concept of using a virtual queue to represent different transmission-direction
contexts, and that concept has persisted to this day.

Although it can be extended based on my proposal, this change would give rise to
new interface models, which applications would have to take into account.
If we keep the current model, applications based on the original rawdev interface
can adapt quickly.

Also, Jerin has made some comments from a performance perspective, which I agree
with.

[1] 
https://lore.kernel.org/dpdk-dev/c4a0ee30-f7b8-f8a1-463c-8eedaec82...@huawei.com/

BTW: @Jerin @Bruce, thank you for your replies.

Thanks

On 2021/9/6 15:52, fengchengwen wrote:
> I think we can add support for DIR_ANY.
> @Bruce @Jerin Would you please take a look at my proposal?
> 
> On 2021/9/6 14:48, Gagandeep Singh wrote:
>>
>>
>>> -Original Message-
>>> From: fengchengwen 
>>> Sent: Saturday, September 4, 2021 7:02 AM
>>> To: Gagandeep Singh ; tho...@monjalon.net;
>>> ferruh.yi...@intel.com; bruce.richard...@intel.com; jer...@marvell.com;
>>> jerinjac...@gmail.com; andrew.rybche...@oktetlabs.ru
>>> Cc: dev@dpdk.org; m...@smartsharesystems.com; Nipun Gupta
>>> ; Hemant Agrawal ;
>>> maxime.coque...@redhat.com; honnappa.nagaraha...@arm.com;
>>> david.march...@redhat.com; sbu...@marvell.com; pkap...@marvell.com;
>>> konstantin.anan...@intel.com; conor.wa...@intel.com
>>> Subject: Re: [dpdk-dev] [PATCH v19 1/7] dmadev: introduce DMA device library
>>> public APIs
>>>
>>> On 2021/9/3 19:42, Gagandeep Singh wrote:
 Hi,

 
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Close a DMA device.
> + *
> + * The device cannot be restarted after this call.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + *
> + * @return
> + *   0 on success. Otherwise negative value is returned.
> + */
> +__rte_experimental
> +int
> +rte_dmadev_close(uint16_t dev_id);
> +
> +/**
> + * rte_dma_direction - DMA transfer direction defines.
> + */
> +enum rte_dma_direction {
> + RTE_DMA_DIR_MEM_TO_MEM,
> + /**< DMA transfer direction - from memory to memory.
> +  *
> +  * @see struct rte_dmadev_vchan_conf::direction
> +  */
> + RTE_DMA_DIR_MEM_TO_DEV,
> + /**< DMA transfer direction - from memory to device.
> +  * In a typical scenario, the SoCs are installed on host servers as
> +  * iNICs through the PCIe interface. In this case, the SoCs works in
> +  * EP(endpoint) mode, it could initiate a DMA move request from
> memory
> +  * (which is SoCs memory) to device (which is host memory).
> +  *
> +  * @see struct rte_dmadev_vchan_conf::direction
> +  */
> + RTE_DMA_DIR_DEV_TO_MEM,
> + /**< DMA transfer direction - from device to memory.
> +  * In a typical scenario, the SoCs are installed on host servers as
> +  * iNICs through the PCIe interface. In this case, the SoCs works in
> +  * EP(endpoint) mode, it could initiate a DMA move request from device
> +  * (which is host memory) to memory (which is SoCs memory).
> +  *
> +  * @see struct rte_dmadev_vchan_conf::direction
> +  */
> + RTE_DMA_DIR_DEV_TO_DEV,
> + /**< DMA transfer direction - from device to device.
> +  * In a typical scenario, the SoCs are installed on host servers as
> +  * iNICs through the PCIe interface. In this case, the SoCs works in
> +  * EP(endpoint) mode, it could initiate a DMA move request from device
> +  * (which is host memory) to the device (which is another host memory).
> +  *
> +  * @see struct rte_dmadev_vchan_conf::direction
> +  */
> +};
> +
> +/**
> ..
 The enum rte_dma_direction must have a member RTE_DMA_DIR_ANY for a
>>> channel that supports all 4 directions.
>>>
>>> We've discussed this issue before. The earliest solution was to set up
>>> channels to support multiple DIRs, but no hardware/driver actually used
>>> this (at least at that time). They (like octeontx2_dma/dpaa) all set up
>>> one logical channel to serve a single transfer direction.
>>>
>>> So, do you have that kind of desire for your driver ?
>>>
>> Both DPAA1 and DPAA2 drivers can support ANY direction on a channel, so we 
>> would like to have this option as well.
>>
>>>
>>> If you have a strong desire, we'll consider the following options:
>>>
>>> Once the channel is set up, there are no other parameters to indicate the
>>> copy request's transfer direction.
>>> So I think it is not enough to define RTE_DMA_DIR_ANY only.
>>>
>>> Maybe we could add RTE_DMA

Re: [dpdk-dev] [RFC PATCH v2 0/7] heterogeneous computing library

2021-09-07 Thread Elena Agostini


> -Original Message-
> From: Wang, Haiyue 
> Sent: Monday, September 6, 2021 7:15 PM
> To: Elena Agostini ; Jerin Jacob
> 
> Cc: NBU-Contact-Thomas Monjalon ; Jerin Jacob
> ; dpdk-dev ; Stephen Hemminger
> ; David Marchand
> ; Andrew Rybchenko
> ; Honnappa Nagarahalli
> ; Yigit, Ferruh ;
> techbo...@dpdk.org
> Subject: RE: [dpdk-dev] [RFC PATCH v2 0/7] heterogeneous computing
> library
> 
> 
> > -Original Message-
> > From: Elena Agostini 
> > Sent: Tuesday, September 7, 2021 00:11
> > To: Jerin Jacob 
> > Cc: Wang, Haiyue ; NBU-Contact-Thomas
> Monjalon
> > ; Jerin Jacob ; dpdk-dev
> > ; Stephen Hemminger ;
> David
> > Marchand ; Andrew Rybchenko
> > ; Honnappa Nagarahalli
> > ; Yigit, Ferruh
> > ; techbo...@dpdk.org
> > Subject: RE: [dpdk-dev] [RFC PATCH v2 0/7] heterogeneous computing
> > library
> >
> >
> >
> 
> 
> > > >
> > > > I'd like to introduce (with a dedicated option) the memory API in
> > > > testpmd to provide an example of how to TX/RX packets using device
> > > memory.
> > >
> > > Not sure without embedding sideband communication mechanism how
> it
> > > can notify to GPU and back to CPU. If you could share the example
> > > API sequence that helps to us understand the level of coupling with
> testpmd.
> > >
> >
> > There is no need for a communication mechanism here.
> > Assuming there is no workload processing the network packets (to keep
> > things simple), the steps are:
> > 1) Create a DPDK mempool with device external memory using the hcdev
> > (or gpudev) library
> > 2) Use that mempool to tx/rx/fwd packets
> >
> > As an example, you look at my l2fwd-nv application here:
> > https://github.com/NVIDIA/l2fwd-nv
> >
> 
> To enhance the 'rte_extmem_register' / 'rte_pktmbuf_pool_create_extbuf'
> ?
> 

The purpose of these two functions is different.
Here DPDK allows the user to use any kind of memory to rx/tx packets.
It's not about allocating memory.

Maybe I'm missing the point here: what's the main objection in having a GPU 
library?

> if (l2fwd_mem_type == MEM_HOST_PINNED) {
>         ext_mem.buf_ptr = rte_malloc("extmem", ext_mem.buf_len, 0);
>         CUDA_CHECK(cudaHostRegister(ext_mem.buf_ptr, ext_mem.buf_len, cudaHostRegisterMapped));
>         void *pDevice;
>         CUDA_CHECK(cudaHostGetDevicePointer(&pDevice, ext_mem.buf_ptr, 0));
>         if (pDevice != ext_mem.buf_ptr)
>                 rte_exit(EXIT_FAILURE, "GPU pointer does not match CPU pointer\n");
> } else {
>         ext_mem.buf_iova = RTE_BAD_IOVA;
>         CUDA_CHECK(cudaMalloc(&ext_mem.buf_ptr, ext_mem.buf_len));
>         if (ext_mem.buf_ptr == NULL)
>                 rte_exit(EXIT_FAILURE, "Could not allocate GPU memory\n");
> 
>         unsigned int flag = 1;
>         CUresult status = cuPointerSetAttribute(&flag, CU_POINTER_ATTRIBUTE_SYNC_MEMOPS, (CUdeviceptr)ext_mem.buf_ptr);
>         if (CUDA_SUCCESS != status) {
>                 rte_exit(EXIT_FAILURE, "Could not set SYNC MEMOP attribute for GPU memory at %llx\n", (CUdeviceptr)ext_mem.buf_ptr);
>         }
>         ret = rte_extmem_register(ext_mem.buf_ptr, ext_mem.buf_len, NULL, ext_mem.buf_iova, GPU_PAGE_SIZE);
>         if (ret)
>                 rte_exit(EXIT_FAILURE, "Could not register GPU memory\n");
> }
> ret = rte_dev_dma_map(rte_eth_devices[l2fwd_port_id].device, ext_mem.buf_ptr, ext_mem.buf_iova, ext_mem.buf_len);
> if (ret)
>         rte_exit(EXIT_FAILURE, "Could not DMA map EXT memory\n");
> mpool_payload = rte_pktmbuf_pool_create_extbuf("payload_mpool", l2fwd_nb_mbufs,
>                                                0, 0, ext_mem.elt_size,
>                                                rte_socket_id(), &ext_mem, 1);
> if (mpool_payload == NULL)
>         rte_exit(EXIT_FAILURE, "Could not create EXT memory mempool\n");
> 
> 



[dpdk-dev] [PATCH v21 1/7] dmadev: introduce DMA device library public APIs

2021-09-07 Thread Chengwen Feng
The 'dmadevice' is a generic type of DMA device.

This patch introduces the 'dmadevice' public APIs, which expose generic
operations that enable configuration and I/O with the DMA devices.

A maintainers update is also included in this patch.

Signed-off-by: Chengwen Feng 
Acked-by: Bruce Richardson 
Acked-by: Morten Brørup 
Acked-by: Jerin Jacob 
Reviewed-by: Kevin Laatz 
Reviewed-by: Conor Walsh 
---
 MAINTAINERS|   4 +
 doc/api/doxy-api-index.md  |   1 +
 doc/api/doxy-api.conf.in   |   1 +
 doc/guides/rel_notes/release_21_11.rst |   5 +
 lib/dmadev/meson.build |   4 +
 lib/dmadev/rte_dmadev.h| 951 +
 lib/dmadev/version.map |  24 +
 lib/meson.build|   1 +
 8 files changed, 991 insertions(+)
 create mode 100644 lib/dmadev/meson.build
 create mode 100644 lib/dmadev/rte_dmadev.h
 create mode 100644 lib/dmadev/version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 1e0d303394..9885cc56b7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -496,6 +496,10 @@ F: drivers/raw/skeleton/
 F: app/test/test_rawdev.c
 F: doc/guides/prog_guide/rawdev.rst
 
+DMA device API - EXPERIMENTAL
+M: Chengwen Feng 
+F: lib/dmadev/
+
 
 Memory Pool Drivers
 ---
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 1992107a03..ce08250639 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -27,6 +27,7 @@ The public API headers are grouped by topics:
   [event_timer_adapter](@ref rte_event_timer_adapter.h),
   [event_crypto_adapter]   (@ref rte_event_crypto_adapter.h),
   [rawdev] (@ref rte_rawdev.h),
+  [dmadev] (@ref rte_dmadev.h),
   [metrics](@ref rte_metrics.h),
   [bitrate](@ref rte_bitrate.h),
   [latency](@ref rte_latencystats.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 325a0195c6..a44a92b5fe 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -34,6 +34,7 @@ INPUT   = @TOPDIR@/doc/api/doxy-api-index.md \
   @TOPDIR@/lib/cmdline \
   @TOPDIR@/lib/compressdev \
   @TOPDIR@/lib/cryptodev \
+  @TOPDIR@/lib/dmadev \
   @TOPDIR@/lib/distributor \
   @TOPDIR@/lib/efd \
   @TOPDIR@/lib/ethdev \
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 675b573834..3562822b3d 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -62,6 +62,11 @@ New Features
   * Added bus-level parsing of the devargs syntax.
   * Kept compatibility with the legacy syntax as parsing fallback.
 
+* **Added dmadev library support.**
+
+  The dmadev library provides a DMA device framework for management and
+  provision of hardware and software DMA devices.
+
 
 Removed Items
 -
diff --git a/lib/dmadev/meson.build b/lib/dmadev/meson.build
new file mode 100644
index 00..6d5bd85373
--- /dev/null
+++ b/lib/dmadev/meson.build
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 HiSilicon Limited.
+
+headers = files('rte_dmadev.h')
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
new file mode 100644
index 00..76d71615eb
--- /dev/null
+++ b/lib/dmadev/rte_dmadev.h
@@ -0,0 +1,951 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 HiSilicon Limited.
+ * Copyright(c) 2021 Intel Corporation.
+ * Copyright(c) 2021 Marvell International Ltd.
+ * Copyright(c) 2021 SmartShare Systems.
+ */
+
+#ifndef _RTE_DMADEV_H_
+#define _RTE_DMADEV_H_
+
+/**
+ * @file rte_dmadev.h
+ *
+ * RTE DMA (Direct Memory Access) device APIs.
+ *
+ * The DMA framework is built on the following model:
+ *
+ * ---   ---   ---
+ * | virtual DMA |   | virtual DMA |   | virtual DMA |
+ * | channel |   | channel |   | channel |
+ * ---   ---   ---
+ *||  |
+ *--  |
+ * |  |
+ *   
+ *   |  dmadev  ||  dmadev  |
+ *   
+ * |  |
+ *--   --
+ *| HW-DMA-channel |   | HW-DMA-channel |
+ *--   --
+ * |  |
+ * 
+ * |
+ *

[dpdk-dev] [PATCH v21 4/7] dmadev: introduce DMA device library implementation

2021-09-07 Thread Chengwen Feng
This patch introduces the DMA device library implementation, which
includes configuration and I/O with the DMA devices.

Signed-off-by: Chengwen Feng 
Acked-by: Bruce Richardson 
Acked-by: Morten Brørup 
Reviewed-by: Kevin Laatz 
Reviewed-by: Conor Walsh 
---
 config/rte_config.h  |   3 +
 lib/dmadev/meson.build   |   1 +
 lib/dmadev/rte_dmadev.c  | 607 +++
 lib/dmadev/rte_dmadev.h  | 118 ++-
 lib/dmadev/rte_dmadev_core.h |   2 +
 lib/dmadev/version.map   |   1 +
 6 files changed, 720 insertions(+), 12 deletions(-)
 create mode 100644 lib/dmadev/rte_dmadev.c

diff --git a/config/rte_config.h b/config/rte_config.h
index 590903c07d..331a431819 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -81,6 +81,9 @@
 /* rawdev defines */
 #define RTE_RAWDEV_MAX_DEVS 64
 
+/* dmadev defines */
+#define RTE_DMADEV_MAX_DEVS 64
+
 /* ip_fragmentation defines */
 #define RTE_LIBRTE_IP_FRAG_MAX_FRAG 4
 #undef RTE_LIBRTE_IP_FRAG_TBL_STAT
diff --git a/lib/dmadev/meson.build b/lib/dmadev/meson.build
index 833baf7d54..d2fc85e8c7 100644
--- a/lib/dmadev/meson.build
+++ b/lib/dmadev/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2021 HiSilicon Limited.
 
+sources = files('rte_dmadev.c')
 headers = files('rte_dmadev.h')
 indirect_headers += files('rte_dmadev_core.h')
 driver_sdk_headers += files('rte_dmadev_pmd.h')
diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
new file mode 100644
index 00..ee8db9aaca
--- /dev/null
+++ b/lib/dmadev/rte_dmadev.c
@@ -0,0 +1,607 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 HiSilicon Limited.
+ * Copyright(c) 2021 Intel Corporation.
+ */
+
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "rte_dmadev.h"
+#include "rte_dmadev_pmd.h"
+
+struct rte_dmadev rte_dmadevices[RTE_DMADEV_MAX_DEVS];
+
+static const char *mz_rte_dmadev_data = "rte_dmadev_data";
+/* Shared memory between primary and secondary processes. */
+static struct {
+   struct rte_dmadev_data data[RTE_DMADEV_MAX_DEVS];
+} *dmadev_shared_data;
+
+RTE_LOG_REGISTER_DEFAULT(rte_dmadev_logtype, INFO);
+#define RTE_DMADEV_LOG(level, fmt, args...) \
+   rte_log(RTE_LOG_ ## level, rte_dmadev_logtype, "%s(): " fmt "\n", \
+   __func__, ##args)
+
+/* Macros to check for valid device id */
+#define RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, retval) do { \
+   if (!rte_dmadev_is_valid_dev(dev_id)) { \
+   RTE_DMADEV_LOG(ERR, "Invalid dev_id=%u", dev_id); \
+   return retval; \
+   } \
+} while (0)
+
+static int
+dmadev_check_name(const char *name)
+{
+   size_t name_len;
+
+   if (name == NULL) {
+   RTE_DMADEV_LOG(ERR, "Name can't be NULL");
+   return -EINVAL;
+   }
+
+   name_len = strnlen(name, RTE_DMADEV_NAME_MAX_LEN);
+   if (name_len == 0) {
+   RTE_DMADEV_LOG(ERR, "Zero length DMA device name");
+   return -EINVAL;
+   }
+   if (name_len >= RTE_DMADEV_NAME_MAX_LEN) {
+   RTE_DMADEV_LOG(ERR, "DMA device name is too long");
+   return -EINVAL;
+   }
+
+   return 0;
+}
+
+static uint16_t
+dmadev_find_free_dev(void)
+{
+   uint16_t i;
+
+   for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++) {
+   if (dmadev_shared_data->data[i].dev_name[0] == '\0')
+   return i;
+   }
+
+   return RTE_DMADEV_MAX_DEVS;
+}
+
+static struct rte_dmadev*
+dmadev_find(const char *name)
+{
+   uint16_t i;
+
+   for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++) {
+   if ((rte_dmadevices[i].state == RTE_DMADEV_ATTACHED) &&
+   (!strcmp(name, rte_dmadevices[i].data->dev_name)))
+   return &rte_dmadevices[i];
+   }
+
+   return NULL;
+}
+
+static int
+dmadev_shared_data_prepare(void)
+{
+   const struct rte_memzone *mz;
+
+   if (dmadev_shared_data == NULL) {
+   if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+   /* Allocate port data and ownership shared memory. */
+   mz = rte_memzone_reserve(mz_rte_dmadev_data,
+sizeof(*dmadev_shared_data),
+rte_socket_id(), 0);
+   } else
+   mz = rte_memzone_lookup(mz_rte_dmadev_data);
+   if (mz == NULL)
+   return -ENOMEM;
+
+   dmadev_shared_data = mz->addr;
+   if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+   memset(dmadev_shared_data->data, 0,
+  sizeof(dmadev_shared_data->data));
+   }
+
+   return 0;
+}
+
+static struct rte_dmadev *
+dmadev_allocate(const char *name)
+{
+   struct rte_dmadev *dev;
+   uint16_t dev_id;
+
+   dev = dmadev_find(name);
+   if (dev != NULL) {
+   RTE_DMADEV_LOG(

[dpdk-dev] [PATCH v21 3/7] dmadev: introduce DMA device library PMD header

2021-09-07 Thread Chengwen Feng
This patch introduces the DMA device library PMD header, which provides
the driver-facing APIs for a DMA device.

Signed-off-by: Chengwen Feng 
Acked-by: Bruce Richardson 
Acked-by: Morten Brørup 
Reviewed-by: Kevin Laatz 
Reviewed-by: Conor Walsh 
---
 lib/dmadev/meson.build  |  1 +
 lib/dmadev/rte_dmadev.h |  2 ++
 lib/dmadev/rte_dmadev_pmd.h | 72 +
 lib/dmadev/version.map  | 10 ++
 4 files changed, 85 insertions(+)
 create mode 100644 lib/dmadev/rte_dmadev_pmd.h

diff --git a/lib/dmadev/meson.build b/lib/dmadev/meson.build
index f421ec1909..833baf7d54 100644
--- a/lib/dmadev/meson.build
+++ b/lib/dmadev/meson.build
@@ -3,3 +3,4 @@
 
 headers = files('rte_dmadev.h')
 indirect_headers += files('rte_dmadev_core.h')
+driver_sdk_headers += files('rte_dmadev_pmd.h')
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index 76d71615eb..c8dd0009f5 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -730,6 +730,8 @@ struct rte_dmadev_sge {
uint32_t length; /**< The DMA operation length. */
 };
 
+#include "rte_dmadev_core.h"
+
 /* DMA flags to augment operation preparation. */
 #define RTE_DMA_OP_FLAG_FENCE  (1ull << 0)
 /**< DMA fence flag.
diff --git a/lib/dmadev/rte_dmadev_pmd.h b/lib/dmadev/rte_dmadev_pmd.h
new file mode 100644
index 00..45141f9dc1
--- /dev/null
+++ b/lib/dmadev/rte_dmadev_pmd.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 HiSilicon Limited.
+ */
+
+#ifndef _RTE_DMADEV_PMD_H_
+#define _RTE_DMADEV_PMD_H_
+
+/**
+ * @file
+ *
+ * RTE DMA Device PMD APIs
+ *
+ * Driver facing APIs for a DMA device. These are not to be called directly by
+ * any application.
+ */
+
+#include "rte_dmadev.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @internal
+ * Allocates a new dmadev slot for a DMA device and returns the pointer
+ * to that slot for the driver to use.
+ *
+ * @param name
+ *   DMA device name.
+ *
+ * @return
+ *   A pointer to the DMA device slot in case of success,
+ *   NULL otherwise.
+ */
+__rte_internal
+struct rte_dmadev *
+rte_dmadev_pmd_allocate(const char *name);
+
+/**
+ * @internal
+ * Release the specified dmadev.
+ *
+ * @param dev
+ *   Device to be released.
+ *
+ * @return
+ *   - 0 on success, negative on error
+ */
+__rte_internal
+int
+rte_dmadev_pmd_release(struct rte_dmadev *dev);
+
+/**
+ * @internal
+ * Return the DMA device based on the device name.
+ *
+ * @param name
+ *   DMA device name.
+ *
+ * @return
+ *   A pointer to the DMA device slot in case of success,
+ *   NULL otherwise.
+ */
+__rte_internal
+struct rte_dmadev *
+rte_dmadev_get_device_by_name(const char *name);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_DMADEV_PMD_H_ */
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index 2e37882364..d027eeac97 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -22,3 +22,13 @@ EXPERIMENTAL {
 
local: *;
 };
+
+INTERNAL {
+global:
+
+   rte_dmadev_get_device_by_name;
+   rte_dmadev_pmd_allocate;
+   rte_dmadev_pmd_release;
+
+   local: *;
+};
-- 
2.33.0



[dpdk-dev] [PATCH v21 7/7] app/test: add dmadev API test

2021-09-07 Thread Chengwen Feng
This patch adds a dmadev API test based on the 'dma_skeleton' vdev. The
test cases can be executed using the 'dmadev_autotest' command in the
test framework.

Signed-off-by: Chengwen Feng 
Signed-off-by: Bruce Richardson 
Reviewed-by: Kevin Laatz 
Reviewed-by: Conor Walsh 
---
 MAINTAINERS|   1 +
 app/test/meson.build   |   4 +
 app/test/test_dmadev.c |  43 +++
 app/test/test_dmadev_api.c | 543 +
 4 files changed, 591 insertions(+)
 create mode 100644 app/test/test_dmadev.c
 create mode 100644 app/test/test_dmadev_api.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 2b505ce71e..a19a3cb53c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -500,6 +500,7 @@ DMA device API - EXPERIMENTAL
 M: Chengwen Feng 
 F: lib/dmadev/
 F: drivers/dma/skeleton/
+F: app/test/test_dmadev*
 F: doc/guides/prog_guide/dmadev.rst
 
 
diff --git a/app/test/meson.build b/app/test/meson.build
index a7611686ad..9027eba3a4 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -43,6 +43,8 @@ test_sources = files(
 'test_debug.c',
 'test_distributor.c',
 'test_distributor_perf.c',
+'test_dmadev.c',
+'test_dmadev_api.c',
 'test_eal_flags.c',
 'test_eal_fs.c',
 'test_efd.c',
@@ -162,6 +164,7 @@ test_deps = [
 'cmdline',
 'cryptodev',
 'distributor',
+'dmadev',
 'efd',
 'ethdev',
 'eventdev',
@@ -333,6 +336,7 @@ driver_test_names = [
 'cryptodev_sw_mvsam_autotest',
 'cryptodev_sw_snow3g_autotest',
 'cryptodev_sw_zuc_autotest',
+'dmadev_autotest',
 'eventdev_selftest_octeontx',
 'eventdev_selftest_sw',
 'rawdev_autotest',
diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
new file mode 100644
index 00..92c47fc041
--- /dev/null
+++ b/app/test/test_dmadev.c
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 HiSilicon Limited.
+ * Copyright(c) 2021 Intel Corporation.
+ */
+
+#include 
+#include 
+
+#include "test.h"
+
+/* from test_dmadev_api.c */
+extern int test_dmadev_api(uint16_t dev_id);
+
+static int
+test_apis(void)
+{
+   const char *pmd = "dma_skeleton";
+   int id;
+   int ret;
+
+   if (rte_vdev_init(pmd, NULL) < 0)
+   return TEST_SKIPPED;
+   id = rte_dmadev_get_dev_id(pmd);
+   if (id < 0)
+   return TEST_SKIPPED;
+   printf("\n### Test dmadev infrastructure using skeleton driver\n");
+   ret = test_dmadev_api(id);
+   rte_vdev_uninit(pmd);
+
+   return ret;
+}
+
+static int
+test_dmadev(void)
+{
+   /* basic sanity on dmadev infrastructure */
+   if (test_apis() < 0)
+   return -1;
+
+   return 0;
+}
+
+REGISTER_TEST_COMMAND(dmadev_autotest, test_dmadev);
diff --git a/app/test/test_dmadev_api.c b/app/test/test_dmadev_api.c
new file mode 100644
index 00..55046ac485
--- /dev/null
+++ b/app/test/test_dmadev_api.c
@@ -0,0 +1,543 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 HiSilicon Limited.
+ */
+
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+extern int test_dmadev_api(uint16_t dev_id);
+
+#define SKELDMA_TEST_RUN(test) \
+   testsuite_run_test(test, #test)
+
+#define TEST_MEMCPY_SIZE   1024
+#define TEST_WAIT_US_VAL   5
+
+#define TEST_SUCCESS 0
+#define TEST_FAILED  -1
+
+static uint16_t test_dev_id;
+static uint16_t invalid_dev_id;
+
+static char *src;
+static char *dst;
+
+static int total;
+static int passed;
+static int failed;
+
+static int
+testsuite_setup(uint16_t dev_id)
+{
+   test_dev_id = dev_id;
+   invalid_dev_id = RTE_DMADEV_MAX_DEVS;
+
+   src = rte_malloc("dmadev_test_src", TEST_MEMCPY_SIZE, 0);
+   if (src == NULL)
+   return -ENOMEM;
+   dst = rte_malloc("dmadev_test_dst", TEST_MEMCPY_SIZE, 0);
+   if (dst == NULL) {
+   rte_free(src);
+   src = NULL;
+   return -ENOMEM;
+   }
+
+   total = 0;
+   passed = 0;
+   failed = 0;
+
+   /* Set dmadev log level to critical to suppress unnecessary output
+* during API tests.
+*/
+   rte_log_set_level_pattern("lib.dmadev", RTE_LOG_CRIT);
+
+   return 0;
+}
+
+static void
+testsuite_teardown(void)
+{
+   rte_free(src);
+   src = NULL;
+   rte_free(dst);
+   dst = NULL;
+   /* Ensure the dmadev is stopped. */
+   rte_dmadev_stop(test_dev_id);
+
+   rte_log_set_level_pattern("lib.dmadev", RTE_LOG_INFO);
+}
+
+static void
+testsuite_run_test(int (*test)(void), const char *name)
+{
+   int ret = 0;
+
+   if (test) {
+   ret = test();
+   if (ret < 0) {
+   failed++;
+   printf("%s Failed\n", name);
+   } else {
+   passed++;
+   printf("%s Passed\n", name);
+   }
+   }
+
+

[dpdk-dev] [PATCH v21 0/7] support dmadev

2021-09-07 Thread Chengwen Feng
This patch set contains seven patches that add the new dmadev library.

Chengwen Feng (7):
  dmadev: introduce DMA device library public APIs
  dmadev: introduce DMA device library internal header
  dmadev: introduce DMA device library PMD header
  dmadev: introduce DMA device library implementation
  doc: add DMA device library guide
  dma/skeleton: introduce skeleton dmadev driver
  app/test: add dmadev API test

---
v21:
* add comment for reserved fields of struct rte_dmadev.
v20:
* delete unnecessary and duplicate include header files.
* the conf_sz parameter is added to the configure and vchan-setup
  callbacks of the PMD, this is mainly used to enhance ABI
  compatibility.
* the rte_dmadev structure field is rearranged to reserve more space
  for I/O functions.
* fix some ambiguous and unnecessary comments.
* fix the potential memory leak of ut.
* redefine skeldma_init_once to skeldma_count.
* suppress rte_dmadev error output when execute ut.
v19:
* squash maintainer patch to patch #1.
v18:
* RTE_DMA_STATUS_* add BUS_READ/WRITE_ERR, PAGE_FAULT.
* rte_dmadev dataplane API add judge dev_started when debug enable.
* rte_dmadev_start/vchan_setup add judge device configured.
* rte_dmadev_dump support format capability name.
* optimized the comments of rte_dmadev.
* fix skeldma_copy always return zero when enqueue successful.
* log encapsulation macro add newline characters.
* test_dmadev_api support rte_dmadev_dump() ut.

 MAINTAINERS|7 +
 app/test/meson.build   |4 +
 app/test/test_dmadev.c |   43 +
 app/test/test_dmadev_api.c |  543 
 config/rte_config.h|3 +
 doc/api/doxy-api-index.md  |1 +
 doc/api/doxy-api.conf.in   |1 +
 doc/guides/prog_guide/dmadev.rst   |  125 +++
 doc/guides/prog_guide/img/dmadev.svg   |  283 +++
 doc/guides/prog_guide/index.rst|1 +
 doc/guides/rel_notes/release_21_11.rst |5 +
 drivers/dma/meson.build|   11 +
 drivers/dma/skeleton/meson.build   |7 +
 drivers/dma/skeleton/skeleton_dmadev.c |  594 ++
 drivers/dma/skeleton/skeleton_dmadev.h |   61 ++
 drivers/dma/skeleton/version.map   |3 +
 drivers/meson.build|1 +
 lib/dmadev/meson.build |7 +
 lib/dmadev/rte_dmadev.c|  607 ++
 lib/dmadev/rte_dmadev.h| 1047 
 lib/dmadev/rte_dmadev_core.h   |  187 +
 lib/dmadev/rte_dmadev_pmd.h|   72 ++
 lib/dmadev/version.map |   35 +
 lib/meson.build|1 +
 24 files changed, 3649 insertions(+)
 create mode 100644 app/test/test_dmadev.c
 create mode 100644 app/test/test_dmadev_api.c
 create mode 100644 doc/guides/prog_guide/dmadev.rst
 create mode 100644 doc/guides/prog_guide/img/dmadev.svg
 create mode 100644 drivers/dma/meson.build
 create mode 100644 drivers/dma/skeleton/meson.build
 create mode 100644 drivers/dma/skeleton/skeleton_dmadev.c
 create mode 100644 drivers/dma/skeleton/skeleton_dmadev.h
 create mode 100644 drivers/dma/skeleton/version.map
 create mode 100644 lib/dmadev/meson.build
 create mode 100644 lib/dmadev/rte_dmadev.c
 create mode 100644 lib/dmadev/rte_dmadev.h
 create mode 100644 lib/dmadev/rte_dmadev_core.h
 create mode 100644 lib/dmadev/rte_dmadev_pmd.h
 create mode 100644 lib/dmadev/version.map

-- 
2.33.0



[dpdk-dev] [PATCH v21 6/7] dma/skeleton: introduce skeleton dmadev driver

2021-09-07 Thread Chengwen Feng
The skeleton dmadev driver, along the lines of the rawdev skeleton,
showcases the dmadev library.

The skeleton is designed as a virtual device that is plugged into the
VDEV bus on initialization.

This patch also enables compilation of the dmadev skeleton driver.

Signed-off-by: Chengwen Feng 
Reviewed-by: Kevin Laatz 
Reviewed-by: Conor Walsh 
---
 MAINTAINERS|   1 +
 drivers/dma/meson.build|  11 +
 drivers/dma/skeleton/meson.build   |   7 +
 drivers/dma/skeleton/skeleton_dmadev.c | 594 +
 drivers/dma/skeleton/skeleton_dmadev.h |  61 +++
 drivers/dma/skeleton/version.map   |   3 +
 drivers/meson.build|   1 +
 7 files changed, 678 insertions(+)
 create mode 100644 drivers/dma/meson.build
 create mode 100644 drivers/dma/skeleton/meson.build
 create mode 100644 drivers/dma/skeleton/skeleton_dmadev.c
 create mode 100644 drivers/dma/skeleton/skeleton_dmadev.h
 create mode 100644 drivers/dma/skeleton/version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index e237e9406b..2b505ce71e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -499,6 +499,7 @@ F: doc/guides/prog_guide/rawdev.rst
 DMA device API - EXPERIMENTAL
 M: Chengwen Feng 
 F: lib/dmadev/
+F: drivers/dma/skeleton/
 F: doc/guides/prog_guide/dmadev.rst
 
 
diff --git a/drivers/dma/meson.build b/drivers/dma/meson.build
new file mode 100644
index 00..0c2c34cd00
--- /dev/null
+++ b/drivers/dma/meson.build
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 HiSilicon Limited.
+
+if is_windows
+subdir_done()
+endif
+
+drivers = [
+'skeleton',
+]
+std_deps = ['dmadev']
diff --git a/drivers/dma/skeleton/meson.build b/drivers/dma/skeleton/meson.build
new file mode 100644
index 00..27509b1668
--- /dev/null
+++ b/drivers/dma/skeleton/meson.build
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 HiSilicon Limited.
+
+deps += ['dmadev', 'kvargs', 'ring', 'bus_vdev']
+sources = files(
+'skeleton_dmadev.c',
+)
diff --git a/drivers/dma/skeleton/skeleton_dmadev.c b/drivers/dma/skeleton/skeleton_dmadev.c
new file mode 100644
index 00..0cc7e2409f
--- /dev/null
+++ b/drivers/dma/skeleton/skeleton_dmadev.c
@@ -0,0 +1,594 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 HiSilicon Limited.
+ */
+
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "skeleton_dmadev.h"
+
+RTE_LOG_REGISTER_DEFAULT(skeldma_logtype, INFO);
+#define SKELDMA_LOG(level, fmt, args...) \
+   rte_log(RTE_LOG_ ## level, skeldma_logtype, "%s(): " fmt "\n", \
+   __func__, ##args)
+
+/* Count of instances, currently only 1 is supported. */
+static uint16_t skeldma_count;
+
+static int
+skeldma_info_get(const struct rte_dmadev *dev, struct rte_dmadev_info *dev_info,
+uint32_t info_sz)
+{
+#define SKELDMA_MAX_DESC   8192
+#define SKELDMA_MIN_DESC   128
+
+   RTE_SET_USED(dev);
+   RTE_SET_USED(info_sz);
+
+   dev_info->dev_capa = RTE_DMADEV_CAPA_MEM_TO_MEM |
+RTE_DMADEV_CAPA_SVA |
+RTE_DMADEV_CAPA_OPS_COPY;
+   dev_info->max_vchans = 1;
+   dev_info->max_desc = SKELDMA_MAX_DESC;
+   dev_info->min_desc = SKELDMA_MIN_DESC;
+
+   return 0;
+}
+
+static int
+skeldma_configure(struct rte_dmadev *dev, const struct rte_dmadev_conf *conf,
+ uint32_t conf_sz)
+{
+   RTE_SET_USED(dev);
+   RTE_SET_USED(conf);
+   RTE_SET_USED(conf_sz);
+   return 0;
+}
+
+static void *
+cpucopy_thread(void *param)
+{
+#define SLEEP_THRESHOLD1
+#define SLEEP_US_VAL   10
+
+   struct rte_dmadev *dev = (struct rte_dmadev *)param;
+   struct skeldma_hw *hw = dev->dev_private;
+   struct skeldma_desc *desc = NULL;
+   int ret;
+
+   while (!hw->exit_flag) {
+   ret = rte_ring_dequeue(hw->desc_running, (void **)&desc);
+   if (ret) {
+   hw->zero_req_count++;
+   if (hw->zero_req_count > SLEEP_THRESHOLD) {
+   if (hw->zero_req_count == 0)
+   hw->zero_req_count = SLEEP_THRESHOLD;
+   rte_delay_us_sleep(SLEEP_US_VAL);
+   }
+   continue;
+   }
+
+   hw->zero_req_count = 0;
+   rte_memcpy(desc->dst, desc->src, desc->len);
+   hw->completed_count++;
+   (void)rte_ring_enqueue(hw->desc_completed, (void *)desc);
+   }
+
+   return NULL;
+}
+
+static void
+fflush_ring(struct skeldma_hw *hw, struct rte_ring *ring)
+{
+   struct skeldma_desc *desc = NULL;
+   while (rte_ring_count(ring) > 0) {
+   (void)rte_ring_dequeue(ring, (void **)&desc);
+   (void)rte_ring_enqueue(hw->desc_empty, (void *)

[dpdk-dev] [PATCH v21 2/7] dmadev: introduce DMA device library internal header

2021-09-07 Thread Chengwen Feng
This patch introduces the DMA device library internal header, which
contains the internal data types used by the DMA devices in order to
expose their ops to the class.

Signed-off-by: Chengwen Feng 
Acked-by: Bruce Richardson 
Acked-by: Morten Brørup 
Reviewed-by: Kevin Laatz 
Reviewed-by: Conor Walsh 
---
 lib/dmadev/meson.build   |   1 +
 lib/dmadev/rte_dmadev_core.h | 185 +++
 2 files changed, 186 insertions(+)
 create mode 100644 lib/dmadev/rte_dmadev_core.h

diff --git a/lib/dmadev/meson.build b/lib/dmadev/meson.build
index 6d5bd85373..f421ec1909 100644
--- a/lib/dmadev/meson.build
+++ b/lib/dmadev/meson.build
@@ -2,3 +2,4 @@
 # Copyright(c) 2021 HiSilicon Limited.
 
 headers = files('rte_dmadev.h')
+indirect_headers += files('rte_dmadev_core.h')
diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
new file mode 100644
index 00..cbf5e88621
--- /dev/null
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -0,0 +1,185 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 HiSilicon Limited.
+ * Copyright(c) 2021 Intel Corporation.
+ */
+
+#ifndef _RTE_DMADEV_CORE_H_
+#define _RTE_DMADEV_CORE_H_
+
+/**
+ * @file
+ *
+ * RTE DMA Device internal header.
+ *
+ * This header contains internal data types, that are used by the DMA devices
+ * in order to expose their ops to the class.
+ *
+ * Applications should not use these API directly.
+ *
+ */
+
+struct rte_dmadev;
+
+typedef int (*rte_dmadev_info_get_t)(const struct rte_dmadev *dev,
+struct rte_dmadev_info *dev_info,
+uint32_t info_sz);
+/**< @internal Used to get device information of a device. */
+
+typedef int (*rte_dmadev_configure_t)(struct rte_dmadev *dev,
+ const struct rte_dmadev_conf *dev_conf,
+ uint32_t conf_sz);
+/**< @internal Used to configure a device. */
+
+typedef int (*rte_dmadev_start_t)(struct rte_dmadev *dev);
+/**< @internal Used to start a configured device. */
+
+typedef int (*rte_dmadev_stop_t)(struct rte_dmadev *dev);
+/**< @internal Used to stop a configured device. */
+
+typedef int (*rte_dmadev_close_t)(struct rte_dmadev *dev);
+/**< @internal Used to close a configured device. */
+
+typedef int (*rte_dmadev_vchan_setup_t)(struct rte_dmadev *dev, uint16_t vchan,
+   const struct rte_dmadev_vchan_conf *conf,
+   uint32_t conf_sz);
+/**< @internal Used to allocate and set up a virtual DMA channel. */
+
+typedef int (*rte_dmadev_stats_get_t)(const struct rte_dmadev *dev,
+   uint16_t vchan, struct rte_dmadev_stats *stats,
+   uint32_t stats_sz);
+/**< @internal Used to retrieve basic statistics. */
+
+typedef int (*rte_dmadev_stats_reset_t)(struct rte_dmadev *dev, uint16_t vchan);
+/**< @internal Used to reset basic statistics. */
+
+typedef int (*rte_dmadev_dump_t)(const struct rte_dmadev *dev, FILE *f);
+/**< @internal Used to dump internal information. */
+
+typedef int (*rte_dmadev_copy_t)(struct rte_dmadev *dev, uint16_t vchan,
+rte_iova_t src, rte_iova_t dst,
+uint32_t length, uint64_t flags);
+/**< @internal Used to enqueue a copy operation. */
+
+typedef int (*rte_dmadev_copy_sg_t)(struct rte_dmadev *dev, uint16_t vchan,
+   const struct rte_dmadev_sge *src,
+   const struct rte_dmadev_sge *dst,
+   uint16_t nb_src, uint16_t nb_dst,
+   uint64_t flags);
+/**< @internal Used to enqueue a scatter-gather list copy operation. */
+
+typedef int (*rte_dmadev_fill_t)(struct rte_dmadev *dev, uint16_t vchan,
+uint64_t pattern, rte_iova_t dst,
+uint32_t length, uint64_t flags);
+/**< @internal Used to enqueue a fill operation. */
+
+typedef int (*rte_dmadev_submit_t)(struct rte_dmadev *dev, uint16_t vchan);
+/**< @internal Used to trigger hardware to begin working. */
+
+typedef uint16_t (*rte_dmadev_completed_t)(struct rte_dmadev *dev,
+   uint16_t vchan, const uint16_t nb_cpls,
+   uint16_t *last_idx, bool *has_error);
+/**< @internal Used to return number of successful completed operations. */
+
+typedef uint16_t (*rte_dmadev_completed_status_t)(struct rte_dmadev *dev,
+   uint16_t vchan, const uint16_t nb_cpls,
+   uint16_t *last_idx, enum rte_dma_status_code *status);
+/**< @internal Used to return number of completed operations. */
+
+/**
+ * Possible states of a DMA device.
+ */
+enum rte_dmadev_state {
+   RTE_DMADEV_UNUSED = 0,
+   /**< Device is unused before being probed. */
+   RTE_DMADEV_ATTACHED,
+   /**< Device is attached when allocated in probing. */
+};
+

[dpdk-dev] [PATCH v21 5/7] doc: add DMA device library guide

2021-09-07 Thread Chengwen Feng
This patch adds the dmadev library guide.

Signed-off-by: Chengwen Feng 
Acked-by: Conor Walsh 
Reviewed-by: Kevin Laatz 
---
 MAINTAINERS  |   1 +
 doc/guides/prog_guide/dmadev.rst | 125 
 doc/guides/prog_guide/img/dmadev.svg | 283 +++
 doc/guides/prog_guide/index.rst  |   1 +
 4 files changed, 410 insertions(+)
 create mode 100644 doc/guides/prog_guide/dmadev.rst
 create mode 100644 doc/guides/prog_guide/img/dmadev.svg

diff --git a/MAINTAINERS b/MAINTAINERS
index 9885cc56b7..e237e9406b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -499,6 +499,7 @@ F: doc/guides/prog_guide/rawdev.rst
 DMA device API - EXPERIMENTAL
 M: Chengwen Feng 
 F: lib/dmadev/
+F: doc/guides/prog_guide/dmadev.rst
 
 
 Memory Pool Drivers
diff --git a/doc/guides/prog_guide/dmadev.rst b/doc/guides/prog_guide/dmadev.rst
new file mode 100644
index 00..e47a164850
--- /dev/null
+++ b/doc/guides/prog_guide/dmadev.rst
@@ -0,0 +1,125 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright 2021 HiSilicon Limited
+
+DMA Device Library
+
+
+The DMA library provides a DMA device framework for management and provisioning
+of hardware and software DMA poll mode drivers, defining generic APIs which
+support a number of different DMA operations.
+
+
+Design Principles
+-
+
+The DMA library follows the same basic principles as those used in DPDK's
+Ethernet Device framework and the RegEx framework. The DMA framework provides
+a generic DMA device framework which supports both physical (hardware)
+and virtual (software) DMA devices as well as a generic DMA API which allows
+DMA devices to be managed and configured and supports DMA operations to be
+provisioned on DMA poll mode driver.
+
+.. _figure_dmadev:
+
+.. figure:: img/dmadev.*
+
+The above figure shows the model on which the DMA framework is built:
+
+ * The DMA controller could have multiple hardware DMA channels (aka. hardware
+   DMA queues); each hardware DMA channel should be represented by a dmadev.
+ * The dmadev could create multiple virtual DMA channels, each virtual DMA
+   channel represents a different transfer context. The DMA operation request
+   must be submitted to the virtual DMA channel. e.g. Application could create
+   virtual DMA channel 0 for memory-to-memory transfer scenario, and create
+   virtual DMA channel 1 for memory-to-device transfer scenario.
+
+
+Device Management
+-
+
+Device Creation
+~~~~~~~~~~~~~~~
+
+Physical DMA controllers are discovered during the PCI probe/enumeration of the
+EAL function which is executed at DPDK initialization; this is based on their
+PCI BDF (bus/bridge, device, function). Specific physical DMA controllers, like
+other physical devices in DPDK, can be listed using the EAL command line
+options.
+
+The dmadevs are dynamically allocated by using the API
+``rte_dmadev_pmd_allocate`` based on the number of hardware DMA channels.
+
+
+Device Identification
+~
+
+Each DMA device, whether physical or virtual, is uniquely designated by two
+identifiers:
+
+- A unique device index used to designate the DMA device in all functions
+  exported by the DMA API.
+
+- A device name used to designate the DMA device in console messages, for
+  administration or debugging purposes.
+
+
+Device Configuration
+~~~~~~~~~~~~~~~~~~~~
+
+The ``rte_dmadev_configure`` API is used to configure a DMA device.
+
+.. code-block:: c
+
+   int rte_dmadev_configure(uint16_t dev_id,
+const struct rte_dmadev_conf *dev_conf);
+
+The ``rte_dmadev_conf`` structure is used to pass the configuration parameters
+for the DMA device, for example the number of virtual DMA channels to set up
+and an indication of whether to enable silent mode.
+
+
+Configuration of Virtual DMA Channels
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``rte_dmadev_vchan_setup`` API is used to configure a virtual DMA channel.
+
+.. code-block:: c
+
+   int rte_dmadev_vchan_setup(uint16_t dev_id, uint16_t vchan,
+  const struct rte_dmadev_vchan_conf *conf);
+
+The ``rte_dmadev_vchan_conf`` structure is used to pass the configuration
+parameters for the virtual DMA channel, for example the transfer direction,
+the number of descriptors for the virtual DMA channel, and the source and
+destination device access port parameters.
+
+
+Device Features and Capabilities
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DMA devices may support different feature sets. The ``rte_dmadev_info_get`` API
+can be used to get the device info and supported features.
+
+Silent mode is a special device capability which does not require the
+application to invoke dequeue APIs.
+
+
+Enqueue / Dequeue APIs
+~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueue APIs such as ``rte_dmadev_copy`` and ``rte_dmadev_fill`` can be used to
+enqueue operations to hardware. If an enqueue is successful, a ``ring_idx`` is
+returned. This ``ring_idx`` can be used by applicat

Re: [dpdk-dev] [PATCH v20 2/7] dmadev: introduce DMA device library internal header

2021-09-07 Thread fengchengwen
Already fixed in v21, thanks.

On 2021/9/6 21:35, Bruce Richardson wrote:
> On Sat, Sep 04, 2021 at 06:10:22PM +0800, Chengwen Feng wrote:
>> This patch introduce DMA device library internal header, which contains
>> internal data types that are used by the DMA devices in order to expose
>> their ops to the class.
>>
>> Signed-off-by: Chengwen Feng 
>> Acked-by: Bruce Richardson 
>> Acked-by: Morten Brørup 
>> Reviewed-by: Kevin Laatz 
>> Reviewed-by: Conor Walsh 
>> ---
> 
>> +struct rte_dmadev {
>> +void *dev_private;
>> +/**< PMD-specific private data.
>> + *
>> + * - In the primary process, after the dmadev is allocated by
>> + * rte_dmadev_pmd_allocate(), the PCI/SoC device probing should
>> + * initialize this field, and copy its value to the 'dev_private'
>> + * field of 'struct rte_dmadev_data', which is pointed to by the 'data' field.
>> + *
>> + * - In the secondary process, the dmadev framework will initialize this
>> + * field by copying from the 'dev_private' field of 'struct rte_dmadev_data',
>> + * which was initialized by the primary process.
>> + *
>> + * @note It is the primary process's responsibility to deinitialize this
>> + * field after invoking rte_dmadev_pmd_release() in the PCI/SoC device
>> + * removal stage.
>> + */
>> +rte_dmadev_copy_t copy;
>> +rte_dmadev_copy_sg_t  copy_sg;
>> +rte_dmadev_fill_t fill;
>> +rte_dmadev_submit_t   submit;
>> +rte_dmadev_completed_tcompleted;
>> +rte_dmadev_completed_status_t completed_status;
>> +void *reserved_ptr[7]; /**< Reserved for future IO function. */
> 
> This is new in this set, I think. I assume that 7 was chosen so that we
> have the "data" pointer and the "dev_ops" pointers on the second cacheline
> (if 64-byte CLs)? If so, I wonder if we can find a good way to express that
> in the code or in the comments?
> 
> The simplest - and probably as clear as any - is to split this into
> "void *__reserved_cl0" and "void *__reserved_cl1[6]" to show that it is
> split across the two cachelines, with the latter having comment:
> "Reserve space for future IO functions, while keeping data and dev_ops
> pointers on the second cacheline"
> 
> If we don't mind using a slightly different type the magic "6" could be
> changed to a computation:
> char __reserved_cl1[RTE_CACHELINE_SZ - sizeof(void *) * 2];
> 
> However, for simplicity, I think the magic 6 can be kept, and just split
> into reserved_cl0 and reserved_cl1 as I suggest above.
> 
> /Bruce
> 
> .
> 


Re: [dpdk-dev] [PATCH v17] app/testpmd: support multi-process

2021-09-07 Thread Ferruh Yigit
On 8/25/2021 3:06 AM, Min Hu (Connor) wrote:
> This patch adds multi-process support for testpmd.
> For example the following commands run two testpmd
> processes:
> 
>  * the primary process:
> 
> ./dpdk-testpmd --proc-type=auto -l 0-1 -- -i \
>--rxq=4 --txq=4 --num-procs=2 --proc-id=0
> 
>  * the secondary process:
> 
> ./dpdk-testpmd --proc-type=auto -l 2-3 -- -i \
>--rxq=4 --txq=4 --num-procs=2 --proc-id=1
> 
> Signed-off-by: Min Hu (Connor) 
> Signed-off-by: Lijun Ou 
> Signed-off-by: Andrew Rybchenko 
> Acked-by: Xiaoyun Li 
> Acked-by: Ajit Khaparde 
> Reviewed-by: Ferruh Yigit 
> Acked-by: Aman Deep Singh 

Applied to dpdk-next-net/main, thanks.


Thanks Connor for the work.
I won't be surprised if we need a few more tweaks/fixes for the testpmd
secondary process support, but right now there is no blocker issue, so
proceeding with the patch. Please continue your support for further required
changes.

