RE: [EXT] Re: [dpdk-dev] [PATCH 22.02 2/2] net/cnxk: add devargs for configuring SDP channel mask

2022-01-11 Thread Satheesh Paul
Hi,

Please find reply inline.

Thanks,
Satheesh.

-Original Message-
From: Ferruh Yigit  
Sent: 11 January 2022 05:26 PM
To: Satheesh Paul ; Nithin Kumar Dabilpuram 
; Kiran Kumar Kokkilagadda ; 
Sunil Kumar Kori ; Satha Koteswara Rao Kottidi 

Cc: dev@dpdk.org; Ori Kam ; Andrew Rybchenko 

Subject: [EXT] Re: [dpdk-dev] [PATCH 22.02 2/2] net/cnxk: add devargs for 
configuring SDP channel mask

External Email

--
On 11/9/2021 9:42 AM, psathe...@marvell.com wrote:
> From: Satheesh Paul 
> 
> This patch adds support to configure channel mask which will be used 
> by rte flow when adding flow rules on SDP interfaces.
> 

>Hi Satheesh,

>+ Ori & Andrew.

>What does 'SDP' stand for?
It stands for "System DMA Packet Interface". It is used when the system acts as
a PCIe endpoint: for instance, an x86 host can have an Octeon TX* board plugged
in over this PCIe interface, and packets are transferred across it.

>And can this new devarg be provided with the flow rule? Why does it need to be
>a new devarg?
SDP and its channel-related information are specific to the hardware, and the
rte_flow API cannot be extended to carry them. Hence, it is added as a new
devarg.

>Can you please give a sample of the rte flow API that will be used?
This channel mask is used by the rte_flow_create() API, but it is transparent
at the point of invocation: the user does not supply any additional information
when calling rte_flow_create(). Internally, the driver's flow-creation path
takes the SDP channel/mask value supplied at startup and applies it. In
Octeon TX*, each interface has a "channel identifier" number, and the rules in
the packet classification hardware are configured to match on that channel
number. With this change, we relax the exact match and allow a range of
channels for the SDP interface.
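
To illustrate the relaxation (a hand-written sketch, not the actual cnxk driver
code): an exact channel compare becomes a masked compare, with the channel/mask
pair taken from the sdp_channel_mask devarg.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical helper: 'pkt_chan' is the channel a packet arrived on;
     * 'sdp_chan' and 'sdp_mask' come from the sdp_channel_mask devarg. */
    static bool
    sdp_channel_match(uint16_t pkt_chan, uint16_t sdp_chan, uint16_t sdp_mask)
    {
            /* Exact match would be pkt_chan == sdp_chan. With the mask, only
             * the bits set in sdp_mask must agree, so 0x700/0xf00 accepts the
             * whole range 0x700-0x7ff. */
            return (pkt_chan & sdp_mask) == (sdp_chan & sdp_mask);
    }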

Thanks,
ferruh


> Signed-off-by: Satheesh Paul 
> ---
>   doc/guides/nics/cnxk.rst   | 21 ++
>   drivers/net/cnxk/cnxk_ethdev_devargs.c | 40 --
>   2 files changed, 59 insertions(+), 2 deletions(-)
> 
> diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
> index 837ffc02b4..470e01b811 100644
> --- a/doc/guides/nics/cnxk.rst
> +++ b/doc/guides/nics/cnxk.rst
> @@ -276,6 +276,27 @@ Runtime Config Options
>  set with this custom mask, inbound encrypted traffic from all ports with
>  matching channel number pattern will be directed to the inline IPSec device.
>   
> +- ``SDP device channel and mask`` (default ``none``)
> +   Set channel and channel mask configuration for the SDP device. This
> +   will be used when creating flow rules on the SDP device.
> +
> +   By default, for rules created on the SDP device, the RTE Flow API sets the
> +   channel number and mask to cover the entire SDP channel range in the channel
> +   field of the MCAM entry. This behaviour can be modified using the
> +   ``sdp_channel_mask`` ``devargs`` parameter.
> +
> +   For example::
> +
> +  -a 0002:1d:00.0,sdp_channel_mask=0x700/0xf00
> +
> +   With the above configuration, RTE Flow rules API will set the channel
> +   and channel mask as 0x700 and 0xF00 in the MCAM entries of the  flow rules
> +   created on the SDP device. This option needs to be used when more than one
> +   SDP interface is in use and RTE Flow rules created need to distinguish
> +   between traffic from each SDP interface. The channel and mask combination
> +   specified should match all the channels(or rings) configured on the SDP
> +   interface.
> +
>   .. note::
>   

<...>
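
As an aside on the devarg format above, a minimal sketch of splitting a
"0x700/0xf00"-style value into its channel and mask fields (illustrative only;
the actual parsing lives in cnxk_ethdev_devargs.c):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical parser sketch; returns 0 on success, -1 on bad input. */
    static int
    parse_sdp_channel_mask(const char *value, uint16_t *chan, uint16_t *mask)
    {
            unsigned int c, m;

            if (sscanf(value, "%x/%x", &c, &m) != 2)
                    return -1;
            *chan = (uint16_t)c;
            *mask = (uint16_t)m;
            return 0;
    }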


Re: [dpdk-dev] [PATCH] app/flow-perf: added option to support flow priority

2021-11-02 Thread Satheesh Paul
Hi,

Thanks for the review. Please find the reply inline.

-Original Message-
From: Wisam Monther  
Sent: 01 November 2021 02:37 PM
To: Satheesh Paul 
Cc: dev@dpdk.org; Asaf Penso 
Subject: [EXT] RE: [dpdk-dev] [PATCH] app/flow-perf: added option to support 
flow priority

External Email

--
Hi,

Please see my inline comments, thanks 😊 

> -Original Message-
> From: psathe...@marvell.com 
> Sent: Friday, October 29, 2021 7:09 AM
> To: Wisam Monther 
> Cc: dev@dpdk.org; Satheesh Paul 
> Subject: [dpdk-dev] [PATCH] app/flow-perf: added option to support 
> flow priority
> 
> From: Satheesh Paul 
> 
> Added support to specify a maximum flow priority option. When this 
> option is given, each flow will be created with the priority attribute 
> and the priority will be random between 0 and max-flow-priority. This is 
> useful to measure performance on NICs which may have to rearrange flows 
> to honor flow priority.
> 
> Removed the lower limit of 10 flows per batch.
> 
> Signed-off-by: Satheesh Paul 
> ---
>  app/test-flow-perf/flow_gen.c  |  7 --
>  app/test-flow-perf/flow_gen.h  |  1 +
>  app/test-flow-perf/main.c  | 44 +-
>  doc/guides/tools/flow-perf.rst |  5 
>  4 files changed, 38 insertions(+), 19 deletions(-)
> 
> diff --git a/app/test-flow-perf/flow_gen.c b/app/test-flow-perf/flow_gen.c
> index 51871dbfdc..414ec0bc5e 100644
> --- a/app/test-flow-perf/flow_gen.c
> +++ b/app/test-flow-perf/flow_gen.c
> @@ -18,7 +18,7 @@
> 
>  static void
>  fill_attributes(struct rte_flow_attr *attr,
> - uint64_t *flow_attrs, uint16_t group)
> + uint64_t *flow_attrs, uint16_t group, uint8_t max_priority)
>  {
>   uint8_t i;
>   for (i = 0; i < MAX_ATTRS_NUM; i++) {
> @@ -32,6 +32,8 @@ fill_attributes(struct rte_flow_attr *attr,
>   attr->transfer = 1;
>   }
>   attr->group = group;
> + if (max_priority)
> + attr->priority = rte_rand_max(max_priority);

No need for this condition; since the variable is static and defaults to zero,
you can always set it unconditionally.

Ack.

>  }
> 
>  struct rte_flow *
> @@ -48,6 +50,7 @@ generate_flow(uint16_t port_id,
>   uint8_t core_idx,
>   uint8_t rx_queues_count,
>   bool unique_data,
> + uint8_t max_priority,

--max-priority=N is good, but to reflect its exact usage I think we need 
something like:
--random-priority=lower_hand-upper_hand
which picks a random priority from lower to upper.

The spec says priority 0 must be supported. Since that already fixes one side 
of the interval, it may not be useful to take "lower_hand" as input.
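
For illustration, a minimal sketch of how a range-based option could be applied
(hypothetical prio_lower/prio_upper parsed from a --random-priority=lower-upper
argument, assuming prio_lower <= prio_upper):

    #include <stdint.h>

    #include <rte_flow.h>
    #include <rte_random.h>

    /* Hypothetical bounds from a --random-priority=lower-upper argument. */
    static uint8_t prio_lower;
    static uint8_t prio_upper;

    static void
    fill_priority(struct rte_flow_attr *attr)
    {
            /* rte_rand_max(n) returns a pseudo-random value in [0, n), so
             * this picks a priority uniformly in [prio_lower, prio_upper]. */
            attr->priority = prio_lower +
                             rte_rand_max((uint64_t)prio_upper - prio_lower + 1);
    }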

>   struct rte_flow_error *error)
>  {
>   struct rte_flow_attr attr;
> @@ -59,7 +62,7 @@ generate_flow(uint16_t port_id,
>   memset(actions, 0, sizeof(actions));
>   memset(&attr, 0, sizeof(struct rte_flow_attr));
> 
> - fill_attributes(&attr, flow_attrs, group);
> + fill_attributes(&attr, flow_attrs, group, max_priority);
> 
>   fill_actions(actions, flow_actions,
>   outer_ip_src, next_table, hairpinq,
> diff --git a/app/test-flow-perf/flow_gen.h b/app/test-flow-perf/flow_gen.h
> index 1118a9fc14..40eeceae6e 100644
> --- a/app/test-flow-perf/flow_gen.h
> +++ b/app/test-flow-perf/flow_gen.h
> @@ -37,6 +37,7 @@ generate_flow(uint16_t port_id,
>   uint8_t core_idx,
>   uint8_t rx_queues_count,
>   bool unique_data,
> + uint8_t max_priority,
>   struct rte_flow_error *error);
> 
>  #endif /* FLOW_PERF_FLOW_GEN */
> diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c 
> index 3ebc025fb2..1d91f308fd 100644
> --- a/app/test-flow-perf/main.c
> +++ b/app/test-flow-perf/main.c
> @@ -77,6 +77,7 @@ static uint32_t rules_count;
>  static uint32_t rules_batch;
>  static uint32_t hairpin_queues_num; /* total hairpin q number - default: 0 */
>  static uint32_t nb_lcores;
> +static uint8_t max_priority;
> 
>  #define MAX_PKT_BURST    32
>  #define LCORE_MODE_PKT   1
> @@ -140,6 +141,7 @@ usage(char *progname)
>   printf("  --enable-fwd: To enable packets forwarding"
>   " after insertion\n");
>   printf("  --portmask=N: hexadecimal bitmask of ports used\n");
> + printf("  --max-priority=N: Maximum priority level for flows\n");

It also assigns random values, so the help text needs to mention that the 
priority will be randomized as well.

Ack.

>   printf("  --unique-data: flag to set using unique data for all"
>   " actions that support data, such as header modify and encap 
> actions\n");
> 
> @@ -589,

Re: [dpdk-dev] [EXT] Re: [PATCH] common/cnxk: add ROC API to merge base steering rule

2021-08-31 Thread Satheesh Paul
Please find reply inline.

-Original Message-
From: Kinsella, Ray  
Sent: 31 August 2021 09:11 PM
To: Satheesh Paul ; Nithin Kumar Dabilpuram 
; Kiran Kumar Kokkilagadda ; 
Sunil Kumar Kori ; Satha Koteswara Rao Kottidi 

Cc: dev@dpdk.org
Subject: [EXT] Re: [dpdk-dev] [PATCH] common/cnxk: add ROC API to merge base 
steering rule

External Email

--


On 31/08/2021 05:16, psathe...@marvell.com wrote:
> From: Satheesh Paul 
> 
> This patch adds an ROC API to merge base steering rule with rules 
> added by VF.
> 
> Signed-off-by: Satheesh Paul 
> Reviewed-by: Kiran Kumar Kokkilagadda 
> ---
>  drivers/common/cnxk/roc_npc.c   | 27 +++
>  drivers/common/cnxk/roc_npc.h   |  5 ++---
>  drivers/common/cnxk/version.map |  1 +
>  3 files changed, 30 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/common/cnxk/roc_npc.c b/drivers/common/cnxk/roc_npc.c
> index aff4eef554..53074bed99 100644
> --- a/drivers/common/cnxk/roc_npc.c
> +++ b/drivers/common/cnxk/roc_npc.c
> @@ -1136,3 +1136,30 @@ roc_npc_flow_dump(FILE *file, struct roc_npc *roc_npc)
>   }
>   }
>  }
> +
> +int
> +roc_npc_mcam_merge_base_steering_rule(struct roc_npc *roc_npc,
> +   struct roc_npc_flow *flow)
> +{
> + struct npc_mcam_read_base_rule_rsp *base_rule_rsp;
> + struct npc *npc = roc_npc_to_npc_priv(roc_npc);
> + struct mcam_entry *base_entry;
> + int idx, rc;
> +
> + if (roc_nix_is_pf(roc_npc->roc_nix))
> + return 0;
> +
> + (void)mbox_alloc_msg_npc_read_base_steer_rule(npc->mbox);
> + rc = mbox_process_msg(npc->mbox, (void *)&base_rule_rsp);
> + if (rc) {
> + plt_err("Failed to fetch VF's base MCAM entry");
> + return rc;
> + }
> + base_entry = &base_rule_rsp->entry_data;
> + for (idx = 0; idx < ROC_NPC_MAX_MCAM_WIDTH_DWORDS; idx++) {
> + flow->mcam_data[idx] |= base_entry->kw[idx];
> + flow->mcam_mask[idx] |= base_entry->kw_mask[idx];
> + }
> +
> + return 0;
> +}
> diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h
> index bab25fd72e..1f9d29e2dd 100644
> --- a/drivers/common/cnxk/roc_npc.h
> +++ b/drivers/common/cnxk/roc_npc.h
> @@ -215,15 +215,12 @@ int __roc_api roc_npc_flow_parse(struct roc_npc 
> *roc_npc,
>const struct roc_npc_action actions[],
>struct roc_npc_flow *flow);
>  int __roc_api roc_npc_get_low_priority_mcam(struct roc_npc *roc_npc);
> -
>  int __roc_api roc_npc_mcam_free_counter(struct roc_npc *roc_npc,
>   uint16_t ctr_id);
> -
>  int __roc_api roc_npc_mcam_read_counter(struct roc_npc *roc_npc,
>  					 uint32_t ctr_id, uint64_t *count);
>  int __roc_api roc_npc_mcam_clear_counter(struct roc_npc *roc_npc,
>  					  uint32_t ctr_id);
> -
>  int __roc_api roc_npc_mcam_free_all_resources(struct roc_npc *roc_npc);
>  void __roc_api roc_npc_flow_dump(FILE *file, struct roc_npc *roc_npc);
>  void __roc_api roc_npc_flow_mcam_dump(FILE *file, struct roc_npc *roc_npc,
> @@ -234,4 +231,6 @@ int __roc_api roc_npc_mark_actions_sub_return(struct roc_npc *roc_npc,
>  int __roc_api roc_npc_vtag_actions_get(struct roc_npc *roc_npc);
>  int __roc_api roc_npc_vtag_actions_sub_return(struct roc_npc *roc_npc,
>  					       uint32_t count);
> +int __roc_api roc_npc_mcam_merge_base_steering_rule(struct roc_npc *roc_npc,
> + struct roc_npc_flow *flow);

> Missing __rte_internal ?

__roc_api is defined as __rte_internal in "drivers/common/cnxk/roc_platform.h". 
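
That is, the export marker already carries the internal tag, roughly:

    /* In drivers/common/cnxk/roc_platform.h (per the note above): */
    #define __roc_api __rte_internal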

>  #endif /* _ROC_NPC_H_ */
> diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
> index 2cbcc4b93a..13231fcf04 100644
> --- a/drivers/common/cnxk/version.map
> +++ b/drivers/common/cnxk/version.map
> @@ -234,6 +234,7 @@ INTERNAL {
>   roc_npc_mcam_free_all_resources;
>   roc_npc_mcam_free_counter;
>   roc_npc_mcam_free_entry;
> + roc_npc_mcam_merge_base_steering_rule;
>   roc_npc_mcam_write_entry;
>   roc_npc_mcam_read_counter;
>   roc_npc_profile_name_get;
> 


RE: [dpdk-dev] [PATCH v3] examples/ipsec-secgw: support more flow patterns and actions

2022-06-17 Thread Satheesh Paul Antonysamy
Hi,

Please find reply inline.

Thanks,
Satheesh.

-Original Message-
From: Zhang, Roy Fan  
Sent: 17 June 2022 03:22 PM
To: Satheesh Paul Antonysamy ; Nicolau, Radu 
; Akhil Goyal 
Cc: dev@dpdk.org
Subject: [EXT] RE: [dpdk-dev] [PATCH v3] examples/ipsec-secgw: support more 
flow patterns and actions

External Email

--
Hi,

> -Original Message-
> From: psathe...@marvell.com 
> Sent: Friday, June 3, 2022 4:17 AM
> To: Nicolau, Radu ; Akhil Goyal 
> 
> Cc: dev@dpdk.org; Satheesh Paul 
> Subject: [dpdk-dev] [PATCH v3] examples/ipsec-secgw: support more flow 
> patterns and actions
> 
> From: Satheesh Paul 
> 
> Added support to create flow rules with count, mark and security 
> actions and mark pattern.
> 
> Signed-off-by: Satheesh Paul 
> ---



>  .. code-block:: console
> 
> -flow 
> -
> +flow\
> +   
> 
>  where each options means:
> 
> +
> +
> + * Set RTE_FLOW_ITEM_TYPE_MARK pattern item with the given mark value.
> +
> + * Optional: Yes, this pattern is not set by default.
> +
> + * Syntax: *mark X*
> +



> +
> +
> +
> + * Set RTE_FLOW_ACTION_TYPE_MARK action with the given mark value.
> +
> + * Optional: yes, this action is not set by default.
> +
> + * Syntax: *set_mark X*
> +
>  Example flow rules:

> I feel "mark" and "set_mark" are duplicated?
> From the implementation below it looks there are slight difference in between 
> But we may need better description for both.

Ack. I have added some more description and sent v4 patch.
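
For reference, the difference sketched with the generic rte_flow structures
(illustrative only, not the ipsec-secgw parser itself): "mark X" adds an
RTE_FLOW_ITEM_TYPE_MARK pattern item that matches packets already carrying
mark X, while "set_mark X" adds an RTE_FLOW_ACTION_TYPE_MARK action that writes
mark X into matching packets.

    #include <rte_flow.h>

    /* "mark 123": match packets already carrying mark 123. */
    static const struct rte_flow_item_mark mark_spec = { .id = 123 };
    static const struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_MARK, .spec = &mark_spec },
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    /* "set_mark 123": write mark 123 into packets that hit the rule. */
    static const struct rte_flow_action_mark mark_conf = { .id = 123 };
    static const struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark_conf },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };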

> 
>  .. code-block:: console
> @@ -948,6 +988,18 @@ Example flow rules:
> 
>  flow ipv6 dst :::::::/116 port 1 
> queue 0
> 
> +flow mark 123 ipv4 dst 192.168.0.0/16 port 0 queue 0 count
> +
> +flow eth ipv4 dst 192.168.0.0/16 port 0 queue 0 count
> +
> +flow ipv4 dst 192.168.0.0/16 port 0 queue 0 count
> +
> +flow ipv4 dst 192.168.0.0/16 port 0 queue 0
> +
> +flow port 0 security set_mark 123
> +
> +flow ipv4 dst 1.1.0.0/16 port 0 count set_mark 123 security
> +
> 
>  Neighbour rule syntax
>  ^
> diff --git a/examples/ipsec-secgw/flow.c b/examples/ipsec-secgw/flow.c 
> index 1a1ec7861c..2088876999 100644
> --- a/examples/ipsec-secgw/flow.c
> +++ b/examples/ipsec-secgw/flow.c
> @@ -15,7 +15,9 @@
>  #define FLOW_RULES_MAX 128
> 
>  struct flow_rule_entry {
> + uint8_t is_eth;
>   uint8_t is_ipv4;
> + uint8_t is_ipv6;
>   RTE_STD_C11
>   union {
>   struct {
> @@ -27,8 +29,15 @@ struct flow_rule_entry {
>   struct rte_flow_item_ipv6 mask;
>   } ipv6;
>   };
> + struct rte_flow_item_mark mark_val;
>   uint16_t port;
>   uint16_t queue;
> + bool is_queue_set;
> + bool enable_count;
> + bool enable_mark;
> + bool set_security_action;
> + bool set_mark_action;
> + uint32_t mark_action_val;
>   struct rte_flow *flow;
>  } flow_rule_tbl[FLOW_RULES_MAX];
> 
> @@ -64,8 +73,9 @@ ipv4_addr_cpy(rte_be32_t *spec, rte_be32_t *mask, 
> char *token,
>   memcpy(mask, &rte_flow_item_ipv4_mask.hdr.src_addr, sizeof(ip));
> 
>   *spec = ip.s_addr;
> +
>   if (depth < 32)
> - *mask = *mask << (32-depth);
> + *mask = htonl(*mask << (32 - depth));
> 
>   return 0;
>  }
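
A quick worked example of why the htonl() matters here (the spec/mask fields of
struct rte_flow_item_ipv4 are big-endian rte_be32_t); assuming depth = 16 on a
little-endian host:

    /* depth = 16:
     *   host-order mask:        0xFFFFFFFF << (32 - 16) = 0xFFFF0000
     *   stored without htonl(): bytes 00 00 FF FF -> read back as 0.0.255.255
     *   stored with htonl():    bytes FF FF 00 00 -> read back as 255.255.0.0
     */
    *mask = htonl(*mask << (32 - depth));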
> @@ -124,7 +134,7 @@ parse_flow_tokens(char **tokens, uint32_t n_tokens,
> struct parse_status *status)
>  {
>   struct flow_rule_entry *rule;
> - uint32_t ti;
> + uint32_t ti = 0;
> 
>   if (nb_flow_rule >= FLOW_RULES_MAX) {
>   printf("Too many flow rules\n");
> @@ -134,49 +144,73 @@ parse_flow_tokens(char **tokens, uint32_t 
> n_tokens,
>   rule = &flow_rule_tbl[nb_flow_rule];
>   memset(rule, 0, sizeof(*rule));
> 
> - if (strcmp(tokens[0], "ipv4") == 0) {
> - rule->is_ipv4 = 1;
> - } else if (strcmp(tokens[0], "ipv6") == 0) {
> - rule->is_ipv4 = 0;
> - } else {
> - APP_CHECK(0, status, "unrecognized input \"%s\"", tokens[0]);
> - return;
> - }
> -
> - for (ti = 1; ti < n_tokens; ti++) {
> - if (strcmp(tokens[ti], "src") == 0) {
> + for (ti = 0; ti < n_tokens; ti++) {
> + if (strcmp(tokens[ti], "mark") == 0) {
>   INCREMENT_TOKEN_INDEX(ti, n_tokens, status);
> + 

RE: [EXTERNAL] [dpdk-dev] [PATCH v3 2/2] net/cnxk: support rte flow on cn20k

2025-01-22 Thread Satheesh Paul Antonysamy


-Original Message-
From: Jerin Jacob  
Sent: Wednesday, January 22, 2025 6:04 PM
To: Satheesh Paul Antonysamy ; Nithin Kumar Dabilpuram 
; Kiran Kumar Kokkilagadda ; 
Sunil Kumar Kori ; Satha Koteswara Rao Kottidi 
; Harman Kalra 
Cc: dev@dpdk.org; Satheesh Paul Antonysamy 
Subject: RE: [EXTERNAL] [dpdk-dev] [PATCH v3 2/2] net/cnxk: support rte flow on 
cn20k



> -Original Message-
> From: psathe...@marvell.com 
> Sent: Tuesday, November 12, 2024 3:29 PM
> To: Nithin Kumar Dabilpuram ; Kiran Kumar 
> Kokkilagadda ; Sunil Kumar Kori 
> ; Satha Koteswara Rao Kottidi 
> ; Harman Kalra 
> Cc: dev@dpdk.org; Satheesh Paul Antonysamy 
> Subject: [EXTERNAL] [dpdk-dev] [PATCH v3 2/2] net/cnxk: support rte 
> flow on cn20k
> 
> From: Satheesh Paul 
> 
> Support for rte flow in cn20k.
> 
> Signed-off-by: Satheesh Paul 
> Reviewed-by: Kiran Kumar K 
> ---
>  drivers/net/cnxk/cn10k_flow.c | 227 ++---


1) Fix https://mails.dpdk.org/archives/test-report/2024-November/823929.html

This checkpatch warning is a false alarm.

2) Please rebase t
Ack.

[for-main]dell[dpdk-next-net-mrvl] $ git pw series apply 33903   
Failed to apply patch:
Applying: common/cnxk: support NPC flow on cn20k
Using index info to reconstruct a base tree...
M   drivers/common/cnxk/roc_mbox.h
M   drivers/common/cnxk/roc_npc.h
M   drivers/common/cnxk/roc_npc_mcam_dump.c
M   drivers/common/cnxk/version.map
Falling back to patching base and 3-way merge...
Auto-merging drivers/common/cnxk/version.map
Auto-merging drivers/common/cnxk/roc_npc_mcam_dump.c
CONFLICT (content): Merge conflict in drivers/common/cnxk/roc_npc_mcam_dump.c
Auto-merging drivers/common/cnxk/roc_npc.h
Auto-merging drivers/common/cnxk/roc_mbox.h
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
hint: When you have resolved this problem, run "git am --continue".
hint: If you prefer to skip this patch, run "git am --skip" instead.
hint: To restore the original branch and stop patching, run "git am --abort".
hint: Disable this message with "git config set advice.mergeConflict false"
Patch failed at 0001 common/cnxk: support NPC flow on cn20k