Re: [PATCH v2 1/3] eventdev/eth_rx: add params set/get APIs

2023-01-28 Thread Jerin Jacob
On Wed, Jan 25, 2023 at 10:02 PM Naga Harish K, S V
 wrote:
>
>
>
> > -Original Message-
> > From: Jerin Jacob 

> > > > >
> > > > > > > +*/
> > > > > > > +   uint32_t rsvd[15];
> > > > > > > +   /**< Reserved fields for future use */
> > > > > >
> > > > > > Introduce rte_event_eth_rx_adapter_runtime_params_init() to
> > make
> > > > > > sure rsvd is zero.
> > > > > >
> > > > >
> > > > > The reserved fields are not used by the adapter or application.
> > > > > Not sure whether it is necessary to introduce a new API to clear the
> > > > > reserved fields.
> > > >
> > > > When the adapter starts using new fields (when we add new fields in
> > > > the future), an old application which is not calling
> > > > rte_event_eth_rx_adapter_runtime_params_init() may pass junk values
> > > > and the adapter implementation will then misbehave.
> > > >
> > > >
> > >
> > > Does it mean the application doesn't need to be recompiled for the new DPDK?
> >
> > Yes. No need to recompile if the ABI is not broken.
> >
> > > When some of the reserved fields are used in the future, the application
> > > also may need to be recompiled along with DPDK, right?
> > > As the application also may need to use the newly consumed reserved
> > > fields?
> >
> > The problematic case is:
> >
> > The adapter implementation of 23.07 (assuming the params change by then)
> > needs to work with an application built against 23.03.
> > rte_event_eth_rx_adapter_runtime_params_init() will solve that.
> >
>
> As rte_event_eth_rx_adapter_runtime_params_init() initializes only the
> reserved fields to zero, it may not solve the issue in this case.

rte_event_eth_rx_adapter_runtime_params_init() needs to zero all
fields, not just the reserved ones.
The application calling sequence is

struct my_config c;
rte_event_eth_rx_adapter_runtime_params_init(&c);
c.interested_field_to_be_updated = val;

Let me share an example and you can tell me where the issue is.

1) Assume the parameter structure is 64B and, in 22.03, only 8B are used.
2) rte_event_eth_rx_adapter_runtime_params_init() will clear all 64B.
3) There is an application written against 22.03 which uses only those 8B
after calling rte_event_eth_rx_adapter_runtime_params_init().
4) Assume that in 22.07 another 8B is added to the structure.
5) Now, the application from (3) needs to run on 22.07. Since the
application calls rte_event_eth_rx_adapter_runtime_params_init()
and the newly added bytes are zero, the implementation will not misbehave.
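
For illustration, a minimal sketch of how such an init helper could be
implemented (the struct layout and the max_nb_rx field below are assumptions
for this example, not the final API):

#include <stdint.h>
#include <string.h>

/* Hypothetical layout: one "used" field plus the reserved words. */
struct rte_event_eth_rx_adapter_runtime_params {
	uint32_t max_nb_rx;   /* illustrative field used by the 22.03 application */
	uint32_t rsvd[15];    /* reserved fields for future use */
};

void
rte_event_eth_rx_adapter_runtime_params_init(
		struct rte_event_eth_rx_adapter_runtime_params *params)
{
	/* Zero the whole structure, not just rsvd, so any field added in a
	 * later release starts from a defined value. */
	memset(params, 0, sizeof(*params));
}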

> The old application only tries to set/get the previously valid fields, and
> the newly used fields may still contain junk values.
> If the application wants to make use of any of the newly used params,
> application changes are required anyway.

Yes, if the application wants to make use of newly added features. No change
is needed if the new features are not required by the old application.


Re: [EXT] Re: [dpdk-dev] [PATCH v1 00/12] mldev: introduce machine learning device library

2023-01-28 Thread Jerin Jacob
On Fri, Jan 27, 2023 at 6:28 PM Thomas Monjalon  wrote:
>
> Hi,
>
> Shivah Shankar, please quote your replies
> so we can distinguish what I said from what you say.
>
> Please try to understand my questions, you tend to reply to something else.
>
>
> 27/01/2023 05:29, Jerin Jacob:
> > On Fri, Jan 27, 2023 at 8:04 AM Shivah Shankar Shankar Narayan Rao
> >  wrote:
> > > 25/01/2023 20:01, Jerin Jacob:
> > > > On Wed, Jan 25, 2023 at 7:50 PM Thomas Monjalon  
> > > > wrote:
> > > > > 14/11/2022 13:02, jer...@marvell.com:
> > > > > > > ML Model: An ML model is an algorithm trained over a dataset. A
> > > > > > > model consists of the procedure/algorithm and the data/pattern
> > > > > > > required to make predictions on live data. Once the model is created
> > > > > > > and trained outside of the DPDK scope,
> > > > > > > the model can be loaded via rte_ml_model_load() and then started
> > > > > > > using the rte_ml_model_start() API. rte_ml_model_params_update()
> > > > > > > can be used to update the model
> > > > > > > parameters such as weights and bias without unloading the model
> > > > > > > using rte_ml_model_unload().
> > > > > > The fact that the model is prepared outside means the model format
> > > > > > is free and probably different per mldev driver.
> > > > > > I think it is OK but it requires a lot of documentation effort to
> > > > > > explain how to bind the model and its parameters with the DPDK API.
> > > > > > Also we may need to pass some metadata from the model builder to the
> > > > > > inference engine in order to enable optimizations prepared in the
> > > > > > model.
> > > > > > And the other way, we may need inference capabilities in order to
> > > > > > generate an optimized model which can run in the inference engine.
> > > > >
> > > > > The base API specification is kept to an absolute minimum. Currently,
> > > > > the weight and bias parameters are updated through
> > > > > rte_ml_model_params_update(). It can be extended when there are drivers
> > > > > supporting it, or if you have any specific parameter you would like to
> > > > > add to rte_ml_model_params_update().
> > > >
> > > > This function is
> > > > int rte_ml_model_params_update(int16_t dev_id, int16_t model_id, void
> > > > *buffer);
> > > >
> > > > How are we supposed to provide separate parameters in this void* ?
> > >
> > > Just to clarify what "parameters" means:
> > > they just mean the weights and biases of the model,
> > > which are the parameters of a model.
> > > Also, the proposed APIs are for running inference
> > > on a pre-trained model.
> > > For running inference, the amount of parameter tuning
> > > needed/done is limited/none.
>
> Why is it limited?
> I think you are limiting to *your* model.

See below.


>
> > > The only parameters that may get changed are the weights and bias,
> > > which the API rte_ml_model_params_update() caters to.
>
> We cannot imagine a model with more types of parameters?
>
> > > While running inference on a model there won't be any random
> > > addition or removal of operators to/from the model, nor will there
> > > be any changes in the actual flow of the model.
> > > Since the only parameters that can be changed are the weights and biases,
> > > the above API should take care of it.
>
> No, you don't reply to my question.
> I want to be able to change a single parameter.
> I am expecting a more fine-grained API than a simple "void*".
> We could give the name of the parameter and a value, why not?

The current API model is as follows:
1) The model is developed outside DPDK and its binary file is loaded via
rte_ml_model_load().
2) The model's "read only" capabilities, like shape or quantized data, can
be read through the rte_ml_model_info_get() API. If you wish to advertise
any other capability for optimization etc., please give an inline reply
around rte_ml_io_info for the parameter and its comment.
We can review and add it.
3) Now comes the parameter, which is an "update" to the model that was
loaded earlier via rte_ml_model_load().
It is also created outside DPDK. The user has an "update" to the parameters
whenever a new round of training happens.
Currently we are treating this as a single blob, because it is model
specific and is just a contiguous stream of bytes from the model, and
thus a void* is given.

If you have a use case, or your model supports updating more parameters as
separate blobs, we should be able to update rte_ml_model_params_update() as
needed. Please suggest a new rte_ml_model_params_type enum or similar; we can
add that to rte_ml_model_params_update().
Also, if you have a concrete data type instead of void* for a given type,
please propose the structure for that as well. We should be able to update
struct rte_ml_dev_info with these capabilities to abstract the differences
between models or inference engines.
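
For illustration only, the kind of typed extension being invited could look
like the sketch below; the enum and the extended prototype are hypothetical
and not part of the current RFC (only the first prototype is quoted from it):

#include <stdint.h>
#include <stddef.h>

/* Hypothetical parameter categories for per-type updates. */
enum rte_ml_model_params_type {
	RTE_ML_MODEL_PARAMS_WEIGHTS_BIAS, /* single blob, current behaviour */
	RTE_ML_MODEL_PARAMS_WEIGHTS,      /* weights only */
	RTE_ML_MODEL_PARAMS_BIAS,         /* biases only */
};

/* Current RFC prototype, as quoted earlier in this thread. */
int rte_ml_model_params_update(int16_t dev_id, int16_t model_id, void *buffer);

/* Hypothetical typed variant allowing finer-grained updates. */
int rte_ml_model_params_update_typed(int16_t dev_id, int16_t model_id,
		enum rte_ml_model_params_type type,
		const void *buffer, size_t size);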

>
> > > > > Other metadata data like batch, shapes, formats queried using
> > > > > rte_ml_io_info().
> > > > Copying:
> > > > +/** Input and output data information structure
> > > > + *
> > > > + * Specifies the type and s

[PATCH 0/2] add new mhpsdp hw port in the flow item and Tx queue API

2023-01-28 Thread Jiawei Wang
For multiple hardware ports connected to a single DPDK port (mhpsdp),
there is currently no information to indicate which hardware port a
packet belongs to.

This patchset introduces a new hardware port item in the rte_flow API;
the port value reflects the hardware port on which a packet was received.

It also adds the tx_mhpsdp_hwport setting in the Tx queue API; the hwport
value selects the hardware port to which packets are sent.

By using the hardware port as a matching item in a flow and setting the
same port value on the Tx queue, packets can be sent from the same
hardware port on which they were received.

RFC: 
http://patches.dpdk.org/project/dpdk/cover/20221221102934.13822-1-jiaw...@nvidia.com/

Jiawei Wang (2):
  ethdev: add mhpsdp hardware port match item
  ethdev: introduce the mhpsdp hwport field in Tx queue API

 app/test-pmd/cmdline.c  | 84 +
 app/test-pmd/cmdline_flow.c | 29 +++
 devtools/libabigail.abignore|  5 ++
 doc/guides/prog_guide/rte_flow.rst  |  8 ++
 doc/guides/rel_notes/release_23_03.rst  |  5 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 18 +
 lib/ethdev/rte_ethdev.h |  8 ++
 lib/ethdev/rte_flow.c   |  1 +
 lib/ethdev/rte_flow.h   | 32 
 9 files changed, 190 insertions(+)

-- 
2.18.1



[PATCH 1/2] ethdev: add mhpsdp hardware port match item

2023-01-28 Thread Jiawei Wang
For multiple hardware ports connected to a single DPDK port (mhpsdp),
there is currently no information to indicate which hardware port a
packet belongs to.

This patch introduces a new hardware port item in the rte_flow API;
the port value reflects the hardware port on which a packet was received.

By using the hardware port as a matching item in a flow and setting the
same port value on the Tx queue, packets can be sent from the same
hardware port on which they were received.

This patch also adds the testpmd command line to match the new item:
flow create 0 ingress group 0 pattern mhpsdp_hw_port hwport is 1 /
end actions queue index 0 / end

The above command creates a flow on a single DPDK port, matches packets
from the first physical port (hwport 1 stands for the first port) and
redirects these packets into RxQ 0.
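
For reference, an equivalent call sequence at the rte_flow API level (a sketch
only; the MHPSDP_HW_PORT item type and its struct are the ones introduced by
this patch and are not in mainline DPDK):

#include <rte_flow.h>

/* Match packets received on hardware port 1 and steer them to Rx queue 0. */
static struct rte_flow *
match_hwport_to_queue(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_mhpsdp_hw_port hw_port = { .hwport = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_MHPSDP_HW_PORT, .spec = &hw_port },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}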

Signed-off-by: Jiawei Wang 
---
 app/test-pmd/cmdline_flow.c | 29 +++
 doc/guides/prog_guide/rte_flow.rst  |  8 ++
 doc/guides/rel_notes/release_23_03.rst  |  5 
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  5 
 lib/ethdev/rte_flow.c   |  1 +
 lib/ethdev/rte_flow.h   | 32 +
 6 files changed, 80 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 88108498e0..fb9c0cc622 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -465,6 +465,8 @@ enum index {
ITEM_METER,
ITEM_METER_COLOR,
ITEM_METER_COLOR_NAME,
+   ITEM_MHPSDP_HW_PORT,
+   ITEM_MHPSDP_HW_PORT_VALUE,
 
/* Validate/create actions. */
ACTIONS,
@@ -1355,6 +1357,7 @@ static const enum index next_item[] = {
ITEM_L2TPV2,
ITEM_PPP,
ITEM_METER,
+   ITEM_MHPSDP_HW_PORT,
END_SET,
ZERO,
 };
@@ -1821,6 +1824,12 @@ static const enum index item_meter[] = {
ZERO,
 };
 
+static const enum index item_mhpsdp_hw_port[] = {
+   ITEM_MHPSDP_HW_PORT_VALUE,
+   ITEM_NEXT,
+   ZERO,
+};
+
 static const enum index next_action[] = {
ACTION_END,
ACTION_VOID,
@@ -6443,6 +6452,23 @@ static const struct token token_list[] = {
ARGS_ENTRY(struct buffer, port)),
.call = parse_mp,
},
+   [ITEM_MHPSDP_HW_PORT] = {
+   .name = "mhpsdp_hw_port",
+   .help = "match on the hardware port of the"
+   " received packet.",
+   .priv = PRIV_ITEM(MHPSDP_HW_PORT,
+ sizeof(struct rte_flow_item_mhpsdp_hw_port)),
+   .next = NEXT(item_mhpsdp_hw_port),
+   .call = parse_vc,
+   },
+   [ITEM_MHPSDP_HW_PORT_VALUE] = {
+   .name = "hwport",
+   .help = "hardware port value",
+   .next = NEXT(item_mhpsdp_hw_port, NEXT_ENTRY(COMMON_UNSIGNED),
+item_param),
+   .args = ARGS(ARGS_ENTRY(struct rte_flow_item_mhpsdp_hw_port,
+   hwport)),
+   },
 };
 
 /** Remove and return last entry from argument stack. */
@@ -10981,6 +11007,9 @@ flow_item_default_mask(const struct rte_flow_item *item)
case RTE_FLOW_ITEM_TYPE_METER_COLOR:
mask = &rte_flow_item_meter_color_mask;
break;
+   case RTE_FLOW_ITEM_TYPE_MHPSDP_HW_PORT:
+   mask = &rte_flow_item_mhpsdp_hw_port_mask;
+   break;
default:
break;
}
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 3e6242803d..175ffec288 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1544,6 +1544,14 @@ Matches Color Marker set by a Meter.
 
 - ``color``: Metering color marker.
 
+Item: ``MHPSDP_HW_PORT``
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Matches on the hardware port of the received packet, i.e. the hardware port
+within the group of physical ports connected to a single DPDK port.
+
+- ``hwport``: Hardware port value.
+
 Actions
 ~~~
 
diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
index c15f6fbb9f..b5b3ab8886 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -69,6 +69,11 @@ New Features
 ``rte_event_dev_config::nb_single_link_event_port_queues`` parameter
 required for eth_rx, eth_tx, crypto and timer eventdev adapters.
 
+* **Added rte_flow support for matching MHPSDP_HW_PORT fields.**
+
+  For multiple hardware ports connected to a single DPDK port (mhpsdp),
+  added the ``mhpsdp_hw_port`` item in rte_flow to match the hardware port
+  of the packets.
 
 Removed Items
 -
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 0037506a79..7be7c55d63 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++

[PATCH 2/2] ethdev: introduce the mhpsdp hwport field in Tx queue API

2023-01-28 Thread Jiawei Wang
For multiple hardware ports connected to a single DPDK port (mhpsdp),
the previous patch introduces a new rte_flow item to match the
hardware port of received packets.

This patch adds the tx_mhpsdp_hwport setting in the Tx queue API; the hwport
value selects the hardware port to which packets are sent.
A value of 0 means no port is assigned and traffic will be routed between the
different hardware ports; if 0 is disabled, trying to match on MHPSDP_HW_PORT
with 0 will result in an error.

The new tx_mhpsdp_hwport field is added into a padding hole of the
rte_eth_txconf structure, so the size of rte_eth_txconf stays the same.
A suppression rule for the structure change is added to the ABI check file.

This patch adds the testpmd command line:
testpmd> port config (port_id) txq (queue_id) mhpsdp_hwport (value)

For example, with two hardware ports 0 and 1 connected to
a single DPDK port (port id 0), where mhpsdp_hwport 1 stands for
hardware port 0 and mhpsdp_hwport 2 stands for hardware port 1,
the commands below configure the Tx mhpsdp_hwport per Tx queue:
port config 0 txq 0 mhpsdp_hwport 1
port config 0 txq 1 mhpsdp_hwport 1
port config 0 txq 2 mhpsdp_hwport 2
port config 0 txq 3 mhpsdp_hwport 2

These commands configure TxQ 0 and TxQ 1 with mhpsdp_hwport 1; packets sent
through TxQ 0 or TxQ 1 will leave from hardware port 0, and similarly packets
sent through TxQ 2 or TxQ 3 will leave from hardware port 1.
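
For reference, the programmatic equivalent of "port config 0 txq 0
mhpsdp_hwport 1" (a sketch only; tx_mhpsdp_hwport is the rte_eth_txconf field
introduced by this patch, and the descriptor count and socket id below are
illustrative values):

#include <rte_ethdev.h>

/* Bind one Tx queue of a DPDK port to a given hardware port (0 = unassigned). */
static int
setup_txq_on_hwport(uint16_t port_id, uint16_t queue_id, uint16_t hwport)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_txconf txconf;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	txconf = dev_info.default_txconf;
	txconf.tx_mhpsdp_hwport = hwport;

	return rte_eth_tx_queue_setup(port_id, queue_id, 512, 0, &txconf);
}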

Signed-off-by: Jiawei Wang 
---
 app/test-pmd/cmdline.c  | 84 +
 devtools/libabigail.abignore|  5 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 13 
 lib/ethdev/rte_ethdev.h |  8 ++
 4 files changed, 110 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index b32dc8bfd4..db9ea8b18a 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -764,6 +764,10 @@ static void cmd_help_long_parsed(void *parsed_result,
 
"port cleanup (port_id) txq (queue_id) (free_cnt)\n"
"Cleanup txq mbufs for a specific Tx queue\n\n"
+
+   "port config (port_id) txq (queue_id) mhpsdp_hwport (value)\n"
+   "Set the hwport value in mhpsdp "
+   "on a specific Tx queue\n\n"
);
}
 
@@ -12621,6 +12625,85 @@ static cmdline_parse_inst_t cmd_show_port_flow_transfer_proxy = {
}
 };
 
+/* *** configure port txq mhpsdp_hwport value *** */
+struct cmd_config_tx_mhpsdp_hwport {
+   cmdline_fixed_string_t port;
+   cmdline_fixed_string_t config;
+   portid_t portid;
+   cmdline_fixed_string_t txq;
+   uint16_t qid;
+   cmdline_fixed_string_t mhpsdp_hwport;
+   uint16_t value;
+};
+
+static void
+cmd_config_tx_mhpsdp_hwport_parsed(void *parsed_result,
+ __rte_unused struct cmdline *cl,
+ __rte_unused void *data)
+{
+   struct cmd_config_tx_mhpsdp_hwport *res = parsed_result;
+   struct rte_port *port;
+
+   if (port_id_is_invalid(res->portid, ENABLED_WARN))
+   return;
+
+   if (res->portid == (portid_t)RTE_PORT_ALL) {
+   printf("Invalid port id\n");
+   return;
+   }
+
+   port = &ports[res->portid];
+
+   if (strcmp(res->txq, "txq")) {
+   printf("Unknown parameter\n");
+   return;
+   }
+   if (tx_queue_id_is_invalid(res->qid))
+   return;
+
+   port->txq[res->qid].conf.tx_mhpsdp_hwport = res->value;
+
+   cmd_reconfig_device_queue(res->portid, 0, 1);
+}
+
+cmdline_parse_token_string_t cmd_config_tx_mhpsdp_hwport_port =
+   TOKEN_STRING_INITIALIZER(struct cmd_config_tx_mhpsdp_hwport,
+port, "port");
+cmdline_parse_token_string_t cmd_config_tx_mhpsdp_hwport_config =
+   TOKEN_STRING_INITIALIZER(struct cmd_config_tx_mhpsdp_hwport,
+config, "config");
+cmdline_parse_token_num_t cmd_config_tx_mhpsdp_hwport_portid =
+   TOKEN_NUM_INITIALIZER(struct cmd_config_tx_mhpsdp_hwport,
+portid, RTE_UINT16);
+cmdline_parse_token_string_t cmd_config_tx_mhpsdp_hwport_txq =
+   TOKEN_STRING_INITIALIZER(struct cmd_config_tx_mhpsdp_hwport,
+txq, "txq");
+cmdline_parse_token_num_t cmd_config_tx_mhpsdp_hwport_qid =
+   TOKEN_NUM_INITIALIZER(struct cmd_config_tx_mhpsdp_hwport,
+ qid, RTE_UINT16);
+cmdline_parse_token_string_t cmd_config_tx_mhpsdp_hwport_hwport =
+   TOKEN_STRING_INITIALIZER(struct cmd_config_tx_mhpsdp_hwport,
+mhpsdp_hwport, "mhpsdp_hwport");
+cmdline_parse_token_num_t cmd_config_tx_mhpsdp_hwport_value =
+   TOKEN_NUM_INITIALIZER(struct cmd_config_tx_mhpsdp_hwport,
+ value, RTE_UINT16);
+
+static cmdline_parse_inst_t cmd_conf

RE: [PATCH v3] app/dma-perf: introduce dma-perf application

2023-01-28 Thread Jiang, Cheng1
Hi Bruce,

Sorry for the late reply. We were on the Spring Festival holiday last week.
Thanks for your comments.
Replies are inline.

Thanks,
Cheng

> -Original Message-
> From: Richardson, Bruce 
> Sent: Wednesday, January 18, 2023 12:52 AM
> To: Jiang, Cheng1 
> Cc: tho...@monjalon.net; m...@smartsharesystems.com; dev@dpdk.org; Hu,
> Jiayu ; Ding, Xuan ; Ma, WenwuX
> ; Wang, YuanX ; He,
> Xingguang 
> Subject: Re: [PATCH v3] app/dma-perf: introduce dma-perf application
> 
> On Tue, Jan 17, 2023 at 12:05:26PM +, Cheng Jiang wrote:
> > There are many high-performance DMA devices supported in DPDK now, and
> > these DMA devices can also be integrated into other modules of DPDK as
> > accelerators, such as Vhost. Before integrating DMA into applications,
> > developers need to know the performance of these DMA devices in
> > various scenarios and the performance of CPUs in the same scenario,
> > such as different buffer lengths. Only in this way can we know the
> > target performance of the application accelerated by using them. This
> > patch introduces a high-performance testing tool, which supports
> > comparing the performance of CPU and DMA in different scenarios
> > automatically with a pre-set config file. Memory Copy performance test are
> supported for now.
> >
> > Signed-off-by: Cheng Jiang 
> > Signed-off-by: Jiayu Hu 
> > Signed-off-by: Yuan Wang 
> > Acked-by: Morten Brørup 
> > ---
> 
> More input based on trying to run the application, including some thoughts
> on the testing methodology below.
> 
> 
> > +static void
> > +output_result(uint8_t scenario_id, uint32_t lcore_id, uint16_t dev_id,
> uint64_t ave_cycle,
> > +   uint32_t buf_size, uint32_t nr_buf, uint32_t memory,
> > +   float bandwidth, uint64_t ops, bool is_dma) {
> > +   if (is_dma)
> > +   printf("lcore %u, DMA %u:\n"
> > +   "average cycles: %" PRIu64 ","
> > +   " buffer size: %u, nr_buf: %u,"
> > +   " memory: %uMB, frequency: %" PRIu64 ".\n",
> > +   lcore_id,
> > +   dev_id,
> > +   ave_cycle,
> > +   buf_size,
> > +   nr_buf,
> > +   memory,
> > +   rte_get_timer_hz());
> > +   else
> > +   printf("lcore %u\n"
> > +   "average cycles: %" PRIu64 ","
> > +   " buffer size: %u, nr_buf: %u,"
> > +   " memory: %uMB, frequency: %" PRIu64 ".\n",
> > +   lcore_id,
> > +   ave_cycle,
> > +   buf_size,
> > +   nr_buf,
> > +   memory,
> > +   rte_get_timer_hz());
> > +
> 
> The term "average cycles" is unclear here - is it average cycles per test
> iteration, or average cycles per buffer copy?

The average cycles value stands for the average cycles per buffer copy; I'll
clarify it in the next version.
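
To make the unit concrete (illustrative numbers only, not measured results):
with rte_get_timer_hz() at 2.0 GHz, an average of 1000 cycles per copy of a
1024 B buffer is 500 ns per buffer, i.e. roughly 1024 * 8 bits / 500 ns, or
about 16.4 Gbps per lcore.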

> 
> 
> > +   printf("Average bandwidth: %.3lfGbps, OPS: %" PRIu64 "\n",
> > +bandwidth, ops);
> > +
> 
> 
> 
> > +
> > +static inline void
> > +do_dma_mem_copy(uint16_t dev_id, uint32_t nr_buf, uint16_t kick_batch,
> uint32_t buf_size,
> > +   uint16_t mpool_iter_step, struct rte_mbuf **srcs,
> struct rte_mbuf
> > +**dsts) {
> > +   int64_t async_cnt = 0;
> > +   int nr_cpl = 0;
> > +   uint32_t index;
> > +   uint16_t offset;
> > +   uint32_t i;
> > +
> > +   for (offset = 0; offset < mpool_iter_step; offset++) {
> > +   for (i = 0; index = i * mpool_iter_step + offset, index < 
> > nr_buf;
> i++) {
> > +   if (unlikely(rte_dma_copy(dev_id,
> > +   0,
> > +   srcs[index]->buf_iova +
> srcs[index]->data_off,
> > +   dsts[index]->buf_iova +
> dsts[index]->data_off,
> > +   buf_size,
> > +   0) < 0)) {
> > +   rte_dma_submit(dev_id, 0);
> > +   while (rte_dma_burst_capacity(dev_id, 0) == 0)
> {
> > +   nr_cpl = rte_dma_completed(dev_id, 0,
> MAX_DMA_CPL_NB,
> > +   NULL, NULL);
> > +   async_cnt -= nr_cpl;
> > +   }
> > +   if (rte_dma_copy(dev_id,
> > +   0,
> > +   srcs[index]->buf_iova +
> srcs[index]->data_off,
> > +   dsts[index]->buf_iova +
> dsts[index]->data_off,
> > +   buf_size,
> > +   0) < 0) {
> > +   printf("enqueue

Re: [PATCH v8] doc: add PMD known issue

2023-01-28 Thread Stephen Hemminger
On Sat, 28 Jan 2023 06:01:38 +
Mingjin Ye  wrote:

> Add a known issue: Rx path dynamic change is not supported for PMD.
> 
> Fixes: de853a3bb151 ("net/ice: disable DDP package on Windows")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Mingjin Ye 
> ---
>  doc/guides/nics/ice.rst | 12 
>  1 file changed, 12 insertions(+)
> 
> diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
> index ce075e067c..115625523e 100644
> --- a/doc/guides/nics/ice.rst
> +++ b/doc/guides/nics/ice.rst
> @@ -395,3 +395,15 @@ file is used by both the kernel driver and the DPDK PMD.
>  
>Windows support: The DDP package is not supported on Windows so,
>loading of the package is disabled on Windows.
> +
> +ice: Rx path does not support dynamic change
> +
> +
> +The ice driver supports fast and offload Rx paths. When the PMD is initialized,
> +the fast Rx path is selected by default. Even if offload is subsequently
> +enabled through the API, it will not take effect because the fast Rx path is
> +still used.
> +
> +The ice driver does not support changing the Rx path after the application
> +is initialized. If HW offload is required, the ``--rx-offloads`` parameter
> +should be used to choose the offload Rx path by default.

Is this when the device is stopped, or running?
Dynamic configuration of offload parameters is not safe on many devices.
Usually the device driver requires that the device not be started in order to
change offloads.

The driver should reject, in the API, things it does not support.


Re: [PATCH v2 1/3] eventdev/eth_rx: add params set/get APIs

2023-01-28 Thread Stephen Hemminger
On Sat, 28 Jan 2023 16:23:45 +0530
Jerin Jacob  wrote:

> > >
> > > Yes. No need to recompile if the ABI is not broken.
> > >
> > > > When some of the reserved fields are used in the future, the application
> > > > also may need to be recompiled along with DPDK, right?
> > > > As the application also may need to use the newly consumed reserved
> > > > fields?
> > >
> > > The problematic case is:
> > >
> > > The adapter implementation of 23.07 (assuming the params change by then)
> > > needs to work with an application built against 23.03.
> > > rte_event_eth_rx_adapter_runtime_params_init() will solve that.
> > >  
> >

First off, reserved fields are a problematic design choice IMHO (see YAGNI).

Second, any reserved fields cannot be used in the future unless the
original code enforced that all reserved fields are zero.
The same is true for holes in structs, which sometimes get reused.

You can't use a reserved field without breaking the ABI unless the previous
code enforced that the field must be zero.
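
For illustration, the kind of enforcement this implies is a check such as the
sketch below, called wherever the structure is accepted from the application
(the helper name is hypothetical; the rsvd array is the one in the patch under
discussion):

#include <stdint.h>
#include <errno.h>

/* Return an error if any reserved word is non-zero, so those words can be
 * given meaning in a later release without breaking old binaries. */
static int
check_reserved_zero(const uint32_t *rsvd, unsigned int n)
{
	unsigned int i;

	for (i = 0; i < n; i++)
		if (rsvd[i] != 0)
			return -EINVAL;
	return 0;
}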


RE: lib/vhost/virtio_net: possible stack overflow in virtio_dev_tx_async_packed()

2023-01-28 Thread Jiang, Cheng1
Hi Mike,

Thanks for your report.
I agree with you; maybe you can submit a patch to fix it. (By the way, the
sync path has the same issue.)

Thanks a lot.
Cheng

From: Mike Cui 
Sent: Thursday, December 29, 2022 4:38 AM
To: dev@dpdk.org; Jiang, Cheng1 
Subject: lib/vhost/virtio_net: possible stack overflow in 
virtio_dev_tx_async_packed()

Hi,

I believe there is a possible stack overflow in this code:
https://github.com/DPDK/dpdk/blob/main/lib/vhost/virtio_net.c#L3631

Here, pkts_prealloc is declared on the stack with size MAX_PKT_BURST, then
filled in by rte_pktmbuf_alloc_bulk() with up to 'count' elements, but 'count'
is not capped at MAX_PKT_BURST like in many other code paths.

Suggested patch:


diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 9abf752f30..21f00317c7 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -3634,6 +3634,7 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 
 	async_iter_reset(async);
 
+	count = RTE_MIN(count, MAX_PKT_BURST);
 	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count))
 		goto out;