RE: RE: RE: [PATCH v2 1/3] ethdev: add API for direct rearm mode

2022-10-26 Thread Feifei Wang


> -----Original Message-----
> From: Konstantin Ananyev 
> Sent: Thursday, October 13, 2022 5:49 PM
> To: Feifei Wang ; tho...@monjalon.net;
> Ferruh Yigit ; Andrew Rybchenko
> ; Ray Kinsella 
> Cc: dev@dpdk.org; nd ; Honnappa Nagarahalli
> ; Ruifeng Wang
> 
> Subject: Re: RE: RE: [PATCH v2 1/3] ethdev: add API for direct rearm mode
> 
> 12/10/2022 14:38, Feifei Wang wrote:
> >
> >
> >> -----Original Message-----
> >> From: Konstantin Ananyev 
> >> Sent: Wednesday, October 12, 2022 6:21 AM
> >> To: Feifei Wang ; tho...@monjalon.net;
> Ferruh
> >> Yigit ; Andrew Rybchenko
> >> ; Ray Kinsella 
> >> Cc: dev@dpdk.org; nd ; Honnappa Nagarahalli
> >> ; Ruifeng Wang
> 
> >> Subject: Re: RE: [PATCH v2 1/3] ethdev: add API for direct rearm mode
> >>
> >>
> >>
> > Add API for enabling direct rearm mode and for mapping RX and TX
> > queues. Currently, the API supports 1:1 (txq : rxq) mapping.
> >
> > Furthermore, to avoid the Rx path loading Tx data directly, add an API
> > called 'rte_eth_txq_data_get' to get the Tx sw_ring and its information.
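A hedged usage sketch (illustrative only; it assumes this patch series is applied, and the contents of struct rte_eth_txq_data are not shown in the quoted hunk, so the snippet only passes the structure through):

#include <stdint.h>
#include <rte_ethdev.h>

/* Fetch the Tx sw_ring information exposed by the new API for Tx queue 0,
 * before mapping that Tx queue 1:1 to an Rx queue for direct rearm.
 */
static int
query_txq_for_rearm(uint16_t port_id, struct rte_eth_txq_data *txq_data)
{
	return rte_eth_tx_queue_data_get(port_id, 0, txq_data);
}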
> >
> > Suggested-by: Honnappa Nagarahalli
> 
> > Suggested-by: Ruifeng Wang 
> > Signed-off-by: Feifei Wang 
> > Reviewed-by: Ruifeng Wang 
> > Reviewed-by: Honnappa Nagarahalli
> 
> > ---
> > lib/ethdev/ethdev_driver.h   |  9 
> > lib/ethdev/ethdev_private.c  |  1 +
> > lib/ethdev/rte_ethdev.c  | 37 ++
> > lib/ethdev/rte_ethdev.h  | 95
>  
> > lib/ethdev/rte_ethdev_core.h |  5 ++
> > lib/ethdev/version.map   |  4 ++
> > 6 files changed, 151 insertions(+)
> >
> > diff --git a/lib/ethdev/ethdev_driver.h
> > b/lib/ethdev/ethdev_driver.h index 47a55a419e..14f52907c1 100644
> > --- a/lib/ethdev/ethdev_driver.h
> > +++ b/lib/ethdev/ethdev_driver.h
> > @@ -58,6 +58,8 @@ struct rte_eth_dev {
> > eth_rx_descriptor_status_t rx_descriptor_status;
> > /** Check the status of a Tx descriptor */
> > eth_tx_descriptor_status_t tx_descriptor_status;
> > +   /**  Use Tx mbufs for Rx to rearm */
> > +   eth_rx_direct_rearm_t rx_direct_rearm;
> >
> > /**
> >  * Device data that is shared between primary and secondary
> > processes @@ -486,6 +488,11 @@ typedef int
>  (*eth_rx_enable_intr_t)(struct rte_eth_dev *dev,
> > typedef int (*eth_rx_disable_intr_t)(struct rte_eth_dev *dev,
> > uint16_t rx_queue_id);
> >
> > +/**< @internal Get Tx information of a transmit queue of an
> > +Ethernet device. */ typedef void (*eth_txq_data_get_t)(struct
> >> rte_eth_dev *dev,
> > + uint16_t tx_queue_id,
> > + struct rte_eth_txq_data
> *txq_data);
> > +
> > /** @internal Release memory resources allocated by given
> > Rx/Tx
> >> queue.
>  */
> > typedef void (*eth_queue_release_t)(struct rte_eth_dev *dev,
> > uint16_t queue_id);
> > @@ -1138,6 +1145,8 @@ struct eth_dev_ops {
> > eth_rxq_info_get_t rxq_info_get;
> > /** Retrieve Tx queue information */
> > eth_txq_info_get_t txq_info_get;
> > +   /** Get the address where Tx data is stored */
> > +   eth_txq_data_get_t txq_data_get;
> > eth_burst_mode_get_t   rx_burst_mode_get; /**< Get Rx
> burst
>  mode */
> > eth_burst_mode_get_t   tx_burst_mode_get; /**< Get Tx
> burst
>  mode */
> > eth_fw_version_get_t   fw_version_get; /**< Get
> firmware
>  version */
> > diff --git a/lib/ethdev/ethdev_private.c
> > b/lib/ethdev/ethdev_private.c index 48090c879a..bfe16c7d77 100644
> > --- a/lib/ethdev/ethdev_private.c
> > +++ b/lib/ethdev/ethdev_private.c
> > @@ -276,6 +276,7 @@ eth_dev_fp_ops_setup(struct
> rte_eth_fp_ops
>  *fpo,
> > fpo->rx_queue_count = dev->rx_queue_count;
> > fpo->rx_descriptor_status = dev->rx_descriptor_status;
> > fpo->tx_descriptor_status = dev->tx_descriptor_status;
> > +   fpo->rx_direct_rearm = dev->rx_direct_rearm;
> >
> > fpo->rxq.data = dev->data->rx_queues;
> > fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > index 0c2c1088c0..0dccec2e4b 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -1648,6 +1648,43 @@ rte_eth_dev_is_removed(uint16_t port_id)
> > return ret;
> > }
> >
> > +int
> > +rte_eth_tx_queue_data_get(uint16_t port_id, uint16_t queue_id,
> > +   struct rte_eth_txq_data *txq_data) {
> > +   struct rte_eth_dev *dev;
> > +
> > +   RTE_ETH_VALID_PORTID_OR_ERR_

Re: [PATCH v2] devtools: check for supported git version

2022-10-26 Thread Ferruh Yigit

On 10/26/2022 7:24 AM, David Marchand wrote:

On Tue, Oct 25, 2022 at 12:15 PM Ali Alnubani  wrote:


The script devtools/parse-flow-support.sh uses the git-grep
option (-o, --only-matching), which is only supported from
git version 2.19 and onwards.[1]

The script now exits early providing a clear message to the user
about the required git version instead of showing the following
error messages multiple times:
   error: unknown switch `o'
   usage: git grep [] [-e]  [...] [[--] ...]
   [..]

[1] https://github.com/git/git/blob/v2.19.0/Documentation/RelNotes/2.19.0.txt

Signed-off-by: Ali Alnubani 
Signed-off-by: Thomas Monjalon 


I don't have a "non working" git, but the patch lgtm.

Acked-by: David Marchand 



+Chaoyong,

He had observed the problem, perhaps he can help to test.


RE: [dpdk-dev v3] crypto/ipsec_mb: multi-process IPC request handler

2022-10-26 Thread De Lara Guarch, Pablo
Hi Kai,

A few comments below

Thanks,
Pablo

> -Original Message-
> From: Ji, Kai 
> Sent: Tuesday, October 25, 2022 10:48 PM
> To: dev@dpdk.org
> Cc: gak...@marvell.com; Ji, Kai ; De Lara Guarch, Pablo
> ; Burakov, Anatoly
> 
> Subject: [dpdk-dev v3] crypto/ipsec_mb: multi-process IPC request handler
> 
> As the secondary process needs queue-pair to be configured by the primary
> process before use. 

This sentence looks incomplete: as the secondary process needs, then what?

> This patch adds an IPC register function to allow
> the secondary process to set up a cryptodev queue-pair via IPC messages at
> runtime. A new "qp_in_used_pid" param stores the PID to provide the
> ownership of the queue-pair so that only the queue-pair matching that PID
> can be freed in the request.
> 
> Signed-off-by: Kai Ji 
> ---
> 
> v3:
> - remove shared memzone as qp_conf params can be passed directly from
> ipc message.
> 
> v2:
> - add in shared memzone for data exchange between multi-process
> ---
>  drivers/crypto/ipsec_mb/ipsec_mb_ops.c | 130
> -
>  drivers/crypto/ipsec_mb/ipsec_mb_private.c |  24 +++-
> drivers/crypto/ipsec_mb/ipsec_mb_private.h |  46 
>  3 files changed, 196 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
> b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
> index cedcaa2742..c29656b746 100644
> --- a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
> +++ b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
> @@ -3,6 +3,7 @@
>   */
> 
>  #include 
> +#include 
> 
>  #include 
>  #include 
> @@ -93,6 +94,46 @@ ipsec_mb_info_get(struct rte_cryptodev *dev,
>   }
>  }
> 
> +static int
> +ipsec_mb_secondary_qp_op(int dev_id, int qp_id,
> + const struct rte_cryptodev_qp_conf *qp_conf,
> + int socket_id, uint16_t pid, enum ipsec_mb_mp_req_type
> op_type) {
> + int ret;
> + struct rte_mp_msg qp_req_msg;
> + struct rte_mp_msg *qp_resp_msg;
> + struct rte_mp_reply qp_resp;
> + struct ipsec_mb_mp_param *req_param;
> + struct ipsec_mb_mp_param *resp_param;
> + struct timespec ts = {.tv_sec = 1, .tv_nsec = 0};
> +
> + memset(&qp_req_msg, 0, sizeof(IPSEC_MB_MP_MSG));
> + memcpy(qp_req_msg.name, IPSEC_MB_MP_MSG,
> sizeof(IPSEC_MB_MP_MSG));
> + req_param = (struct ipsec_mb_mp_param *)&qp_req_msg.param;
> +
> + qp_req_msg.len_param = sizeof(struct ipsec_mb_mp_param);
> + req_param->type = op_type;
> + req_param->dev_id = dev_id;
> + req_param->qp_id = qp_id;
> + req_param->socket_id = socket_id;
> + req_param->process_id = pid;

I think you can use "get_pid()" here and not have "pid" argument in the 
function, as I believe it is always called within the same process.
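A minimal sketch of this suggestion (illustrative only; the helper name below is hypothetical, and the POSIX call is getpid() from <unistd.h>): derive the PID inside the helper instead of passing it in, since the request is always built in the calling process.

#include <stdint.h>
#include <unistd.h>

/* Hypothetical helper: the owning PID of the request, truncated to the
 * uint16_t width used by the "pid" parameter in the patch above.
 */
static uint16_t
request_owner_pid(void)
{
	return (uint16_t)getpid();
}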

> + if (qp_conf) {
> + req_param->nb_descriptors = qp_conf->nb_descriptors;
> + req_param->mp_session = (void *)qp_conf->mp_session;
> + }
> +
> + qp_req_msg.num_fds = 0;
> + ret = rte_mp_request_sync(&qp_req_msg, &qp_resp, &ts);
> + if (ret) {
> + RTE_LOG(ERR, USER1, "Create MR request to primary
> process failed.");
> + return -1;
> + }
> + qp_resp_msg = &qp_resp.msgs[0];
> + resp_param = (struct ipsec_mb_mp_param *)qp_resp_msg->param;
> +
> + return resp_param->result;
> +}
> +

..

> diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_private.h
> b/drivers/crypto/ipsec_mb/ipsec_mb_private.h
> index b56eaf061e..a2b52f808e 100644
> --- a/drivers/crypto/ipsec_mb/ipsec_mb_private.h
> +++ b/drivers/crypto/ipsec_mb/ipsec_mb_private.h

...

> +
>  /** All supported device types */
>  enum ipsec_mb_pmd_types {
>   IPSEC_MB_PMD_TYPE_AESNI_MB = 0,
> @@ -149,11 +155,51 @@ struct ipsec_mb_qp {
>   IMB_MGR *mb_mgr;
>   /* Multi buffer manager */
>   const struct rte_memzone *mb_mgr_mz;
> + /** The process id used for queue pairs **/
> + uint16_t qp_in_used_by_pid;

I would rename this to "qp_in_use_by_pid" or "qp_used_by_pid".

>   /* Shared memzone for storing mb_mgr */

Check these comments. This last one "Shared memzone..." applies to the field 
"mb_mgr_mz".
Also, they should use "/**< ".

>   __extension__ uint8_t additional_data[];
>   /**< Storing PMD specific additional data */  };
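A minimal sketch of the comment style being requested here (illustrative only, with simplified types; the field rename follows the "qp_in_use_by_pid" suggestion above):

#include <stdint.h>

struct example_qp {
	/* ... other members elided ... */
	uint16_t qp_in_use_by_pid;  /**< PID of the process using this queue pair */
	const void *mb_mgr_mz;      /**< Shared memzone for storing mb_mgr */
};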
> 



RE: [PATCH v11 02/18] net/idpf: add support for device initialization

2022-10-26 Thread Xing, Beilei


> -Original Message-
> From: Andrew Rybchenko 
> Sent: Tuesday, October 25, 2022 4:57 PM
> To: Guo, Junfeng ; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Li, Xiaoyun ; Wang, Xiao W
> 
> Subject: Re: [PATCH v11 02/18] net/idpf: add support for device initialization
> 
> On 10/24/22 16:12, Junfeng Guo wrote:
> > Support device init and add the following dev ops:
> >   - dev_configure
> >   - dev_close
> >   - dev_infos_get
> >
> > Signed-off-by: Beilei Xing 
> > Signed-off-by: Xiaoyun Li 
> > Signed-off-by: Xiao Wang 
> > Signed-off-by: Junfeng Guo 
> 
> [snip]
> 
> > diff --git a/doc/guides/nics/features/idpf.ini
> > b/doc/guides/nics/features/idpf.ini
> > new file mode 100644
> > index 00..7a44b8b5e4
> > --- /dev/null
> > +++ b/doc/guides/nics/features/idpf.ini
> > @@ -0,0 +1,10 @@
> > +;
> > +; Supported features of the 'idpf' network poll mode driver.
> > +;
> > +; Refer to default.ini for the full list of available PMD features.
> > +;
> > +; A feature with "P" indicates only be supported when non-vector path
> > +; is selected.
> 
> The statement should be added when the first P appears.

Thanks for the comments, all of them are addressed in the next version except 
the one below.

[snip] 

> > +};
> > +
> > +struct idpf_vport {
> > +   struct idpf_adapter *adapter; /* Backreference to associated adapter
> */
> > +   uint16_t vport_id;
> > +   uint32_t txq_model;
> > +   uint32_t rxq_model;
> > +   uint16_t num_tx_q;
> 
> Shouldn't it be set as the result of configure?
> data->nb_tx_queues which is the result of the configure is not
> used in the patch?

The num_tx_q here is different from data->nb_tx_queues.
The idpf PMD requires fixed numbers of Tx and Rx queues when creating a vport.
The num_tx_q and num_rx_q are the values returned by the backend.
But only data->nb_tx_queues Tx queues will be configured during dev_start.




RE: [PATCH v2] devtools: check for supported git version

2022-10-26 Thread Chaoyong He
> On 10/26/2022 7:24 AM, David Marchand wrote:
> > On Tue, Oct 25, 2022 at 12:15 PM Ali Alnubani  wrote:
> >>
> >> The script devtools/parse-flow-support.sh uses the git-grep option
> >> (-o, --only-matching), which is only supported from git version 2.19
> >> and onwards.[1]
> >>
> >> The script now exits early providing a clear message to the user
> >> about the required git version instead of showing the following error
> >> messages multiple times:
> >>error: unknown switch `o'
> >>usage: git grep [] [-e]  [...] [[--] ...]
> >>[..]
> >>
> >> [1]
> >> https://github.com/git/git/blob/v2.19.0/Documentation/RelNotes/2.19.0
> >> .txt
> >>
> >> Signed-off-by: Ali Alnubani 
> >> Signed-off-by: Thomas Monjalon 
> >
> > I don't have a "non working" git, but the patch lgtm.
> >
> > Acked-by: David Marchand 
> >
> 
> +Chaoyong,
> 
> He had observed the problem, perhaps he can help to test.

I tested on my host, it does work.

$ git --version
git version 2.18.5

Before this patch:
$ ./devtools/check-doc-vs-code.sh
error: unknown switch `o'
usage: git grep [] [-e]  [...] [[--] ...]
--cached  search in index instead of in the work tree
...
(repeated many times)

After this patch:
$ ./devtools/check-doc-vs-code.sh
git version >= 2.19 is required



RE: [PATCH v11 03/18] net/idpf: add Tx queue setup

2022-10-26 Thread Xing, Beilei


> -Original Message-
> From: Andrew Rybchenko 
> Sent: Tuesday, October 25, 2022 5:40 PM
> To: Guo, Junfeng ; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Li, Xiaoyun 
> Subject: Re: [PATCH v11 03/18] net/idpf: add Tx queue setup
> 
> On 10/24/22 16:12, Junfeng Guo wrote:
> > Add support for tx_queue_setup ops.
> >
> > In the single queue model, the same descriptor queue is used by SW to
> > post buffer descriptors to HW and by HW to post completed descriptors
> > to SW.
> >
> > In the split queue model, "RX buffer queues" are used to pass
> > descriptor buffers from SW to HW while Rx queues are used only to pass
> > the descriptor completions, that is, descriptors that point to
> > completed buffers, from HW to SW. This is contrary to the single queue
> > model in which Rx queues are used for both purposes.
> >
> > Signed-off-by: Beilei Xing 
> > Signed-off-by: Xiaoyun Li 
> > Signed-off-by: Junfeng Guo 
> 
> > +
> > +   size = sizeof(struct idpf_flex_tx_sched_desc) * txq->nb_tx_desc;
> > +   for (i = 0; i < size; i++)
> > +   ((volatile char *)txq->desc_ring)[i] = 0;
> 
> Please, add a comment which explains why volatile is required here.

Volatile is used here to make sure the memory writes go through when touching the
descriptors. It follows the Intel PMDs' coding style.
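A hedged sketch of how the requested comment might read (illustrative only; the function name and simplified types below are stand-ins, not the real idpf definitions):

#include <stddef.h>

static void
clear_desc_ring(void *desc_ring, size_t size)
{
	size_t i;

	/* Cast through a volatile pointer so every byte store really reaches
	 * the descriptor memory and is not optimized away or coalesced by the
	 * compiler before the hardware looks at the cleared descriptors.
	 */
	for (i = 0; i < size; i++)
		((volatile char *)desc_ring)[i] = 0;
}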

> 
> > +
> > +   txq->tx_ring_phys_addr = mz->iova;
> > +   txq->tx_ring = (struct idpf_flex_tx_desc *)mz->addr;
> > +
> > +   txq->mz = mz;
> > +   reset_single_tx_queue(txq);
> > +   txq->q_set = true;
> > +   dev->data->tx_queues[queue_idx] = txq;
> > +   txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
> > +   queue_idx * vport->chunks_info.tx_qtail_spacing);
> 
> I'm sorry, but it looks like too much code is duplicated.
> I guess it could be a shared function to avoid it.

It makes sense to optimize the queue setup, but can we improve it in the next 
release, given the RC2 deadline?
This part should become a common module shared by the idpf PMD and a new PMD,
and it should live in the drivers/common/idpf folder in DPDK 23.03.

> 
> > +
> > +   return 0;
> > +}
> > +
> > +int
> > +idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> > +   uint16_t nb_desc, unsigned int socket_id,
> > +   const struct rte_eth_txconf *tx_conf) {
> > +   struct idpf_vport *vport = dev->data->dev_private;
> > +
> > +   if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
> > +   return idpf_tx_single_queue_setup(dev, queue_idx, nb_desc,
> > + socket_id, tx_conf);
> > +   else
> > +   return idpf_tx_split_queue_setup(dev, queue_idx, nb_desc,
> > +socket_id, tx_conf);
> > +}
> 
> [snip]



RE: [PATCH v11 05/18] net/idpf: add support for device start and stop

2022-10-26 Thread Xing, Beilei


> -Original Message-
> From: Andrew Rybchenko 
> Sent: Tuesday, October 25, 2022 5:50 PM
> To: Guo, Junfeng ; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Li, Xiaoyun 
> Subject: Re: [PATCH v11 05/18] net/idpf: add support for device start and
> stop
> 
> On 10/24/22 16:12, Junfeng Guo wrote:
> > Add dev ops dev_start, dev_stop and link_update.
> >
> > Signed-off-by: Beilei Xing 
> > Signed-off-by: Xiaoyun Li 
> > Signed-off-by: Junfeng Guo 
> > ---
> >   drivers/net/idpf/idpf_ethdev.c | 89
> ++
> >   drivers/net/idpf/idpf_ethdev.h |  5 ++
> >   2 files changed, 94 insertions(+)
> >
> > diff --git a/drivers/net/idpf/idpf_ethdev.c
> > b/drivers/net/idpf/idpf_ethdev.c index 1d2075f466..4c7a2d0748 100644
> > --- a/drivers/net/idpf/idpf_ethdev.c
> > +++ b/drivers/net/idpf/idpf_ethdev.c
> > @@ -29,17 +29,42 @@ static const char * const idpf_valid_args[] = {
> >   };
> > +static int
> > +idpf_start_queues(struct rte_eth_dev *dev) {
> > +   struct idpf_rx_queue *rxq;
> > +   struct idpf_tx_queue *txq;
> > +   int err = 0;
> > +   int i;
> > +
> > +   for (i = 0; i < dev->data->nb_tx_queues; i++) {
> > +   txq = dev->data->tx_queues[i];
> > +   if (txq == NULL || txq->tx_deferred_start)
> > +   continue;
> > +
> > +   PMD_DRV_LOG(ERR, "Start Tx queues not supported yet");
> > +   return -ENOTSUP;
> > +   }
> > +
> > +   for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > +   rxq = dev->data->rx_queues[i];
> > +   if (rxq == NULL || rxq->rx_deferred_start)
> > +   continue;
> > +
> > +   PMD_DRV_LOG(ERR, "Start Rx queues not supported yet");
> > +   return -ENOTSUP;
> > +   }
> > +
> > +   return err;
> > +}
> > +
> > +static int
> > +idpf_dev_start(struct rte_eth_dev *dev) {
> > +   struct idpf_vport *vport = dev->data->dev_private;
> > +
> > +   if (dev->data->mtu > vport->max_mtu) {
> > +   PMD_DRV_LOG(ERR, "MTU should be less than %d", vport-
> >max_mtu);
> > +   return -1;
> > +   }
> > +
> > +   vport->max_pkt_len = dev->data->mtu + IDPF_ETH_OVERHEAD;
> > +
> > +   if (idpf_start_queues(dev) != 0) {
> > +   PMD_DRV_LOG(ERR, "Failed to start queues");
> > +   return -1;
> > +   }
> > +
> > +   if (idpf_vc_ena_dis_vport(vport, true) != 0) {
> > +   PMD_DRV_LOG(ERR, "Failed to enable vport");
> 
> Don't you need to stop queues here?

In this patch, we didn't implement starting the HW queues, so I will remove the 
queue start here and add start_queues/stop_queues once the APIs are finished.

> 
> > +   return -1;
> > +   }
> > +
> > +   return 0;
> > +}
> > +
> > +static int
> > +idpf_dev_stop(struct rte_eth_dev *dev) {
> > +   struct idpf_vport *vport = dev->data->dev_private;
> 
> Stop queues?

Same.
> 
> > +
> > +   idpf_vc_ena_dis_vport(vport, false);
> > +
> > +   return 0;
> > +}
> > +
> >   static int
> >   idpf_dev_close(struct rte_eth_dev *dev)
> >   {


Re: [PATCH 8/8] net: mark all big endian types

2022-10-26 Thread David Marchand
On Tue, Oct 25, 2022 at 11:46 PM Thomas Monjalon  wrote:
> diff --git a/lib/net/rte_higig.h b/lib/net/rte_higig.h
> index b55fb1a7db..bba3898a88 100644
> --- a/lib/net/rte_higig.h
> +++ b/lib/net/rte_higig.h
> @@ -112,9 +112,9 @@ struct rte_higig2_ppt_type0 {
>   */
>  __extension__
>  struct rte_higig2_ppt_type1 {
> -   uint16_t classification;
> -   uint16_t resv;
> -   uint16_t vid;
> +   rte_be16_t classification;
> +   rte_be16_t resv;
> +   rte_be16_t vid;

Compiling with sparse (from OVS dpdk-latest), there are, at least,
some annotations missing in the public headers for higig2.
lib/ethdev/rte_flow.h:644:  .classification = 0x,
lib/ethdev/rte_flow.h:645:  .vid = 0xfff,

And the 0xfff mask for a 16-bit field (vid) is suspicious, isn't it?


>  #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> uint16_t opcode:3;
> uint16_t resv1:2;


-- 
David Marchand



[PATCH] net/gve: fix meson build failure on non-Linux platforms

2022-10-26 Thread Junfeng Guo
Meson build may fail on FreeBSD with gcc and clang, due to the missing
header file linux/pci_regs.h on non-Linux platforms. Thus, in
this patch, we removed the file include and added the macros used,
derived from linux/pci_regs.h.

Fixes: 3047a5ac8e66 ("net/gve: add support for device initialization")
Cc: sta...@dpdk.org

Signed-off-by: Junfeng Guo 
---
 drivers/net/gve/gve_ethdev.c | 14 +-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index b0f7b98daa..e968317737 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -1,12 +1,24 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(C) 2022 Intel Corporation
  */
-#include 
 
 #include "gve_ethdev.h"
 #include "base/gve_adminq.h"
 #include "base/gve_register.h"
 
+/*
+ * Following macros are derived from linux/pci_regs.h, however,
+ * we can't simply include that header here, as there is no such
+ * file for non-Linux platform.
+ */
+#define PCI_CFG_SPACE_SIZE 256
+#define PCI_CAPABILITY_LIST    0x34    /* Offset of first capability list entry */
+#define PCI_STD_HEADER_SIZEOF  64
+#define PCI_CAP_SIZEOF 4
+#define PCI_CAP_ID_MSIX        0x11    /* MSI-X */
+#define PCI_MSIX_FLAGS 2   /* Message Control */
+#define PCI_MSIX_FLAGS_QSIZE   0x07FF  /* Table size */
+
 const char gve_version_str[] = GVE_VERSION;
 static const char gve_version_prefix[] = GVE_VERSION_PREFIX;
 
-- 
2.34.1



Re: [PATCH 4/8] ethdev: use GRE protocol struct for flow matching

2022-10-26 Thread David Marchand
On Tue, Oct 25, 2022 at 11:45 PM Thomas Monjalon  wrote:
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index 6045a352ae..fd9be56e31 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -1069,19 +1069,29 @@ static const struct rte_flow_item_mpls 
> rte_flow_item_mpls_mask = {
>   *
>   * Matches a GRE header.
>   */
> +RTE_STD_C11
>  struct rte_flow_item_gre {
> -   /**
> -* Checksum (1b), reserved 0 (12b), version (3b).
> -* Refer to RFC 2784.
> -*/
> -   rte_be16_t c_rsvd0_ver;
> -   rte_be16_t protocol; /**< Protocol type. */
> +   union {
> +   struct {
> +   /*
> +* These are old fields kept for compatibility.
> +* Please prefer hdr field below.
> +*/
> +   /**
> +* Checksum (1b), reserved 0 (12b), version (3b).
> +* Refer to RFC 2784.
> +*/
> +   rte_be16_t c_rsvd0_ver;
> +   rte_be16_t protocol; /**< Protocol type. */
> +   };
> +   struct rte_gre_hdr hdr; /**< GRE header definition. */
> +   };
>  };
>
>  /** Default mask for RTE_FLOW_ITEM_TYPE_GRE. */
>  #ifndef __cplusplus
>  static const struct rte_flow_item_gre rte_flow_item_gre_mask = {
> -   .protocol = RTE_BE16(0x),
> +   .hdr.proto = RTE_BE16(UINT16_MAX),


The proto field in struct rte_gre_hdr from lib/net lacks endianness annotation.
This triggers a sparse warning (from OVS dpdk-latest build):

/home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:1095:22:
error: incorrect type in initializer (different base types)
/home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:1095:22:
expected unsigned short [usertype] proto
/home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:1095:22:
got restricted ovs_be16 [usertype]


-- 
David Marchand



Re: [PATCH] net/gve: fix meson build failure on non-Linux platforms

2022-10-26 Thread David Marchand
On Wed, Oct 26, 2022 at 10:44 AM Junfeng Guo  wrote:
>
> Meson build may fail on FreeBSD with gcc and clang, due to missing
> the header file linux/pci_regs.h on non-Linux platform. Thus, in
> this patch, we removed the file include and added the used Macros
> derived from linux/pci_regs.h.
>
> Fixes: 3047a5ac8e66 ("net/gve: add support for device initialization")
> Cc: sta...@dpdk.org

...
No need for Cc: stable.


>
> Signed-off-by: Junfeng Guo 
> ---
>  drivers/net/gve/gve_ethdev.c | 14 +-
>  1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
> index b0f7b98daa..e968317737 100644
> --- a/drivers/net/gve/gve_ethdev.c
> +++ b/drivers/net/gve/gve_ethdev.c
> @@ -1,12 +1,24 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
>   * Copyright(C) 2022 Intel Corporation
>   */
> -#include 
>
>  #include "gve_ethdev.h"
>  #include "base/gve_adminq.h"
>  #include "base/gve_register.h"
>
> +/*
> + * Following macros are derived from linux/pci_regs.h, however,
> + * we can't simply include that header here, as there is no such
> + * file for non-Linux platform.
> + */
> +#define PCI_CFG_SPACE_SIZE 256
> +#define PCI_CAPABILITY_LIST    0x34    /* Offset of first capability list entry */
> +#define PCI_STD_HEADER_SIZEOF  64
> +#define PCI_CAP_SIZEOF 4
> +#define PCI_CAP_ID_MSIX        0x11    /* MSI-X */
> +#define PCI_MSIX_FLAGS 2   /* Message Control */
> +#define PCI_MSIX_FLAGS_QSIZE   0x07FF  /* Table size */

No, don't introduce such defines in a driver.
We have a PCI library, that provides some defines.
So fix your driver to use them.


-- 
David Marchand



RE: [PATCH v9 00/12] vdpa/ifc: add multi queue support

2022-10-26 Thread Xia, Chenbo
> -Original Message-
> From: Pei, Andy 
> Sent: Wednesday, October 19, 2022 4:41 PM
> To: dev@dpdk.org
> Cc: Xia, Chenbo ; Xu, Rosen ;
> Huang, Wei ; Cao, Gang ;
> maxime.coque...@redhat.com
> Subject: [PATCH v9 00/12] vdpa/ifc: add multi queue support
> 
> v9:
>  fix some commit message.
> 
> v8:
>  change "vdpa_device_type" in "rte_vdpa_device" to "type".
> 
> v7:
>  Fill vdpa_device_type in vdpa device registration.
> 
> v6:
>  Add vdpa_device_type to rte_vdpa_device to store vDPA device type.
> 
> v5:
>  fix some commit message.
>  rework some code logic.
> 
> v4:
>  fix some commit message.
>  add some comments to code.
>  fix some code to reduce confusion.
> 
> v3:
>  rename device ID macro name.
>  fix some patch title and commit message.
>  delete some unused macros.
>  rework some code logic.
> 
> v2:
>  fix some coding style issue.
>  support dynamic enable/disable queue at run time.
> 
> Andy Pei (10):
>   vdpa/ifc: add multi-queue support
>   vdpa/ifc: set max queues based on virtio spec
>   vdpa/ifc: write queue count to MQ register
>   vdpa/ifc: only configure enabled queue
>   vdpa/ifc: change internal function name
>   vdpa/ifc: add internal API to get device
>   vdpa/ifc: improve internal list logic
>   vhost: add type to rte vdpa device
>   vhost: vDPA blk device gets ready when the first queue is ready
>   vhost: improve vDPA blk device configure condition
> 
> Huang Wei (2):
>   vdpa/ifc: add new device ID for legacy network device
>   vdpa/ifc: support dynamic enable/disable queue
> 
>  drivers/vdpa/ifc/base/ifcvf.c | 144 
>  drivers/vdpa/ifc/base/ifcvf.h |  16 +++-
>  drivers/vdpa/ifc/ifcvf_vdpa.c | 185 +++--
> -
>  lib/vhost/socket.c|  15 +---
>  lib/vhost/vdpa.c  |  15 
>  lib/vhost/vdpa_driver.h   |   2 +
>  lib/vhost/vhost_user.c|  38 +
>  7 files changed, 354 insertions(+), 61 deletions(-)
> 
> --
> 1.8.3.1

Change title ' vhost: add type to rte vdpa device ' to ' vhost: add type to 
vDPA device '

Series applied to next-virtio/main, Thanks



RE: [PATCH v8 1/2] vhost: introduce DMA vchannel unconfiguration

2022-10-26 Thread Xia, Chenbo
> -Original Message-
> From: Ding, Xuan 
> Sent: Tuesday, October 25, 2022 4:26 PM
> To: maxime.coque...@redhat.com; Xia, Chenbo 
> Cc: dev@dpdk.org; Hu, Jiayu ; He, Xingguang
> ; Ling, WeiX ; Jiang, Cheng1
> ; Wang, YuanX ; Ma, WenwuX
> ; Ding, Xuan 
> Subject: [PATCH v8 1/2] vhost: introduce DMA vchannel unconfiguration
> 
> From: Xuan Ding 
> 
> Add a new API rte_vhost_async_dma_unconfigure() to unconfigure DMA
> vChannels in the vhost async data path. Lock protection is also added
> to protect DMA vChannel configuration and unconfiguration
> from concurrent calls.
> 
> Signed-off-by: Xuan Ding 
> ---
>  doc/guides/prog_guide/vhost_lib.rst|  5 ++
>  doc/guides/rel_notes/release_22_11.rst |  5 ++
>  lib/vhost/rte_vhost_async.h| 20 +++
>  lib/vhost/version.map  |  3 ++
>  lib/vhost/vhost.c  | 72 --
>  5 files changed, 100 insertions(+), 5 deletions(-)
> 
> diff --git a/doc/guides/prog_guide/vhost_lib.rst
> b/doc/guides/prog_guide/vhost_lib.rst
> index bad4d819e1..0d9eca1f7d 100644
> --- a/doc/guides/prog_guide/vhost_lib.rst
> +++ b/doc/guides/prog_guide/vhost_lib.rst
> @@ -323,6 +323,11 @@ The following is an overview of some key Vhost API
> functions:
>Get device type of vDPA device, such as VDPA_DEVICE_TYPE_NET,
>VDPA_DEVICE_TYPE_BLK.
> 
> +* ``rte_vhost_async_dma_unconfigure(dma_id, vchan_id)``
> +
> +  Clean DMA vChannel finished to use. After this function is called,
> +  the specified DMA vChannel should no longer be used by the Vhost
> library.
> +
>  Vhost-user Implementations
>  --
> 
> diff --git a/doc/guides/rel_notes/release_22_11.rst
> b/doc/guides/rel_notes/release_22_11.rst
> index 2da8bc9661..bbd1c5aa9c 100644
> --- a/doc/guides/rel_notes/release_22_11.rst
> +++ b/doc/guides/rel_notes/release_22_11.rst
> @@ -225,6 +225,11 @@ New Features
>sysfs entries to adjust the minimum and maximum uncore frequency values,
>which works on Linux with Intel hardware only.
> 
> +* **Added DMA vChannel unconfiguration for async vhost.**
> +
> +  Added support to unconfigure DMA vChannel that is no longer used
> +  by the Vhost library.
> +
>  * **Rewritten pmdinfo script.**
> 
>The ``dpdk-pmdinfo.py`` script was rewritten to produce valid JSON only.
> diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
> index 1db2a10124..8f190dd44b 100644
> --- a/lib/vhost/rte_vhost_async.h
> +++ b/lib/vhost/rte_vhost_async.h
> @@ -266,6 +266,26 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t
> queue_id,
>   struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t
> count,
>   int *nr_inflight, int16_t dma_id, uint16_t vchan_id);
> 
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior
> notice.
> + *
> + * Unconfigure DMA vChannel in Vhost asynchronous data path.
> + * This function should be called when the specified DMA vChannel is no
> longer
> + * used by the Vhost library. Before this function is called, make sure
> there
> + * does not exist in-flight packets in DMA vChannel.
> + *
> + * @param dma_id
> + *  the identifier of DMA device
> + * @param vchan_id
> + *  the identifier of virtual DMA channel
> + * @return
> + *  0 on success, and -1 on failure
> + */
> +__rte_experimental
> +int
> +rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/vhost/version.map b/lib/vhost/version.map
> index 7a00b65740..0b61870870 100644
> --- a/lib/vhost/version.map
> +++ b/lib/vhost/version.map
> @@ -94,6 +94,9 @@ EXPERIMENTAL {
>   rte_vhost_async_try_dequeue_burst;
>   rte_vhost_driver_get_vdpa_dev_type;
>   rte_vhost_clear_queue;
> +
> + # added in 22.11
> + rte_vhost_async_dma_unconfigure;
>  };
> 
>  INTERNAL {
> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> index 8740aa2788..1bb01c2a2e 100644
> --- a/lib/vhost/vhost.c
> +++ b/lib/vhost/vhost.c
> @@ -23,6 +23,7 @@
> 
>  struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE];
>  pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
> +pthread_mutex_t vhost_dma_lock = PTHREAD_MUTEX_INITIALIZER;
> 
>  struct vhost_vq_stats_name_off {
>   char name[RTE_VHOST_STATS_NAME_SIZE];
> @@ -1844,19 +1845,21 @@ rte_vhost_async_dma_configure(int16_t dma_id,
> uint16_t vchan_id)
>   void *pkts_cmpl_flag_addr;
>   uint16_t max_desc;
> 
> + pthread_mutex_lock(&vhost_dma_lock);
> +
>   if (!rte_dma_is_valid(dma_id)) {
>   VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
> - return -1;
> + goto error;
>   }
> 
>   if (rte_dma_info_get(dma_id, &info) != 0) {
>   VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d
> information.\n", dma_id);
> - return -1;
> + goto error;
>   }
> 
>   if (vchan_id >= info.max_vchans) {
>   VHOST_LOG_CONFIG("dma", ERR, 

RE: [PATCH v8 2/2] examples/vhost: unconfigure DMA vchannel

2022-10-26 Thread Xia, Chenbo
> -Original Message-
> From: Ding, Xuan 
> Sent: Tuesday, October 25, 2022 4:26 PM
> To: maxime.coque...@redhat.com; Xia, Chenbo 
> Cc: dev@dpdk.org; Hu, Jiayu ; He, Xingguang
> ; Ling, WeiX ; Jiang, Cheng1
> ; Wang, YuanX ; Ma, WenwuX
> ; Ding, Xuan 
> Subject: [PATCH v8 2/2] examples/vhost: unconfigure DMA vchannel
> 
> From: Xuan Ding 
> 
> This patch applies rte_vhost_async_dma_unconfigure() to manually free
> DMA vChannels. Before unconfiguration, make sure the specified DMA
> vChannel is no longer used by any vhost ports.
> 
> Signed-off-by: Xuan Ding 
> ---
>  examples/vhost/main.c | 8 
>  1 file changed, 8 insertions(+)
> 
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index ac78704d79..42e53a0f9a 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -2066,6 +2066,14 @@ main(int argc, char *argv[])
>   RTE_LCORE_FOREACH_WORKER(lcore_id)
>   rte_eal_wait_lcore(lcore_id);
> 
> + for (i = 0; i < dma_count; i++) {
> + if (rte_vhost_async_dma_unconfigure(dmas_id[i], 0) < 0) {
> + RTE_LOG(ERR, VHOST_PORT,
> + "Failed to unconfigure DMA %d in vhost.\n",
> dmas_id[i]);
> + rte_exit(EXIT_FAILURE, "Cannot use given DMA device\n");
> + }
> + }
> +
>   /* clean up the EAL */
>   rte_eal_cleanup();
> 
> --
> 2.17.1

Reviewed-by: Chenbo Xia 


RE: [PATCH v6 08/27] eal: deprecate RTE_FUNC_PTR_* macros

2022-10-26 Thread Morten Brørup
> From: David Marchand [mailto:david.march...@redhat.com]
> Sent: Wednesday, 14 September 2022 09.58
> 
> Those macros have no real value and are easily replaced with a simple
> if() block.
> 
> Existing users have been converted using a new cocci script.
> Deprecate them.
> 
> Signed-off-by: David Marchand 
> ---

[...]

>  /* Macros to check for invalid function pointers */
> -#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
> +#define RTE_FUNC_PTR_OR_ERR_RET(func, retval)
> RTE_DEPRECATED(RTE_FUNC_PTR_OR_ERR_RET) \
> +do { \
>   if ((func) == NULL) \
>   return retval; \
>  } while (0)
> 
> -#define RTE_FUNC_PTR_OR_RET(func) do { \
> +#define RTE_FUNC_PTR_OR_RET(func) RTE_DEPRECATED(RTE_FUNC_PTR_OR_RET)
> \
> +do { \
>   if ((func) == NULL) \
>   return; \
>  } while (0)

rte_ethdev.h has somewhat similar macros [1]:

/* Macros to check for valid port */
#define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
if (!rte_eth_dev_is_valid_port(port_id)) { \
RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \
return retval; \
} \
} while (0)

#define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
if (!rte_eth_dev_is_valid_port(port_id)) { \
RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \
return; \
} \
} while (0)

However, these log an error message. It makes me wonder about consistency...

Are the RTE_FUNC_PTR_OR_* macros a special case, or should similar macros, such 
as RTE_ETH_VALID_PORTID_OR_*, also be deprecated?

Or should the RTE_FUNC_PTR_OR_* macros be modified to log an error message 
instead of being deprecated?


[1]: 
https://elixir.bootlin.com/dpdk/v22.11-rc1/source/lib/ethdev/rte_ethdev.h#L1909



RE: [PATCH v8 0/2] vhost: introduce DMA vchannel unconfiguration

2022-10-26 Thread Xia, Chenbo
> -Original Message-
> From: Ding, Xuan 
> Sent: Tuesday, October 25, 2022 4:26 PM
> To: maxime.coque...@redhat.com; Xia, Chenbo 
> Cc: dev@dpdk.org; Hu, Jiayu ; He, Xingguang
> ; Ling, WeiX ; Jiang, Cheng1
> ; Wang, YuanX ; Ma, WenwuX
> ; Ding, Xuan 
> Subject: [PATCH v8 0/2] vhost: introduce DMA vchannel unconfiguration
> 
> From: Xuan Ding 
> 
> This patchset introduces a new API rte_vhost_async_dma_unconfigure()
> to help the user manually free DMA vChannels that are no longer in use.
> 
> v8:
> * Check for in-flight packets before releasing the virtual channel.
> 
> v7:
> * Add inflight packets processing.
> * Fix CI error.
> 
> v6:
> * Move DMA unconfiguration to the end due to DMA devices maybe reused
>   after destroy_device().
> * Refine the doc to claim the DMA device should not be used in vhost
>   after unconfiguration.
> 
> v5:
> * Use mutex instead of spinlock.
> * Improve code readability.
> 
> v4:
> * Rebase to 22.11 rc1.
> * Fix the usage of 'dma_ref_count' to make sure the specified DMA device
>   is not used by any vhost ports before unconfiguration.
> 
> v3:
> * Rebase to latest DPDK.
> * Refine some descriptions in the doc.
> * Fix one bug in the vhost example.
> 
> v2:
> * Add spinlock protection.
> * Fix a memory leak issue.
> * Refine the doc.
> 
> Xuan Ding (2):
>   vhost: introduce DMA vchannel unconfiguration
>   examples/vhost: unconfigure DMA vchannel
> 
>  doc/guides/prog_guide/vhost_lib.rst|  5 ++
>  doc/guides/rel_notes/release_22_11.rst |  5 ++
>  examples/vhost/main.c  |  8 +++
>  lib/vhost/rte_vhost_async.h| 20 +++
>  lib/vhost/version.map  |  3 ++
>  lib/vhost/vhost.c  | 72 --
>  6 files changed, 108 insertions(+), 5 deletions(-)
> 
> --
> 2.17.1

Series applied to next-virtio/main, thanks


[Bug 1114] build failures with clang 15

2022-10-26 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1114

Bug ID: 1114
   Summary: build failures with clang 15
   Product: DPDK
   Version: unspecified
  Hardware: All
OS: All
Status: UNCONFIRMED
  Severity: normal
  Priority: Normal
 Component: other
  Assignee: dev@dpdk.org
  Reporter: alia...@nvidia.com
  Target Milestone: ---

DPDK build fails with clang 15 in Fedora Rawhide due to unused variables:

lib/eal/common/rte_service.c:110:6: error: variable 'count' set but not used
[-Werror,-Wunused-but-set-variable]
lib/vhost/virtio_net.c:1880:11: error: variable 'remained' set but not used
[-Werror,-Wunused-but-set-variable]
drivers/bus/dpaa/base/qbman/bman.h:522:6: error: variable 'aq_count' set but
not used [-Werror,-Wunused-but-set-variable]
(And in many other places).

Probably caused by this change:
https://releases.llvm.org/15.0.0/tools/clang/docs/ReleaseNotes.html
> -Wunused-but-set-variable now also warns if the variable is only used by
> unary operators.
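A minimal reproducer sketch of the behavior described above (illustrative only): "count" is only ever modified through a unary operator and never otherwise read, so clang 15 now emits -Wunused-but-set-variable for it.

int
count_events(int n)
{
	int count = 0;  /* set, but never truly used */
	int i;

	for (i = 0; i < n; i++)
		count++;    /* only use is via a unary operator */

	return n;       /* the accumulated count is never read */
}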

-- 
You are receiving this mail because:
You are the assignee for the bug.

RE: [PATCH v2] net/virtio: add queue and port ID in some logs

2022-10-26 Thread Xia, Chenbo
> -Original Message-
> From: Olivier Matz 
> Sent: Monday, October 17, 2022 4:01 PM
> To: dev@dpdk.org
> Cc: Maxime Coquelin ; Xia, Chenbo
> 
> Subject: [PATCH v2] net/virtio: add queue and port ID in some logs
> 
> Add the queue id and/or the port id in some logs, so it is easier to
> understand what happens.
> 
> Signed-off-by: Olivier Matz 
> Reviewed-by: Maxime Coquelin 
> ---
> 
> v2
> * use %u instead of %d for unsigned types
> 
>  drivers/net/virtio/virtio_ethdev.c | 6 --
>  drivers/net/virtio/virtio_rxtx.c   | 3 ++-
>  2 files changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/virtio/virtio_ethdev.c
> b/drivers/net/virtio/virtio_ethdev.c
> index 574f671158..760ba4e368 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -2814,7 +2814,8 @@ virtio_dev_start(struct rte_eth_dev *dev)
>   return -EINVAL;
>   }
> 
> - PMD_INIT_LOG(DEBUG, "nb_queues=%d", nb_queues);
> + PMD_INIT_LOG(DEBUG, "nb_queues=%u (port=%u)", nb_queues,
> +  dev->data->port_id);
> 
>   for (i = 0; i < dev->data->nb_rx_queues; i++) {
>   vq = virtnet_rxq_to_vq(dev->data->rx_queues[i]);
> @@ -2828,7 +2829,8 @@ virtio_dev_start(struct rte_eth_dev *dev)
>   virtqueue_notify(vq);
>   }
> 
> - PMD_INIT_LOG(DEBUG, "Notified backend at initialization");
> + PMD_INIT_LOG(DEBUG, "Notified backend at initialization (port=%u)",
> +  dev->data->port_id);
> 
>   for (i = 0; i < dev->data->nb_rx_queues; i++) {
>   vq = virtnet_rxq_to_vq(dev->data->rx_queues[i]);
> diff --git a/drivers/net/virtio/virtio_rxtx.c
> b/drivers/net/virtio/virtio_rxtx.c
> index 4795893ec7..d9d40832e0 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -793,7 +793,8 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev
> *dev, uint16_t queue_idx)
>   vq_update_avail_idx(vq);
>   }
> 
> - PMD_INIT_LOG(DEBUG, "Allocated %d bufs", nbufs);
> + PMD_INIT_LOG(DEBUG, "Allocated %d bufs (port=%u queue=%u)", nbufs,
> +  dev->data->port_id, queue_idx);
> 
>   VIRTQUEUE_DUMP(vq);
> 
> --
> 2.30.2

Applied to next-virtio/main, thanks


Re: [PATCH] net/gve: fix meson build failure on non-Linux platforms

2022-10-26 Thread Ferruh Yigit

On 10/26/2022 9:51 AM, David Marchand wrote:

On Wed, Oct 26, 2022 at 10:44 AM Junfeng Guo  wrote:


Meson build may fail on FreeBSD with gcc and clang, due to missing
the header file linux/pci_regs.h on non-Linux platform. Thus, in
this patch, we removed the file include and added the used Macros
derived from linux/pci_regs.h.

Fixes: 3047a5ac8e66 ("net/gve: add support for device initialization")
Cc: sta...@dpdk.org


...
No need for Cc: stable.




Signed-off-by: Junfeng Guo 
---
  drivers/net/gve/gve_ethdev.c | 14 +-
  1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index b0f7b98daa..e968317737 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -1,12 +1,24 @@
  /* SPDX-License-Identifier: BSD-3-Clause
   * Copyright(C) 2022 Intel Corporation
   */
-#include 

  #include "gve_ethdev.h"
  #include "base/gve_adminq.h"
  #include "base/gve_register.h"

+/*
+ * Following macros are derived from linux/pci_regs.h, however,
+ * we can't simply include that header here, as there is no such
+ * file for non-Linux platform.
+ */
+#define PCI_CFG_SPACE_SIZE 256
+#define PCI_CAPABILITY_LIST    0x34    /* Offset of first capability list entry */
+#define PCI_STD_HEADER_SIZEOF  64
+#define PCI_CAP_SIZEOF 4
+#define PCI_CAP_ID_MSIX        0x11    /* MSI-X */
+#define PCI_MSIX_FLAGS 2   /* Message Control */
+#define PCI_MSIX_FLAGS_QSIZE   0x07FF  /* Table size */


No, don't introduce such defines in a driver.
We have a PCI library, that provides some defines.
So fix your driver to use them.



I can see the 'bnx2x' driver is using some flags from the 'dev/pci/pcireg.h' 
header for FreeBSD; unfortunately the macro names are not the same, so a #ifdef 
is required. Also, I don't know whether it has all the macros or if the header 
is available in all versions, etc.


Another option could be to define them all for DPDK in a common PCI header? As 
far as I can see, we don't have them all defined right now.
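A rough sketch of what that "common PCI header" option could look like (the RTE_PCI_* names below are hypothetical; only the numeric values and comments come from the patch above):

#ifndef HYPOTHETICAL_RTE_PCI_REGS_H
#define HYPOTHETICAL_RTE_PCI_REGS_H

#define RTE_PCI_CFG_SPACE_SIZE     256
#define RTE_PCI_CAPABILITY_LIST    0x34    /* Offset of first capability list entry */
#define RTE_PCI_STD_HEADER_SIZEOF  64
#define RTE_PCI_CAP_SIZEOF         4
#define RTE_PCI_CAP_ID_MSIX        0x11    /* MSI-X */
#define RTE_PCI_MSIX_FLAGS         2       /* Message Control */
#define RTE_PCI_MSIX_FLAGS_QSIZE   0x07FF  /* Table size */

#endif /* HYPOTHETICAL_RTE_PCI_REGS_H */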




Re: [PATCH v6 08/27] eal: deprecate RTE_FUNC_PTR_* macros

2022-10-26 Thread David Marchand
On Wed, Oct 26, 2022 at 11:05 AM Morten Brørup  
wrote:
>
> > From: David Marchand [mailto:david.march...@redhat.com]
> > Sent: Wednesday, 14 September 2022 09.58
> >
> > Those macros have no real value and are easily replaced with a simple
> > if() block.
> >
> > Existing users have been converted using a new cocci script.
> > Deprecate them.
> >
> > Signed-off-by: David Marchand 
> > ---
>
> [...]
>
> >  /* Macros to check for invalid function pointers */
> > -#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
> > +#define RTE_FUNC_PTR_OR_ERR_RET(func, retval)
> > RTE_DEPRECATED(RTE_FUNC_PTR_OR_ERR_RET) \
> > +do { \
> >   if ((func) == NULL) \
> >   return retval; \
> >  } while (0)
> >
> > -#define RTE_FUNC_PTR_OR_RET(func) do { \
> > +#define RTE_FUNC_PTR_OR_RET(func) RTE_DEPRECATED(RTE_FUNC_PTR_OR_RET)
> > \
> > +do { \
> >   if ((func) == NULL) \
> >   return; \
> >  } while (0)
>
> rte_ethdev.h has somewhat similar macros [1]:
>
> /* Macros to check for valid port */
> #define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
> if (!rte_eth_dev_is_valid_port(port_id)) { \
> RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \
> return retval; \
> } \
> } while (0)
>
> #define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
> if (!rte_eth_dev_is_valid_port(port_id)) { \
> RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \
> return; \
> } \
> } while (0)
>
> However, these log an error message. It makes me wonder about consistency...
>
> Are the RTE_FUNC_PTR_OR_* macros a special case, or should similar macros, 
> such as RTE_ETH_VALID_PORTID_OR_*, also be deprecated?
>
> Or should the RTE_FUNC_PTR_OR_* macros be modified to log an error message 
> instead of being deprecated?

The difference is that those ethdev macros validate something
expressed in their name "is this portid valid".

The RTE_FUNC_PTR_OR_* macros were really just an if() + return block.
I don't see what message to log for them.
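A small sketch of that replacement (illustrative only; the structure and function names are made up, not a real driver):

#include <stddef.h>
#include <errno.h>

struct example_ops {
	int (*configure)(void *ctx);
};

static int
example_configure(struct example_ops *ops, void *ctx)
{
	/* Was: RTE_FUNC_PTR_OR_ERR_RET(ops->configure, -ENOTSUP); */
	if (ops->configure == NULL)
		return -ENOTSUP;

	return ops->configure(ctx);
}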


-- 
David Marchand



[PATCH v4 1/2] app/testpmd: fix vlan offload of rxq

2022-10-26 Thread Mingjin Ye
After setting vlan offload in testpmd, the result is not updated
to rxq. Therefore, the queue needs to be reconfigured after
executing the "vlan offload" related commands.

Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
Cc: sta...@dpdk.org

Signed-off-by: Mingjin Ye 
---
 app/test-pmd/cmdline.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 17be2de402..ce125f549f 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -4133,6 +4133,7 @@ cmd_vlan_offload_parsed(void *parsed_result,
else
vlan_extend_set(port_id, on);
 
+   cmd_reconfig_device_queue(port_id, 1, 1);
return;
 }
 
-- 
2.34.1



[PATCH v4 2/2] net/ice: fix vlan offload

2022-10-26 Thread Mingjin Ye
The VLAN tag and flag in the Rx descriptor are not processed on the vector path,
so the upper application can't fetch the TCI from the mbuf.

This commit adds handling of VLAN Rx offloading.

Fixes: c68a52b8b38c ("net/ice: support vector SSE in Rx")
Fixes: ece1f8a8f1c8 ("net/ice: switch to flexible descriptor in SSE path")
Fixes: 12443386a0b0 ("net/ice: support flex Rx descriptor RxDID22")
Fixes: 214f452f7d5f ("net/ice: add AVX2 offload Rx")
Fixes: 7f85d5ebcfe1 ("net/ice: add AVX512 vector path")
Fixes: 295968d17407 ("ethdev: add namespace")
Fixes: 808a17b3c1e6 ("net/ice: add Rx AVX512 offload path")
Cc: sta...@dpdk.org

Signed-off-by: Mingjin Ye 

v3:
* Fix macros in ice_rxtx_vec_sse.c source file.
v4:
* Fix ice_rx_desc_to_olflags_v define in ice_rxtx_vec_sse.c source file.
---
 drivers/net/ice/ice_rxtx_vec_avx2.c   | 136 +-
 drivers/net/ice/ice_rxtx_vec_avx512.c | 156 +-
 drivers/net/ice/ice_rxtx_vec_sse.c| 132 --
 3 files changed, 335 insertions(+), 89 deletions(-)

diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c 
b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 31d6af42fd..888666f206 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -238,6 +238,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, 
struct rte_mbuf **rx_pkts,
RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
RTE_MBUF_F_RX_RSS_HASH, 0);
 
+
RTE_SET_USED(avx_aligned); /* for 32B descriptors we don't use this */
 
uint16_t i, received;
@@ -474,7 +475,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, 
struct rte_mbuf **rx_pkts,
 * will cause performance drop to get into this context.
 */
if 
(rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-   RTE_ETH_RX_OFFLOAD_RSS_HASH) {
+   (RTE_ETH_RX_OFFLOAD_RSS_HASH | 
RTE_ETH_RX_OFFLOAD_VLAN)) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh7 =
_mm_load_si128
@@ -529,33 +530,112 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, 
struct rte_mbuf **rx_pkts,
 * to shift the 32b RSS hash value to the
 * highest 32b of each 128b before mask
 */
-   __m256i rss_hash6_7 =
-   _mm256_slli_epi64(raw_desc_bh6_7, 32);
-   __m256i rss_hash4_5 =
-   _mm256_slli_epi64(raw_desc_bh4_5, 32);
-   __m256i rss_hash2_3 =
-   _mm256_slli_epi64(raw_desc_bh2_3, 32);
-   __m256i rss_hash0_1 =
-   _mm256_slli_epi64(raw_desc_bh0_1, 32);
-
-   __m256i rss_hash_msk =
-   _mm256_set_epi32(0x, 0, 0, 0,
-0x, 0, 0, 0);
-
-   rss_hash6_7 = _mm256_and_si256
-   (rss_hash6_7, rss_hash_msk);
-   rss_hash4_5 = _mm256_and_si256
-   (rss_hash4_5, rss_hash_msk);
-   rss_hash2_3 = _mm256_and_si256
-   (rss_hash2_3, rss_hash_msk);
-   rss_hash0_1 = _mm256_and_si256
-   (rss_hash0_1, rss_hash_msk);
-
-   mb6_7 = _mm256_or_si256(mb6_7, rss_hash6_7);
-   mb4_5 = _mm256_or_si256(mb4_5, rss_hash4_5);
-   mb2_3 = _mm256_or_si256(mb2_3, rss_hash2_3);
-   mb0_1 = _mm256_or_si256(mb0_1, rss_hash0_1);
-   } /* if() on RSS hash parsing */
+   if 
(rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
+   RTE_ETH_RX_OFFLOAD_RSS_HASH) {
+   __m256i rss_hash6_7 =
+   
_mm256_slli_epi64(raw_desc_bh6_7, 32);
+   __m256i rss_hash4_5 =
+   
_mm256_slli_epi64(raw_desc_bh4_5, 32);
+   __m256i rss_hash2_3 =
+   
_mm256_slli_epi64(raw_desc_bh2_3, 32);
+   __m256i rss_hash0_1 =
+   
_mm256_slli_epi64(raw_desc_bh0_1, 

RE: [PATCH v4] vhost: add new `rte_vhost_vring_call_nonblock` API

2022-10-26 Thread Xia, Chenbo
> -Original Message-
> From: Liu, Changpeng 
> Sent: Monday, October 17, 2022 3:48 PM
> To: dev@dpdk.org
> Cc: Liu, Changpeng ; Maxime Coquelin
> ; Xia, Chenbo ; David
> Marchand 
> Subject: [PATCH v4] vhost: add new `rte_vhost_vring_call_nonblock` API
> 
> The vhost-user library takes every VQ's access lock when processing
> vring-based messages, such as SET_VRING_KICK and SET_VRING_CALL,
> and the data processing thread may already be started; e.g. SPDK
> vhost-blk and vhost-scsi will start the data processing thread
> when one vring is ready, so a deadlock may happen when SPDK is
> posting interrupts to the VM. Here, we add a new API which allows
> the caller to try again later in this case.
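A hedged usage sketch of the retry semantics documented below (it assumes this patch is applied; a real application would defer the call to its event loop instead of spinning like this simplified loop does):

#include <stdint.h>
#include <errno.h>
#include <rte_vhost.h>

static void
notify_used_descs(int vid, uint16_t vring_idx)
{
	int ret;

	do {
		/* Returns 0 on success, -1 on failure, -EAGAIN when the vq
		 * access lock is currently held by another thread.
		 */
		ret = rte_vhost_vring_call_nonblock(vid, vring_idx);
	} while (ret == -EAGAIN);
}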
> 
> Bugzilla ID: 1015
> Fixes: c5736998305d ("vhost: fix missing virtqueue lock protection")
> 
> Signed-off-by: Changpeng Liu 
> ---
>  doc/guides/prog_guide/vhost_lib.rst|  6 ++
>  doc/guides/rel_notes/release_22_11.rst |  6 ++
>  lib/vhost/rte_vhost.h  | 15 +
>  lib/vhost/version.map  |  3 +++
>  lib/vhost/vhost.c  | 30 ++
>  5 files changed, 60 insertions(+)
> 
> diff --git a/doc/guides/prog_guide/vhost_lib.rst
> b/doc/guides/prog_guide/vhost_lib.rst
> index bad4d819e1..52f0589f51 100644
> --- a/doc/guides/prog_guide/vhost_lib.rst
> +++ b/doc/guides/prog_guide/vhost_lib.rst
> @@ -297,6 +297,12 @@ The following is an overview of some key Vhost API
> functions:
>Clear in-flight packets which are submitted to async channel in vhost
> async data
>path. Completed packets are returned to applications through ``pkts``.
> 
> +* ``rte_vhost_vring_call_nonblock(int vid, uint16_t vring_idx)``
> +
> +  Notify the guest that used descriptors have been added to the vring.
> This function
> +  will return -EAGAIN when vq's access lock is held by other thread, user
> should try
> +  again later.
> +
>  * ``rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id, struct
> rte_vhost_stat_name *names, unsigned int size)``
> 
>This function returns the names of the queue statistics. It requires
> diff --git a/doc/guides/rel_notes/release_22_11.rst
> b/doc/guides/rel_notes/release_22_11.rst
> index 2da8bc9661..b3d1ba043c 100644
> --- a/doc/guides/rel_notes/release_22_11.rst
> +++ b/doc/guides/rel_notes/release_22_11.rst
> @@ -225,6 +225,12 @@ New Features
>sysfs entries to adjust the minimum and maximum uncore frequency values,
>which works on Linux with Intel hardware only.
> 
> +* **Added non-blocking notify API to the vhost library.**
> +
> +  Added ``rte_vhost_vring_call_nonblock`` API to notify the guest that
> +  used descriptors have been added to the vring in non-blocking way. User
> +  should check the return value of this API and try again if needed.
> +
>  * **Rewritten pmdinfo script.**
> 
>The ``dpdk-pmdinfo.py`` script was rewritten to produce valid JSON only.
> diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
> index bb7d86a432..d22b25cd4e 100644
> --- a/lib/vhost/rte_vhost.h
> +++ b/lib/vhost/rte_vhost.h
> @@ -909,6 +909,21 @@ rte_vhost_clr_inflight_desc_packed(int vid, uint16_t
> vring_idx,
>   */
>  int rte_vhost_vring_call(int vid, uint16_t vring_idx);
> 
> +/**
> + * Notify the guest that used descriptors have been added to the vring.
> This
> + * function acts as a memory barrier.  This function will return -EAGAIN
> when
> + * vq's access lock is held by other thread, user should try again later.
> + *
> + * @param vid
> + *  vhost device ID
> + * @param vring_idx
> + *  vring index
> + * @return
> + *  0 on success, -1 on failure, -EAGAIN for another retry
> + */
> +__rte_experimental
> +int rte_vhost_vring_call_nonblock(int vid, uint16_t vring_idx);
> +
>  /**
>   * Get vhost RX queue avail count.
>   *
> diff --git a/lib/vhost/version.map b/lib/vhost/version.map
> index 7a00b65740..c8c44b8326 100644
> --- a/lib/vhost/version.map
> +++ b/lib/vhost/version.map
> @@ -94,6 +94,9 @@ EXPERIMENTAL {
>   rte_vhost_async_try_dequeue_burst;
>   rte_vhost_driver_get_vdpa_dev_type;
>   rte_vhost_clear_queue;
> +
> + # added in 22.11
> + rte_vhost_vring_call_nonblock;
>  };
> 
>  INTERNAL {
> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> index 8740aa2788..ed6efb003f 100644
> --- a/lib/vhost/vhost.c
> +++ b/lib/vhost/vhost.c
> @@ -1317,6 +1317,36 @@ rte_vhost_vring_call(int vid, uint16_t vring_idx)
>   return 0;
>  }
> 
> +int
> +rte_vhost_vring_call_nonblock(int vid, uint16_t vring_idx)
> +{
> + struct virtio_net *dev;
> + struct vhost_virtqueue *vq;
> +
> + dev = get_device(vid);
> + if (!dev)
> + return -1;
> +
> + if (vring_idx >= VHOST_MAX_VRING)
> + return -1;
> +
> + vq = dev->virtqueue[vring_idx];
> + if (!vq)
> + return -1;
> +
> + if (!rte_spinlock_trylock(&vq->access_lock))
> + return -EAGAIN;
> +
> + if (vq_is_packed(dev))
> + vhost_vring_call_packed(dev

RE: [PATCH v9 00/12] vdpa/ifc: add multi queue support

2022-10-26 Thread Pei, Andy



> -Original Message-
> From: Xia, Chenbo 
> Sent: Wednesday, October 26, 2022 5:00 PM
> To: Pei, Andy ; dev@dpdk.org
> Cc: Xu, Rosen ; Huang, Wei ;
> Cao, Gang ; maxime.coque...@redhat.com
> Subject: RE: [PATCH v9 00/12] vdpa/ifc: add multi queue support
> 
> > -Original Message-
> > From: Pei, Andy 
> > Sent: Wednesday, October 19, 2022 4:41 PM
> > To: dev@dpdk.org
> > Cc: Xia, Chenbo ; Xu, Rosen
> > ; Huang, Wei ; Cao, Gang
> > ; maxime.coque...@redhat.com
> > Subject: [PATCH v9 00/12] vdpa/ifc: add multi queue support
> >
> > v9:
> >  fix some commit message.
> >
> > v8:
> >  change "vdpa_device_type" in "rte_vdpa_device" to "type".
> >
> > v7:
> >  Fill vdpa_device_type in vdpa device registration.
> >
> > v6:
> >  Add vdpa_device_type to rte_vdpa_device to store vDPA device type.
> >
> > v5:
> >  fix some commit message.
> >  rework some code logic.
> >
> > v4:
> >  fix some commit message.
> >  add some comments to code.
> >  fix some code to reduce confusion.
> >
> > v3:
> >  rename device ID macro name.
> >  fix some patch title and commit message.
> >  delete some unused macros.
> >  rework some code logic.
> >
> > v2:
> >  fix some coding style issue.
> >  support dynamic enable/disable queue at run time.
> >
> > Andy Pei (10):
> >   vdpa/ifc: add multi-queue support
> >   vdpa/ifc: set max queues based on virtio spec
> >   vdpa/ifc: write queue count to MQ register
> >   vdpa/ifc: only configure enabled queue
> >   vdpa/ifc: change internal function name
> >   vdpa/ifc: add internal API to get device
> >   vdpa/ifc: improve internal list logic
> >   vhost: add type to rte vdpa device
> >   vhost: vDPA blk device gets ready when the first queue is ready
> >   vhost: improve vDPA blk device configure condition
> >
> > Huang Wei (2):
> >   vdpa/ifc: add new device ID for legacy network device
> >   vdpa/ifc: support dynamic enable/disable queue
> >
> >  drivers/vdpa/ifc/base/ifcvf.c | 144
> 
> > drivers/vdpa/ifc/base/ifcvf.h |  16 +++-
> > drivers/vdpa/ifc/ifcvf_vdpa.c | 185
> > +++--
> > -
> >  lib/vhost/socket.c|  15 +---
> >  lib/vhost/vdpa.c  |  15 
> >  lib/vhost/vdpa_driver.h   |   2 +
> >  lib/vhost/vhost_user.c|  38 +
> >  7 files changed, 354 insertions(+), 61 deletions(-)
> >
> > --
> > 1.8.3.1
> 
> Change title ' vhost: add type to rte vdpa device ' to ' vhost: add type to 
> vDPA
> device '
> 
OK.  Thanks Chenbo.

> Series applied to next-virtio/main, Thanks



RE: [PATCH v2 0/2] vhost: fix some async vhost index calculation issues

2022-10-26 Thread Xia, Chenbo
> -Original Message-
> From: Jiang, Cheng1 
> Sent: Tuesday, October 11, 2022 11:08 AM
> To: maxime.coque...@redhat.com; Xia, Chenbo 
> Cc: dev@dpdk.org; Hu, Jiayu ; Ding, Xuan
> ; Ma, WenwuX ; Wang, YuanX
> ; Yang, YvonneX ; He,
> Xingguang ; Jiang, Cheng1 
> Subject: [PATCH v2 0/2] vhost: fix some async vhost index calculation
> issues
> 
> Fix some async vhost index calculation issues.
> 
> v2: fixed fixes tag and replaced some words in commit message.
> 
> Cheng Jiang (2):
>   vhost: fix descs count in async vhost packed ring
>   vhost: fix slot index calculation in async vhost
> 
>  lib/vhost/virtio_net.c | 40 +---
>  1 file changed, 29 insertions(+), 11 deletions(-)
> 
> --
> 2.35.1

Series applied to next-virtio/main, thanks


RE: [PATCH] vhost: promote per-queue stats API to stable

2022-10-26 Thread Xia, Chenbo
> -Original Message-
> From: Maxime Coquelin 
> Sent: Monday, October 10, 2022 11:38 PM
> To: dev@dpdk.org; Xia, Chenbo ;
> david.march...@redhat.com
> Cc: Maxime Coquelin 
> Subject: [PATCH] vhost: promote per-queue stats API to stable
> 
> This patch promotes the per-queue stats API to stable.
> The API has been used by the Vhost PMD since v22.07, and
> David Marchand posted a patch to make use of it in next
> OVS release[0].
> 
> [0]:
> http://patchwork.ozlabs.org/project/openvswitch/patch/20221007111613.16955
> 24-4-david.march...@redhat.com/
> 
> Signed-off-by: Maxime Coquelin 
> ---
>  doc/guides/rel_notes/release_22_11.rst | 4 
>  lib/vhost/rte_vhost.h  | 3 ---
>  lib/vhost/version.map  | 6 +++---
>  3 files changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/release_22_11.rst
> b/doc/guides/rel_notes/release_22_11.rst
> index 37bd392f34..d5d3eeae24 100644
> --- a/doc/guides/rel_notes/release_22_11.rst
> +++ b/doc/guides/rel_notes/release_22_11.rst
> @@ -443,6 +443,10 @@ API Changes
> 
>  * raw/ifgpa: The function ``rte_pmd_ifpga_get_pci_bus`` has been removed.
> 
> +* vhost: Promoted ``rte_vhost_vring_stats_get()``,
> +  ``rte_vhost_vring_stats_get_names()`` and
> ``rte_vhost_vring_stats_reset()``
> +  from experimental to stable.
> +
> 
>  ABI Changes
>  ---
> diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
> index bb7d86a432..59c98a0afb 100644
> --- a/lib/vhost/rte_vhost.h
> +++ b/lib/vhost/rte_vhost.h
> @@ -1075,7 +1075,6 @@ rte_vhost_slave_config_change(int vid, bool
> need_reply);
>   *  - Failure if lower than 0. The device ID or queue ID is invalid or
>   +statistics collection is not enabled.
>   */
> -__rte_experimental
>  int
>  rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id,
>   struct rte_vhost_stat_name *name, unsigned int size);
> @@ -1103,7 +1102,6 @@ rte_vhost_vring_stats_get_names(int vid, uint16_t
> queue_id,
>   *  - Failure if lower than 0. The device ID or queue ID is invalid, or
>   *statistics collection is not enabled.
>   */
> -__rte_experimental
>  int
>  rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
>   struct rte_vhost_stat *stats, unsigned int n);
> @@ -1120,7 +1118,6 @@ rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
>   *  - Failure if lower than 0. The device ID or queue ID is invalid, or
>   *statistics collection is not enabled.
>   */
> -__rte_experimental
>  int
>  rte_vhost_vring_stats_reset(int vid, uint16_t queue_id);
> 
> diff --git a/lib/vhost/version.map b/lib/vhost/version.map
> index 7a00b65740..8c5e8aa8d3 100644
> --- a/lib/vhost/version.map
> +++ b/lib/vhost/version.map
> @@ -57,6 +57,9 @@ DPDK_23 {
>   rte_vhost_set_vring_base;
>   rte_vhost_va_from_guest_pa;
>   rte_vhost_vring_call;
> + rte_vhost_vring_stats_get;
> + rte_vhost_vring_stats_get_names;
> + rte_vhost_vring_stats_reset;
> 
>   local: *;
>  };
> @@ -88,9 +91,6 @@ EXPERIMENTAL {
> 
>   # added in 22.07
>   rte_vhost_async_get_inflight_thread_unsafe;
> - rte_vhost_vring_stats_get_names;
> - rte_vhost_vring_stats_get;
> - rte_vhost_vring_stats_reset;
>   rte_vhost_async_try_dequeue_burst;
>   rte_vhost_driver_get_vdpa_dev_type;
>   rte_vhost_clear_queue;
> --
> 2.37.3

Applied to next-virtio/main, thanks
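
For reference, a minimal sketch of how an application might consume the
now-stable per-queue stats API, assuming statistics collection was enabled at
driver registration (RTE_VHOST_USER_NET_STATS_ENABLE); the helper name and the
lack of error handling are illustrative:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_vhost.h>

/* Dump and reset all per-virtqueue counters of one vring. */
static void
dump_vring_stats(int vid, uint16_t queue_id)
{
        /* A call with a NULL array is expected to return the counter count. */
        int n = rte_vhost_vring_stats_get_names(vid, queue_id, NULL, 0);
        if (n <= 0)
                return;

        struct rte_vhost_stat_name *names = calloc(n, sizeof(*names));
        struct rte_vhost_stat *stats = calloc(n, sizeof(*stats));

        if (names != NULL && stats != NULL) {
                rte_vhost_vring_stats_get_names(vid, queue_id, names, n);
                rte_vhost_vring_stats_get(vid, queue_id, stats, n);

                for (int i = 0; i < n; i++)
                        printf("%s: %" PRIu64 "\n", names[i].name, stats[i].value);

                rte_vhost_vring_stats_reset(vid, queue_id);
        }
        free(names);
        free(stats);
}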


Re: [PATCH] net/gve: fix meson build failure on non-Linux platforms

2022-10-26 Thread David Marchand
On Wed, Oct 26, 2022 at 11:10 AM Ferruh Yigit  wrote:
> >> diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
> >> index b0f7b98daa..e968317737 100644
> >> --- a/drivers/net/gve/gve_ethdev.c
> >> +++ b/drivers/net/gve/gve_ethdev.c
> >> @@ -1,12 +1,24 @@
> >>   /* SPDX-License-Identifier: BSD-3-Clause
> >>* Copyright(C) 2022 Intel Corporation
> >>*/
> >> -#include 
> >>
> >>   #include "gve_ethdev.h"
> >>   #include "base/gve_adminq.h"
> >>   #include "base/gve_register.h"
> >>
> >> +/*
> >> + * Following macros are derived from linux/pci_regs.h, however,
> >> + * we can't simply include that header here, as there is no such
> >> + * file for non-Linux platform.
> >> + */
> >> +#define PCI_CFG_SPACE_SIZE 256
> >> +#define PCI_CAPABILITY_LIST    0x34    /* Offset of first capability list entry */
> >> +#define PCI_STD_HEADER_SIZEOF  64
> >> +#define PCI_CAP_SIZEOF 4
> >> +#define PCI_CAP_ID_MSIX        0x11    /* MSI-X */
> >> +#define PCI_MSIX_FLAGS 2   /* Message Control */
> >> +#define PCI_MSIX_FLAGS_QSIZE   0x07FF  /* Table size */
> >
> > No, don't introduce such defines in a driver.
> > We have a PCI library, that provides some defines.
> > So fix your driver to use them.
> >

After chatting with Ferruh, I am OK if we take this compilation fix for
now and unblock the CI.


>
> I can see the 'bnx2x' driver is using some flags from the 'dev/pci/pcireg.h'
> header on FreeBSD, but unfortunately the macro names are not the same, so an
> #ifdef would be required. Also I don't know whether it has all the macros or
> if the header is available in all versions, etc.
>
> Another option could be to define them all for DPDK in a common PCI header?
> As far as I can see we don't have them all defined right now.
>

If there are some missing definitions we can add them in the pci header indeed.
Those are constants, defined in the PCIE standard.

I can post a cleanup later, but I would be happy if someone else handles it.

Thanks.


-- 
David Marchand



RE: [PATCH] net/virtio: remove declaration of undefined function

2022-10-26 Thread Xia, Chenbo
> -Original Message-
> From: Olivier Matz 
> Sent: Thursday, September 29, 2022 8:22 PM
> To: dev@dpdk.org
> Cc: Maxime Coquelin ; Xia, Chenbo
> 
> Subject: [PATCH] net/virtio: remove declaration of undefined function
> 
> This function is not defined, remove its declaration.
> 
> Fixes: c1f86306a026 ("virtio: add new driver")
> 
> Signed-off-by: Olivier Matz 
> ---
>  drivers/net/virtio/virtqueue.h | 4 
>  1 file changed, 4 deletions(-)
> 
> diff --git a/drivers/net/virtio/virtqueue.h
> b/drivers/net/virtio/virtqueue.h
> index d100ed8762..f5d8b40cad 100644
> --- a/drivers/net/virtio/virtqueue.h
> +++ b/drivers/net/virtio/virtqueue.h
> @@ -474,10 +474,6 @@ virtqueue_enable_intr(struct virtqueue *vq)
>   virtqueue_enable_intr_split(vq);
>  }
> 
> -/**
> - *  Dump virtqueue internal structures, for debug purpose only.
> - */
> -void virtqueue_dump(struct virtqueue *vq);
>  /**
>   *  Get all mbufs to be freed.
>   */
> --
> 2.30.2

Applied to next-virtio/main, thanks


Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq

2022-10-26 Thread lihuisong (C)



On 2022/10/27 1:10, Mingjin Ye wrote:

After setting VLAN offload in testpmd, the result is not applied to the
Rx queues. Therefore, the queues need to be reconfigured after executing
the "vlan offload" related commands.

Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
Cc: sta...@dpdk.org

Signed-off-by: Mingjin Ye 
---
  app/test-pmd/cmdline.c | 1 +
  1 file changed, 1 insertion(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 17be2de402..ce125f549f 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -4133,6 +4133,7 @@ cmd_vlan_offload_parsed(void *parsed_result,
else
vlan_extend_set(port_id, on);
  
+	cmd_reconfig_device_queue(port_id, 1, 1);
This means that queue offloads need to be updated by re-calling
dev_configure and setting up all queues again, right?

If so, this adds a usage limitation.

return;
  }
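
For readers outside testpmd: the limitation discussed above comes from per-queue
Rx offloads being latched at rte_eth_rx_queue_setup() time, so an application
that changes a port-level VLAN offload after the queues were created has to
re-run the configure/queue-setup sequence. A minimal sketch, assuming
port_conf, nb_rxq, nb_txq, nb_rxd and mb_pool already exist in the application
(error handling omitted):

#include <stdint.h>
#include <rte_ethdev.h>

/* Sketch: turn on VLAN stripping for a running port by redoing the
 * configure + Rx queue setup sequence. */
static void
reapply_vlan_strip(uint16_t port_id, struct rte_eth_conf port_conf,
                   uint16_t nb_rxq, uint16_t nb_txq, uint16_t nb_rxd,
                   struct rte_mempool *mb_pool)
{
        rte_eth_dev_stop(port_id);

        port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
        rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);

        for (uint16_t q = 0; q < nb_rxq; q++)
                /* NULL rx_conf selects the PMD's default queue config;
                 * the port-level offloads apply to every queue. */
                rte_eth_rx_queue_setup(port_id, q, nb_rxd,
                                       rte_eth_dev_socket_id(port_id),
                                       NULL, mb_pool);

        rte_eth_dev_start(port_id);
}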
  


Re: [EXT] [PATCH resend v5 0/6] crypto/uadk: introduce uadk crypto driver

2022-10-26 Thread Zhangfei Gao
Hi, Akhil

Thanks for your guidance.

On Tue, 25 Oct 2022 at 23:02, Akhil Goyal  wrote:
>
>
> > Introduce a new crypto PMD for hardware accelerators based on UADK [1].
> >
> > UADK is a framework for user applications to access hardware accelerators.
> > UADK relies on the IOMMU SVA (Shared Virtual Address) feature, which shares
> > the same page table between the IOMMU and the MMU.
> > Thereby a user application can directly use virtual addresses for device DMA,
> > which enhances performance as well as usability.
> >
> > [1]  https://github.com/Linaro/uadk
> >
> > Test:
> > sudo dpdk-test --vdev=crypto_uadk (--log-level=6)
> > RTE>>cryptodev_uadk_autotest
> > RTE>>quit
> >
> > resend:
> > fix uadk lib, still use 
>
> There is nothing called a 'resend' in the DPDK ML.
> Please simply increment the version number each time you send the set.
>
>
> Also add release notes update in the patch
> where the driver implementation is complete.
>
> Compilation issues are all done now.
> Have sent some other comments on the patches. Please send the v6 ASAP.
> We need to close RC2 in a couple of days.
>

Got it, have addressed the comments and updated to v6
https://github.com/Linaro/dpdk/commits/next-9.11-v6
Will send tonight.

Some points I am uncertain about:
1. The updated uadk.rst:
https://github.com/Linaro/dpdk/blob/next-9.11-v6/doc/guides/cryptodevs/uadk.rst

2. About the release notes change:
should I add it as a new patch at the end (07):
https://github.com/Linaro/dpdk/commit/f2294300662def012e00d5a3c5466bbf8436904b

or merge it into the first patch (01)?

3. I updated uadk.rst and the test in the same patch; is that OK?
https://github.com/Linaro/dpdk/commit/4be993e9995def1061dcfdd7863a7f2ba6d5da6c

Thanks


RE: [PATCH] net/mlx5: fix assert failure in item translation

2022-10-26 Thread Raslan Darawsheh
Hi,

> -Original Message-
> From: Suanming Mou 
> Sent: Wednesday, October 26, 2022 5:09 AM
> To: Matan Azrad ; Slava Ovsiienko
> 
> Cc: dev@dpdk.org; Raslan Darawsheh 
> Subject: [PATCH] net/mlx5: fix assert failure in item translation
> 
> When HW Steering is enabled, mlx5dr translates flow items using DV
> item translation functions to configure flows in root flow table.
> The represented port item stores the vport metadata tag in a thread-local
> workspace when translated to a DV spec. Because the thread-local workspace
> was not created in HW Steering mode, this caused an assertion failure in
> mlx5_flow_get_thread_workspace().
> 
> This patch adds initialization of thread-local workspace when flow items
> are translated to DV spec in HW Steering mode.
> 
> Fixes: cfddba76af4f ("net/mlx5: add hardware steering item translation
> function")
> 
> Signed-off-by: Suanming Mou 
> Acked-by: Viacheslav Ovsiienko 

Squashed into relevant commit in next-net-mlx,

Kindest regards,
Raslan Darawsheh


RE: [PATCH] net/mlx5/hws: set ft_type on root matcher creation

2022-10-26 Thread Raslan Darawsheh
Hi,

> -Original Message-
> From: Suanming Mou 
> Sent: Wednesday, October 26, 2022 5:10 AM
> To: Matan Azrad ; Slava Ovsiienko
> 
> Cc: dev@dpdk.org; Raslan Darawsheh ; Alex Vesker
> 
> Subject: [PATCH] net/mlx5/hws: set ft_type on root matcher creation
> 
> The ft_type should be provided to mlx5dv_create_flow_matcher
> if the matcher attributes exist, not only for FDB but for NIC
> as well. If the ft_type is not provided, matcher/rule creation
> might fail.
> 
> Fixes: ac7931dd1908 ("net/mlx5/hws: add HWS matcher object")
> 
> Signed-off-by: Alex Vesker 
> Signed-off-by: Suanming Mou 
> Acked-by: Viacheslav Ovsiienko 

Squashed into relevant commit in next-net-mlx,

Kindest regards,
Raslan Darawsheh


RE: [PATCH] net/mlx5: remove unneeded SQ null checking

2022-10-26 Thread Raslan Darawsheh
Hi,

> -Original Message-
> From: Suanming Mou 
> Sent: Wednesday, October 26, 2022 9:33 AM
> To: Matan Azrad ; Slava Ovsiienko
> 
> Cc: dev@dpdk.org; Raslan Darawsheh 
> Subject: [PATCH] net/mlx5: remove unneeded SQ null checking
> 
> The ASO SQ is a dynamic array in the ASO pool struct. That means
> checking the SQ pointer is incorrect and not needed. In addition,
> the mlx5_aso_destroy_sq() function internally already checks whether
> any resources inside the SQ need to be released, so there is no need
> for that extra check.
> 
> This commit removes the redundant checking code.
> 
> Fixes: aa90929cd5db ("net/mlx5: add HW steering connection tracking
> support")
> 
> Signed-off-by: Suanming Mou 
> Acked-by: Matan Azrad 

Squashed into relevant commit into next-net-mlx,

Kindest regards,
Raslan Darawsheh


Re: [EXT] [PATCH resend v5 6/6] test/crypto: add cryptodev_uadk_autotest

2022-10-26 Thread Zhangfei Gao
On Tue, 25 Oct 2022 at 22:57, Akhil Goyal  wrote:
>
> > Subject: [EXT] [PATCH resend v5 6/6] test/crypto: add cryptodev_uadk_autotest
> Rewrite patch title as
> test/crypto: support uadk PMD
>
> >
> Add a line
> Updated test application to run autotest for uadk crypto PMD.
> > Example:
> > sudo dpdk-test --vdev=crypto_uadk --log-level=6
> > RTE>>cryptodev_uadk_autotest
> > RTE>>quit
>
> Add this information in uadk.rst as well.

Is it OK to add uadk.rst change in this patch?


Re: [PATCH] net/gve: fix meson build failure on non-Linux platforms

2022-10-26 Thread Ferruh Yigit

On 10/26/2022 10:33 AM, David Marchand wrote:

On Wed, Oct 26, 2022 at 11:10 AM Ferruh Yigit  wrote:

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index b0f7b98daa..e968317737 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -1,12 +1,24 @@
   /* SPDX-License-Identifier: BSD-3-Clause
* Copyright(C) 2022 Intel Corporation
*/
-#include 

   #include "gve_ethdev.h"
   #include "base/gve_adminq.h"
   #include "base/gve_register.h"

+/*
+ * Following macros are derived from linux/pci_regs.h, however,
+ * we can't simply include that header here, as there is no such
+ * file for non-Linux platform.
+ */
+#define PCI_CFG_SPACE_SIZE 256
+#define PCI_CAPABILITY_LIST    0x34    /* Offset of first capability list entry */
+#define PCI_STD_HEADER_SIZEOF  64
+#define PCI_CAP_SIZEOF 4
+#define PCI_CAP_ID_MSIX        0x11    /* MSI-X */
+#define PCI_MSIX_FLAGS 2   /* Message Control */
+#define PCI_MSIX_FLAGS_QSIZE   0x07FF  /* Table size */




@Junfeng,

Can you please move these defines to 'gve_ethdev.h'?
Beginning of 'gve_ethdev.c' seems too unrelated for PCI defines.


No, don't introduce such defines in a driver.
We have a PCI library, that provides some defines.
So fix your driver to use them.



After chat with Ferruh, I am ok if we take this compilation fix for
now and unblock the CI..



ack





I can see the 'bnx2x' driver is using some flags from the 'dev/pci/pcireg.h'
header on FreeBSD, but unfortunately the macro names are not the same, so an
#ifdef would be required. Also I don't know whether it has all the macros or
if the header is available in all versions, etc.

Another option could be to define them all for DPDK in a common PCI header?
As far as I can see we don't have them all defined right now.



If there are some missing definitions we can add them in the pci header indeed.
Those are constants, defined in the PCIE standard.

I can post a cleanup later, but I would be happy if someone else handles it.



@Junfeng,

Do you have bandwidth for this cleanup?

The task is to define the PCI macros in a common PCI header ('bus_pci_driver.h',
I guess?), update the drivers to use the new common ones, and remove the local
definitions from the drivers.


DPDK macros can use the Linux macro names with an RTE_ prefix and a define
guard, as done in the following:

https://elixir.bootlin.com/dpdk/v22.07/source/drivers/bus/pci/linux/pci_init.h#L50
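
For illustration, the guard pattern from the header linked above, applied to two
of the values the gve fix needs. The names and values mirror the patch quoted
earlier in this thread; where the definitions would finally live
('bus_pci_driver.h' or another common header) is still an open question, and an
RTE_-prefixed spelling as Ferruh suggests would drop the guard in exchange for a
rename in the drivers:

/* Candidate addition to a common bus/pci header: standard PCIe constants,
 * guarded so platforms that already define them do not conflict. */
#ifndef PCI_MSIX_FLAGS
#define PCI_MSIX_FLAGS          2       /* Message Control */
#endif

#ifndef PCI_MSIX_FLAGS_QSIZE
#define PCI_MSIX_FLAGS_QSIZE    0x07FF  /* Table size */
#endif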



[dpdk-dev v4] crypto/ipsec_mb: multi-process IPC request handler

2022-10-26 Thread Kai Ji
As the queue pairs used in a secondary process need to be set up by
the primary process, this patch adds an IPC register function to help
the secondary process send queue-pair setup requests to the primary
process via IPC messages. A new "qp_in_used_pid" param stores the PID
to provide ownership of the queue pair, so that only a queue pair with
a matching PID can be freed in the request.

Signed-off-by: Kai Ji 
---
v4:
- review comments resolved

v3:
- remove shared memzone as qp_conf params can be passed directly from
ipc message.

v2:
- add in shared memzone for data exchange between multi-process
---
 drivers/crypto/ipsec_mb/ipsec_mb_ops.c | 129 -
 drivers/crypto/ipsec_mb/ipsec_mb_private.c |  24 +++-
 drivers/crypto/ipsec_mb/ipsec_mb_private.h |  48 +++-
 3 files changed, 195 insertions(+), 6 deletions(-)
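
The queue-pair setup request described above builds on the standard EAL
multi-process IPC flow. Stripped of the driver-specific payload, the pattern is
roughly the following sketch; the message name, handler and payload fields are
illustrative, not the driver's actual symbols:

#include <string.h>
#include <stdlib.h>
#include <time.h>
#include <rte_eal.h>
#include <rte_string_fns.h>

#define DEMO_MP_MSG "demo_qp_request"

/* Primary process: serve requests coming from secondaries. */
static int
demo_mp_handler(const struct rte_mp_msg *msg, const void *peer)
{
        struct rte_mp_msg reply;

        memset(&reply, 0, sizeof(reply));
        rte_strlcpy(reply.name, DEMO_MP_MSG, sizeof(reply.name));
        /* ... act on msg->param, put the result into reply.param ... */
        return rte_mp_reply(&reply, peer);
}

/* Primary process, once at init time. */
static int
demo_register_handler(void)
{
        return rte_mp_action_register(DEMO_MP_MSG, demo_mp_handler);
}

/* Secondary process: synchronous request, wait up to one second. */
static int
demo_send_request(void)
{
        struct rte_mp_msg req;
        struct rte_mp_reply rep;
        struct timespec ts = { .tv_sec = 1, .tv_nsec = 0 };
        int ret;

        memset(&req, 0, sizeof(req));
        rte_strlcpy(req.name, DEMO_MP_MSG, sizeof(req.name));
        /* ... fill req.param / req.len_param with the request payload ... */

        ret = rte_mp_request_sync(&req, &rep, &ts);
        if (ret == 0)
                free(rep.msgs);   /* reply messages are allocated by EAL */
        return ret;
}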

diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c 
b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
index cedcaa2742..bf18d692bd 100644
--- a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
+++ b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
@@ -3,6 +3,7 @@
  */

 #include 
+#include 

 #include 
 #include 
@@ -93,6 +94,46 @@ ipsec_mb_info_get(struct rte_cryptodev *dev,
}
 }

+static int
+ipsec_mb_secondary_qp_op(int dev_id, int qp_id,
+   const struct rte_cryptodev_qp_conf *qp_conf,
+   int socket_id, enum ipsec_mb_mp_req_type op_type)
+{
+   int ret;
+   struct rte_mp_msg qp_req_msg;
+   struct rte_mp_msg *qp_resp_msg;
+   struct rte_mp_reply qp_resp;
+   struct ipsec_mb_mp_param *req_param;
+   struct ipsec_mb_mp_param *resp_param;
+   struct timespec ts = {.tv_sec = 1, .tv_nsec = 0};
+
+   memset(&qp_req_msg, 0, sizeof(IPSEC_MB_MP_MSG));
+   memcpy(qp_req_msg.name, IPSEC_MB_MP_MSG, sizeof(IPSEC_MB_MP_MSG));
+   req_param = (struct ipsec_mb_mp_param *)&qp_req_msg.param;
+
+   qp_req_msg.len_param = sizeof(struct ipsec_mb_mp_param);
+   req_param->type = op_type;
+   req_param->dev_id = dev_id;
+   req_param->qp_id = qp_id;
+   req_param->socket_id = socket_id;
+   req_param->process_id = getpid();
+   if (qp_conf) {
+   req_param->nb_descriptors = qp_conf->nb_descriptors;
+   req_param->mp_session = (void *)qp_conf->mp_session;
+   }
+
+   qp_req_msg.num_fds = 0;
+   ret = rte_mp_request_sync(&qp_req_msg, &qp_resp, &ts);
+   if (ret) {
+   RTE_LOG(ERR, USER1, "Create MR request to primary process failed.");
+   return -1;
+   }
+   qp_resp_msg = &qp_resp.msgs[0];
+   resp_param = (struct ipsec_mb_mp_param *)qp_resp_msg->param;
+
+   return resp_param->result;
+}
+
 /** Release queue pair */
 int
 ipsec_mb_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
@@ -100,7 +141,10 @@ ipsec_mb_qp_release(struct rte_cryptodev *dev, uint16_t 
qp_id)
struct ipsec_mb_qp *qp = dev->data->queue_pairs[qp_id];
struct rte_ring *r = NULL;

-   if (qp != NULL && rte_eal_process_type() == RTE_PROC_PRIMARY) {
+   if (qp != NULL)
+   return 0;
+
+   if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
r = rte_ring_lookup(qp->name);
rte_ring_free(r);

@@ -115,6 +159,9 @@ ipsec_mb_qp_release(struct rte_cryptodev *dev, uint16_t 
qp_id)
 #endif
rte_free(qp);
dev->data->queue_pairs[qp_id] = NULL;
+   } else { /* secondary process */
+   return ipsec_mb_secondary_qp_op(dev->data->dev_id, qp_id,
+   NULL, 0, RTE_IPSEC_MB_MP_REQ_QP_FREE);
}
return 0;
 }
@@ -222,9 +269,13 @@ ipsec_mb_qp_setup(struct rte_cryptodev *dev, uint16_t 
qp_id,
 #endif
qp = dev->data->queue_pairs[qp_id];
if (qp == NULL) {
-   IPSEC_MB_LOG(ERR, "Primary process hasn't configured device qp.");
-   return -EINVAL;
+   IPSEC_MB_LOG(DEBUG, "Secondary process setting up device qp.");
+   return ipsec_mb_secondary_qp_op(dev->data->dev_id, qp_id,
+   qp_conf, socket_id, RTE_IPSEC_MB_MP_REQ_QP_SET);
}
+
+   IPSEC_MB_LOG(ERR, "Queue pair already setup'ed.");
+   return -EINVAL;
} else {
/* Free memory prior to re-allocation if needed. */
if (dev->data->queue_pairs[qp_id] != NULL)
@@ -296,6 +347,78 @@ ipsec_mb_qp_setup(struct rte_cryptodev *dev, uint16_t 
qp_id,
return ret;
 }

+int
+ipsec_mb_ipc_request(const struct rte_mp_msg *mp_msg, const void *peer)
+{
+   struct rte_mp_msg ipc_resp;
+   struct ipsec_mb_mp_param *resp_param =
+   (struct ipsec_mb_mp_param *)ipc_resp.param;
+   const struct ipsec_mb_mp_param *req_param =
+   (const struct ipsec_mb_mp_param *)mp_msg->param;
+
+   int ret;
+   struct rte_cryptodev *dev;
+  

Re: [EXT] [PATCH resend v5 5/6] crypto/uadk: support auth algorithms

2022-10-26 Thread Zhangfei Gao




On 2022/10/25 下午10:38, Akhil Goyal wrote:

Hash algorithms:

* ``RTE_CRYPTO_AUTH_MD5``
* ``RTE_CRYPTO_AUTH_MD5_HMAC``
* ``RTE_CRYPTO_AUTH_SHA1``
* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
* ``RTE_CRYPTO_AUTH_SHA224``
* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
* ``RTE_CRYPTO_AUTH_SHA256``
* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
* ``RTE_CRYPTO_AUTH_SHA384``
* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
* ``RTE_CRYPTO_AUTH_SHA512``
* ``RTE_CRYPTO_AUTH_SHA512_HMAC``


It is better to rewrite the description as

Support added for MD5, SHA1, SHA224, SHA256, SHA384, SHA512
Authentication algorithms with and without HMAC.


Signed-off-by: Zhangfei Gao 



---
  doc/guides/cryptodevs/features/uadk.ini |  12 +
  doc/guides/cryptodevs/uadk.rst  |  15 +
  drivers/crypto/uadk/uadk_crypto_pmd.c   | 459 
  3 files changed, 486 insertions(+)





@@ -72,6 +84,252 @@ RTE_LOG_REGISTER_DEFAULT(uadk_crypto_logtype,
INFO);
## __VA_ARGS__)

  static const struct rte_cryptodev_capabilities uadk_crypto_v2_capabilities[] 
= {
+   {   /* MD5 HMAC */
+   .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+   {.sym = {
+   .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+   {.auth = {
+   .algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+   .block_size = 64,
+   .key_size = {
+   .min = 0,
+   .max = 0,
+   .increment = 0
+   },

It seems there is a mistake here.
Auth algos with HMAC do have keys > 0.

I am not sure if your test cases are getting passed with these capabilities.

Same comment for all the HMAC capabilities.


After double-checking and studying the openssl crypto PMD configuration:
the plain hashes do not take a key, so the key size is 0, but HMAC does
need a key. After adopting the openssl crypto PMD's key-size values, the
number of passing cases increased from 50 to 72, though no failing cases
were reported in either configuration.


Thanks for the reminder.
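
For reference, a capability entry that advertises a real HMAC key range would
look roughly like the sketch below. The bounds are modelled on other software
crypto PMDs and only illustrate the shape of the entry, not the values that v6
of this series actually carries:

#include <rte_cryptodev.h>

static const struct rte_cryptodev_capabilities demo_caps[] = {
        {       /* MD5 HMAC, with a non-zero key range */
                .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
                {.sym = {
                        .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
                        {.auth = {
                                .algo = RTE_CRYPTO_AUTH_MD5_HMAC,
                                .block_size = 64,
                                .key_size = {
                                        .min = 1,
                                        .max = 64,
                                        .increment = 1
                                },
                                .digest_size = {
                                        .min = 1,
                                        .max = 16,
                                        .increment = 1
                                },
                        }, }
                }, }
        },
        RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};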



RE: [PATCH] net/gve: fix meson build failure on non-Linux platforms

2022-10-26 Thread Gao, DaxueX
> From: Junfeng Guo 
> Sent: October 26, 2022 16:43
> To: Zhang, Qi Z ; Wu, Jingjing ;
> ferruh.yi...@xilinx.com; Xing, Beilei 
> Cc: dev@dpdk.org; Li, Xiaoyun ;
> awogbem...@google.com; Richardson, Bruce ;
> hemant.agra...@nxp.com; step...@networkplumber.org; Xia, Chenbo
> ; Zhang, Helin ; Guo, Junfeng
> ; sta...@dpdk.org
> Subject: [PATCH] net/gve: fix meson build failure on non-Linux platforms
> 
> Meson build may fail on FreeBSD with gcc and clang, due to the missing header
> file linux/pci_regs.h on non-Linux platforms. Thus, in this patch, we removed
> the file include and added the needed macros derived from linux/pci_regs.h.
> 
> Fixes: 3047a5ac8e66 ("net/gve: add support for device initialization")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Junfeng Guo 

Tested-by: Daxue Gao 


[PATCH v12 00/18] add support for idpf PMD in DPDK

2022-10-26 Thread Junfeng Guo
This patchset introduces the idpf (Infrastructure Data Path Function)
PMD in DPDK for the Intel® IPU E2000 (Device ID: 0x1452).
The Intel® IPU E2000 targets delivering high performance under real
workloads with security and isolation.
Please refer to
https://www.intel.com/content/www/us/en/products/network-io/infrastructure-processing-units/asic/e2000-asic.html
for more information.

Linux upstreaming is still ongoing; previous work can be found at
https://patchwork.ozlabs.org/project/intel-wired-lan/patch/20220128001009.721392-20-alan.br...@intel.com/.

v2-v4:
fixed some coding style issues and did some refactors.

v5:
fixed typo.

v6-v9:
fixed build errors and coding style issues.

v11:
 - move shared code to common/idpf/base
 - Create one vport if there's no vport devargs
 - Refactor if conditions according to coding style
 - Refactor virtual channel return values
 - Refine dev_stop function
 - Refine RSS lut/key
 - Fix build error

v12:
 - Refine dev_configure
 - Fix coding style according to the comments
 - Re-order patches
 - Remove dev_supported_ptypes_get

Junfeng Guo (18):
  common/idpf: introduce common library
  net/idpf: add support for device initialization
  net/idpf: add Tx queue setup
  net/idpf: add Rx queue setup
  net/idpf: add support for device start and stop
  net/idpf: add support for queue start
  net/idpf: add support for queue stop
  net/idpf: add queue release
  net/idpf: add support for MTU configuration
  net/idpf: add support for basic Rx datapath
  net/idpf: add support for basic Tx datapath
  net/idpf: support packet type get
  net/idpf: add support for write back based on ITR expire
  net/idpf: add support for RSS
  net/idpf: add support for Rx offloading
  net/idpf: add support for Tx offloading
  net/idpf: add AVX512 data path for single queue model
  net/idpf: add support for timestamp offload

 MAINTAINERS   |9 +
 doc/guides/nics/features/idpf.ini |   18 +
 doc/guides/nics/idpf.rst  |   85 +
 doc/guides/nics/index.rst |1 +
 doc/guides/rel_notes/release_22_11.rst|6 +
 drivers/common/idpf/base/idpf_alloc.h |   22 +
 drivers/common/idpf/base/idpf_common.c|  364 +++
 drivers/common/idpf/base/idpf_controlq.c  |  691 
 drivers/common/idpf/base/idpf_controlq.h  |  224 ++
 drivers/common/idpf/base/idpf_controlq_api.h  |  234 ++
 .../common/idpf/base/idpf_controlq_setup.c|  179 +
 drivers/common/idpf/base/idpf_devids.h|   18 +
 drivers/common/idpf/base/idpf_lan_pf_regs.h   |  134 +
 drivers/common/idpf/base/idpf_lan_txrx.h  |  428 +++
 drivers/common/idpf/base/idpf_lan_vf_regs.h   |  114 +
 drivers/common/idpf/base/idpf_osdep.h |  364 +++
 drivers/common/idpf/base/idpf_prototype.h |   45 +
 drivers/common/idpf/base/idpf_type.h  |  106 +
 drivers/common/idpf/base/meson.build  |   14 +
 drivers/common/idpf/base/siov_regs.h  |   47 +
 drivers/common/idpf/base/virtchnl.h   | 2866 +
 drivers/common/idpf/base/virtchnl2.h  | 1462 +
 drivers/common/idpf/base/virtchnl2_lan_desc.h |  606 
 .../common/idpf/base/virtchnl_inline_ipsec.h  |  567 
 drivers/common/idpf/meson.build   |4 +
 drivers/common/idpf/version.map   |   12 +
 drivers/common/meson.build|1 +
 drivers/net/idpf/idpf_ethdev.c| 1280 
 drivers/net/idpf/idpf_ethdev.h|  252 ++
 drivers/net/idpf/idpf_logs.h  |   56 +
 drivers/net/idpf/idpf_rxtx.c  | 2305 +
 drivers/net/idpf/idpf_rxtx.h  |  291 ++
 drivers/net/idpf/idpf_rxtx_vec_avx512.c   |  871 +
 drivers/net/idpf/idpf_rxtx_vec_common.h   |  100 +
 drivers/net/idpf/idpf_vchnl.c | 1430 
 drivers/net/idpf/meson.build  |   44 +
 drivers/net/idpf/version.map  |3 +
 drivers/net/meson.build   |1 +
 38 files changed, 15254 insertions(+)
 create mode 100644 doc/guides/nics/features/idpf.ini
 create mode 100644 doc/guides/nics/idpf.rst
 create mode 100644 drivers/common/idpf/base/idpf_alloc.h
 create mode 100644 drivers/common/idpf/base/idpf_common.c
 create mode 100644 drivers/common/idpf/base/idpf_controlq.c
 create mode 100644 drivers/common/idpf/base/idpf_controlq.h
 create mode 100644 drivers/common/idpf/base/idpf_controlq_api.h
 create mode 100644 drivers/common/idpf/base/idpf_controlq_setup.c
 create mode 100644 drivers/common/idpf/base/idpf_devids.h
 create mode 100644 drivers/common/idpf/base/idpf_lan_pf_regs.h
 create mode 100644 drivers/common/idpf/base/idpf_lan_txrx.h
 create mode 100644 drivers/common/idpf/base/idpf_lan_vf_regs.h
 create mode 100644 drivers/common/idpf/base/idpf_osdep.h
 create mode 100644 drivers/common/idpf/base/idpf_prototype.h
 create mode 100644 drivers/common/idpf/base/idpf_type.h
 create mode 10064

[PATCH v12 02/18] net/idpf: add support for device initialization

2022-10-26 Thread Junfeng Guo
Support device init and add the following dev ops:
 - dev_configure
 - dev_close
 - dev_infos_get

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Xiao Wang 
Signed-off-by: Wenjun Wu 
Signed-off-by: Junfeng Guo 
---
 MAINTAINERS|   9 +
 doc/guides/nics/features/idpf.ini  |   9 +
 doc/guides/nics/idpf.rst   |  66 ++
 doc/guides/nics/index.rst  |   1 +
 doc/guides/rel_notes/release_22_11.rst |   6 +
 drivers/net/idpf/idpf_ethdev.c | 886 +
 drivers/net/idpf/idpf_ethdev.h | 189 ++
 drivers/net/idpf/idpf_logs.h   |  56 ++
 drivers/net/idpf/idpf_vchnl.c  | 468 +
 drivers/net/idpf/meson.build   |  15 +
 drivers/net/idpf/version.map   |   3 +
 drivers/net/meson.build|   1 +
 12 files changed, 1709 insertions(+)
 create mode 100644 doc/guides/nics/features/idpf.ini
 create mode 100644 doc/guides/nics/idpf.rst
 create mode 100644 drivers/net/idpf/idpf_ethdev.c
 create mode 100644 drivers/net/idpf/idpf_ethdev.h
 create mode 100644 drivers/net/idpf/idpf_logs.h
 create mode 100644 drivers/net/idpf/idpf_vchnl.c
 create mode 100644 drivers/net/idpf/meson.build
 create mode 100644 drivers/net/idpf/version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 92b381bc30..34f8b9cc61 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -764,6 +764,15 @@ F: drivers/net/ice/
 F: doc/guides/nics/ice.rst
 F: doc/guides/nics/features/ice.ini
 
+Intel idpf
+M: Jingjing Wu 
+M: Beilei Xing 
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/idpf/
+F: drivers/common/idpf/
+F: doc/guides/nics/idpf.rst
+F: doc/guides/nics/features/idpf.ini
+
 Intel igc
 M: Junfeng Guo 
 M: Simei Su 
diff --git a/doc/guides/nics/features/idpf.ini 
b/doc/guides/nics/features/idpf.ini
new file mode 100644
index 00..46aab2eb61
--- /dev/null
+++ b/doc/guides/nics/features/idpf.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'idpf' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux= Y
+x86-32   = Y
+x86-64   = Y
diff --git a/doc/guides/nics/idpf.rst b/doc/guides/nics/idpf.rst
new file mode 100644
index 00..c1001d5d0c
--- /dev/null
+++ b/doc/guides/nics/idpf.rst
@@ -0,0 +1,66 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+Copyright(c) 2022 Intel Corporation.
+
+IDPF Poll Mode Driver
+==
+
+The [*EXPERIMENTAL*] idpf PMD (**librte_net_idpf**) provides poll mode driver 
support for
+Intel® Infrastructure Processing Unit (Intel® IPU) E2000.
+
+
+Linux Prerequisites
+---
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux ` to setup 
the basic DPDK environment.
+
+- To get better performance on Intel platforms, please follow the "How to get 
best performance with NICs on Intel platforms"
+  section of the :ref:`Getting Started Guide for Linux `.
+
+
+Pre-Installation Configuration
+--
+
+Runtime Config Options
+~~
+
+- ``vport`` (default ``0``)
+
+  The IDPF PMD supports creation of multiple vports for one PCI device, each 
vport
+  corresponds to a single ethdev. Using the ``devargs`` parameter ``vport`` 
the user
+  can specify the vports with specific ID to be created, for example::
+
+-a ca:00.0,vport=[0,2,3]
+
+  Then idpf PMD will create 3 vports (ethdevs) for device ca:00.0.
+  NOTE: If the parameter is not provided, the vport 0 will be created by 
default.
+
+- ``rx_single`` (default ``0``)
+
+  There're two queue modes supported by Intel® IPU Ethernet ES2000 Series, 
single queue
+  mode and split queue mode for Rx queue. User can choose Rx queue mode by the 
``devargs``
+  parameter ``rx_single``.
+
+-a ca:00.0,rx_single=1
+
+  Then idpf PMD will configure Rx queue with single queue mode. Otherwise, 
split queue
+  mode is chosen by default.
+
+- ``tx_single`` (default ``0``)
+
+  There're two queue modes supported by Intel® IPU Ethernet ES2000 Series, 
single queue
+  mode and split queue mode for Tx queue. User can choose Tx queue mode by the 
``devargs``
+  parameter ``tx_single``.
+
+-a ca:00.0,tx_single=1
+
+  Then idpf PMD will configure Tx queue with single queue mode. Otherwise, 
split queue
+  mode is chosen by default.
+
+
+Driver compilation and testing
+--
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC 
`
+for details.
+
+
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 32c7544968..b4dd5ea522 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -33,6 +33,7 @@ Network Interface Controller Drivers
 hns3
 i40e
 ice
+idpf
 igb
 igc
 ionic
diff --git a/doc/guides/rel_notes/release_22_11.rst 
b/doc/guides/rel_notes/release_22_11.rst
index 1c3daf141d..4af790de37 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b

[PATCH v12 03/18] net/idpf: add Tx queue setup

2022-10-26 Thread Junfeng Guo
Add support for tx_queue_setup ops.

In the single queue model, the same descriptor queue is used by SW to
post buffer descriptors to HW and by HW to post completed descriptors
to SW.

In the split queue model, "RX buffer queues" are used to pass
descriptor buffers from SW to HW while Rx queues are used only to
pass the descriptor completions, that is, descriptors that point
to completed buffers, from HW to SW. This is contrary to the single
queue model in which Rx queues are used for both purposes.

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c |  13 ++
 drivers/net/idpf/idpf_rxtx.c   | 364 +
 drivers/net/idpf/idpf_rxtx.h   |  70 +++
 drivers/net/idpf/idpf_vchnl.c  |   2 -
 drivers/net/idpf/meson.build   |   1 +
 5 files changed, 448 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/idpf/idpf_rxtx.c
 create mode 100644 drivers/net/idpf/idpf_rxtx.h

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 112dccf5c9..11f8b4ba1c 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -11,6 +11,7 @@
 #include 
 
 #include "idpf_ethdev.h"
+#include "idpf_rxtx.h"
 
 #define IDPF_TX_SINGLE_Q   "tx_single"
 #define IDPF_RX_SINGLE_Q   "rx_single"
@@ -37,6 +38,7 @@ static void idpf_adapter_rel(struct idpf_adapter *adapter);
 static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_configure  = idpf_dev_configure,
.dev_close  = idpf_dev_close,
+   .tx_queue_setup = idpf_tx_queue_setup,
.dev_infos_get  = idpf_dev_info_get,
 };
 
@@ -56,6 +58,17 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct 
rte_eth_dev_info *dev_info)
 
dev_info->max_mac_addrs = IDPF_NUM_MACADDR_MAX;
 
+   dev_info->default_txconf = (struct rte_eth_txconf) {
+   .tx_free_thresh = IDPF_DEFAULT_TX_FREE_THRESH,
+   .tx_rs_thresh = IDPF_DEFAULT_TX_RS_THRESH,
+   };
+
+   dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+   .nb_max = IDPF_MAX_RING_DESC,
+   .nb_min = IDPF_MIN_RING_DESC,
+   .nb_align = IDPF_ALIGN_RING_DESC,
+   };
+
return 0;
 }
 
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
new file mode 100644
index 00..4afa0a2560
--- /dev/null
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -0,0 +1,364 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include 
+#include 
+
+#include "idpf_ethdev.h"
+#include "idpf_rxtx.h"
+
+static int
+check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+   uint16_t tx_free_thresh)
+{
+   /* TX descriptors will have their RS bit set after tx_rs_thresh
+* descriptors have been used. The TX descriptor ring will be cleaned
+* after tx_free_thresh descriptors are used or if the number of
+* descriptors required to transmit a packet is greater than the
+* number of free TX descriptors.
+*
+* The following constraints must be satisfied:
+*  - tx_rs_thresh must be less than the size of the ring minus 2.
+*  - tx_free_thresh must be less than the size of the ring minus 3.
+*  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+*  - tx_rs_thresh must be a divisor of the ring size.
+*
+* One descriptor in the TX ring is used as a sentinel to avoid a H/W
+* race condition, hence the maximum threshold constraints. When set
+* to zero use default values.
+*/
+   if (tx_rs_thresh >= (nb_desc - 2)) {
+   PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
+"number of TX descriptors (%u) minus 2",
+tx_rs_thresh, nb_desc);
+   return -EINVAL;
+   }
+   if (tx_free_thresh >= (nb_desc - 3)) {
+   PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be less than the "
+"number of TX descriptors (%u) minus 3.",
+tx_free_thresh, nb_desc);
+   return -EINVAL;
+   }
+   if (tx_rs_thresh > tx_free_thresh) {
+   PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
+"equal to tx_free_thresh (%u).",
+tx_rs_thresh, tx_free_thresh);
+   return -EINVAL;
+   }
+   if ((nb_desc % tx_rs_thresh) != 0) {
+   PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
+"number of TX descriptors (%u).",
+tx_rs_thresh, nb_desc);
+   return -EINVAL;
+   }
+
+   return 0;
+}
+
+static void
+reset_split_tx_descq(struct idpf_tx_queue *txq)
+{
+   struct idpf_tx_entry *txe;
+   uint32_t i, size;
+

[PATCH v12 04/18] net/idpf: add Rx queue setup

2022-10-26 Thread Junfeng Guo
Add support for rx_queue_setup ops.

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c |  11 +
 drivers/net/idpf/idpf_rxtx.c   | 400 +
 drivers/net/idpf/idpf_rxtx.h   |  46 
 3 files changed, 457 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 11f8b4ba1c..0585153f69 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -38,6 +38,7 @@ static void idpf_adapter_rel(struct idpf_adapter *adapter);
 static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_configure  = idpf_dev_configure,
.dev_close  = idpf_dev_close,
+   .rx_queue_setup = idpf_rx_queue_setup,
.tx_queue_setup = idpf_tx_queue_setup,
.dev_infos_get  = idpf_dev_info_get,
 };
@@ -63,12 +64,22 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct 
rte_eth_dev_info *dev_info)
.tx_rs_thresh = IDPF_DEFAULT_TX_RS_THRESH,
};
 
+   dev_info->default_rxconf = (struct rte_eth_rxconf) {
+   .rx_free_thresh = IDPF_DEFAULT_RX_FREE_THRESH,
+   };
+
dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
.nb_max = IDPF_MAX_RING_DESC,
.nb_min = IDPF_MIN_RING_DESC,
.nb_align = IDPF_ALIGN_RING_DESC,
};
 
+   dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+   .nb_max = IDPF_MAX_RING_DESC,
+   .nb_min = IDPF_MIN_RING_DESC,
+   .nb_align = IDPF_ALIGN_RING_DESC,
+   };
+
return 0;
 }
 
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 4afa0a2560..25dd5d85d5 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -8,6 +8,21 @@
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
 
+static int
+check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
+{
+   /* The following constraints must be satisfied:
+*   thresh < rxq->nb_rx_desc
+*/
+   if (thresh >= nb_desc) {
+   PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
+thresh, nb_desc);
+   return -EINVAL;
+   }
+
+   return 0;
+}
+
 static int
 check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
uint16_t tx_free_thresh)
@@ -56,6 +71,87 @@ check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
return 0;
 }
 
+static void
+reset_split_rx_descq(struct idpf_rx_queue *rxq)
+{
+   uint16_t len;
+   uint32_t i;
+
+   if (rxq == NULL)
+   return;
+
+   len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+   for (i = 0; i < len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3);
+i++)
+   ((volatile char *)rxq->rx_ring)[i] = 0;
+
+   rxq->rx_tail = 0;
+   rxq->expected_gen_id = 1;
+}
+
+static void
+reset_split_rx_bufq(struct idpf_rx_queue *rxq)
+{
+   uint16_t len;
+   uint32_t i;
+
+   if (rxq == NULL)
+   return;
+
+   len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+   for (i = 0; i < len * sizeof(struct virtchnl2_splitq_rx_buf_desc);
+i++)
+   ((volatile char *)rxq->rx_ring)[i] = 0;
+
+   memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+   for (i = 0; i < IDPF_RX_MAX_BURST; i++)
+   rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+   /* The next descriptor id which can be received. */
+   rxq->rx_next_avail = 0;
+
+   /* The next descriptor id which can be refilled. */
+   rxq->rx_tail = 0;
+   /* The number of descriptors which can be refilled. */
+   rxq->nb_rx_hold = rxq->nb_rx_desc - 1;
+
+   rxq->bufq1 = NULL;
+   rxq->bufq2 = NULL;
+}
+
+static void
+reset_single_rx_queue(struct idpf_rx_queue *rxq)
+{
+   uint16_t len;
+   uint32_t i;
+
+   if (rxq == NULL)
+   return;
+
+   len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+   for (i = 0; i < len * sizeof(struct virtchnl2_singleq_rx_buf_desc);
+i++)
+   ((volatile char *)rxq->rx_ring)[i] = 0;
+
+   memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+   for (i = 0; i < IDPF_RX_MAX_BURST; i++)
+   rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+   rxq->rx_tail = 0;
+   rxq->nb_rx_hold = 0;
+
+   if (rxq->pkt_first_seg != NULL)
+   rte_pktmbuf_free(rxq->pkt_first_seg);
+
+   rxq->pkt_first_seg = NULL;
+   rxq->pkt_last_seg = NULL;
+}
+
 static void
 reset_split_tx_descq(struct idpf_tx_queue *txq)
 {
@@ -145,6 +241,310 @@ reset_single_tx_queue(struct idpf_tx_queue *txq)
txq->next_rs = txq->rs_thresh - 1;
 }
 
+static int
+idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
+uint16_t queue_idx, uint16_t r

[PATCH v12 05/18] net/idpf: add support for device start and stop

2022-10-26 Thread Junfeng Guo
Add dev ops dev_start, dev_stop and link_update.

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c | 51 ++
 1 file changed, 51 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 0585153f69..e1db7ccb4f 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -30,17 +30,38 @@ static const char * const idpf_valid_args[] = {
 };
 
 static int idpf_dev_configure(struct rte_eth_dev *dev);
+static int idpf_dev_start(struct rte_eth_dev *dev);
+static int idpf_dev_stop(struct rte_eth_dev *dev);
 static int idpf_dev_close(struct rte_eth_dev *dev);
 static int idpf_dev_info_get(struct rte_eth_dev *dev,
 struct rte_eth_dev_info *dev_info);
 static void idpf_adapter_rel(struct idpf_adapter *adapter);
 
+static int
+idpf_dev_link_update(struct rte_eth_dev *dev,
+__rte_unused int wait_to_complete)
+{
+   struct rte_eth_link new_link;
+
+   memset(&new_link, 0, sizeof(new_link));
+
+   new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+   new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+   new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+ RTE_ETH_LINK_SPEED_FIXED);
+
+   return rte_eth_linkstatus_set(dev, &new_link);
+}
+
 static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_configure  = idpf_dev_configure,
+   .dev_start  = idpf_dev_start,
+   .dev_stop   = idpf_dev_stop,
.dev_close  = idpf_dev_close,
.rx_queue_setup = idpf_rx_queue_setup,
.tx_queue_setup = idpf_tx_queue_setup,
.dev_infos_get  = idpf_dev_info_get,
+   .link_update= idpf_dev_link_update,
 };
 
 static int
@@ -284,6 +305,36 @@ idpf_dev_configure(struct rte_eth_dev *dev)
return 0;
 }
 
+static int
+idpf_dev_start(struct rte_eth_dev *dev)
+{
+   struct idpf_vport *vport = dev->data->dev_private;
+
+   if (dev->data->mtu > vport->max_mtu) {
+   PMD_DRV_LOG(ERR, "MTU should be less than %d", vport->max_mtu);
+   return -1;
+   }
+
+   vport->max_pkt_len = dev->data->mtu + IDPF_ETH_OVERHEAD;
+
+   if (idpf_vc_ena_dis_vport(vport, true) != 0) {
+   PMD_DRV_LOG(ERR, "Failed to enable vport");
+   return -1;
+   }
+
+   return 0;
+}
+
+static int
+idpf_dev_stop(struct rte_eth_dev *dev)
+{
+   struct idpf_vport *vport = dev->data->dev_private;
+
+   idpf_vc_ena_dis_vport(vport, false);
+
+   return 0;
+}
+
 static int
 idpf_dev_close(struct rte_eth_dev *dev)
 {
-- 
2.34.1



[PATCH v12 06/18] net/idpf: add support for queue start

2022-10-26 Thread Junfeng Guo
Add support for these device ops:
 - rx_queue_start
 - tx_queue_start

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c |  40 +++
 drivers/net/idpf/idpf_ethdev.h |   9 +
 drivers/net/idpf/idpf_rxtx.c   | 218 
 drivers/net/idpf/idpf_rxtx.h   |   6 +
 drivers/net/idpf/idpf_vchnl.c  | 447 +
 5 files changed, 720 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index e1db7ccb4f..29bac29bc3 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -58,6 +58,8 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_start  = idpf_dev_start,
.dev_stop   = idpf_dev_stop,
.dev_close  = idpf_dev_close,
+   .rx_queue_start = idpf_rx_queue_start,
+   .tx_queue_start = idpf_tx_queue_start,
.rx_queue_setup = idpf_rx_queue_setup,
.tx_queue_setup = idpf_tx_queue_setup,
.dev_infos_get  = idpf_dev_info_get,
@@ -305,6 +307,39 @@ idpf_dev_configure(struct rte_eth_dev *dev)
return 0;
 }
 
+static int
+idpf_start_queues(struct rte_eth_dev *dev)
+{
+   struct idpf_rx_queue *rxq;
+   struct idpf_tx_queue *txq;
+   int err = 0;
+   int i;
+
+   for (i = 0; i < dev->data->nb_tx_queues; i++) {
+   txq = dev->data->tx_queues[i];
+   if (txq == NULL || txq->tx_deferred_start)
+   continue;
+   err = idpf_tx_queue_start(dev, i);
+   if (err != 0) {
+   PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i);
+   return err;
+   }
+   }
+
+   for (i = 0; i < dev->data->nb_rx_queues; i++) {
+   rxq = dev->data->rx_queues[i];
+   if (rxq == NULL || rxq->rx_deferred_start)
+   continue;
+   err = idpf_rx_queue_start(dev, i);
+   if (err != 0) {
+   PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i);
+   return err;
+   }
+   }
+
+   return err;
+}
+
 static int
 idpf_dev_start(struct rte_eth_dev *dev)
 {
@@ -317,6 +352,11 @@ idpf_dev_start(struct rte_eth_dev *dev)
 
vport->max_pkt_len = dev->data->mtu + IDPF_ETH_OVERHEAD;
 
+   if (idpf_start_queues(dev) != 0) {
+   PMD_DRV_LOG(ERR, "Failed to start queues");
+   return -1;
+   }
+
if (idpf_vc_ena_dis_vport(vport, true) != 0) {
PMD_DRV_LOG(ERR, "Failed to enable vport");
return -1;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 84ae6641e2..96c22009e9 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -24,7 +24,9 @@
 #define IDPF_DEFAULT_TXQ_NUM   16
 
 #define IDPF_INVALID_VPORT_IDX 0x
+#define IDPF_TXQ_PER_GRP   1
 #define IDPF_TX_COMPLQ_PER_GRP 1
+#define IDPF_RXQ_PER_GRP   1
 #define IDPF_RX_BUFQ_PER_GRP   2
 
 #define IDPF_CTLQ_ID   -1
@@ -182,6 +184,13 @@ int idpf_vc_check_api_version(struct idpf_adapter 
*adapter);
 int idpf_vc_get_caps(struct idpf_adapter *adapter);
 int idpf_vc_create_vport(struct idpf_adapter *adapter);
 int idpf_vc_destroy_vport(struct idpf_vport *vport);
+int idpf_vc_config_rxqs(struct idpf_vport *vport);
+int idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id);
+int idpf_vc_config_txqs(struct idpf_vport *vport);
+int idpf_vc_config_txq(struct idpf_vport *vport, uint16_t txq_id);
+int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
+ bool rx, bool on);
+int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
 int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
 int idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
  uint16_t buf_len, uint8_t *buf);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 25dd5d85d5..8f73e3f3bb 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -349,6 +349,7 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t 
queue_idx,
rxq->rx_free_thresh = rx_free_thresh;
rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
rxq->port_id = dev->data->port_id;
+   rxq->rx_deferred_start = rx_conf->rx_deferred_start;
rxq->rx_hdr_len = 0;
rxq->adapter = adapter;
rxq->offloads = offloads;
@@ -480,6 +481,7 @@ idpf_rx_single_queue_setup(struct rte_eth_dev *dev, 
uint16_t queue_idx,
rxq->rx_free_thresh = rx_free_thresh;
rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
rxq->port_id = dev->data->port_id;
+   rxq->rx_deferred_start = rx_conf->rx_deferred_start;
rxq->rx_hdr_len 

[PATCH v12 07/18] net/idpf: add support for queue stop

2022-10-26 Thread Junfeng Guo
Add support for these device ops:
 - rx_queue_stop
 - tx_queue_stop

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 doc/guides/nics/features/idpf.ini |   1 +
 drivers/net/idpf/idpf_ethdev.c|  14 ++-
 drivers/net/idpf/idpf_rxtx.c  | 148 ++
 drivers/net/idpf/idpf_rxtx.h  |  13 +++
 drivers/net/idpf/idpf_vchnl.c |  69 ++
 5 files changed, 242 insertions(+), 3 deletions(-)

diff --git a/doc/guides/nics/features/idpf.ini 
b/doc/guides/nics/features/idpf.ini
index 46aab2eb61..c25fa5de3f 100644
--- a/doc/guides/nics/features/idpf.ini
+++ b/doc/guides/nics/features/idpf.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Queue start/stop = Y
 Linux= Y
 x86-32   = Y
 x86-64   = Y
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 29bac29bc3..05f087a03c 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -59,7 +59,9 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_stop   = idpf_dev_stop,
.dev_close  = idpf_dev_close,
.rx_queue_start = idpf_rx_queue_start,
+   .rx_queue_stop  = idpf_rx_queue_stop,
.tx_queue_start = idpf_tx_queue_start,
+   .tx_queue_stop  = idpf_tx_queue_stop,
.rx_queue_setup = idpf_rx_queue_setup,
.tx_queue_setup = idpf_tx_queue_setup,
.dev_infos_get  = idpf_dev_info_get,
@@ -347,22 +349,26 @@ idpf_dev_start(struct rte_eth_dev *dev)
 
if (dev->data->mtu > vport->max_mtu) {
PMD_DRV_LOG(ERR, "MTU should be less than %d", vport->max_mtu);
-   return -1;
+   goto err_mtu;
}
 
vport->max_pkt_len = dev->data->mtu + IDPF_ETH_OVERHEAD;
 
if (idpf_start_queues(dev) != 0) {
PMD_DRV_LOG(ERR, "Failed to start queues");
-   return -1;
+   goto err_mtu;
}
 
if (idpf_vc_ena_dis_vport(vport, true) != 0) {
PMD_DRV_LOG(ERR, "Failed to enable vport");
-   return -1;
+   goto err_vport;
}
 
return 0;
+err_vport:
+   idpf_stop_queues(dev);
+err_mtu:
+   return -1;
 }
 
 static int
@@ -372,6 +378,8 @@ idpf_dev_stop(struct rte_eth_dev *dev)
 
idpf_vc_ena_dis_vport(vport, false);
 
+   idpf_stop_queues(dev);
+
return 0;
 }
 
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 8f73e3f3bb..ef11855978 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -71,6 +71,55 @@ check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
return 0;
 }
 
+static void
+release_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+   uint16_t i;
+
+   if (rxq->sw_ring == NULL)
+   return;
+
+   for (i = 0; i < rxq->nb_rx_desc; i++) {
+   if (rxq->sw_ring[i] != NULL) {
+   rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+   rxq->sw_ring[i] = NULL;
+   }
+   }
+}
+
+static void
+release_txq_mbufs(struct idpf_tx_queue *txq)
+{
+   uint16_t nb_desc, i;
+
+   if (txq == NULL || txq->sw_ring == NULL) {
+   PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
+   return;
+   }
+
+   if (txq->sw_nb_desc != 0) {
+   /* For split queue model, descriptor ring */
+   nb_desc = txq->sw_nb_desc;
+   } else {
+   /* For single queue model */
+   nb_desc = txq->nb_tx_desc;
+   }
+   for (i = 0; i < nb_desc; i++) {
+   if (txq->sw_ring[i].mbuf != NULL) {
+   rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+   txq->sw_ring[i].mbuf = NULL;
+   }
+   }
+}
+
+static const struct idpf_rxq_ops def_rxq_ops = {
+   .release_mbufs = release_rxq_mbufs,
+};
+
+static const struct idpf_txq_ops def_txq_ops = {
+   .release_mbufs = release_txq_mbufs,
+};
+
 static void
 reset_split_rx_descq(struct idpf_rx_queue *rxq)
 {
@@ -122,6 +171,14 @@ reset_split_rx_bufq(struct idpf_rx_queue *rxq)
rxq->bufq2 = NULL;
 }
 
+static inline void
+reset_split_rx_queue(struct idpf_rx_queue *rxq)
+{
+   reset_split_rx_descq(rxq);
+   reset_split_rx_bufq(rxq->bufq1);
+   reset_split_rx_bufq(rxq->bufq2);
+}
+
 static void
 reset_single_rx_queue(struct idpf_rx_queue *rxq)
 {
@@ -301,6 +358,7 @@ idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct 
idpf_rx_queue *bufq,
bufq->q_set = true;
bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+   bufq->ops = &def_rxq_ops;

[PATCH v12 08/18] net/idpf: add queue release

2022-10-26 Thread Junfeng Guo
Add support for queue operations:
 - rx_queue_release
 - tx_queue_release

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c |  2 +
 drivers/net/idpf/idpf_rxtx.c   | 81 ++
 drivers/net/idpf/idpf_rxtx.h   |  3 ++
 3 files changed, 86 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 05f087a03c..dee1e03d43 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -63,7 +63,9 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.tx_queue_start = idpf_tx_queue_start,
.tx_queue_stop  = idpf_tx_queue_stop,
.rx_queue_setup = idpf_rx_queue_setup,
+   .rx_queue_release   = idpf_dev_rx_queue_release,
.tx_queue_setup = idpf_tx_queue_setup,
+   .tx_queue_release   = idpf_dev_tx_queue_release,
.dev_infos_get  = idpf_dev_info_get,
.link_update= idpf_dev_link_update,
 };
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index ef11855978..382f0090f3 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -171,6 +171,51 @@ reset_split_rx_bufq(struct idpf_rx_queue *rxq)
rxq->bufq2 = NULL;
 }
 
+static void
+idpf_rx_queue_release(void *rxq)
+{
+   struct idpf_rx_queue *q = rxq;
+
+   if (q == NULL)
+   return;
+
+   /* Split queue */
+   if (q->bufq1 != NULL && q->bufq2 != NULL) {
+   q->bufq1->ops->release_mbufs(q->bufq1);
+   rte_free(q->bufq1->sw_ring);
+   rte_memzone_free(q->bufq1->mz);
+   rte_free(q->bufq1);
+   q->bufq2->ops->release_mbufs(q->bufq2);
+   rte_free(q->bufq2->sw_ring);
+   rte_memzone_free(q->bufq2->mz);
+   rte_free(q->bufq2);
+   rte_memzone_free(q->mz);
+   rte_free(q);
+   return;
+   }
+
+   /* Single queue */
+   q->ops->release_mbufs(q);
+   rte_free(q->sw_ring);
+   rte_memzone_free(q->mz);
+   rte_free(q);
+}
+
+static void
+idpf_tx_queue_release(void *txq)
+{
+   struct idpf_tx_queue *q = txq;
+
+   if (q == NULL)
+   return;
+
+   rte_free(q->complq);
+   q->ops->release_mbufs(q);
+   rte_free(q->sw_ring);
+   rte_memzone_free(q->mz);
+   rte_free(q);
+}
+
 static inline void
 reset_split_rx_queue(struct idpf_rx_queue *rxq)
 {
@@ -392,6 +437,12 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, 
uint16_t queue_idx,
if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
return -EINVAL;
 
+   /* Free memory if needed */
+   if (dev->data->rx_queues[queue_idx] != NULL) {
+   idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
+   dev->data->rx_queues[queue_idx] = NULL;
+   }
+
/* Setup Rx description queue */
rxq = rte_zmalloc_socket("idpf rxq",
 sizeof(struct idpf_rx_queue),
@@ -524,6 +575,12 @@ idpf_rx_single_queue_setup(struct rte_eth_dev *dev, 
uint16_t queue_idx,
if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
return -EINVAL;
 
+   /* Free memory if needed */
+   if (dev->data->rx_queues[queue_idx] != NULL) {
+   idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
+   dev->data->rx_queues[queue_idx] = NULL;
+   }
+
/* Setup Rx description queue */
rxq = rte_zmalloc_socket("idpf rxq",
 sizeof(struct idpf_rx_queue),
@@ -630,6 +687,12 @@ idpf_tx_split_queue_setup(struct rte_eth_dev *dev, 
uint16_t queue_idx,
if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
return -EINVAL;
 
+   /* Free memory if needed. */
+   if (dev->data->tx_queues[queue_idx] != NULL) {
+   idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
+   dev->data->tx_queues[queue_idx] = NULL;
+   }
+
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("idpf split txq",
 sizeof(struct idpf_tx_queue),
@@ -754,6 +817,12 @@ idpf_tx_single_queue_setup(struct rte_eth_dev *dev, 
uint16_t queue_idx,
if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
return -EINVAL;
 
+   /* Free memory if needed. */
+   if (dev->data->tx_queues[queue_idx] != NULL) {
+   idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
+   dev->data->tx_queues[queue_idx] = NULL;
+   }
+
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("idpf txq",
 sizeof(struct idpf_tx_queue),
@@ -1102,6 +1171,18 @@ idpf_tx_queue_stop(struct rte_eth_dev *d

[PATCH v12 09/18] net/idpf: add support for MTU configuration

2022-10-26 Thread Junfeng Guo
Add dev ops mtu_set.

Signed-off-by: Beilei Xing 
Signed-off-by: Junfeng Guo 
---
 doc/guides/nics/features/idpf.ini |  1 +
 drivers/net/idpf/idpf_ethdev.c| 14 ++
 2 files changed, 15 insertions(+)

diff --git a/doc/guides/nics/features/idpf.ini 
b/doc/guides/nics/features/idpf.ini
index c25fa5de3f..c8f4a0d6ed 100644
--- a/doc/guides/nics/features/idpf.ini
+++ b/doc/guides/nics/features/idpf.ini
@@ -5,6 +5,7 @@
 ;
 [Features]
 Queue start/stop = Y
+MTU update   = Y
 Linux= Y
 x86-32   = Y
 x86-64   = Y
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index dee1e03d43..94f4671057 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -35,6 +35,7 @@ static int idpf_dev_stop(struct rte_eth_dev *dev);
 static int idpf_dev_close(struct rte_eth_dev *dev);
 static int idpf_dev_info_get(struct rte_eth_dev *dev,
 struct rte_eth_dev_info *dev_info);
+static int idpf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static void idpf_adapter_rel(struct idpf_adapter *adapter);
 
 static int
@@ -68,6 +69,7 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.tx_queue_release   = idpf_dev_tx_queue_release,
.dev_infos_get  = idpf_dev_info_get,
.link_update= idpf_dev_link_update,
+   .mtu_set= idpf_dev_mtu_set,
 };
 
 static int
@@ -110,6 +112,18 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct 
rte_eth_dev_info *dev_info)
return 0;
 }
 
+static int
+idpf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
+{
+   /* mtu setting is forbidden if port is start */
+   if (dev->data->dev_started) {
+   PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+   return -EBUSY;
+   }
+
+   return 0;
+}
+
 static int
 idpf_init_vport_req_info(struct rte_eth_dev *dev)
 {
-- 
2.34.1



[PATCH v12 10/18] net/idpf: add support for basic Rx datapath

2022-10-26 Thread Junfeng Guo
Add basic Rx support in split queue mode and single queue mode.

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c |   2 +
 drivers/net/idpf/idpf_rxtx.c   | 273 +
 drivers/net/idpf/idpf_rxtx.h   |   7 +-
 3 files changed, 281 insertions(+), 1 deletion(-)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 94f4671057..df2a760673 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -375,6 +375,8 @@ idpf_dev_start(struct rte_eth_dev *dev)
goto err_mtu;
}
 
+   idpf_set_rx_function(dev);
+
if (idpf_vc_ena_dis_vport(vport, true) != 0) {
PMD_DRV_LOG(ERR, "Failed to enable vport");
goto err_vport;
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 382f0090f3..4e3fa39b9d 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1209,3 +1209,276 @@ idpf_stop_queues(struct rte_eth_dev *dev)
}
 }
 
+
+static void
+idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
+{
+   volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_ring;
+   volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_desc;
+   uint16_t nb_refill = rx_bufq->rx_free_thresh;
+   uint16_t nb_desc = rx_bufq->nb_rx_desc;
+   uint16_t next_avail = rx_bufq->rx_tail;
+   struct rte_mbuf *nmb[rx_bufq->rx_free_thresh];
+   struct rte_eth_dev *dev;
+   uint64_t dma_addr;
+   uint16_t delta;
+   int i;
+
+   if (rx_bufq->nb_rx_hold < rx_bufq->rx_free_thresh)
+   return;
+
+   rx_buf_ring = rx_bufq->rx_ring;
+   delta = nb_desc - next_avail;
+   if (unlikely(delta < nb_refill)) {
+   if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, delta) == 
0)) {
+   for (i = 0; i < delta; i++) {
+   rx_buf_desc = &rx_buf_ring[next_avail + i];
+   rx_bufq->sw_ring[next_avail + i] = nmb[i];
+   dma_addr = 
rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+   rx_buf_desc->hdr_addr = 0;
+   rx_buf_desc->pkt_addr = dma_addr;
+   }
+   nb_refill -= delta;
+   next_avail = 0;
+   rx_bufq->nb_rx_hold -= delta;
+   } else {
+   dev = &rte_eth_devices[rx_bufq->port_id];
+   dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
+   PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+  rx_bufq->port_id, rx_bufq->queue_id);
+   return;
+   }
+   }
+
+   if (nb_desc - next_avail >= nb_refill) {
+   if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, nb_refill) 
== 0)) {
+   for (i = 0; i < nb_refill; i++) {
+   rx_buf_desc = &rx_buf_ring[next_avail + i];
+   rx_bufq->sw_ring[next_avail + i] = nmb[i];
+   dma_addr = 
rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+   rx_buf_desc->hdr_addr = 0;
+   rx_buf_desc->pkt_addr = dma_addr;
+   }
+   next_avail += nb_refill;
+   rx_bufq->nb_rx_hold -= nb_refill;
+   } else {
+   dev = &rte_eth_devices[rx_bufq->port_id];
+   dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
+   PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+  rx_bufq->port_id, rx_bufq->queue_id);
+   }
+   }
+
+   IDPF_PCI_REG_WRITE(rx_bufq->qrx_tail, next_avail);
+
+   rx_bufq->rx_tail = next_avail;
+}
+
+uint16_t
+idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+   volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
+   volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
+   uint16_t pktlen_gen_bufq_id;
+   struct idpf_rx_queue *rxq;
+   struct rte_mbuf *rxm;
+   uint16_t rx_id_bufq1;
+   uint16_t rx_id_bufq2;
+   uint16_t pkt_len;
+   uint16_t bufq_id;
+   uint16_t gen_id;
+   uint16_t rx_id;
+   uint16_t nb_rx;
+
+   nb_rx = 0;
+   rxq = rx_queue;
+
+   if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
+   return nb_rx;
+
+   rx_id = rxq->rx_tail;
+   rx_id_bufq1 = rxq->bufq1->rx_next_avail;
+   rx_id_bufq2 = rxq->bufq2->rx_next_avail;
+   rx_desc_ring = rxq->rx_ring;
+
+   while (nb_rx < nb_pkts) {
+   rx_desc = &rx_desc_ring[rx_id];
+
+

[PATCH v12 11/18] net/idpf: add support for basic Tx datapath

2022-10-26 Thread Junfeng Guo
Add basic Tx support in split queue mode and single queue mode.

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c |   3 +
 drivers/net/idpf/idpf_ethdev.h |   1 +
 drivers/net/idpf/idpf_rxtx.c   | 357 +
 drivers/net/idpf/idpf_rxtx.h   |  10 +
 4 files changed, 371 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index df2a760673..6fb56e584d 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -88,6 +88,8 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
dev_info->max_mac_addrs = IDPF_NUM_MACADDR_MAX;
 
+   dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+
dev_info->default_txconf = (struct rte_eth_txconf) {
.tx_free_thresh = IDPF_DEFAULT_TX_FREE_THRESH,
.tx_rs_thresh = IDPF_DEFAULT_TX_RS_THRESH,
@@ -376,6 +378,7 @@ idpf_dev_start(struct rte_eth_dev *dev)
}
 
idpf_set_rx_function(dev);
+   idpf_set_tx_function(dev);
 
if (idpf_vc_ena_dis_vport(vport, true) != 0) {
PMD_DRV_LOG(ERR, "Failed to enable vport");
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 96c22009e9..af0a8e2970 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -35,6 +35,7 @@
 
 #define IDPF_MIN_BUF_SIZE  1024
 #define IDPF_MAX_FRAME_SIZE9728
+#define IDPF_MIN_FRAME_SIZE14
 
 #define IDPF_NUM_MACADDR_MAX   64
 
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 4e3fa39b9d..86942b2b5f 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1366,6 +1366,148 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_rx;
 }
 
+static inline void
+idpf_split_tx_free(struct idpf_tx_queue *cq)
+{
+   volatile struct idpf_splitq_tx_compl_desc *compl_ring = cq->compl_ring;
+   volatile struct idpf_splitq_tx_compl_desc *txd;
+   uint16_t next = cq->tx_tail;
+   struct idpf_tx_entry *txe;
+   struct idpf_tx_queue *txq;
+   uint16_t gen, qid, q_head;
+   uint8_t ctype;
+
+   txd = &compl_ring[next];
+   gen = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+   IDPF_TXD_COMPLQ_GEN_M) >> IDPF_TXD_COMPLQ_GEN_S;
+   if (gen != cq->expected_gen_id)
+   return;
+
+   ctype = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+   IDPF_TXD_COMPLQ_COMPL_TYPE_M) >> IDPF_TXD_COMPLQ_COMPL_TYPE_S;
+   qid = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+   IDPF_TXD_COMPLQ_QID_M) >> IDPF_TXD_COMPLQ_QID_S;
+   q_head = rte_le_to_cpu_16(txd->q_head_compl_tag.compl_tag);
+   txq = cq->txqs[qid - cq->tx_start_qid];
+
+   switch (ctype) {
+   case IDPF_TXD_COMPLT_RE:
+   if (q_head == 0)
+   txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+   else
+   txq->last_desc_cleaned = q_head - 1;
+   if (unlikely((txq->last_desc_cleaned % 32) == 0)) {
+   PMD_DRV_LOG(ERR, "unexpected desc (head = %u) completion.",
+   q_head);
+   return;
+   }
+
+   break;
+   case IDPF_TXD_COMPLT_RS:
+   txq->nb_free++;
+   txq->nb_used--;
+   txe = &txq->sw_ring[q_head];
+   if (txe->mbuf != NULL) {
+   rte_pktmbuf_free_seg(txe->mbuf);
+   txe->mbuf = NULL;
+   }
+   break;
+   default:
+   PMD_DRV_LOG(ERR, "unknown completion type.");
+   return;
+   }
+
+   if (++next == cq->nb_tx_desc) {
+   next = 0;
+   cq->expected_gen_id ^= 1;
+   }
+
+   cq->tx_tail = next;
+}
+
+uint16_t
+idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+   struct idpf_tx_queue *txq = (struct idpf_tx_queue *)tx_queue;
+   volatile struct idpf_flex_tx_sched_desc *txr;
+   volatile struct idpf_flex_tx_sched_desc *txd;
+   struct idpf_tx_entry *sw_ring;
+   struct idpf_tx_entry *txe, *txn;
+   uint16_t nb_used, tx_id, sw_id;
+   struct rte_mbuf *tx_pkt;
+   uint16_t nb_to_clean;
+   uint16_t nb_tx = 0;
+
+   if (unlikely(txq == NULL) || unlikely(!txq->q_started))
+   return nb_tx;
+
+   txr = txq->desc_ring;
+   sw_ring = txq->sw_ring;
+   tx_id = txq->tx_tail;
+   sw_id = txq->sw_tail;
+   txe = &sw_ring[sw_id];
+
+   for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+   tx_pkt = tx_pkts[nb_tx];
+
+   if (txq->nb_free <= txq->free_thresh) {
+   /* TODO: Need to refine
+* 1. free and clean: Better to decide

[PATCH v12 12/18] net/idpf: support packet type get

2022-10-26 Thread Junfeng Guo
Get packet type during receiving packets.

Signed-off-by: Wenjun Wu 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c |   6 +
 drivers/net/idpf/idpf_ethdev.h |   6 +
 drivers/net/idpf/idpf_rxtx.c   |  11 ++
 drivers/net/idpf/idpf_rxtx.h   |   5 +
 drivers/net/idpf/idpf_vchnl.c  | 240 +
 5 files changed, 268 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 6fb56e584d..630bdabcd4 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -709,6 +709,12 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
goto err_api;
}
 
+   ret = idpf_get_pkt_type(adapter);
+   if (ret != 0) {
+   PMD_INIT_LOG(ERR, "Failed to set ptype table");
+   goto err_api;
+   }
+
adapter->caps = rte_zmalloc("idpf_caps",
sizeof(struct virtchnl2_get_capabilities), 0);
if (adapter->caps == NULL) {
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index af0a8e2970..db9af58f72 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -39,6 +39,8 @@
 
 #define IDPF_NUM_MACADDR_MAX   64
 
+#define IDPF_MAX_PKT_TYPE  1024
+
 #define IDPF_VLAN_TAG_SIZE 4
 #define IDPF_ETH_OVERHEAD \
(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + IDPF_VLAN_TAG_SIZE * 2)
@@ -125,6 +127,8 @@ struct idpf_adapter {
/* Max config queue number per VC message */
uint32_t max_rxq_per_msg;
uint32_t max_txq_per_msg;
+
+   uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
 };
 
 TAILQ_HEAD(idpf_adapter_list, idpf_adapter);
@@ -182,6 +186,7 @@ atomic_set_cmd(struct idpf_adapter *adapter, enum virtchnl_ops ops)
 struct idpf_adapter *idpf_find_adapter(struct rte_pci_device *pci_dev);
 void idpf_handle_virtchnl_msg(struct rte_eth_dev *dev);
 int idpf_vc_check_api_version(struct idpf_adapter *adapter);
+int idpf_get_pkt_type(struct idpf_adapter *adapter);
 int idpf_vc_get_caps(struct idpf_adapter *adapter);
 int idpf_vc_create_vport(struct idpf_adapter *adapter);
 int idpf_vc_destroy_vport(struct idpf_vport *vport);
@@ -193,6 +198,7 @@ int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
  bool rx, bool on);
 int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
 int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
+int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
 int idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
  uint16_t buf_len, uint8_t *buf);
 
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 86942b2b5f..f9f751a6ad 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1282,6 +1282,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
uint16_t pktlen_gen_bufq_id;
struct idpf_rx_queue *rxq;
+   const uint32_t *ptype_tbl;
struct rte_mbuf *rxm;
uint16_t rx_id_bufq1;
uint16_t rx_id_bufq2;
@@ -1301,6 +1302,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rx_id_bufq1 = rxq->bufq1->rx_next_avail;
rx_id_bufq2 = rxq->bufq2->rx_next_avail;
rx_desc_ring = rxq->rx_ring;
+   ptype_tbl = rxq->adapter->ptype_tbl;
 
while (nb_rx < nb_pkts) {
rx_desc = &rx_desc_ring[rx_id];
@@ -1348,6 +1350,10 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->next = NULL;
rxm->nb_segs = 1;
rxm->port = rxq->port_id;
+   rxm->packet_type =
+   ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) 
&
+  VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
+ VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
 
rx_pkts[nb_rx++] = rxm;
}
@@ -1534,6 +1540,7 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
volatile union virtchnl2_rx_desc *rxdp;
union virtchnl2_rx_desc rxd;
struct idpf_rx_queue *rxq;
+   const uint32_t *ptype_tbl;
uint16_t rx_id, nb_hold;
struct rte_eth_dev *dev;
uint16_t rx_packet_len;
@@ -1552,6 +1559,7 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
rx_id = rxq->rx_tail;
rx_ring = rxq->rx_ring;
+   ptype_tbl = rxq->adapter->ptype_tbl;
 
while (nb_rx < nb_pkts) {
rxdp = &rx_ring[rx_id];
@@ -1604,6 +1612,9 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->pkt_len = rx_packet_len;
rxm->data_len = rx_packet_len;
rxm->port = rxq->port_id;
+   rxm->packet_type =
+   
ptype_tbl[(ui

[PATCH v12 13/18] net/idpf: add support for write back based on ITR expire

2022-10-26 Thread Junfeng Guo
Enable write back on ITR expire, then packets can be received one by
one.

Signed-off-by: Beilei Xing 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c | 116 +
 drivers/net/idpf/idpf_ethdev.h |  13 
 drivers/net/idpf/idpf_vchnl.c  | 111 +++
 3 files changed, 240 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 630bdabcd4..8045073780 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -327,6 +327,90 @@ idpf_dev_configure(struct rte_eth_dev *dev)
return 0;
 }
 
+static int
+idpf_config_rx_queues_irqs(struct rte_eth_dev *dev)
+{
+   struct idpf_vport *vport = dev->data->dev_private;
+   struct idpf_adapter *adapter = vport->adapter;
+   struct virtchnl2_queue_vector *qv_map;
+   struct idpf_hw *hw = &adapter->hw;
+   uint32_t dynctl_reg_start;
+   uint32_t itrn_reg_start;
+   uint32_t dynctl_val, itrn_val;
+   uint16_t i;
+
+   qv_map = rte_zmalloc("qv_map",
+   dev->data->nb_rx_queues *
+   sizeof(struct virtchnl2_queue_vector), 0);
+   if (qv_map == NULL) {
+   PMD_DRV_LOG(ERR, "Failed to allocate %d queue-vector map",
+   dev->data->nb_rx_queues);
+   goto qv_map_alloc_err;
+   }
+
+   /* Rx interrupt disabled, Map interrupt only for writeback */
+
+   /* The capability flags adapter->caps->other_caps here should be
+* compared with bit VIRTCHNL2_CAP_WB_ON_ITR. The if condition should
+* be updated when the FW can return correct flag bits.
+*/
+   if (adapter->caps->other_caps != 0) {
+   dynctl_reg_start =
+   vport->recv_vectors->vchunks.vchunks->dynctl_reg_start;
+   itrn_reg_start =
+   vport->recv_vectors->vchunks.vchunks->itrn_reg_start;
+   dynctl_val = IDPF_READ_REG(hw, dynctl_reg_start);
+   PMD_DRV_LOG(DEBUG, "Value of dynctl_reg_start is 0x%x",
+   dynctl_val);
+   itrn_val = IDPF_READ_REG(hw, itrn_reg_start);
+   PMD_DRV_LOG(DEBUG, "Value of itrn_reg_start is 0x%x", itrn_val);
+   /* Force write-backs by setting WB_ON_ITR bit in DYN_CTL
+* register. WB_ON_ITR and INTENA are mutually exclusive
+* bits. Setting WB_ON_ITR bits means TX and RX Descs
+* are written back based on ITR expiration irrespective
+* of INTENA setting.
+*/
+   /* TBD: need to tune INTERVAL value for better performance. */
+   if (itrn_val != 0)
+   IDPF_WRITE_REG(hw,
+  dynctl_reg_start,
+  VIRTCHNL2_ITR_IDX_0  <<
+  PF_GLINT_DYN_CTL_ITR_INDX_S |
+  PF_GLINT_DYN_CTL_WB_ON_ITR_M |
+  itrn_val <<
+  PF_GLINT_DYN_CTL_INTERVAL_S);
+   else
+   IDPF_WRITE_REG(hw,
+  dynctl_reg_start,
+  VIRTCHNL2_ITR_IDX_0  <<
+  PF_GLINT_DYN_CTL_ITR_INDX_S |
+  PF_GLINT_DYN_CTL_WB_ON_ITR_M |
+  IDPF_DFLT_INTERVAL <<
+  PF_GLINT_DYN_CTL_INTERVAL_S);
+   }
+   for (i = 0; i < dev->data->nb_rx_queues; i++) {
+   /* map all queues to the same vector */
+   qv_map[i].queue_id = vport->chunks_info.rx_start_qid + i;
+   qv_map[i].vector_id =
+   vport->recv_vectors->vchunks.vchunks->start_vector_id;
+   }
+   vport->qv_map = qv_map;
+
+   if (idpf_vc_config_irq_map_unmap(vport, true) != 0) {
+   PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+   goto config_irq_map_err;
+   }
+
+   return 0;
+
+config_irq_map_err:
+   rte_free(vport->qv_map);
+   vport->qv_map = NULL;
+
+qv_map_alloc_err:
+   return -1;
+}
+
 static int
 idpf_start_queues(struct rte_eth_dev *dev)
 {
@@ -364,6 +448,10 @@ static int
 idpf_dev_start(struct rte_eth_dev *dev)
 {
struct idpf_vport *vport = dev->data->dev_private;
+   struct idpf_adapter *adapter = vport->adapter;
+   uint16_t num_allocated_vectors =
+   adapter->caps->num_allocated_vectors;
+   uint16_t req_vecs_num;
 
if (dev->data->mtu > vport->max_mtu) {
PMD_DRV_LOG(ERR, "MTU should be less than %d", vport->max_mtu);
@@ -372,6 +460,23 @@ idpf_dev_start(struct rte_eth_dev *dev)
 
vport->max_pkt_len = dev->data->mtu + IDPF_ETH_OVERHEAD;
 
+   req_vecs_num = IDPF_DFLT_Q_VEC_NUM;
+  

[PATCH v12 14/18] net/idpf: add support for RSS

2022-10-26 Thread Junfeng Guo
Add RSS support.

Signed-off-by: Beilei Xing 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c | 116 +
 drivers/net/idpf/idpf_ethdev.h |  26 
 drivers/net/idpf/idpf_vchnl.c  |  97 +++
 3 files changed, 239 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 8045073780..ea7d3bfd7e 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -86,6 +86,7 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mtu = dev_info->max_rx_pktlen - IDPF_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+   dev_info->flow_type_rss_offloads = IDPF_RSS_OFFLOAD_ALL;
dev_info->max_mac_addrs = IDPF_NUM_MACADDR_MAX;
 
dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
@@ -198,6 +199,8 @@ idpf_parse_devarg_id(char *name)
return val;
 }
 
+#define IDPF_RSS_KEY_LEN 52
+
 static int
 idpf_init_vport(struct rte_eth_dev *dev)
 {
@@ -218,6 +221,10 @@ idpf_init_vport(struct rte_eth_dev *dev)
vport->max_mtu = vport_info->max_mtu;
rte_memcpy(vport->default_mac_addr,
   vport_info->default_mac_addr, ETH_ALEN);
+   vport->rss_algorithm = vport_info->rss_algorithm;
+   vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
+vport_info->rss_key_size);
+   vport->rss_lut_size = vport_info->rss_lut_size;
vport->sw_idx = idx;
 
for (i = 0; i < vport_info->chunks.num_chunks; i++) {
@@ -275,10 +282,102 @@ idpf_init_vport(struct rte_eth_dev *dev)
return 0;
 }
 
+static int
+idpf_config_rss(struct idpf_vport *vport)
+{
+   int ret;
+
+   ret = idpf_vc_set_rss_key(vport);
+   if (ret != 0) {
+   PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+   return ret;
+   }
+
+   ret = idpf_vc_set_rss_lut(vport);
+   if (ret != 0) {
+   PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+   return ret;
+   }
+
+   ret = idpf_vc_set_rss_hash(vport);
+   if (ret != 0) {
+   PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+   return ret;
+   }
+
+   return ret;
+}
+
+static int
+idpf_init_rss(struct idpf_vport *vport)
+{
+   struct rte_eth_rss_conf *rss_conf;
+   uint16_t i, nb_q, lut_size;
+   int ret = 0;
+
+   rss_conf = &vport->dev_data->dev_conf.rx_adv_conf.rss_conf;
+   nb_q = vport->dev_data->nb_rx_queues;
+
+   vport->rss_key = rte_zmalloc("rss_key",
+vport->rss_key_size, 0);
+   if (vport->rss_key == NULL) {
+   PMD_INIT_LOG(ERR, "Failed to allocate RSS key");
+   ret = -ENOMEM;
+   goto err_alloc_key;
+   }
+
+   lut_size = vport->rss_lut_size;
+   vport->rss_lut = rte_zmalloc("rss_lut",
+sizeof(uint32_t) * lut_size, 0);
+   if (vport->rss_lut == NULL) {
+   PMD_INIT_LOG(ERR, "Failed to allocate RSS lut");
+   ret = -ENOMEM;
+   goto err_alloc_lut;
+   }
+
+   if (rss_conf->rss_key == NULL) {
+   for (i = 0; i < vport->rss_key_size; i++)
+   vport->rss_key[i] = (uint8_t)rte_rand();
+   } else if (rss_conf->rss_key_len != vport->rss_key_size) {
+   PMD_INIT_LOG(ERR, "Invalid RSS key length in RSS configuration, should be %d",
+vport->rss_key_size);
+   ret = -EINVAL;
+   goto err_cfg_key;
+   } else {
+   rte_memcpy(vport->rss_key, rss_conf->rss_key,
+  vport->rss_key_size);
+   }
+
+   for (i = 0; i < lut_size; i++)
+   vport->rss_lut[i] = i % nb_q;
+
+   vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
+
+   ret = idpf_config_rss(vport);
+   if (ret != 0) {
+   PMD_INIT_LOG(ERR, "Failed to configure RSS");
+   goto err_cfg_key;
+   }
+
+   return ret;
+
+err_cfg_key:
+   rte_free(vport->rss_lut);
+   vport->rss_lut = NULL;
+err_alloc_lut:
+   rte_free(vport->rss_key);
+   vport->rss_key = NULL;
+err_alloc_key:
+   return ret;
+}
+
 static int
 idpf_dev_configure(struct rte_eth_dev *dev)
 {
+   struct idpf_vport *vport = dev->data->dev_private;
struct rte_eth_conf *conf = &dev->data->dev_conf;
+   struct idpf_adapter *adapter = vport->adapter;
+   int ret;
 
if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_INIT_LOG(ERR, "Setting link speed is not supported");
@@ -324,6 +423,17 @@ idpf_dev_configure(struct rte_eth_dev *dev)
return -1;
}
 
+   if (adapter->caps->rss_caps != 0 && dev->data->nb_rx_queues != 0) {
+   ret = idpf_init_rss(vport);
+   if (ret != 0) {
+  
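
For reference, a hedged application-side sketch (illustrative only, not from the patch) of the RSS configuration that idpf_init_rss() consumes through dev_conf.rx_adv_conf.rss_conf; in the hunk above only rss_key and rss_key_len are read directly, and a NULL key makes the PMD generate a random one of vport->rss_key_size bytes:

#include <rte_ethdev.h>

/* Hypothetical port configuration enabling RSS, passed to rte_eth_dev_configure(). */
static const struct rte_eth_conf example_port_conf = {
	.rxmode = {
		.mq_mode = RTE_ETH_MQ_RX_RSS,		/* enable RSS distribution */
	},
	.rx_adv_conf = {
		.rss_conf = {
			.rss_key = NULL,		/* let the PMD pick a random key */
			.rss_key_len = 0,
			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP,
		},
	},
};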

[PATCH v12 15/18] net/idpf: add support for Rx offloading

2022-10-26 Thread Junfeng Guo
Add Rx offloading support:
 - support CHKSUM and RSS offload for split queue model
 - support CHKSUM offload for single queue model

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 doc/guides/nics/features/idpf.ini |   5 ++
 drivers/net/idpf/idpf_ethdev.c|   6 ++
 drivers/net/idpf/idpf_rxtx.c  | 122 ++
 3 files changed, 133 insertions(+)

diff --git a/doc/guides/nics/features/idpf.ini b/doc/guides/nics/features/idpf.ini
index c8f4a0d6ed..6eefac9529 100644
--- a/doc/guides/nics/features/idpf.ini
+++ b/doc/guides/nics/features/idpf.ini
@@ -3,9 +3,14 @@
 ;
 ; Refer to default.ini for the full list of available PMD features.
 ;
+; A feature with "P" indicates it is only supported when the non-vector
+; path is selected.
+;
 [Features]
 Queue start/stop = Y
 MTU update   = Y
+L3 checksum offload  = P
+L4 checksum offload  = P
 Linux= Y
 x86-32   = Y
 x86-64   = Y
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index ea7d3bfd7e..88f3db3267 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -89,6 +89,12 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->flow_type_rss_offloads = IDPF_RSS_OFFLOAD_ALL;
dev_info->max_mac_addrs = IDPF_NUM_MACADDR_MAX;
 
+   dev_info->rx_offload_capa =
+   RTE_ETH_RX_OFFLOAD_IPV4_CKSUM   |
+   RTE_ETH_RX_OFFLOAD_UDP_CKSUM|
+   RTE_ETH_RX_OFFLOAD_TCP_CKSUM|
+   RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+
dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
dev_info->default_txconf = (struct rte_eth_txconf) {
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index f9f751a6ad..cdd20702d8 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1209,6 +1209,72 @@ idpf_stop_queues(struct rte_eth_dev *dev)
}
 }
 
+#define IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S   \
+   (RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S) | \
+RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S) | \
+RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S) |\
+RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S))
+
+static inline uint64_t
+idpf_splitq_rx_csum_offload(uint8_t err)
+{
+   uint64_t flags = 0;
+
+   if (unlikely((err & 
RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S)) == 0))
+   return flags;
+
+   if (likely((err & IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S) == 0)) {
+   flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+ RTE_MBUF_F_RX_L4_CKSUM_GOOD);
+   return flags;
+   }
+
+   if (unlikely((err & 
RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S)) != 0))
+   flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+   else
+   flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+
+   if (unlikely((err & 
RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S)) != 0))
+   flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+   else
+   flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+
+   if (unlikely((err & 
RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S)) != 0))
+   flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+
+   if (unlikely((err & 
RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S)) != 0))
+   flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+   else
+   flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+
+   return flags;
+}
+
+#define IDPF_RX_FLEX_DESC_ADV_HASH1_S  0
+#define IDPF_RX_FLEX_DESC_ADV_HASH2_S  16
+#define IDPF_RX_FLEX_DESC_ADV_HASH3_S  24
+
+static inline uint64_t
+idpf_splitq_rx_rss_offload(struct rte_mbuf *mb,
+  volatile struct virtchnl2_rx_flex_desc_adv_nic_3 
*rx_desc)
+{
+   uint8_t status_err0_qw0;
+   uint64_t flags = 0;
+
+   status_err0_qw0 = rx_desc->status_err0_qw0;
+
+   if ((status_err0_qw0 & 
RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S)) != 0) {
+   flags |= RTE_MBUF_F_RX_RSS_HASH;
+   mb->hash.rss = (rte_le_to_cpu_16(rx_desc->hash1) <<
+   IDPF_RX_FLEX_DESC_ADV_HASH1_S) |
+   ((uint32_t)(rx_desc->ff2_mirrid_hash2.hash2) <<
+   IDPF_RX_FLEX_DESC_ADV_HASH2_S) |
+   ((uint32_t)(rx_desc->hash3) <<
+   IDPF_RX_FLEX_DESC_ADV_HASH3_S);
+   }
+
+   return flags;
+}
 
 static void
 idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
@@ -1283,9 +1349,11 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t pktlen_gen_bufq_id;
struct idpf_rx_queue *rxq;
const uint32_t *ptype_tbl;
+   uint8_

[PATCH v12 16/18] net/idpf: add support for Tx offloading

2022-10-26 Thread Junfeng Guo
Add Tx offloading support:
 - support TSO

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 doc/guides/nics/features/idpf.ini |   1 +
 drivers/net/idpf/idpf_ethdev.c|   4 +-
 drivers/net/idpf/idpf_rxtx.c  | 125 +-
 drivers/net/idpf/idpf_rxtx.h  |  22 ++
 4 files changed, 149 insertions(+), 3 deletions(-)

diff --git a/doc/guides/nics/features/idpf.ini b/doc/guides/nics/features/idpf.ini
index 6eefac9529..86c182021f 100644
--- a/doc/guides/nics/features/idpf.ini
+++ b/doc/guides/nics/features/idpf.ini
@@ -9,6 +9,7 @@
 [Features]
 Queue start/stop = Y
 MTU update   = Y
+TSO  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
 Linux= Y
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 88f3db3267..868f28d003 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -95,7 +95,9 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
RTE_ETH_RX_OFFLOAD_TCP_CKSUM|
RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
-   dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+   dev_info->tx_offload_capa =
+   RTE_ETH_TX_OFFLOAD_TCP_TSO  |
+   RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
dev_info->default_txconf = (struct rte_eth_txconf) {
.tx_free_thresh = IDPF_DEFAULT_TX_FREE_THRESH,
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index cdd20702d8..1403e4bafe 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1506,6 +1506,49 @@ idpf_split_tx_free(struct idpf_tx_queue *cq)
cq->tx_tail = next;
 }
 
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+idpf_calc_context_desc(uint64_t flags)
+{
+   if ((flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
+   return 1;
+
+   return 0;
+}
+
+/* set TSO context descriptor
+ */
+static inline void
+idpf_set_splitq_tso_ctx(struct rte_mbuf *mbuf,
+   union idpf_tx_offload tx_offload,
+   volatile union idpf_flex_tx_ctx_desc *ctx_desc)
+{
+   uint16_t cmd_dtype;
+   uint32_t tso_len;
+   uint8_t hdr_len;
+
+   if (tx_offload.l4_len == 0) {
+   PMD_TX_LOG(DEBUG, "L4 length set to 0");
+   return;
+   }
+
+   hdr_len = tx_offload.l2_len +
+   tx_offload.l3_len +
+   tx_offload.l4_len;
+   cmd_dtype = IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX |
+   IDPF_TX_FLEX_CTX_DESC_CMD_TSO;
+   tso_len = mbuf->pkt_len - hdr_len;
+
+   ctx_desc->tso.qw1.cmd_dtype = rte_cpu_to_le_16(cmd_dtype);
+   ctx_desc->tso.qw0.hdr_len = hdr_len;
+   ctx_desc->tso.qw0.mss_rt =
+   rte_cpu_to_le_16((uint16_t)mbuf->tso_segsz &
+IDPF_TXD_FLEX_CTX_MSS_RT_M);
+   ctx_desc->tso.qw0.flex_tlen =
+   rte_cpu_to_le_32(tso_len &
+IDPF_TXD_FLEX_CTX_MSS_RT_M);
+}
+
 uint16_t
 idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
  uint16_t nb_pkts)
@@ -1514,11 +1557,14 @@ idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
volatile struct idpf_flex_tx_sched_desc *txr;
volatile struct idpf_flex_tx_sched_desc *txd;
struct idpf_tx_entry *sw_ring;
+   union idpf_tx_offload tx_offload = {0};
struct idpf_tx_entry *txe, *txn;
uint16_t nb_used, tx_id, sw_id;
struct rte_mbuf *tx_pkt;
uint16_t nb_to_clean;
uint16_t nb_tx = 0;
+   uint64_t ol_flags;
+   uint16_t nb_ctx;
 
if (unlikely(txq == NULL) || unlikely(!txq->q_started))
return nb_tx;
@@ -1548,7 +1594,29 @@ idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
if (txq->nb_free < tx_pkt->nb_segs)
break;
-   nb_used = tx_pkt->nb_segs;
+
+   ol_flags = tx_pkt->ol_flags;
+   tx_offload.l2_len = tx_pkt->l2_len;
+   tx_offload.l3_len = tx_pkt->l3_len;
+   tx_offload.l4_len = tx_pkt->l4_len;
+   tx_offload.tso_segsz = tx_pkt->tso_segsz;
+   /* Calculate the number of context descriptors needed. */
+   nb_ctx = idpf_calc_context_desc(ol_flags);
+   nb_used = tx_pkt->nb_segs + nb_ctx;
+
+   /* context descriptor */
+   if (nb_ctx != 0) {
+   volatile union idpf_flex_tx_ctx_desc *ctx_desc =
+   (volatile union idpf_flex_tx_ctx_desc *)&txr[tx_id];
+
+   if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
+   idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
+   ctx_desc);
+
+   tx_id
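
For reference, a hedged sketch (illustrative application code, not part of the patch) of the per-mbuf setup that makes idpf_calc_context_desc() request a context descriptor; these are the fields idpf_set_splitq_tso_ctx() reads:

#include <rte_mbuf.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_tcp.h>

/* Hypothetical helper marking an IPv4/TCP mbuf for TSO; depending on the
 * packet, additional checksum offload flags may also be required. */
static void
example_request_tso(struct rte_mbuf *m, uint16_t mss)
{
	m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4;
	m->l2_len = sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);	/* no IP options assumed */
	m->l4_len = sizeof(struct rte_tcp_hdr);		/* no TCP options assumed */
	m->tso_segsz = mss;				/* payload bytes per segment */
}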

[PATCH v12 17/18] net/idpf: add AVX512 data path for single queue model

2022-10-26 Thread Junfeng Guo
Add support of AVX512 vector data path for single queue model.

Signed-off-by: Wenjun Wu 
Signed-off-by: Junfeng Guo 
---
 doc/guides/nics/idpf.rst|  19 +
 drivers/net/idpf/idpf_ethdev.c  |   3 +-
 drivers/net/idpf/idpf_ethdev.h  |   5 +
 drivers/net/idpf/idpf_rxtx.c| 145 
 drivers/net/idpf/idpf_rxtx.h|  21 +
 drivers/net/idpf/idpf_rxtx_vec_avx512.c | 871 
 drivers/net/idpf/idpf_rxtx_vec_common.h | 100 +++
 drivers/net/idpf/meson.build|  28 +
 8 files changed, 1191 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/idpf/idpf_rxtx_vec_avx512.c
 create mode 100644 drivers/net/idpf/idpf_rxtx_vec_common.h

diff --git a/doc/guides/nics/idpf.rst b/doc/guides/nics/idpf.rst
index c1001d5d0c..3039c61748 100644
--- a/doc/guides/nics/idpf.rst
+++ b/doc/guides/nics/idpf.rst
@@ -64,3 +64,22 @@ Refer to the document :ref:`compiling and testing a PMD for a NIC
dev_info->tx_offload_capa =
RTE_ETH_TX_OFFLOAD_TCP_TSO  |
-   RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+   RTE_ETH_TX_OFFLOAD_MULTI_SEGS   |
+   RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
dev_info->default_txconf = (struct rte_eth_txconf) {
.tx_free_thresh = IDPF_DEFAULT_TX_FREE_THRESH,
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 8d0804f603..7d54e5db60 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -162,6 +162,11 @@ struct idpf_adapter {
uint32_t max_txq_per_msg;
 
uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
+
+   bool rx_vec_allowed;
+   bool tx_vec_allowed;
+   bool rx_use_avx512;
+   bool tx_use_avx512;
 };
 
 TAILQ_HEAD(idpf_adapter_list, idpf_adapter);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 1403e4bafe..bdb67ceab8 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -4,9 +4,11 @@
 
 #include 
 #include 
+#include 
 
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
+#include "idpf_rxtx_vec_common.h"
 
 static int
 check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
@@ -252,6 +254,8 @@ reset_single_rx_queue(struct idpf_rx_queue *rxq)
 
rxq->pkt_first_seg = NULL;
rxq->pkt_last_seg = NULL;
+   rxq->rxrearm_start = 0;
+   rxq->rxrearm_nb = 0;
 }
 
 static void
@@ -2070,25 +2074,166 @@ idpf_prep_pkts(__rte_unused void *tx_queue, struct 
rte_mbuf **tx_pkts,
return i;
 }
 
+static void __rte_cold
+release_rxq_mbufs_vec(struct idpf_rx_queue *rxq)
+{
+   const uint16_t mask = rxq->nb_rx_desc - 1;
+   uint16_t i;
+
+   if (rxq->sw_ring == NULL || rxq->rxrearm_nb >= rxq->nb_rx_desc)
+   return;
+
+   /* free all mbufs that are valid in the ring */
+   if (rxq->rxrearm_nb == 0) {
+   for (i = 0; i < rxq->nb_rx_desc; i++) {
+   if (rxq->sw_ring[i] != NULL)
+   rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+   }
+   } else {
+   for (i = rxq->rx_tail; i != rxq->rxrearm_start; i = (i + 1) & 
mask) {
+   if (rxq->sw_ring[i] != NULL)
+   rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+   }
+   }
+
+   rxq->rxrearm_nb = rxq->nb_rx_desc;
+
+   /* set all entries to NULL */
+   memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
+}
+
+static const struct idpf_rxq_ops def_singleq_rx_ops_vec = {
+   .release_mbufs = release_rxq_mbufs_vec,
+};
+
+static inline int
+idpf_singleq_rx_vec_setup_default(struct idpf_rx_queue *rxq)
+{
+   uintptr_t p;
+   struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
+
+   mb_def.nb_segs = 1;
+   mb_def.data_off = RTE_PKTMBUF_HEADROOM;
+   mb_def.port = rxq->port_id;
+   rte_mbuf_refcnt_set(&mb_def, 1);
+
+   /* prevent compiler reordering: rearm_data covers previous fields */
+   rte_compiler_barrier();
+   p = (uintptr_t)&mb_def.rearm_data;
+   rxq->mbuf_initializer = *(uint64_t *)p;
+   return 0;
+}
+
+int __rte_cold
+idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
+{
+   rxq->ops = &def_singleq_rx_ops_vec;
+   return idpf_singleq_rx_vec_setup_default(rxq);
+}
+
 void
 idpf_set_rx_function(struct rte_eth_dev *dev)
 {
struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+   struct idpf_adapter *ad = vport->adapter;
+   struct idpf_rx_queue *rxq;
+   int i;
+
+   if (idpf_rx_vec_dev_check_default(dev) == IDPF_VECTOR_PATH &&
+   rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+   ad->rx_vec_allowed = true;
+
+   if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+   if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 
&&
+   rte_cpu

[PATCH v12 18/18] net/idpf: add support for timestamp offload

2022-10-26 Thread Junfeng Guo
Add support for timestamp offload.

Signed-off-by: Wenjing Qiao 
Signed-off-by: Junfeng Guo 
---
 doc/guides/nics/features/idpf.ini |  1 +
 drivers/net/idpf/idpf_ethdev.c|  5 +-
 drivers/net/idpf/idpf_ethdev.h|  3 ++
 drivers/net/idpf/idpf_rxtx.c  | 65 ++
 drivers/net/idpf/idpf_rxtx.h  | 90 +++
 5 files changed, 163 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/idpf.ini b/doc/guides/nics/features/idpf.ini
index 86c182021f..5fbad9600c 100644
--- a/doc/guides/nics/features/idpf.ini
+++ b/doc/guides/nics/features/idpf.ini
@@ -12,6 +12,7 @@ MTU update   = Y
 TSO  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
+Timestamp offload= P
 Linux= Y
 x86-32   = Y
 x86-64   = Y
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 62b13b602e..cd18d59fb7 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -22,6 +22,8 @@ rte_spinlock_t idpf_adapter_lock;
 struct idpf_adapter_list idpf_adapter_list;
 bool idpf_adapter_list_init;
 
+uint64_t idpf_timestamp_dynflag;
+
 static const char * const idpf_valid_args[] = {
IDPF_TX_SINGLE_Q,
IDPF_RX_SINGLE_Q,
@@ -98,7 +100,8 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->tx_offload_capa =
RTE_ETH_TX_OFFLOAD_TCP_TSO  |
RTE_ETH_TX_OFFLOAD_MULTI_SEGS   |
-   RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
+   RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE   |
+   RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
dev_info->default_txconf = (struct rte_eth_txconf) {
.tx_free_thresh = IDPF_DEFAULT_TX_FREE_THRESH,
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 7d54e5db60..ccdf4abe40 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -167,6 +167,9 @@ struct idpf_adapter {
bool tx_vec_allowed;
bool rx_use_avx512;
bool tx_use_avx512;
+
+   /* For PTP */
+   uint64_t time_hw;
 };
 
 TAILQ_HEAD(idpf_adapter_list, idpf_adapter);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index bdb67ceab8..0fc4dca9ec 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -10,6 +10,8 @@
 #include "idpf_rxtx.h"
 #include "idpf_rxtx_vec_common.h"
 
+static int idpf_timestamp_dynfield_offset = -1;
+
 static int
 check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
 {
@@ -900,6 +902,24 @@ idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return idpf_tx_split_queue_setup(dev, queue_idx, nb_desc,
 socket_id, tx_conf);
 }
+
+static int
+idpf_register_ts_mbuf(struct idpf_rx_queue *rxq)
+{
+   int err;
+   if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0) {
+   /* Register mbuf field and flag for Rx timestamp */
+   err = rte_mbuf_dyn_rx_timestamp_register(&idpf_timestamp_dynfield_offset,
+   &idpf_timestamp_dynflag);
+   if (err != 0) {
+   PMD_DRV_LOG(ERR,
+   "Cannot register mbuf field/flag for timestamp");
+   return -EINVAL;
+   }
+   }
+   return 0;
+}
+
 static int
 idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
 {
@@ -993,6 +1013,13 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
return -EINVAL;
}
 
+   err = idpf_register_ts_mbuf(rxq);
+   if (err != 0) {
+   PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
+   rx_queue_id);
+   return -EIO;
+   }
+
if (rxq->bufq1 == NULL) {
/* Single queue */
err = idpf_alloc_single_rxq_mbufs(rxq);
@@ -1354,6 +1381,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
struct idpf_rx_queue *rxq;
const uint32_t *ptype_tbl;
uint8_t status_err0_qw1;
+   struct idpf_adapter *ad;
struct rte_mbuf *rxm;
uint16_t rx_id_bufq1;
uint16_t rx_id_bufq2;
@@ -1363,9 +1391,11 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t gen_id;
uint16_t rx_id;
uint16_t nb_rx;
+   uint64_t ts_ns;
 
nb_rx = 0;
rxq = rx_queue;
+   ad = rxq->adapter;
 
if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
return nb_rx;
@@ -1376,6 +1406,9 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rx_desc_ring = rxq->rx_ring;
ptype_tbl = rxq->adapter->ptype_tbl;
 
+   if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
+   rxq->hw_register_set = 1;
+
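
For reference, a hedged sketch (illustrative, not from the patch) of how an application reads the Rx timestamp once the dynamic field/flag pair registered by idpf_register_ts_mbuf() is filled in by the Rx path:

#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

/* Hypothetical helper: ts_offset/ts_flag are the values the application
 * obtained from rte_mbuf_dyn_rx_timestamp_register() on its side. */
static uint64_t
example_read_rx_timestamp(struct rte_mbuf *m, int ts_offset, uint64_t ts_flag)
{
	if ((m->ol_flags & ts_flag) == 0)
		return 0;	/* no timestamp attached to this mbuf */

	return *RTE_MBUF_DYNFIELD(m, ts_offset, rte_mbuf_timestamp_t *);
}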
 

RE: [EXT] [PATCH resend v5 0/6] crypto/uadk: introduce uadk crypto driver

2022-10-26 Thread Akhil Goyal
> 
> Hi, Akhil
> 
> Thanks for your guidance.
> 
> On Tue, 25 Oct 2022 at 23:02, Akhil Goyal  wrote:
> >
> >
> > > Introduce a new crypto PMD for hardware accelerators based on UADK [1].
> > >
> > > UADK is a framework for user applications to access hardware accelerators.
> > > UADK relies on IOMMU SVA (Shared Virtual Address) feature, which share
> > > the same page table between IOMMU and MMU.
> > > Thereby user application can directly use virtual address for device dma,
> > > which enhances the performance as well as easy usability.
> > >
> > > [1]  https://urldefense.proofpoint.com/v2/url?u=https-
> 3A__github.com_Linaro_uadk&d=DwIBaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=Dn
> L7Si2wl_PRwpZ9TWey3eu68gBzn7DkPwuqhd6WNyo&m=tyhRlU0KYYHgoC9wyL
> _TYFJMfkt98378qTt70kHNizUdEEmLt_dSBdzMJS4IOREt&s=ZdQO3qCVbcJePPFh
> OGL2IsFh0AoEdndgscdqMT_QzhY&e=
> > >
> > > Test:
> > > sudo dpdk-test --vdev=crypto_uadk (--log-level=6)
> > > RTE>>cryptodev_uadk_autotest
> > > RTE>>quit
> > >
> > > resend:
> > > fix uadk lib, still use 
> >
> > There is nothing called as resend in DPDK ML.
> > Please simply increment version number each time you send the set.
> >
> >
> > Also add release notes update in the patch
> > where the driver implementation is complete.
> >
> > Compilation issues are all done now.
> > Have sent some other comments on the patches. Please send the v6 ASAP.
> > We need to close RC2 in a couple of days.
> >
> 
> Got it, have addressed the comments and updated to v6
> https://urldefense.proofpoint.com/v2/url?u=https-
> 3A__github.com_Linaro_dpdk_commits_next-2D9.11-
> 2Dv6&d=DwIBaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=DnL7Si2wl_PRwpZ9TWey3e
> u68gBzn7DkPwuqhd6WNyo&m=tyhRlU0KYYHgoC9wyL_TYFJMfkt98378qTt70kH
> NizUdEEmLt_dSBdzMJS4IOREt&s=8D-
> LSc3u5Dbhbp5kiEynMpS46QYwVLM0AuwAMY5HdYU&e=
> Will send tonight.
> 
> Some uncertainty
> 1. The updated uadk.rst
>  
> https://github.com/Linaro/dpdk/blob/next-9.11-v6/doc/guides/cryptodevs/uadk.rst
>  
This seems fine.

> 
> 2. About the release notes change,
> Do I add a new patch in the last: 07
>  
> https://github.com/Linaro/dpdk/commit/f2294300662def012e00d5a3c5466bbf8436904b#diff-b994e803545bf8101a466b464d2171d96f01226aee3401862f0c0a3c1897ffb5R199
>  

No need for a separate release notes patch
You can squash it in your last patch.

> 
> Or merge it to the first patch 01.
> 
> 3. I update uadk.rst and test in the same patch, is that OK.

Yes

> 
> Thanks


RE: [PATCH] net/gve: fix meson build failure on non-Linux platforms

2022-10-26 Thread Guo, Junfeng


> -Original Message-
> From: Ferruh Yigit 
> Sent: Wednesday, October 26, 2022 18:01
> To: David Marchand ; Guo, Junfeng
> 
> Cc: Zhang, Qi Z ; Wu, Jingjing
> ; Xing, Beilei ;
> dev@dpdk.org; Li, Xiaoyun ;
> awogbem...@google.com; Richardson, Bruce
> ; hemant.agra...@nxp.com;
> step...@networkplumber.org; Xia, Chenbo ;
> Zhang, Helin 
> Subject: Re: [PATCH] net/gve: fix meson build failure on non-Linux
> platforms
> 
> On 10/26/2022 10:33 AM, David Marchand wrote:
> > On Wed, Oct 26, 2022 at 11:10 AM Ferruh Yigit 
> wrote:
>  diff --git a/drivers/net/gve/gve_ethdev.c
> b/drivers/net/gve/gve_ethdev.c
>  index b0f7b98daa..e968317737 100644
>  --- a/drivers/net/gve/gve_ethdev.c
>  +++ b/drivers/net/gve/gve_ethdev.c
>  @@ -1,12 +1,24 @@
> /* SPDX-License-Identifier: BSD-3-Clause
>  * Copyright(C) 2022 Intel Corporation
>  */
>  -#include 
> 
> #include "gve_ethdev.h"
> #include "base/gve_adminq.h"
> #include "base/gve_register.h"
> 
>  +/*
>  + * Following macros are derived from linux/pci_regs.h, however,
>  + * we can't simply include that header here, as there is no such
>  + * file for non-Linux platform.
>  + */
>  +#define PCI_CFG_SPACE_SIZE 256
>  +#define PCI_CAPABILITY_LIST0x34/* Offset of first capability
> list entry */
>  +#define PCI_STD_HEADER_SIZEOF  64
>  +#define PCI_CAP_SIZEOF 4
>  +#define PCI_CAP_ID_MSIX0x11/* MSI-X */
>  +#define PCI_MSIX_FLAGS 2   /* Message Control */
>  +#define PCI_MSIX_FLAGS_QSIZE   0x07FF  /* Table size */
> >>>
> 
> @Junfeng,
> 
> Can you please move these defines to 'gve_ethdev.h'?
> Beginning of 'gve_ethdev.c' seems too unrelated for PCI defines.

Sure, will update this in v2. Thanks!

> 
> >>> No, don't introduce such defines in a driver.
> >>> We have a PCI library, that provides some defines.
> >>> So fix your driver to use them.
> >>>
> >
> > After chat with Ferruh, I am ok if we take this compilation fix for
> > now and unblock the CI..
> >
> 
> ack
> 
> >
> >>
> >> I can see 'bnx2x' driver is using some flags from 'dev/pci/pcireg.h'
> >> header for FreeBSD, unfortunately macro names are not same so a
> #ifdef
> >> is required. Also I don't know it has all macros or if the header is
> >> available in all versions etc..
> >>
> >> Other option can be to define them all for DPDK in common pci header?
> As
> >> far as I can see we don't have all defined right now.
> >>
> >
> > If there are some missing definitions we can add them in the pci header
> indeed.
> > Those are constants, defined in the PCIE standard.
> >
> > I can post a cleanup later, but I would be happy if someone else handles
> it.
> >
> 
> @Junfeng,
> 
> Can you have bandwidth for this cleanup?
> 
> Task is to define PCI macros in common pci header, ('bus_pci_driver.h' I
> guess ?), and update drivers to use new common ones, remove local
> definition from drivers.

Yes, the task makes sense and is quite clear. 
How about doing this cleanup later, maybe next week with a new patch?
Quite busy this week due to RC2...
Thanks! :)

> 
> DPDK macros can use Linux macro names with RTE_ prefix and with a
> define
> guard, as done in following:
> https://elixir.bootlin.com/dpdk/v22.07/source/drivers/bus/pci/linux/pci_i
> nit.h#L50

I'll try with this, thanks for the comments!
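
A hypothetical sketch of the guarded, RTE_-prefixed definitions Ferruh describes (names and header placement are assumptions, not an agreed API); values mirror the linux/pci_regs.h constants used in the gve fix below:

/* Hypothetical sketch only: RTE_-prefixed PCI constants in a common bus/pci
 * header, guarded so existing definitions are not redefined. */
#ifndef RTE_PCI_CAPABILITY_LIST
#define RTE_PCI_CAPABILITY_LIST  0x34    /* offset of first capability list entry */
#endif
#ifndef RTE_PCI_CAP_ID_MSIX
#define RTE_PCI_CAP_ID_MSIX      0x11    /* MSI-X capability ID */
#endif
#ifndef RTE_PCI_MSIX_FLAGS
#define RTE_PCI_MSIX_FLAGS       2       /* Message Control offset */
#endif
#ifndef RTE_PCI_MSIX_FLAGS_QSIZE
#define RTE_PCI_MSIX_FLAGS_QSIZE 0x07FF  /* table size mask */
#endif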



[PATCH v2] net/gve: fix meson build failure on non-Linux platforms

2022-10-26 Thread Junfeng Guo
Meson build may fail on FreeBSD with gcc and clang, because the header
file linux/pci_regs.h does not exist on non-Linux platforms. Thus, this
patch removes that include and adds the needed macros derived from
linux/pci_regs.h.

Fixes: 3047a5ac8e66 ("net/gve: add support for device initialization")

Signed-off-by: Junfeng Guo 
---
 drivers/net/gve/gve_ethdev.c |  1 -
 drivers/net/gve/gve_ethdev.h | 13 +
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index b0f7b98daa..18c879b8d1 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -1,7 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(C) 2022 Intel Corporation
  */
-#include 
 
 #include "gve_ethdev.h"
 #include "base/gve_adminq.h"
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 36b334c36b..f6cac3ff2b 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -11,6 +11,19 @@
 
 #include "base/gve.h"
 
+/*
+ * Following macros are derived from linux/pci_regs.h, however,
+ * we can't simply include that header here, as there is no such
+ * file for non-Linux platform.
+ */
+#define PCI_CFG_SPACE_SIZE 256
+#define PCI_CAPABILITY_LIST    0x34    /* Offset of first capability list entry */
+#define PCI_STD_HEADER_SIZEOF  64
+#define PCI_CAP_SIZEOF 4
+#define PCI_CAP_ID_MSIX0x11/* MSI-X */
+#define PCI_MSIX_FLAGS 2   /* Message Control */
+#define PCI_MSIX_FLAGS_QSIZE   0x07FF  /* Table size */
+
 #define GVE_DEFAULT_RX_FREE_THRESH  512
 #define GVE_DEFAULT_TX_FREE_THRESH  256
 #define GVE_TX_MAX_FREE_SZ  512
-- 
2.34.1



[dpdk-dev v4] crypto/ipsec_mb: multi-process IPC request handler

2022-10-26 Thread Kai Ji
As the queue pair used in a secondary process needs to be set up by
the primary process, this patch adds an IPC register function that lets
a secondary process send a queue-pair setup request to the primary
process via IPC messages. A new "qp_in_used_pid" param stores the PID
to record ownership of the queue pair, so that only the queue pair
matching that PID can be freed in the request.

Signed-off-by: Kai Ji 
---
v4:
- review comments resolved

v3:
- remove shared memzone as qp_conf params can be passed directly from
ipc message.

v2:
- add in shared memzone for data exchange between multi-process
---
 drivers/crypto/ipsec_mb/ipsec_mb_ops.c | 129 -
 drivers/crypto/ipsec_mb/ipsec_mb_private.c |  24 +++-
 drivers/crypto/ipsec_mb/ipsec_mb_private.h |  38 +-
 3 files changed, 185 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
index cedcaa2742..bf18d692bd 100644
--- a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
+++ b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
@@ -3,6 +3,7 @@
  */

 #include 
+#include 

 #include 
 #include 
@@ -93,6 +94,46 @@ ipsec_mb_info_get(struct rte_cryptodev *dev,
}
 }

+static int
+ipsec_mb_secondary_qp_op(int dev_id, int qp_id,
+   const struct rte_cryptodev_qp_conf *qp_conf,
+   int socket_id, enum ipsec_mb_mp_req_type op_type)
+{
+   int ret;
+   struct rte_mp_msg qp_req_msg;
+   struct rte_mp_msg *qp_resp_msg;
+   struct rte_mp_reply qp_resp;
+   struct ipsec_mb_mp_param *req_param;
+   struct ipsec_mb_mp_param *resp_param;
+   struct timespec ts = {.tv_sec = 1, .tv_nsec = 0};
+
+   memset(&qp_req_msg, 0, sizeof(IPSEC_MB_MP_MSG));
+   memcpy(qp_req_msg.name, IPSEC_MB_MP_MSG, sizeof(IPSEC_MB_MP_MSG));
+   req_param = (struct ipsec_mb_mp_param *)&qp_req_msg.param;
+
+   qp_req_msg.len_param = sizeof(struct ipsec_mb_mp_param);
+   req_param->type = op_type;
+   req_param->dev_id = dev_id;
+   req_param->qp_id = qp_id;
+   req_param->socket_id = socket_id;
+   req_param->process_id = getpid();
+   if (qp_conf) {
+   req_param->nb_descriptors = qp_conf->nb_descriptors;
+   req_param->mp_session = (void *)qp_conf->mp_session;
+   }
+
+   qp_req_msg.num_fds = 0;
+   ret = rte_mp_request_sync(&qp_req_msg, &qp_resp, &ts);
+   if (ret) {
+   RTE_LOG(ERR, USER1, "Create MR request to primary process failed.");
+   return -1;
+   }
+   qp_resp_msg = &qp_resp.msgs[0];
+   resp_param = (struct ipsec_mb_mp_param *)qp_resp_msg->param;
+
+   return resp_param->result;
+}
+
 /** Release queue pair */
 int
 ipsec_mb_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
@@ -100,7 +141,10 @@ ipsec_mb_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
struct ipsec_mb_qp *qp = dev->data->queue_pairs[qp_id];
struct rte_ring *r = NULL;

-   if (qp != NULL && rte_eal_process_type() == RTE_PROC_PRIMARY) {
+   if (qp != NULL)
+   return 0;
+
+   if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
r = rte_ring_lookup(qp->name);
rte_ring_free(r);

@@ -115,6 +159,9 @@ ipsec_mb_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
 #endif
rte_free(qp);
dev->data->queue_pairs[qp_id] = NULL;
+   } else { /* secondary process */
+   return ipsec_mb_secondary_qp_op(dev->data->dev_id, qp_id,
+   NULL, 0, RTE_IPSEC_MB_MP_REQ_QP_FREE);
}
return 0;
 }
@@ -222,9 +269,13 @@ ipsec_mb_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 #endif
qp = dev->data->queue_pairs[qp_id];
if (qp == NULL) {
-   IPSEC_MB_LOG(ERR, "Primary process hasn't configured device qp.");
-   return -EINVAL;
+   IPSEC_MB_LOG(DEBUG, "Secondary process setting up device qp.");
+   return ipsec_mb_secondary_qp_op(dev->data->dev_id, qp_id,
+   qp_conf, socket_id, RTE_IPSEC_MB_MP_REQ_QP_SET);
RTE_IPSEC_MB_MP_REQ_QP_SET);
}
+
+   IPSEC_MB_LOG(ERR, "Queue pair already setup'ed.");
+   return -EINVAL;
} else {
/* Free memory prior to re-allocation if needed. */
if (dev->data->queue_pairs[qp_id] != NULL)
@@ -296,6 +347,78 @@ ipsec_mb_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
return ret;
 }

+int
+ipsec_mb_ipc_request(const struct rte_mp_msg *mp_msg, const void *peer)
+{
+   struct rte_mp_msg ipc_resp;
+   struct ipsec_mb_mp_param *resp_param =
+   (struct ipsec_mb_mp_param *)ipc_resp.param;
+   const struct ipsec_mb_mp_param *req_param =
+   (const struct ipsec_mb_mp_param *)mp_msg->param;
+
+   int ret;
+   struct rte_cryptodev *dev;
+   s

RE: [PATCH v6 08/27] eal: deprecate RTE_FUNC_PTR_* macros

2022-10-26 Thread Morten Brørup
> From: David Marchand [mailto:david.march...@redhat.com]
> Sent: Wednesday, 26 October 2022 11.21
> 
> On Wed, Oct 26, 2022 at 11:05 AM Morten Brørup
>  wrote:
> >
> > > From: David Marchand [mailto:david.march...@redhat.com]
> > > Sent: Wednesday, 14 September 2022 09.58
> > >
> > > Those macros have no real value and are easily replaced with a
> simple
> > > if() block.
> > >
> > > Existing users have been converted using a new cocci script.
> > > Deprecate them.
> > >
> > > Signed-off-by: David Marchand 
> > > ---
> >
> > [...]
> >
> > >  /* Macros to check for invalid function pointers */
> > > -#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
> > > +#define RTE_FUNC_PTR_OR_ERR_RET(func, retval)
> > > RTE_DEPRECATED(RTE_FUNC_PTR_OR_ERR_RET) \
> > > +do { \
> > >   if ((func) == NULL) \
> > >   return retval; \
> > >  } while (0)
> > >
> > > -#define RTE_FUNC_PTR_OR_RET(func) do { \
> > > +#define RTE_FUNC_PTR_OR_RET(func)
> RTE_DEPRECATED(RTE_FUNC_PTR_OR_RET)
> > > \
> > > +do { \
> > >   if ((func) == NULL) \
> > >   return; \
> > >  } while (0)
> >
> > rte_ethdev.h has somewhat similar macros [1]:
> >
> > /* Macros to check for valid port */
> > #define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
> > if (!rte_eth_dev_is_valid_port(port_id)) { \
> > RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id);
> \
> > return retval; \
> > } \
> > } while (0)
> >
> > #define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
> > if (!rte_eth_dev_is_valid_port(port_id)) { \
> > RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id);
> \
> > return; \
> > } \
> > } while (0)
> >
> > However, these log an error message. It makes me wonder about
> consistency...
> >
> > Are the RTE_FUNC_PTR_OR_* macros a special case, or should similar
> macros, such as RTE_ETH_VALID_PORTID_OR_*, also be deprecated?
> >
> > Or should the RTE_FUNC_PTR_OR_* macros be modified to log an error
> message instead of being deprecated?
> 
> The difference is that those ethdev macros validate something
> expressed in their name "is this portid valid".
> 
> The RTE_FUNC_PTR_OR_* macros were really just a if() + return block.
> I don't see what message to log for them.

The RTE_ETH_VALID_PORTID_OR_* macros seem to serve the same purpose as the 
RTE_FUNC_PTR_OR_* macros.

Is it just a coincidence that the RTE_FUNC_PTR_OR_* macros don't log an error 
message, while similar macros do?

Cleaning up is certainly good, so I don't object to this patch!

I'm only asking if a broader scope applies:

Are the RTE_FUNC_PTR_OR_* macros a unique case (so they can be removed as 
suggested), or should we add an error message to the macros instead of removing 
them?
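
For reference, the conversion done by the cocci script boils down to an
explicit NULL check at each call site, e.g. (illustrative call site, not
taken verbatim from the tree):

    /* before (deprecated) */
    RTE_FUNC_PTR_OR_ERR_RET(ops->stats_get, -ENOTSUP);

    /* after */
    if (ops->stats_get == NULL)
            return -ENOTSUP;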



[PATCH v6 0/6] crypto/uadk: introduce uadk crypto driver

2022-10-26 Thread Zhangfei Gao
Introduce a new crypto PMD for hardware accelerators based on UADK [1].

UADK is a framework for user applications to access hardware accelerators.
UADK relies on the IOMMU SVA (Shared Virtual Address) feature, which shares
the same page table between the IOMMU and the MMU.
Thereby, user applications can directly use virtual addresses for device DMA,
which improves both performance and usability.

[1] https://github.com/Linaro/uadk

Test:
sudo dpdk-test --vdev=crypto_uadk (--log-level=6)
RTE>>cryptodev_uadk_autotest
RTE>>quit

Update in v6:
Akhil reviewed v5; this version addresses all of his comments:
01: add rel_notes info, update uadk.rst, put struct in header, etc.
02: move struct to header
04: move struct to header, remove RTE_CRYPTODEV_FF_SYM_SESSIONLESS
05: fixed key_size in HMAC mode
06: updated app/test/meson.build and uadk.rst accordingly.

Update in v5
Patch 1 fixes the build issue when UADK is installed to a specific folder,
and updates the doc accordingly.

Update in v4:
Akhil suggested DPDK use pkg-config, so:
Enable UADK support for x86 local builds, and support pkg-config.
Use the pkg-config feature for the uadk crypto PMD.
Add UADK library build steps in the doc.
Tested on both x86 and Arm.
x86 can build and install, but cannot be tested since no device is available.

Resend v3:
Rebase on next/for-main, which just merged the series
"cryptodev: rework session framework".

Update in v3:
Split patches according to Akhil's suggestions
Please split the patches as below.
1. introduce driver - create files with meson.build and with probe/remove
   and device ops defined but not implemented.
   You do not need to write empty functions.
   Add basic documentation also which defines what the driver is.
   You can explain the build dependency here.
2. define queue structs and setup/remove APIs
3. Add data path
4. implement cipher op. Add capabilities and documentation of what is
   supported in each of the patches. Add feature flags etc.
5. implement auth,  add capabilities and documentation
6. test app changes.

Update in v2:
Change uadk_supported_platform to uadk_crypto_version, which matches better
than platform.
enum uadk_crypto_version {
UADK_CRYPTO_V2,
UADK_CRYPTO_V3,
};

Update in v1, compared with rfc

Suggested from Akhil Goyal 
Only consider crypto PMD first
Split patch into small (individually compiled) patches.
Update MAINTAINERS and doc/guides/cryptodevs/features/uadk.ini

Zhangfei Gao (6):
  crypto/uadk: introduce uadk crypto driver
  crypto/uadk: support basic operations
  crypto/uadk: support enqueue/dequeue operations
  crypto/uadk: support cipher algorithms
  crypto/uadk: support auth algorithms
  test/crypto: support uadk PMD

 MAINTAINERS   |6 +
 app/test/meson.build  |1 +
 app/test/test_cryptodev.c |7 +
 app/test/test_cryptodev.h |1 +
 doc/guides/cryptodevs/features/uadk.ini   |   55 +
 doc/guides/cryptodevs/index.rst   |1 +
 doc/guides/cryptodevs/uadk.rst|   96 ++
 doc/guides/rel_notes/release_22_11.rst|6 +
 drivers/crypto/meson.build|1 +
 drivers/crypto/uadk/meson.build   |   30 +
 drivers/crypto/uadk/uadk_crypto_pmd.c | 1075 +
 drivers/crypto/uadk/uadk_crypto_pmd_private.h |   79 ++
 drivers/crypto/uadk/version.map   |3 +
 13 files changed, 1361 insertions(+)
 create mode 100644 doc/guides/cryptodevs/features/uadk.ini
 create mode 100644 doc/guides/cryptodevs/uadk.rst
 create mode 100644 drivers/crypto/uadk/meson.build
 create mode 100644 drivers/crypto/uadk/uadk_crypto_pmd.c
 create mode 100644 drivers/crypto/uadk/uadk_crypto_pmd_private.h
 create mode 100644 drivers/crypto/uadk/version.map

-- 
2.38.1



[PATCH v6 2/6] crypto/uadk: support basic operations

2022-10-26 Thread Zhangfei Gao
Support the basic dev control operations: configure, close, start,
stop and get info, as well as queue pair operations.

Signed-off-by: Zhangfei Gao 
---
 drivers/crypto/uadk/uadk_crypto_pmd.c | 194 +-
 drivers/crypto/uadk/uadk_crypto_pmd_private.h |  19 ++
 2 files changed, 204 insertions(+), 9 deletions(-)

diff --git a/drivers/crypto/uadk/uadk_crypto_pmd.c 
b/drivers/crypto/uadk/uadk_crypto_pmd.c
index 34df3ccbb9..b383029a45 100644
--- a/drivers/crypto/uadk/uadk_crypto_pmd.c
+++ b/drivers/crypto/uadk/uadk_crypto_pmd.c
@@ -17,16 +17,192 @@
 
 static uint8_t uadk_cryptodev_driver_id;
 
+static const struct rte_cryptodev_capabilities uadk_crypto_v2_capabilities[] = 
{
+   /* End of capabilities */
+   RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+/* Configure device */
+static int
+uadk_crypto_pmd_config(struct rte_cryptodev *dev __rte_unused,
+  struct rte_cryptodev_config *config __rte_unused)
+{
+   return 0;
+}
+
+/* Start device */
+static int
+uadk_crypto_pmd_start(struct rte_cryptodev *dev __rte_unused)
+{
+   return 0;
+}
+
+/* Stop device */
+static void
+uadk_crypto_pmd_stop(struct rte_cryptodev *dev __rte_unused)
+{
+}
+
+/* Close device */
+static int
+uadk_crypto_pmd_close(struct rte_cryptodev *dev __rte_unused)
+{
+   return 0;
+}
+
+/* Get device statistics */
+static void
+uadk_crypto_pmd_stats_get(struct rte_cryptodev *dev,
+ struct rte_cryptodev_stats *stats)
+{
+   int qp_id;
+
+   for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+   struct uadk_qp *qp = dev->data->queue_pairs[qp_id];
+
+   stats->enqueued_count += qp->qp_stats.enqueued_count;
+   stats->dequeued_count += qp->qp_stats.dequeued_count;
+   stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+   stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+   }
+}
+
+/* Reset device statistics */
+static void
+uadk_crypto_pmd_stats_reset(struct rte_cryptodev *dev __rte_unused)
+{
+   int qp_id;
+
+   for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+   struct uadk_qp *qp = dev->data->queue_pairs[qp_id];
+
+   memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+   }
+}
+
+/* Get device info */
+static void
+uadk_crypto_pmd_info_get(struct rte_cryptodev *dev,
+struct rte_cryptodev_info *dev_info)
+{
+   struct uadk_crypto_priv *priv = dev->data->dev_private;
+
+   if (dev_info != NULL) {
+   dev_info->driver_id = dev->driver_id;
+   dev_info->driver_name = dev->device->driver->name;
+   dev_info->max_nb_queue_pairs = 128;
+   /* No limit of number of sessions */
+   dev_info->sym.max_nb_sessions = 0;
+   dev_info->feature_flags = dev->feature_flags;
+
+   if (priv->version == UADK_CRYPTO_V2)
+   dev_info->capabilities = uadk_crypto_v2_capabilities;
+   }
+}
+
+/* Release queue pair */
+static int
+uadk_crypto_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+   struct uadk_qp *qp = dev->data->queue_pairs[qp_id];
+
+   if (qp) {
+   rte_ring_free(qp->processed_pkts);
+   rte_free(qp);
+   dev->data->queue_pairs[qp_id] = NULL;
+   }
+
+   return 0;
+}
+
+/* set a unique name for the queue pair based on its name, dev_id and qp_id */
+static int
+uadk_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
+   struct uadk_qp *qp)
+{
+   unsigned int n = snprintf(qp->name, sizeof(qp->name),
+ "uadk_crypto_pmd_%u_qp_%u",
+ dev->data->dev_id, qp->id);
+
+   if (n >= sizeof(qp->name))
+   return -EINVAL;
+
+   return 0;
+}
+
+/* Create a ring to place process packets on */
+static struct rte_ring *
+uadk_pmd_qp_create_processed_pkts_ring(struct uadk_qp *qp,
+  unsigned int ring_size, int socket_id)
+{
+   struct rte_ring *r = qp->processed_pkts;
+
+   if (r) {
+   if (rte_ring_get_size(r) >= ring_size) {
+   UADK_LOG(INFO, "Reusing existing ring %s for processed 
packets",
+qp->name);
+   return r;
+   }
+
+   UADK_LOG(ERR, "Unable to reuse existing ring %s for processed 
packets",
+qp->name);
+   return NULL;
+   }
+
+   return rte_ring_create(qp->name, ring_size, socket_id,
+  RING_F_EXACT_SZ);
+}
+
+static int
+uadk_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+const struct rte_cryptodev_qp_conf *qp_conf,
+int socket_id)
+{
+   struct uadk_qp *qp;
+
+   /* Free memory prior to re-allocation if needed. */
+

[PATCH v6 1/6] crypto/uadk: introduce uadk crypto driver

2022-10-26 Thread Zhangfei Gao
Introduce a new crypto PMD for hardware accelerators based on UADK [1].

UADK is a framework for user applications to access hardware accelerators.
UADK relies on the IOMMU SVA (Shared Virtual Address) feature, which shares
the same page table between the IOMMU and the MMU.
Thereby, user applications can directly use virtual addresses for device DMA,
which improves both performance and usability.

This patch adds the basic framework.

[1] https://github.com/Linaro/uadk

Signed-off-by: Zhangfei Gao 
---
 MAINTAINERS   |   6 +
 doc/guides/cryptodevs/features/uadk.ini   |  33 ++
 doc/guides/cryptodevs/index.rst   |   1 +
 doc/guides/cryptodevs/uadk.rst|  72 
 doc/guides/rel_notes/release_22_11.rst|   6 +
 drivers/crypto/meson.build|   1 +
 drivers/crypto/uadk/meson.build   |  30 +
 drivers/crypto/uadk/uadk_crypto_pmd.c | 110 ++
 drivers/crypto/uadk/uadk_crypto_pmd_private.h |  25 
 drivers/crypto/uadk/version.map   |   3 +
 10 files changed, 287 insertions(+)
 create mode 100644 doc/guides/cryptodevs/features/uadk.ini
 create mode 100644 doc/guides/cryptodevs/uadk.rst
 create mode 100644 drivers/crypto/uadk/meson.build
 create mode 100644 drivers/crypto/uadk/uadk_crypto_pmd.c
 create mode 100644 drivers/crypto/uadk/uadk_crypto_pmd_private.h
 create mode 100644 drivers/crypto/uadk/version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 6f56111323..bf9baa9070 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1060,6 +1060,12 @@ M: Kai Ji 
 F: drivers/crypto/scheduler/
 F: doc/guides/cryptodevs/scheduler.rst
 
+HiSilicon UADK crypto
+M: Zhangfei Gao 
+F: drivers/crypto/uadk/
+F: doc/guides/cryptodevs/uadk.rst
+F: doc/guides/cryptodevs/features/uadk.ini
+
 Intel QuickAssist
 M: Kai Ji 
 F: drivers/crypto/qat/
diff --git a/doc/guides/cryptodevs/features/uadk.ini 
b/doc/guides/cryptodevs/features/uadk.ini
new file mode 100644
index 00..df5ad40e3d
--- /dev/null
+++ b/doc/guides/cryptodevs/features/uadk.ini
@@ -0,0 +1,33 @@
+;
+; Supported features of the 'uadk' crypto driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+HW Accelerated = Y
+
+;
+; Supported crypto algorithms of the 'uadk' crypto driver.
+;
+[Cipher]
+
+;
+; Supported authentication algorithms of the 'uadk' crypto driver.
+;
+[Auth]
+
+;
+; Supported AEAD algorithms of the 'uadk' crypto driver.
+;
+[AEAD]
+
+;
+; Supported Asymmetric algorithms of the 'uadk' crypto driver.
+;
+[Asymmetric]
+
+;
+; Supported Operating systems of the 'uadk' crypto driver.
+;
+[OS]
+Linux = Y
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 39cca6dbde..cb4ce227e9 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -30,5 +30,6 @@ Crypto Device Drivers
 scheduler
 snow3g
 qat
+uadk
 virtio
 zuc
diff --git a/doc/guides/cryptodevs/uadk.rst b/doc/guides/cryptodevs/uadk.rst
new file mode 100644
index 00..2a0cb300e9
--- /dev/null
+++ b/doc/guides/cryptodevs/uadk.rst
@@ -0,0 +1,72 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved.
+Copyright 2022-2023 Linaro ltd.
+
+UADK Crypto Poll Mode Driver
+
+
+This code provides the initial implementation of the UADK poll mode
+driver. All cryptographic operations are using UADK library crypto API,
+which is algorithm level API, abstracting accelerators' low level
+implementations.
+
+UADK crypto PMD relies on UADK library [1]
+
+UADK is a framework for user applications to access hardware accelerators.
+UADK relies on IOMMU SVA (Shared Virtual Address) feature, which share
+the same page table between IOMMU and MMU.
+As a result, user application can directly use virtual address for device DMA,
+which enhances the performance as well as easy usability.
+
+
+Features
+
+
+UADK crypto PMD has support for:
+
+
+Test steps
+--
+
+   .. code-block:: console
+
+   1. Build UADK
+   $ git clone https://github.com/Linaro/uadk.git
+   $ cd uadk
+   $ mkdir build
+   $ ./autogen.sh
+   $ ./configure --prefix=$PWD/build
+   $ make
+   $ make install
+
+   * Without --prefix, UADK will be installed to /usr/local/lib by default
+   * If get error:"cannot find -lnuma", please install the libnuma-dev
+
+   2. Run pkg-config libwd to ensure env is setup correctly
+   $ export PKG_CONFIG_PATH=$PWD/build/lib/pkgconfig
+   $ pkg-config libwd --cflags --libs
+   -I/usr/local/include -L/usr/local/lib -lwd
+
+   * export PKG_CONFIG_PATH is required on demand,
+ not needed if UADK is installed to /usr/local/lib
+
+   3. Build DPDK
+   $ cd dpdk
+   $ mkdir build
+   $ meson build (--reconfigure)
+   $ cd build
+   $ ninja
+   $ sudo ninja i

[PATCH v6 3/6] crypto/uadk: support enqueue/dequeue operations

2022-10-26 Thread Zhangfei Gao
This commit adds the enqueue and dequeue operations.

Signed-off-by: Zhangfei Gao 
---
 drivers/crypto/uadk/uadk_crypto_pmd.c | 53 ++-
 1 file changed, 51 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/uadk/uadk_crypto_pmd.c 
b/drivers/crypto/uadk/uadk_crypto_pmd.c
index b383029a45..b889dac8b0 100644
--- a/drivers/crypto/uadk/uadk_crypto_pmd.c
+++ b/drivers/crypto/uadk/uadk_crypto_pmd.c
@@ -208,6 +208,55 @@ static struct rte_cryptodev_ops uadk_crypto_pmd_ops = {
.sym_session_clear  = NULL,
 };
 
+static uint16_t
+uadk_crypto_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+ uint16_t nb_ops)
+{
+   struct uadk_qp *qp = queue_pair;
+   struct rte_crypto_op *op;
+   uint16_t enqd = 0;
+   int i, ret;
+
+   for (i = 0; i < nb_ops; i++) {
+   op = ops[i];
+   op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+   if (op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
+   op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+   if (op->status != RTE_CRYPTO_OP_STATUS_ERROR) {
+   ret = rte_ring_enqueue(qp->processed_pkts, (void *)op);
+   if (ret < 0)
+   goto enqueue_err;
+   qp->qp_stats.enqueued_count++;
+   enqd++;
+   } else {
+   /* increment count if failed to enqueue op */
+   qp->qp_stats.enqueue_err_count++;
+   }
+   }
+
+   return enqd;
+
+enqueue_err:
+   qp->qp_stats.enqueue_err_count++;
+   return enqd;
+}
+
+static uint16_t
+uadk_crypto_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+ uint16_t nb_ops)
+{
+   struct uadk_qp *qp = queue_pair;
+   unsigned int nb_dequeued;
+
+   nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+   (void **)ops, nb_ops, NULL);
+   qp->qp_stats.dequeued_count += nb_dequeued;
+
+   return nb_dequeued;
+}
+
 static int
 uadk_cryptodev_probe(struct rte_vdev_device *vdev)
 {
@@ -244,8 +293,8 @@ uadk_cryptodev_probe(struct rte_vdev_device *vdev)
 
dev->dev_ops = &uadk_crypto_pmd_ops;
dev->driver_id = uadk_cryptodev_driver_id;
-   dev->dequeue_burst = NULL;
-   dev->enqueue_burst = NULL;
+   dev->dequeue_burst = uadk_crypto_dequeue_burst;
+   dev->enqueue_burst = uadk_crypto_enqueue_burst;
dev->feature_flags = RTE_CRYPTODEV_FF_HW_ACCELERATED;
priv = dev->data->dev_private;
priv->version = version;
-- 
2.38.1



[PATCH v6 4/6] crypto/uadk: support cipher algorithms

2022-10-26 Thread Zhangfei Gao
Add support for cipher algorithms,
including AES_ECB, AES_CBC, AES_XTS, and DES_CBC mode.

Signed-off-by: Zhangfei Gao 
---
 doc/guides/cryptodevs/features/uadk.ini   |  10 +
 doc/guides/cryptodevs/uadk.rst|   6 +
 drivers/crypto/uadk/uadk_crypto_pmd.c | 303 +-
 drivers/crypto/uadk/uadk_crypto_pmd_private.h |  23 ++
 4 files changed, 337 insertions(+), 5 deletions(-)

diff --git a/doc/guides/cryptodevs/features/uadk.ini 
b/doc/guides/cryptodevs/features/uadk.ini
index df5ad40e3d..005e08ac8d 100644
--- a/doc/guides/cryptodevs/features/uadk.ini
+++ b/doc/guides/cryptodevs/features/uadk.ini
@@ -4,12 +4,22 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Symmetric crypto   = Y
 HW Accelerated = Y
 
 ;
 ; Supported crypto algorithms of the 'uadk' crypto driver.
 ;
 [Cipher]
+AES CBC (128)  = Y
+AES CBC (192)  = Y
+AES CBC (256)  = Y
+AES ECB (128)  = Y
+AES ECB (192)  = Y
+AES ECB (256)  = Y
+AES XTS (128)  = Y
+AES XTS (256)  = Y
+DES CBC= Y
 
 ;
 ; Supported authentication algorithms of the 'uadk' crypto driver.
diff --git a/doc/guides/cryptodevs/uadk.rst b/doc/guides/cryptodevs/uadk.rst
index 2a0cb300e9..e0ca097961 100644
--- a/doc/guides/cryptodevs/uadk.rst
+++ b/doc/guides/cryptodevs/uadk.rst
@@ -24,6 +24,12 @@ Features
 
 UADK crypto PMD has support for:
 
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_AES_ECB``
+* ``RTE_CRYPTO_CIPHER_AES_CBC``
+* ``RTE_CRYPTO_CIPHER_AES_XTS``
+* ``RTE_CRYPTO_CIPHER_DES_CBC``
 
 Test steps
 --
diff --git a/drivers/crypto/uadk/uadk_crypto_pmd.c 
b/drivers/crypto/uadk/uadk_crypto_pmd.c
index b889dac8b0..f112a96033 100644
--- a/drivers/crypto/uadk/uadk_crypto_pmd.c
+++ b/drivers/crypto/uadk/uadk_crypto_pmd.c
@@ -18,6 +18,86 @@
 static uint8_t uadk_cryptodev_driver_id;
 
 static const struct rte_cryptodev_capabilities uadk_crypto_v2_capabilities[] = 
{
+   {   /* AES ECB */
+   .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+   {.sym = {
+   .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+   {.cipher = {
+   .algo = RTE_CRYPTO_CIPHER_AES_ECB,
+   .block_size = 16,
+   .key_size = {
+   .min = 16,
+   .max = 32,
+   .increment = 8
+   },
+   .iv_size = {
+   .min = 0,
+   .max = 0,
+   .increment = 0
+   }
+   }, }
+   }, }
+   },
+   {   /* AES CBC */
+   .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+   {.sym = {
+   .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+   {.cipher = {
+   .algo = RTE_CRYPTO_CIPHER_AES_CBC,
+   .block_size = 16,
+   .key_size = {
+   .min = 16,
+   .max = 32,
+   .increment = 8
+   },
+   .iv_size = {
+   .min = 16,
+   .max = 16,
+   .increment = 0
+   }
+   }, }
+   }, }
+   },
+   {   /* AES XTS */
+   .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+   {.sym = {
+   .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+   {.cipher = {
+   .algo = RTE_CRYPTO_CIPHER_AES_XTS,
+   .block_size = 1,
+   .key_size = {
+   .min = 32,
+   .max = 64,
+   .increment = 32
+   },
+   .iv_size = {
+   .min = 0,
+   .max = 0,
+   .increment = 0
+   }
+   }, }
+   }, }
+   },
+   {   /* DES CBC */
+   .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+   {.sym = {
+   .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+   {.cipher = {
+   .algo = RTE_CRYPTO_CIPHER_DES_CBC,
+   .block_size = 8,
+   .key_size = {
+   .min = 8,
+   .max = 8,
+ 

[PATCH v6 5/6] crypto/uadk: support auth algorithms

2022-10-26 Thread Zhangfei Gao
Add support for MD5, SHA1, SHA224, SHA256, SHA384, SHA512
Authentication algorithms with and without HMAC.

Signed-off-by: Zhangfei Gao 
---
 doc/guides/cryptodevs/features/uadk.ini   |  12 +
 doc/guides/cryptodevs/uadk.rst|  15 +
 drivers/crypto/uadk/uadk_crypto_pmd.c | 447 ++
 drivers/crypto/uadk/uadk_crypto_pmd_private.h |  12 +
 4 files changed, 486 insertions(+)

diff --git a/doc/guides/cryptodevs/features/uadk.ini 
b/doc/guides/cryptodevs/features/uadk.ini
index 005e08ac8d..2e8a37a2b3 100644
--- a/doc/guides/cryptodevs/features/uadk.ini
+++ b/doc/guides/cryptodevs/features/uadk.ini
@@ -25,6 +25,18 @@ DES CBC= Y
 ; Supported authentication algorithms of the 'uadk' crypto driver.
 ;
 [Auth]
+MD5  = Y
+MD5 HMAC = Y
+SHA1 = Y
+SHA1 HMAC= Y
+SHA224   = Y
+SHA224 HMAC  = Y
+SHA256   = Y
+SHA256 HMAC  = Y
+SHA384   = Y
+SHA384 HMAC  = Y
+SHA512   = Y
+SHA512 HMAC  = Y
 
 ;
 ; Supported AEAD algorithms of the 'uadk' crypto driver.
diff --git a/doc/guides/cryptodevs/uadk.rst b/doc/guides/cryptodevs/uadk.rst
index e0ca097961..19bb461e72 100644
--- a/doc/guides/cryptodevs/uadk.rst
+++ b/doc/guides/cryptodevs/uadk.rst
@@ -31,6 +31,21 @@ Cipher algorithms:
 * ``RTE_CRYPTO_CIPHER_AES_XTS``
 * ``RTE_CRYPTO_CIPHER_DES_CBC``
 
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_MD5``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA1``
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+
 Test steps
 --
 
diff --git a/drivers/crypto/uadk/uadk_crypto_pmd.c 
b/drivers/crypto/uadk/uadk_crypto_pmd.c
index f112a96033..2ba9804981 100644
--- a/drivers/crypto/uadk/uadk_crypto_pmd.c
+++ b/drivers/crypto/uadk/uadk_crypto_pmd.c
@@ -18,6 +18,252 @@
 static uint8_t uadk_cryptodev_driver_id;
 
 static const struct rte_cryptodev_capabilities uadk_crypto_v2_capabilities[] = 
{
+   {   /* MD5 HMAC */
+   .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+   {.sym = {
+   .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+   {.auth = {
+   .algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+   .block_size = 64,
+   .key_size = {
+   .min = 1,
+   .max = 64,
+   .increment = 1
+   },
+   .digest_size = {
+   .min = 16,
+   .max = 16,
+   .increment = 0
+   },
+   .iv_size = { 0 }
+   }, }
+   }, }
+   },
+   {   /* MD5 */
+   .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+   {.sym = {
+   .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+   {.auth = {
+   .algo = RTE_CRYPTO_AUTH_MD5,
+   .block_size = 64,
+   .key_size = {
+   .min = 0,
+   .max = 0,
+   .increment = 0
+   },
+   .digest_size = {
+   .min = 16,
+   .max = 16,
+   .increment = 0
+   },
+   }, }
+   }, }
+   },
+   {   /* SHA1 HMAC */
+   .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+   {.sym = {
+   .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+   {.auth = {
+   .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+   .block_size = 64,
+   .key_size = {
+   .min = 1,
+   .max = 64,
+   .increment = 1
+   },
+   .digest_size = {
+   .min = 20,
+   .max = 20,
+   .increment = 0
+   },
+   .iv_size = { 0 }
+   }, }
+   }, }
+   },
+   {   /* SHA1 */
+   .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+   {.sym = {
+   .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+   {.aut

[PATCH v6 6/6] test/crypto: support uadk PMD

2022-10-26 Thread Zhangfei Gao
Example:
sudo dpdk-test --vdev=crypto_uadk --log-level=6
RTE>>cryptodev_uadk_autotest
RTE>>quit

Signed-off-by: Zhangfei Gao 
---
 app/test/meson.build   | 1 +
 app/test/test_cryptodev.c  | 7 +++
 app/test/test_cryptodev.h  | 1 +
 doc/guides/cryptodevs/uadk.rst | 3 +++
 4 files changed, 12 insertions(+)

diff --git a/app/test/meson.build b/app/test/meson.build
index 396b133959..f34d19e3c3 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -312,6 +312,7 @@ driver_test_names = [
 'cryptodev_sw_mvsam_autotest',
 'cryptodev_sw_snow3g_autotest',
 'cryptodev_sw_zuc_autotest',
+'cryptodev_uadk_autotest',
 'dmadev_autotest',
 'rawdev_autotest',
 ]
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 43fcef7e73..101a68fb70 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -16538,6 +16538,12 @@ test_cryptodev_qat(void)
return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
 }
 
+static int
+test_cryptodev_uadk(void)
+{
+   return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_UADK_PMD));
+}
+
 static int
 test_cryptodev_virtio(void)
 {
@@ -16881,6 +16887,7 @@ REGISTER_TEST_COMMAND(cryptodev_sw_mvsam_autotest, 
test_cryptodev_mrvl);
 REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_autotest, test_cryptodev_dpaa2_sec);
 REGISTER_TEST_COMMAND(cryptodev_dpaa_sec_autotest, test_cryptodev_dpaa_sec);
 REGISTER_TEST_COMMAND(cryptodev_ccp_autotest, test_cryptodev_ccp);
+REGISTER_TEST_COMMAND(cryptodev_uadk_autotest, test_cryptodev_uadk);
 REGISTER_TEST_COMMAND(cryptodev_virtio_autotest, test_cryptodev_virtio);
 REGISTER_TEST_COMMAND(cryptodev_octeontx_autotest, test_cryptodev_octeontx);
 REGISTER_TEST_COMMAND(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 29a7d4db2b..abd795f54a 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -74,6 +74,7 @@
 #define CRYPTODEV_NAME_CN9K_PMD    crypto_cn9k
 #define CRYPTODEV_NAME_CN10K_PMD   crypto_cn10k
 #define CRYPTODEV_NAME_MLX5_PMD    crypto_mlx5
+#define CRYPTODEV_NAME_UADK_PMD    crypto_uadk
 
 
 enum cryptodev_api_test_type {
diff --git a/doc/guides/cryptodevs/uadk.rst b/doc/guides/cryptodevs/uadk.rst
index 19bb461e72..c9508ecd9e 100644
--- a/doc/guides/cryptodevs/uadk.rst
+++ b/doc/guides/cryptodevs/uadk.rst
@@ -88,6 +88,9 @@ Test steps
$ mount -t hugetlbfs none /mnt/huge_2mb -o pagesize=2MB
 
5. Run test app
+   $ sudo dpdk-test --vdev=crypto_uadk --log-level=6
+ RTE>>cryptodev_uadk_autotest
+ RTE>>quit
 
 
 [1] https://github.com/Linaro/uadk
-- 
2.38.1



[PATCH] app/testpmd: fix MAC header in csum forward engine

2022-10-26 Thread Gregory Etelson
The MLX5 SR-IOV Tx engine will not transmit an Ethernet frame
if the destination MAC address matches the local port address. The frame
is either looped back to Rx or dropped, depending on the port configuration.

An application running over an MLX5 SR-IOV port therefore cannot transmit
a packet polled from the Rx queue as-is; the packet's Ethernet destination
address must be changed.

The patch adds a new run-time configuration parameter to the `csum`
forwarding engine to control MAC address configuration:

testpmd> csum mac-swap on|off <port_id>

`mac-swap on`  replaces the MAC addresses.
`mac-swap off` keeps the Ethernet header unchanged.
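
For example, to enable the swap on port 0 before starting forwarding
(the port number is only illustrative):

    testpmd> set fwd csum
    testpmd> csum mac-swap on 0
    testpmd> start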

Fixes: 9b4ea7ae77fa ("app/testpmd: revert MAC update in checksum forwarding")
Signed-off-by: Gregory Etelson 
---
 app/test-pmd/cmdline.c  | 50 +
 app/test-pmd/csumonly.c |  6 +
 app/test-pmd/testpmd.c  |  5 +++--
 app/test-pmd/testpmd.h  |  3 ++-
 4 files changed, 61 insertions(+), 3 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 8dc60e9388..3fbcb6ca8f 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -4793,6 +4793,55 @@ static cmdline_parse_inst_t cmd_csum_tunnel = {
},
 };
 
+struct cmd_csum_mac_swap_result {
+   cmdline_fixed_string_t csum;
+   cmdline_fixed_string_t parse;
+   cmdline_fixed_string_t onoff;
+   portid_t port_id;
+};
+
+static void
+cmd_csum_mac_swap_parsed(void *parsed_result,
+  __rte_unused struct cmdline *cl,
+  __rte_unused void *data)
+{
+   struct cmd_csum_mac_swap_result *res = parsed_result;
+
+   if (port_id_is_invalid(res->port_id, ENABLED_WARN))
+   return;
+   if (strcmp(res->onoff, "on") == 0)
+   ports[res->port_id].fwd_mac_swap = 1;
+   else
+   ports[res->port_id].fwd_mac_swap = 0;
+}
+
+static cmdline_parse_token_string_t cmd_csum_mac_swap_csum =
+   TOKEN_STRING_INITIALIZER(struct cmd_csum_mac_swap_result,
+csum, "csum");
+static cmdline_parse_token_string_t cmd_csum_mac_swap_parse =
+   TOKEN_STRING_INITIALIZER(struct cmd_csum_mac_swap_result,
+parse, "mac-swap");
+static cmdline_parse_token_string_t cmd_csum_mac_swap_onoff =
+   TOKEN_STRING_INITIALIZER(struct cmd_csum_mac_swap_result,
+onoff, "on#off");
+static cmdline_parse_token_num_t cmd_csum_mac_swap_portid =
+   TOKEN_NUM_INITIALIZER(struct cmd_csum_mac_swap_result,
+ port_id, RTE_UINT16);
+
+static cmdline_parse_inst_t cmd_csum_mac_swap = {
+   .f = cmd_csum_mac_swap_parsed,
+   .data = NULL,
+   .help_str = "csum mac-swap on|off : "
+   "Enable/Disable forward mac address swap",
+   .tokens = {
+   (void *)&cmd_csum_mac_swap_csum,
+   (void *)&cmd_csum_mac_swap_parse,
+   (void *)&cmd_csum_mac_swap_onoff,
+   (void *)&cmd_csum_mac_swap_portid,
+   NULL,
+   },
+};
+
 /* *** ENABLE HARDWARE SEGMENTATION IN TX NON-TUNNELED PACKETS *** */
 struct cmd_tso_set_result {
cmdline_fixed_string_t tso;
@@ -12628,6 +12677,7 @@ static cmdline_parse_ctx_t builtin_ctx[] = {
(cmdline_parse_inst_t *)&cmd_csum_set,
(cmdline_parse_inst_t *)&cmd_csum_show,
(cmdline_parse_inst_t *)&cmd_csum_tunnel,
+   (cmdline_parse_inst_t *)&cmd_csum_mac_swap,
(cmdline_parse_inst_t *)&cmd_tso_set,
(cmdline_parse_inst_t *)&cmd_tso_show,
(cmdline_parse_inst_t *)&cmd_tunnel_tso_set,
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 144f28819c..1c24598515 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -915,6 +915,12 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 * and inner headers */
 
eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+   if (ports[fs->tx_port].fwd_mac_swap) {
+   rte_ether_addr_copy(&peer_eth_addrs[fs->peer_addr],
+   ð_hdr->dst_addr);
+   rte_ether_addr_copy(&ports[fs->tx_port].eth_addr,
+   ð_hdr->src_addr);
+   }
parse_ethernet(eth_hdr, &info);
l3_hdr = (char *)eth_hdr + info.l2_len;
 
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 97adafacd0..a61142bd32 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -4218,10 +4218,11 @@ init_port(void)
"rte_zmalloc(%d struct rte_port) failed\n",
RTE_MAX_ETHPORTS);
}
-   for (i = 0; i < RTE_MAX_ETHPORTS; i++)
+   for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+   ports[i].fwd_mac_swap = 1;
ports[i].xstats_info.allocated = false;
-   for (i = 0; i < RTE_MAX_ETHPORTS; i++)
LIST_INIT(&ports[i].flow_tunne

RE: [dpdk-dev v4] crypto/ipsec_mb: multi-process IPC request handler

2022-10-26 Thread Kusztal, ArkadiuszX



> -Original Message-
> From: Kai Ji 
> Sent: Wednesday, October 26, 2022 12:28 PM
> To: dev@dpdk.org
> Cc: gak...@marvell.com; Ji, Kai ; De Lara Guarch, Pablo
> ; Burakov, Anatoly
> 
> Subject: [dpdk-dev v4] crypto/ipsec_mb: multi-process IPC request handler
> 
> As the queue pair used in secondary process need to be setuped by the primary
> process, this patch add an IPC register function to help secondary process to
> send out queue-pair setup reguest to primary process via IPC messages. A new
> "qp_in_used_pid" param stores the PID to provide the ownership of the queue-
> pair so that only the PID matched queue-pair can be free'd in the request.
> 
> Signed-off-by: Kai Ji 
> 
> --
> 2.17.1
Acked-by: Arek Kusztal 



Re: [PATCH v2] net/gve: fix meson build failure on non-Linux platforms

2022-10-26 Thread Ferruh Yigit

On 10/26/2022 11:23 AM, Junfeng Guo wrote:

Meson build may fail on FreeBSD with gcc and clang, due to missing
the header file linux/pci_regs.h on non-Linux platform. Thus, in
this patch, we removed the file include and added the used Macros
derived from linux/pci_regs.h.

Fixes: 3047a5ac8e66 ("net/gve: add support for device initialization")

Signed-off-by: Junfeng Guo 



Squashed into relevant commit in next-net, thanks.


RE: [dpdk-dev v4] crypto/ipsec_mb: multi-process IPC request handler

2022-10-26 Thread De Lara Guarch, Pablo
Hi Kai,

A couple of minor bits left. 

> -Original Message-
> From: Ji, Kai 
> Sent: Wednesday, October 26, 2022 11:28 AM
> To: dev@dpdk.org
> Cc: gak...@marvell.com; Ji, Kai ; De Lara Guarch, Pablo
> ; Burakov, Anatoly
> 
> Subject: [dpdk-dev v4] crypto/ipsec_mb: multi-process IPC request handler
> 
> As the queue pair used in secondary process need to be setuped by the
needs/needed to be set up by...

> primary process, this patch add an IPC register function to help secondary
This patch adds

> process to send out queue-pair setup reguest to primary process via IPC
request

> messages. A new "qp_in_used_pid" param stores the PID to provide the
> ownership of the queue-pair so that only the PID matched queue-pair can be
> free'd in the request.
> 
> Signed-off-by: Kai Ji 


> --- a/drivers/crypto/ipsec_mb/ipsec_mb_private.h
> +++ b/drivers/crypto/ipsec_mb/ipsec_mb_private.h
> @@ -25,6 +25,9 @@
>  /* Maximum length for memzone name */
>  #define IPSEC_MB_MAX_MZ_NAME 32
> 
> +/* ipsec mb multi-process queue pair config */ #define
> IPSEC_MB_MP_MSG
> +"ipsec_mb_mp_msg"
> +
>  enum ipsec_mb_vector_mode {
>   IPSEC_MB_NOT_SUPPORTED = 0,
>   IPSEC_MB_SSE,
> @@ -142,18 +145,49 @@ struct ipsec_mb_qp {
>   enum ipsec_mb_pmd_types pmd_type;
>   /**< pmd type */
>   uint8_t digest_idx;
> + /**< The process id used for queue pairs **/
> + uint16_t qp_used_by_pid;
>   /**< Index of the next
>* slot to be used in temp_digests,
>* to store the digest for a given operation
>*/

Comments are mixed here (digest_idx and qp_used_by_pid).

>   IMB_MGR *mb_mgr;
> - /* Multi buffer manager */
> + /**< Multi buffer manager */
>   const struct rte_memzone *mb_mgr_mz;
> - /* Shared memzone for storing mb_mgr */
> + /**< Shared memzone for storing mb_mgr */
>   __extension__ uint8_t additional_data[];
>   /**< Storing PMD specific additional data */  };
> 



[dpdk-dev v4] crypto/ipsec_mb: multi-process IPC request handler

2022-10-26 Thread Kai Ji
As the queue pair used in secondary process needs to be set up by
the primary process, this patch adds an IPC register function to help
secondary process to send out queue-pair setup reguest to primary
process via IPC request messages. A new "qp_in_used_pid" param stores
the PID to provide the ownership of the queue-pair so that only the PID
matched queue-pair can be free'd in the request.

Signed-off-by: Kai Ji 
Acked-by: Arek Kusztal 
---
v4:
- review comments resolved

v3:
- remove shared memzone as qp_conf params can be passed directly from
ipc message.

v2:
- add in shared memzone for data exchange between multi-process
---
 drivers/crypto/ipsec_mb/ipsec_mb_ops.c | 129 -
 drivers/crypto/ipsec_mb/ipsec_mb_private.c |  24 +++-
 drivers/crypto/ipsec_mb/ipsec_mb_private.h |  38 +-
 3 files changed, 185 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c 
b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
index cedcaa2742..bf18d692bd 100644
--- a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
+++ b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
@@ -3,6 +3,7 @@
  */

 #include 
+#include 

 #include 
 #include 
@@ -93,6 +94,46 @@ ipsec_mb_info_get(struct rte_cryptodev *dev,
}
 }

+static int
+ipsec_mb_secondary_qp_op(int dev_id, int qp_id,
+   const struct rte_cryptodev_qp_conf *qp_conf,
+   int socket_id, enum ipsec_mb_mp_req_type op_type)
+{
+   int ret;
+   struct rte_mp_msg qp_req_msg;
+   struct rte_mp_msg *qp_resp_msg;
+   struct rte_mp_reply qp_resp;
+   struct ipsec_mb_mp_param *req_param;
+   struct ipsec_mb_mp_param *resp_param;
+   struct timespec ts = {.tv_sec = 1, .tv_nsec = 0};
+
+   memset(&qp_req_msg, 0, sizeof(IPSEC_MB_MP_MSG));
+   memcpy(qp_req_msg.name, IPSEC_MB_MP_MSG, sizeof(IPSEC_MB_MP_MSG));
+   req_param = (struct ipsec_mb_mp_param *)&qp_req_msg.param;
+
+   qp_req_msg.len_param = sizeof(struct ipsec_mb_mp_param);
+   req_param->type = op_type;
+   req_param->dev_id = dev_id;
+   req_param->qp_id = qp_id;
+   req_param->socket_id = socket_id;
+   req_param->process_id = getpid();
+   if (qp_conf) {
+   req_param->nb_descriptors = qp_conf->nb_descriptors;
+   req_param->mp_session = (void *)qp_conf->mp_session;
+   }
+
+   qp_req_msg.num_fds = 0;
+   ret = rte_mp_request_sync(&qp_req_msg, &qp_resp, &ts);
+   if (ret) {
+   RTE_LOG(ERR, USER1, "Create MR request to primary process 
failed.");
+   return -1;
+   }
+   qp_resp_msg = &qp_resp.msgs[0];
+   resp_param = (struct ipsec_mb_mp_param *)qp_resp_msg->param;
+
+   return resp_param->result;
+}
+
 /** Release queue pair */
 int
 ipsec_mb_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
@@ -100,7 +141,10 @@ ipsec_mb_qp_release(struct rte_cryptodev *dev, uint16_t 
qp_id)
struct ipsec_mb_qp *qp = dev->data->queue_pairs[qp_id];
struct rte_ring *r = NULL;

-   if (qp != NULL && rte_eal_process_type() == RTE_PROC_PRIMARY) {
+   if (qp != NULL)
+   return 0;
+
+   if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
r = rte_ring_lookup(qp->name);
rte_ring_free(r);

@@ -115,6 +159,9 @@ ipsec_mb_qp_release(struct rte_cryptodev *dev, uint16_t 
qp_id)
 #endif
rte_free(qp);
dev->data->queue_pairs[qp_id] = NULL;
+   } else { /* secondary process */
+   return ipsec_mb_secondary_qp_op(dev->data->dev_id, qp_id,
+   NULL, 0, RTE_IPSEC_MB_MP_REQ_QP_FREE);
}
return 0;
 }
@@ -222,9 +269,13 @@ ipsec_mb_qp_setup(struct rte_cryptodev *dev, uint16_t 
qp_id,
 #endif
qp = dev->data->queue_pairs[qp_id];
if (qp == NULL) {
-   IPSEC_MB_LOG(ERR, "Primary process hasn't configured 
device qp.");
-   return -EINVAL;
+   IPSEC_MB_LOG(DEBUG, "Secondary process setting up 
device qp.");
+   return ipsec_mb_secondary_qp_op(dev->data->dev_id, 
qp_id,
+   qp_conf, socket_id, 
RTE_IPSEC_MB_MP_REQ_QP_SET);
}
+
+   IPSEC_MB_LOG(ERR, "Queue pair already setup'ed.");
+   return -EINVAL;
} else {
/* Free memory prior to re-allocation if needed. */
if (dev->data->queue_pairs[qp_id] != NULL)
@@ -296,6 +347,78 @@ ipsec_mb_qp_setup(struct rte_cryptodev *dev, uint16_t 
qp_id,
return ret;
 }

+int
+ipsec_mb_ipc_request(const struct rte_mp_msg *mp_msg, const void *peer)
+{
+   struct rte_mp_msg ipc_resp;
+   struct ipsec_mb_mp_param *resp_param =
+   (struct ipsec_mb_mp_param *)ipc_resp.param;
+   const struct ipsec_mb_mp_param *req_param =
+   (const struct ipsec_mb_mp_param *)mp_msg->param;
+
+   int ret;
+   str

RE: [dpdk-dev v4] crypto/ipsec_mb: multi-process IPC request handler

2022-10-26 Thread De Lara Guarch, Pablo
Hi Kai,


> -Original Message-
> From: Ji, Kai 
> Sent: Wednesday, October 26, 2022 1:49 PM
> To: dev@dpdk.org
> Cc: gak...@marvell.com; Ji, Kai ; De Lara Guarch, Pablo
> ; Burakov, Anatoly
> 
> Subject: [dpdk-dev v4] crypto/ipsec_mb: multi-process IPC request handler
> 
> As the queue pair used in secondary process needs to be set up by the
> primary process, this patch adds an IPC register function to help secondary
> process to send out queue-pair setup reguest to primary process via IPC

Still typo: reguest -> request.

> request messages. A new "qp_in_used_pid" param stores the PID to provide
> the ownership of the queue-pair so that only the PID matched queue-pair
> can be free'd in the request.
> 
> Signed-off-by: Kai Ji 
> Acked-by: Arek Kusztal 

Just one missing typo to fix. Apart from that:

Acked-by: Pablo de Lara  


[dpdk-dev v5] crypto/ipsec_mb: multi-process IPC request handler

2022-10-26 Thread Kai Ji
As the queue pair used in a secondary process needs to be set up by
the primary process, this patch adds an IPC register function to help
the secondary process send a queue-pair setup request to the primary
process via IPC request messages. A new "qp_in_used_pid" param stores
the PID to provide ownership of the queue pair so that only the queue
pair with a matching PID can be freed in the request.
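
On the primary process side, the request is served by ipsec_mb_ipc_request(),
which has to be registered for the IPSEC_MB_MP_MSG name so that the
rte_mp_request_sync() calls from secondary processes reach it. A minimal
sketch of such a registration (assumed shape only; the actual code lives in
ipsec_mb_private.c and is not shown in full here):

    #include <errno.h>

    #include <rte_eal.h>
    #include <rte_errno.h>

    #include "ipsec_mb_private.h"

    static int
    ipsec_mb_register_ipc(void)
    {
            int ret;

            /* Only the primary process serves the IPC requests. */
            if (rte_eal_process_type() != RTE_PROC_PRIMARY)
                    return 0;

            ret = rte_mp_action_register(IPSEC_MB_MP_MSG,
                            ipsec_mb_ipc_request);
            /* ENOTSUP means IPC is disabled (e.g. --no-shconf). */
            if (ret != 0 && rte_errno != ENOTSUP)
                    return -rte_errno;

            return 0;
    }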

Signed-off-by: Kai Ji 
Acked-by: Arek Kusztal 
Acked-by: Pablo de Lara 
---
v5:
- minor updates and typo fix

v4:
- review comments resolved

v3:
- remove shared memzone as qp_conf params can be passed directly from
ipc message.

v2:
- add in shared memzone for data exchange between multi-process
---
 drivers/crypto/ipsec_mb/ipsec_mb_ops.c | 129 -
 drivers/crypto/ipsec_mb/ipsec_mb_private.c |  24 +++-
 drivers/crypto/ipsec_mb/ipsec_mb_private.h |  38 +-
 3 files changed, 185 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c 
b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
index cedcaa2742..bf18d692bd 100644
--- a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
+++ b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
@@ -3,6 +3,7 @@
  */

 #include 
+#include 

 #include 
 #include 
@@ -93,6 +94,46 @@ ipsec_mb_info_get(struct rte_cryptodev *dev,
}
 }

+static int
+ipsec_mb_secondary_qp_op(int dev_id, int qp_id,
+   const struct rte_cryptodev_qp_conf *qp_conf,
+   int socket_id, enum ipsec_mb_mp_req_type op_type)
+{
+   int ret;
+   struct rte_mp_msg qp_req_msg;
+   struct rte_mp_msg *qp_resp_msg;
+   struct rte_mp_reply qp_resp;
+   struct ipsec_mb_mp_param *req_param;
+   struct ipsec_mb_mp_param *resp_param;
+   struct timespec ts = {.tv_sec = 1, .tv_nsec = 0};
+
+   memset(&qp_req_msg, 0, sizeof(IPSEC_MB_MP_MSG));
+   memcpy(qp_req_msg.name, IPSEC_MB_MP_MSG, sizeof(IPSEC_MB_MP_MSG));
+   req_param = (struct ipsec_mb_mp_param *)&qp_req_msg.param;
+
+   qp_req_msg.len_param = sizeof(struct ipsec_mb_mp_param);
+   req_param->type = op_type;
+   req_param->dev_id = dev_id;
+   req_param->qp_id = qp_id;
+   req_param->socket_id = socket_id;
+   req_param->process_id = getpid();
+   if (qp_conf) {
+   req_param->nb_descriptors = qp_conf->nb_descriptors;
+   req_param->mp_session = (void *)qp_conf->mp_session;
+   }
+
+   qp_req_msg.num_fds = 0;
+   ret = rte_mp_request_sync(&qp_req_msg, &qp_resp, &ts);
+   if (ret) {
+   RTE_LOG(ERR, USER1, "Create MR request to primary process 
failed.");
+   return -1;
+   }
+   qp_resp_msg = &qp_resp.msgs[0];
+   resp_param = (struct ipsec_mb_mp_param *)qp_resp_msg->param;
+
+   return resp_param->result;
+}
+
 /** Release queue pair */
 int
 ipsec_mb_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
@@ -100,7 +141,10 @@ ipsec_mb_qp_release(struct rte_cryptodev *dev, uint16_t 
qp_id)
struct ipsec_mb_qp *qp = dev->data->queue_pairs[qp_id];
struct rte_ring *r = NULL;

-   if (qp != NULL && rte_eal_process_type() == RTE_PROC_PRIMARY) {
+   if (qp != NULL)
+   return 0;
+
+   if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
r = rte_ring_lookup(qp->name);
rte_ring_free(r);

@@ -115,6 +159,9 @@ ipsec_mb_qp_release(struct rte_cryptodev *dev, uint16_t 
qp_id)
 #endif
rte_free(qp);
dev->data->queue_pairs[qp_id] = NULL;
+   } else { /* secondary process */
+   return ipsec_mb_secondary_qp_op(dev->data->dev_id, qp_id,
+   NULL, 0, RTE_IPSEC_MB_MP_REQ_QP_FREE);
}
return 0;
 }
@@ -222,9 +269,13 @@ ipsec_mb_qp_setup(struct rte_cryptodev *dev, uint16_t 
qp_id,
 #endif
qp = dev->data->queue_pairs[qp_id];
if (qp == NULL) {
-   IPSEC_MB_LOG(ERR, "Primary process hasn't configured 
device qp.");
-   return -EINVAL;
+   IPSEC_MB_LOG(DEBUG, "Secondary process setting up 
device qp.");
+   return ipsec_mb_secondary_qp_op(dev->data->dev_id, 
qp_id,
+   qp_conf, socket_id, 
RTE_IPSEC_MB_MP_REQ_QP_SET);
}
+
+   IPSEC_MB_LOG(ERR, "Queue pair already setup'ed.");
+   return -EINVAL;
} else {
/* Free memory prior to re-allocation if needed. */
if (dev->data->queue_pairs[qp_id] != NULL)
@@ -296,6 +347,78 @@ ipsec_mb_qp_setup(struct rte_cryptodev *dev, uint16_t 
qp_id,
return ret;
 }

+int
+ipsec_mb_ipc_request(const struct rte_mp_msg *mp_msg, const void *peer)
+{
+   struct rte_mp_msg ipc_resp;
+   struct ipsec_mb_mp_param *resp_param =
+   (struct ipsec_mb_mp_param *)ipc_resp.param;
+   const struct ipsec_mb_mp_param *req_param =
+   (const struct ipsec_

[Bug 993] [build][21.11] build failure using meson 0.62.0

2022-10-26 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=993

Kevin Traynor (ktray...@redhat.com) changed:

   What|Removed |Added

 Resolution|--- |FIXED
 Status|UNCONFIRMED |RESOLVED
 CC||ktray...@redhat.com

--- Comment #5 from Kevin Traynor (ktray...@redhat.com) ---
This patch has been backported to 21.11 stable branch [1] and released as part
of DPDK 21.11.1.

Marking bug as RESOLVED as Reporter has confirmed this patch fixed the build.

[1]
https://git.dpdk.org/dpdk-stable/commit/?h=21.11&id=0c7cbe52f76963a6d8f6466fc077a9c0cb695b37

-- 
You are receiving this mail because:
You are the assignee for the bug.

Re: [PATCH v6 0/4] mempool: fix mempool cache flushing algorithm

2022-10-26 Thread Thomas Monjalon
10/10/2022 17:21, Thomas Monjalon:
> > Andrew Rybchenko (3):
> >   mempool: check driver enqueue result in one place
> >   mempool: avoid usage of term ring on put
> >   mempool: flush cache completely on overflow
> > 
> > Morten Brørup (1):
> >   mempool: fix cache flushing algorithm
> 
> Applied only first 2 "cosmetic" patches as discussed with Andrew.
> The goal is to make some performance tests
> before merging the rest of the series.

The last 2 patches are now merged in 22.11-rc2

There were some comments about improving alignment on cache.
Is it something we want in this release?




RE: [PATCH v6 0/4] mempool: fix mempool cache flushing algorithm

2022-10-26 Thread Morten Brørup
> From: Thomas Monjalon [mailto:tho...@monjalon.net]
> Sent: Wednesday, 26 October 2022 16.09
> 
> 10/10/2022 17:21, Thomas Monjalon:
> > > Andrew Rybchenko (3):
> > >   mempool: check driver enqueue result in one place
> > >   mempool: avoid usage of term ring on put
> > >   mempool: flush cache completely on overflow
> > >
> > > Morten Brørup (1):
> > >   mempool: fix cache flushing algorithm
> >
> > Applied only first 2 "cosmetic" patches as discussed with Andrew.
> > The goal is to make some performance tests
> > before merging the rest of the series.
> 
> The last 2 patches are now merged in 22.11-rc2

Thank you.

> 
> There were some comments about improving alignment on cache.
> Is it something we want in this release?

I think so, yes. Jerin also agreed that it was a good idea.

I will send a patch in a few minutes.

-Morten



[Bug 1115] i40e: RSS hash conf update fails when hash proto mask in rx_adv_conf is 0

2022-10-26 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1115

Bug ID: 1115
   Summary: i40e: RSS hash conf update fails when hash proto mask
in rx_adv_conf is 0
   Product: DPDK
   Version: unspecified
  Hardware: All
OS: All
Status: UNCONFIRMED
  Severity: normal
  Priority: Normal
 Component: ethdev
  Assignee: dev@dpdk.org
  Reporter: ivan.ma...@oktetlabs.ru
  Target Milestone: ---

Opensource ethdev tests [1] have been improved over the past weeks.
With the latest update, test "usecases/update_rss_hash_conf" finds
yet another bug in i40e PMD. If the user has not passed a non-zero
hash protocol mask via "rx_adv_conf" during device configure stage,
any attempts to use rte_eth_dev_rss_hash_update() at a later stage
will fail regardless of hash protocol mask and key being requested.

When the test sees that, it tries to re-configure the ethdev, this
time with the "rx_adv_conf" hash protocol mask being non-zero. The
following invocations of rte_eth_dev_rss_hash_update() do not fail.

In this particular case, the "non-zero" mask being used is in fact
taken from the "flow_type_rss_offloads" field specified by the PMD.


Further research indicates that the PMD considers RSS unconfigured
in the case of zero "rx_adv_conf" hash protocol mask, which is not
correct. Whether the user wants to enable RSS or not is determined
solely by testing RTE_ETH_MQ_RX_RSS_FLAG in "rxmode.mq_mode" field.
If this flag is set, rte_eth_dev_rss_hash_update() should not fail
unless the application tries a wittingly unsupported configuration.
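
A minimal sketch of the failing sequence (illustrative; queue setup and
error handling are trimmed, and the exact hash types do not matter):

    #include <rte_ethdev.h>

    static int
    rss_update_after_zero_rss_hf(uint16_t port_id)
    {
            /* RSS is requested via the mq_mode flag, but no hash proto
             * mask is passed through rx_adv_conf (rss_hf stays 0). */
            struct rte_eth_conf conf = {
                    .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
            };
            struct rte_eth_rss_conf rss = {
                    .rss_key = NULL,          /* keep the default key */
                    .rss_hf = RTE_ETH_RSS_IP, /* any supported protocols */
            };
            int ret;

            ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
            if (ret != 0)
                    return ret;

            /* On i40e this call fails, although it should succeed. */
            return rte_eth_dev_rss_hash_update(port_id, &rss);
    }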


A complete log example can be found at [2].


[1] http://mails.dpdk.org/archives/dev/2022-October/251663.html
[2] https://ts-factory.io/bublik/v2/log/163204?focusId=163697&mode=treeAndlog

-- 
You are receiving this mail because:
You are the assignee for the bug.

[PATCH] mempool: cache align mempool cache objects

2022-10-26 Thread Morten Brørup
Add __rte_cache_aligned to the objs array.

It makes no difference in the general case, but if get/put operations are
always 32 objects, it will reduce the number of memory (or last level
cache) accesses from five to four 64 B cache lines for every get/put
operation.
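
In other words: on a 64-bit build, 32 pointers of 8 B each occupy 256 B,
i.e. exactly four 64 B cache lines. Today the objs[] array starts a few
words into a cache line (right after size, flushthresh and len), so a
256 B window of it straddles five lines; with the array cache aligned,
a 32-object burst starting at a multiple of 32 objects covers exactly four.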

For readability reasons, an example using 16 objects follows:

Currently, with 16 objects (128 B), we access 3
cache lines:

      ┌────────┐
      │len     │
cache │        │---
line0 │        │ ^
      │        │ |
      ├────────┤ | 16 objects
      │        │ | 128B
cache │        │ |
line1 │        │ |
      │        │ |
      ├────────┤ |
      │        │_v_
cache │        │
line2 │        │
      │        │
      └────────┘

With the alignment, it is also 3 cache lines:

      ┌────────┐
      │len     │
cache │        │
line0 │        │
      │        │
      ├────────┤---
      │        │ ^
cache │        │ |
line1 │        │ |
      │        │ |
      ├────────┤ | 16 objects
      │        │ | 128B
cache │        │ |
line2 │        │ |
      │        │ v
      └────────┘---

However, accessing the objects at the bottom of the mempool cache is a
special case, where cache line0 is also used for objects.

Consider the next burst (and any following bursts):

Current:
      ┌────────┐
      │len     │
cache │        │
line0 │        │
      │        │
      ├────────┤
      │        │
cache │        │
line1 │        │
      │        │
      ├────────┤
      │        │
cache │        │---
line2 │        │ ^
      │        │ |
      ├────────┤ | 16 objects
      │        │ | 128B
cache │        │ |
line3 │        │ |
      │        │ |
      ├────────┤ |
      │        │_v_
cache │        │
line4 │        │
      │        │
      └────────┘
4 cache lines touched, incl. line0 for len.

With the proposed alignment:
      ┌────────┐
      │len     │
cache │        │
line0 │        │
      │        │
      ├────────┤
      │        │
cache │        │
line1 │        │
      │        │
      ├────────┤
      │        │
cache │        │
line2 │        │
      │        │
      ├────────┤
      │        │---
cache │        │ ^
line3 │        │ |
      │        │ | 16 objects
      ├────────┤ | 128B
      │        │ |
cache │        │ |
line4 │        │ |
      │        │_v_
      └────────┘
Only 3 cache lines touched, incl. line0 for len.

Credits go to Olivier Matz for the nice ASCII graphics.

Signed-off-by: Morten Brørup 
---
 lib/mempool/rte_mempool.h | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 1f5707f46a..3725a72951 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -86,11 +86,13 @@ struct rte_mempool_cache {
uint32_t size;/**< Size of the cache */
uint32_t flushthresh; /**< Threshold before we flush excess elements */
uint32_t len; /**< Current cache count */
-   /*
+   /**
+* Cache objects
+*
 * Cache is allocated to this size to allow it to overflow in certain
 * cases to avoid needless emptying of cache.
 */
-   void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 2]; /**< Cache objects */
+   void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 2] __rte_cache_aligned;
 } __rte_cache_aligned;
 
 /**
-- 
2.17.1



Re: [PATCH v2] test/member: fix incorrect expression

2022-10-26 Thread Thomas Monjalon
25/10/2022 11:00, Leyi Rong:
> Fix incorrect expression by cast division operand to type double
> to match ceil() and fabs() definitions.
> 
> Coverity issue: 381398, 381401, 381402
> Fixes: db354bd2e1f8 ("member: add NitroSketch mode")
> 
> Signed-off-by: Leyi Rong 

Applied, thanks.





Re: [PATCH v2] gro : check for payload length after the trim

2022-10-26 Thread Thomas Monjalon
> > From: Kumara Parameshwaran 
> > 
> > When a packet is padded with extra bytes, the validation of the payload
> > length should be done after the trim operation
> > 
> > Fixes: b8a55871d5af ("gro: trim tail padding bytes")
> > Cc: sta...@dpdk.org
> > 
> > Signed-off-by: Kumara Parameshwaran 
> 
> Acked-by: Jiayu Hu 

Applied, thanks.




Re: [PATCH v2] doc: fix support table for ETH and VLAN flow items

2022-10-26 Thread Thomas Monjalon
13/10/2022 12:48, Ilya Maximets:
> 'has_vlan' attribute is only supported by sfc, mlx5 and cnxk.
> Other drivers don't support it.  Most of them (like i40e) just
> ignore it silently.  Some drivers (like mlx4) never had a full
> support of the eth item even before introduction of 'has_vlan'
> (mlx4 allows to match on the destination MAC only).
> 
> Same for the 'has_more_vlan' flag of the vlan item.
> 
> 'has_vlan' is part of 'rte_flow_item_eth', so changing 'eth'
> field to 'partial support' in documentation for all such drivers.
> 'has_more_vlan' is part of 'rte_flow_item_vlan', so changing
> 'vlan' to 'partial support' as well.
> 
> This doesn't solve the issue, but at least marks the problematic
> drivers.
> 
> Some details are available in:
>   https://bugs.dpdk.org/show_bug.cgi?id=958
> 
> Fixes: 09315fc83861 ("ethdev: add VLAN attributes to ethernet and VLAN items")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Ilya Maximets 

Applied, thanks.

How can we encourage fixing the problematic drivers?




Re: [PATCH v3] config/arm: add PHYTIUM tys2500

2022-10-26 Thread Thomas Monjalon
11/10/2022 13:14, luzhipeng:
> From: Zhipeng Lu 
> 
> Here adds configs for PHYTIUM server.
> 
> Signed-off-by: Zhipeng Lu 

Applied, thanks.





RE: [EXT] [PATCH v6 5/6] crypto/uadk: support auth algorithms

2022-10-26 Thread Akhil Goyal
> + {   /* SHA384 HMAC */
> + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> + {.sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> + {.auth = {
> + .algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
> + .block_size = 128,
> + .key_size = {
> + .min = 0,
> + .max = 0,
> + .increment = 0
> + },

This one is missed.

> + .digest_size = {
> + .min = 48,
> + .max = 48,
> + .increment = 0
> + },
> + .iv_size = { 0 }
> + }, }
> + }, }
> + },
> + {   /* SHA384 */
> + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> + {.sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> + {.auth = {
> + .algo = RTE_CRYPTO_AUTH_SHA384,
> + .block_size = 64,
> + .key_size = {
> + .min = 0,
> + .max = 0,
> + .increment = 0
> + },
> + .digest_size = {
> + .min = 48,
> + .max = 48,
> + .increment = 0
> + },
> + }, }
> + }, }
> + },


RE: [PATCH] doc: add Rx buffer split capability for mlx5

2022-10-26 Thread Raslan Darawsheh
Hi,

> -Original Message-
> From: Thomas Monjalon 
> Sent: Sunday, October 9, 2022 11:20 PM
> To: dev@dpdk.org
> Cc: ferruh.yi...@amd.com; andrew.rybche...@oktetlabs.ru;
> sta...@dpdk.org; Matan Azrad ; Slava Ovsiienko
> 
> Subject: [PATCH] doc: add Rx buffer split capability for mlx5
> 
> When adding buffer split feature to mlx in DPDK 20.11, it has been forgotten
> to fill the feature matrix.
> 
> Fixes: 6c8f7f1c1877 ("net/mlx5: report Rx buffer split capabilities")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Thomas Monjalon 
Acked-by: Raslan Darawsheh 

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh


Re: [PATCH v2] lib: remove empty return types from doxygen comments

2022-10-26 Thread Thomas Monjalon
25/10/2022 18:00, Stephen Hemminger:
> On Tue, 25 Oct 2022 16:32:38 +0300
> Ali Alnubani  wrote:
> 
> > Recent versions of doxygen (1.9.4 and newer) complain about
> > documented return types for functions that don't return anything.
> > 
> > This patch removes these return types to fix build errors similar
> > to this one:
> > [..]
> > Generating doc/api/doxygen with a custom command
> > FAILED: doc/api/html
> > /usr/bin/python3 /path/to/doc/api/generate_doxygen.py doc/api/html
> >   /usr/bin/doxygen doc/api/doxy-api.conf
> > /root/dpdk/lib/eal/include/rte_bitmap.h:324: error: found documented
> >   return type for rte_bitmap_prefetch0 that does not return anything
> >   (warning treated as error, aborting now)
> > [..]
> > 
> > Tested with doxygen versions: 1.8.13, 1.8.17, 1.9.1, and 1.9.4.
> > 
> > Signed-off-by: Ali Alnubani 
> 
> Acked-by: Stephen Hemminger 

Applied, thanks.




RE: [PATCH] config: set pkgconfig for ppc64le

2022-10-26 Thread Ali Alnubani
> -Original Message-
> From: Ali Alnubani 
> Sent: Tuesday, August 30, 2022 9:36 PM
> To: David Christensen 
> Cc: dev@dpdk.org; Thinh Tran ; NBU-Contact-
> Thomas Monjalon (EXTERNAL) 
> Subject: RE: [PATCH] config: set pkgconfig for ppc64le
> 
> > On 8/29/22 3:30 AM, Thomas Monjalon wrote:
> > > What is the conclusion on this patch?
> > > It is good to go? Acked?
> >
> > Not from me yet.
> >
> > Just asked about the test environment so I can duplicate the issue.  My
> > understanding is that Ubuntu cross-compiles for CI/CD are still working
> > so I'd like to understand the test case that drives this need.
> >
> > Dave
> 
> Trying to enable the mlx drivers when cross-building for ppc64le, I added the
> directory containing
> the .pc files from an rdma-core (https://github.com/linux-rdma/rdma-core)
> ppc64le cross build, but Meson
> didn't detect the dependencies without installing pkg-config-powerpc64le-
> linux-gnu and setting
> binaries.pkgconfig as powerpc64le-linux-gnu-pkg-config.
> 
> I just tried to reproduce the issue with a cross build of numactl, but Meson
> will not
> detect it, even with my change. Seems that
> PKG_CONFIG_PATH=:/path/to/numactl/build/lib/pkgconfig
> gets ignored.
> 

Hello David,

Do you still have any concerns about this change?
Do you have any similar use-cases that you can test? (i.e., check if pkg-config 
can detect a cross-built dependency for ppc64le).

By the way, we do a similar change for arm cross builds:
https://git.dpdk.org/dpdk/commit/config/arm/arm64_armv8_linux_gcc?id=f31d17807236094268ccd1bead5d740a10fec513

Thanks,
Ali


Re: [PATCH v3 1/3] ethdev: add strict queue to pre-configuration flow hints

2022-10-26 Thread Andrew Rybchenko

On 10/19/22 17:49, Michael Baum wrote:

The data-path focused flow rule management can manage flow rules in more
optimized way than traditional one by using hints provided by
application in initialization phase.

In addition to the current hints we have in port attr, more hints could
be provided by application about its behaviour.

One example is how the application do with the same flow rule ?
A. create/destroy flow on same queue but query flow on different queue
or queue-less way (i.e, counter query)
B. All flow operations will be exactly on the same queue, by which PMD
could be in more optimized way then A because resource could be
isolated and access based on queue, without lock, for example.

This patch add flag about above situation and could be extended to cover
more situations.

Signed-off-by: Michael Baum 
Acked-by: Ori Kam 


Acked-by: Andrew Rybchenko 
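
From the application point of view, opting in amounts to setting the new
flag in the port attributes passed to rte_flow_configure(); a minimal
sketch (queue count and size are arbitrary, error handling trimmed):

    #include <rte_flow.h>

    static int
    configure_strict_queue(uint16_t port_id)
    {
            const struct rte_flow_port_attr port_attr = {
                    .flags = RTE_FLOW_PORT_FLAG_STRICT_QUEUE,
            };
            const struct rte_flow_queue_attr queue_attr = { .size = 64 };
            const struct rte_flow_queue_attr *queue_attr_list[] = {
                    &queue_attr,
            };
            struct rte_flow_error error;

            /* With the strict queue flag, all later operations on a rule
             * must stay on the queue the rule was created on. */
            return rte_flow_configure(port_id, &port_attr, 1,
                            queue_attr_list, &error);
    }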




Re: [PATCH v3 2/3] ethdev: add queue-based API to report aged flow rules

2022-10-26 Thread Andrew Rybchenko

On 10/19/22 17:49, Michael Baum wrote:

When application use queue-based flow rule management and operate the
same flow rule on the same queue, e.g create/destroy/query, API of
querying aged flow rules should also have queue id parameter just like
other queue-based flow APIs.

By this way, PMD can work in more optimized way since resources are
isolated by queue and needn't synchronize.

If application do use queue-based flow management but configure port
without RTE_FLOW_PORT_FLAG_STRICT_QUEUE, which means application operate
a given flow rule on different queues, the queue id parameter will
be ignored.

Signed-off-by: Michael Baum 
Acked-by: Ori Kam 


Few minor notes below, other than that

Acked-by: Andrew Rybchenko 

[snip]


diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst 
b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index a8b99c8c19..8e21b2a5b7 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -2894,9 +2894,10 @@ following sections.
 [meters_number {number}] [flags {number}]
  
  - Create a pattern template::

+


unrelated style fixes, should be in a separate patch


 flow pattern_template {port_id} create [pattern_template_id {id}]
 [relaxed {boolean}] [ingress] [egress] [transfer]
-  template {item} [/ {item} [...]] / end
+   template {item} [/ {item} [...]] / end


unrelated style fixes

  
  - Destroy a pattern template::
  
@@ -2995,6 +2996,10 @@ following sections.
  
 flow aged {port_id} [destroy]
  
+- Enqueue list and destroy aged flow rules::

+
+   flow queue {port_id} aged {queue_id} [destroy]
+
  - Tunnel offload - create a tunnel stub::
  
 flow tunnel create {port_id} type {tunnel_type}

@@ -4236,7 +4241,7 @@ Disabling isolated mode::
   testpmd>
  
  Dumping HW internal information

-
+~~~


unrelated changes

  
  ``flow dump`` dumps the hardware's internal representation information of

  all flows. It is bound to ``rte_flow_dev_dump()``::
@@ -4252,10 +4257,10 @@ Otherwise, it will complain error occurred::
 Caught error type [...] ([...]): [...]
  
  Listing and destroying aged flow rules

-
+~~
  
  ``flow aged`` simply lists aged flow rules be get from api ``rte_flow_get_aged_flows``,

-and ``destroy`` parameter can be used to destroy those flow rules in PMD.
+and ``destroy`` parameter can be used to destroy those flow rules in PMD::


unrelated style fixes

  
 flow aged {port_id} [destroy]
  
@@ -4290,7 +4295,7 @@ will be ID 3, ID 1, ID 0::

 1   0   0   i--
 0   0   0   i--
  
-If attach ``destroy`` parameter, the command will destroy all the list aged flow rules.

+If attach ``destroy`` parameter, the command will destroy all the list aged 
flow rules::


unrelated style fixes

[snip]



Re: [PATCH v3 3/3] ethdev: add structure for indirect AGE update

2022-10-26 Thread Andrew Rybchenko

On 10/19/22 17:49, Michael Baum wrote:

Add a new structure for indirect AGE update.

This new structure enables:
1. Update timeout value.
2. Stop AGE checking.
3. Start AGE checking.
4. Restart AGE checking.

Signed-off-by: Michael Baum 
Acked-by: Ori Kam 


Few minor notes below, other than that

Acked-by: Andrew Rybchenko 
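
For illustration, the new structure would be passed to the existing
rte_flow_action_handle_update() entry point. A sketch with the field names
used in this patch (timeout, timeout_valid, touch), to be treated as
indicative rather than definitive:

#include <rte_flow.h>

/*
 * Sketch: restart AGE checking on an indirect AGE action and, at the same
 * time, set a new timeout (in seconds).  Error handling omitted.
 */
static int
touch_indirect_age(uint16_t port_id, struct rte_flow_action_handle *handle,
                   uint32_t new_timeout_sec)
{
        const struct rte_flow_update_age update = {
                .timeout = new_timeout_sec,
                .timeout_valid = 1,     /* 0 keeps the current timeout */
                .touch = 1,             /* restart AGE checking */
        };

        return rte_flow_action_handle_update(port_id, handle, &update, NULL);
}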


diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
diff --git a/doc/guides/prog_guide/rte_flow.rst 
b/doc/guides/prog_guide/rte_flow.rst
index 565868aeea..1ce0277e65 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2737,7 +2737,7 @@ Otherwise, RTE_FLOW_ERROR_TYPE_ACTION error will be 
returned.
  Action: ``AGE``
  ^^^
  
-Set ageing timeout configuration to a flow.

+Set aging timeout configuration to a flow.


Unrelated fixes

  
  Event RTE_ETH_EVENT_FLOW_AGED will be reported if

  timeout passed without any matching on the flow.
@@ -2756,8 +2756,8 @@ timeout passed without any matching on the flow.
 | ``context``  | user input flow context |
 +--+-+
  
-Query structure to retrieve ageing status information of a

-shared AGE action, or a flow rule using the AGE action:
+Query structure to retrieve aging status information of an
+indirect AGE action, or a flow rule using the AGE action:


Unrelated fixes



Re: [PATCH] mempool: cache align mempool cache objects

2022-10-26 Thread Andrew Rybchenko

On 10/26/22 17:44, Morten Brørup wrote:

Add __rte_cache_aligned to the objs array.

It makes no difference in the general case, but if get/put operations are
always 32 objects, it will reduce the number of memory (or last level
cache) accesses from five to four 64 B cache lines for every get/put
operation.

For readability reasons, an example using 16 objects follows:

Currently, with 16 objects (128B), we access 3 cache lines:

      ┌────────┐
      │len     │
cache │        │---
line0 │        │ ^
      │        │ |
      ├────────┤ | 16 objects
      │        │ | 128B
cache │        │ |
line1 │        │ |
      │        │ |
      ├────────┤ |
      │        │_v_
cache │        │
line2 │        │
      │        │
      └────────┘

With the alignment, it is also 3 cache lines:

      ┌────────┐
      │len     │
cache │        │
line0 │        │
      │        │
      ├────────┤---
      │        │ ^
cache │        │ |
line1 │        │ |
      │        │ |
      ├────────┤ | 16 objects
      │        │ | 128B
cache │        │ |
line2 │        │ |
      │        │ v
      └────────┘---

However, accessing the objects at the bottom of the mempool cache is a
special case, where cache line0 is also used for objects.

Consider the next burst (and any following bursts):

Current:
      ┌────────┐
      │len     │
cache │        │
line0 │        │
      │        │
      ├────────┤
      │        │
cache │        │
line1 │        │
      │        │
      ├────────┤
      │        │
cache │        │---
line2 │        │ ^
      │        │ |
      ├────────┤ | 16 objects
      │        │ | 128B
cache │        │ |
line3 │        │ |
      │        │ |
      ├────────┤ |
      │        │_v_
cache │        │
line4 │        │
      │        │
      └────────┘
4 cache lines touched, incl. line0 for len.

With the proposed alignment:
      ┌────────┐
      │len     │
cache │        │
line0 │        │
      │        │
      ├────────┤
      │        │
cache │        │
line1 │        │
      │        │
      ├────────┤
      │        │
cache │        │
line2 │        │
      │        │
      ├────────┤
      │        │---
cache │        │ ^
line3 │        │ |
      │        │ | 16 objects
      ├────────┤ | 128B
      │        │ |
cache │        │ |
line4 │        │ |
      │        │_v_
      └────────┘
Only 3 cache lines touched, incl. line0 for len.

Credits go to Olivier Matz for the nice ASCII graphics.

Signed-off-by: Morten Brørup 


Reviewed-by: Andrew Rybchenko 
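
For readers who want to see where the attribute lands, an abbreviated sketch
follows; it is not the full rte_mempool_cache definition, only the fields
relevant to the description above:

#include <rte_mempool.h>  /* __rte_cache_aligned, RTE_MEMPOOL_CACHE_MAX_SIZE */

/* Abbreviated sketch of the cache layout discussed above. */
struct mempool_cache_sketch {
        uint32_t size;          /* configured cache size */
        uint32_t flushthresh;   /* flush threshold */
        uint32_t len;           /* current number of cached objects */
        /* The patch adds the alignment attribute to the array itself,
         * pushing objs[] onto its own cache line(s). */
        void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 2] __rte_cache_aligned;
} __rte_cache_aligned;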




[PATCH 01/14] doc/guides/bbdevs: add ark baseband device documentation

2022-10-26 Thread John Miller
Add new ark baseband device documentation.

This is the first patch in the series that introduces
the Arkville baseband PMD.

First we create a common/ark directory and move common files
from net/ark to share with the new baseband/ark device.

Next we create baseband/ark and introduce the Arkville baseband PMD,
including documentation.

Finally we modify the build system to support the changes.

Signed-off-by: John Miller 
---
 doc/guides/bbdevs/ark.rst | 52 +++
 1 file changed, 52 insertions(+)
 create mode 100644 doc/guides/bbdevs/ark.rst

diff --git a/doc/guides/bbdevs/ark.rst b/doc/guides/bbdevs/ark.rst
new file mode 100644
index 00..09afcb0f31
--- /dev/null
+++ b/doc/guides/bbdevs/ark.rst
@@ -0,0 +1,52 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright (c) 2015-2022 Atomic Rules LLC
+
+=============================================
+ Atomic Rules LLC, Baseband Poll Mode Driver
+=============================================
+
+The Atomic Rules Arkville baseband poll mode driver supports the data
+movement portion of a baseband device implemented within an FPGA.
+The specifics of the encode or decode functions within the FPGA are
+outside the scope of Arkville's data movement. Hence this PMD requires and
+provides for the customization needed to advertise its features and to
+support the out-of-band (or meta) data that accompanies packet data
+between the FPGA device and the host software.
+
+
+==========
+ Features
+==========
+
+* Support for LDPC encode and decode operations.
+* Support for Turbo encode and decode operations.
+* Support for scatter/gather.
+* Support for Mbuf data room sizes up to 32K bytes for improved performance.
+* Support for up to 64 queues.
+* Support for runtime switching of Mbuf size, per queue, for improved
+  performance.
+* Support for PCIe Gen3x16, Gen4x16, and Gen5x8 endpoints.
+
+
+==================================
+ Required Customization Functions
+==================================
+
+The following customization functions are required:
+  * Set the capabilities structure for the device `ark_bbdev_info_get()`
+  * An optional device start function `rte_pmd_ark_bbdev_start()`
+  * An optional device stop function `rte_pmd_ark_bbdev_stop()`
+  * Functions for defining meta data format shared between
+the host and FPGA.
+`rte_pmd_ark_bbdev_enqueue_ldpc_dec()`,
+`rte_pmd_ark_bbdev_dequeue_ldpc_dec()`,
+`rte_pmd_ark_bbdev_enqueue_ldpc_enc()`,
+`rte_pmd_ark_bbdev_dequeue_ldpc_enc()`.
+
+
+=============
+ Limitations
+=============
+
+* MBufs for the output data from the operation must be sized exactly
+   to hold the result based on DATAROOM sizes.
+* Side-band or meta data accompanying packet data is limited to 20 bytes.
-- 
2.25.1



[PATCH 03/14] common/ark: move common files to common subdirectory

2022-10-26 Thread John Miller
Add common ark files to the drivers/common directory in
preparation to support the Arkville baseband device.

Signed-off-by: John Miller 
---
 drivers/common/ark/ark_common.c |  9 
 drivers/common/ark/ark_common.h | 47 
 drivers/common/ark/meson.build  | 19 +++
 drivers/common/ark/version.map  | 95 +
 4 files changed, 170 insertions(+)
 create mode 100644 drivers/common/ark/ark_common.c
 create mode 100644 drivers/common/ark/ark_common.h
 create mode 100644 drivers/common/ark/meson.build
 create mode 100644 drivers/common/ark/version.map

diff --git a/drivers/common/ark/ark_common.c b/drivers/common/ark/ark_common.c
new file mode 100644
index 00..abc169fd14
--- /dev/null
+++ b/drivers/common/ark/ark_common.c
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2020-2021 Atomic Rules LLC
+ */
+
+#include "ark_common.h"
+
+int ark_common_logtype;
+
+RTE_LOG_REGISTER_DEFAULT(ark_common_logtype, NOTICE);
diff --git a/drivers/common/ark/ark_common.h b/drivers/common/ark/ark_common.h
new file mode 100644
index 00..ba4c70f804
--- /dev/null
+++ b/drivers/common/ark/ark_common.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2015-2018 Atomic Rules LLC
+ */
+
+#ifndef _ARK_COMMON_H_
+#define _ARK_COMMON_H_
+
+#include 
+#include 
+
+/* system camel case definition changed to upper case */
+#define PRIU32 PRIu32
+#define PRIU64 PRIu64
+
+/* Atomic Rules vendor id */
+#define AR_VENDOR_ID 0x1d6c
+
+/*
+ * This structure is used to statically define the capabilities
+ * of supported devices.
+ * Capabilities:
+ *  rqpacing -
+ * Some HW variants require that PCIe read-requests be correctly throttled.
+ * This is called "rqpacing" and has to do with credit and flow control
+ * on certain Arkville implementations.
+ */
+struct ark_caps {
+   bool rqpacing;
+};
+struct ark_dev_caps {
+   uint32_t  device_id;
+   struct ark_caps  caps;
+};
+#define SET_DEV_CAPS(id, rqp) \
+   {id, {.rqpacing = rqp} }
+
+/* Format specifiers for string data pairs */
+#define ARK_SU32  "\n\t%-20s%'20" PRIU32
+#define ARK_SU64  "\n\t%-20s%'20" PRIU64
+#define ARK_SU64X "\n\t%-20s%#20" PRIx64
+#define ARK_SPTR  "\n\t%-20s%20p"
+
+extern int ark_common_logtype;
+#define ARK_PMD_LOG(level, fmt, args...)   \
+   rte_log(RTE_LOG_ ##level, ark_common_logtype, "ARK_COMMON: " fmt, ## 
args)
+
+#endif
diff --git a/drivers/common/ark/meson.build b/drivers/common/ark/meson.build
new file mode 100644
index 00..a54acf5e3a
--- /dev/null
+++ b/drivers/common/ark/meson.build
@@ -0,0 +1,19 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2019 Mellanox Technologies, Ltd
+
+if is_windows
+build = false
+reason = 'not supported on Windows'
+subdir_done()
+endif
+
+sources += files(
+   'ark_ddm.c',
+   'ark_common.c',
+   'ark_mpu.c',
+   'ark_pktchkr.c',
+   'ark_pktdir.c',
+   'ark_pktgen.c',
+   'ark_rqp.c',
+   'ark_udm.c'
+)
diff --git a/drivers/common/ark/version.map b/drivers/common/ark/version.map
new file mode 100644
index 00..74d9f4b668
--- /dev/null
+++ b/drivers/common/ark/version.map
@@ -0,0 +1,95 @@
+DPDK_22 {
+   local: *;
+};
+
+INTERNAL {
+   global:
+
+   ark_api_num_queues;
+   ark_api_num_queues_per_port;
+
+   ark_ddm_dump_stats;
+   ark_ddm_queue_byte_count;
+   ark_ddm_queue_pkt_count;
+   ark_ddm_queue_reset_stats;
+   ark_ddm_stats_reset;
+   ark_ddm_verify;
+
+   ark_mpu_configure;
+   ark_mpu_dump;
+   ark_mpu_dump_setup;
+   ark_mpu_reset;
+   ark_mpu_start;
+   ark_mpu_stop;
+   ark_mpu_verify;
+
+   ark_pktchkr_dump_stats;
+   ark_pktchkr_get_pkts_sent;
+   ark_pktchkr_init;
+   ark_pktchkr_is_running;
+   ark_pktchkr_parse;
+   ark_pktchkr_run;
+   ark_pktchkr_set_dst_mac_addr;
+   ark_pktchkr_set_eth_type;
+   ark_pktchkr_set_hdr_dW;
+   ark_pktchkr_set_num_pkts;
+   ark_pktchkr_set_payload_byte;
+   ark_pktchkr_set_pkt_size_incr;
+   ark_pktchkr_set_pkt_size_max;
+   ark_pktchkr_set_pkt_size_min;
+   ark_pktchkr_set_src_mac_addr;
+   ark_pktchkr_setup;
+   ark_pktchkr_stop;
+   ark_pktchkr_stopped;
+   ark_pktchkr_uninit;
+   ark_pktchkr_wait_done;
+   ark_pktdir_init;
+   ark_pktdir_setup;
+   ark_pktdir_stall_cnt;
+   ark_pktdir_status;
+   ark_pktdir_uninit;
+
+   ark_pktgen_get_pkts_sent;
+   ark_pktgen_init;
+   ark_pktgen_is_gen_forever;
+   ark_pktgen_is_running;
+   ark_pktgen_parse;
+   ark_pktgen_pause;
+   ark_pktgen_paused;
+   ark_pktgen_reset;
+   ark_pktgen_run;
+   ark_pktgen_set_dst_mac_addr;
+   ark_pktgen_set_eth_type;
+   ark_pktgen_set_hdr_dW;
+   ark_pktgen_set_num_pkts;
+   ark_pktgen_set_payload_byte;
+   ark_pktgen_set_pkt_size_incr;
+   ark_pktgen_set_pkt_

[PATCH 02/14] common/ark: create common subdirectory for baseband support

2022-10-26 Thread John Miller
Create a common directory in drivers/common and move the common
ark files there to prepare support for the Arkville baseband device.

Signed-off-by: John Miller 
---
 MAINTAINERS   |  1 +
 drivers/{net => common}/ark/ark_ddm.c |  2 +-
 drivers/{net => common}/ark/ark_ddm.h |  0
 drivers/{net => common}/ark/ark_mpu.c |  2 +-
 drivers/{net => common}/ark/ark_mpu.h |  0
 drivers/{net => common}/ark/ark_pktchkr.c |  2 +-
 drivers/{net => common}/ark/ark_pktchkr.h | 22 ++
 drivers/{net => common}/ark/ark_pktdir.c  |  5 +++--
 drivers/{net => common}/ark/ark_pktdir.h  |  7 ++
 drivers/{net => common}/ark/ark_pktgen.c  |  2 +-
 drivers/{net => common}/ark/ark_pktgen.h  | 27 +++
 drivers/{net => common}/ark/ark_rqp.c |  2 +-
 drivers/{net => common}/ark/ark_rqp.h |  0
 drivers/{net => common}/ark/ark_udm.c |  2 +-
 drivers/{net => common}/ark/ark_udm.h | 13 +--
 15 files changed, 77 insertions(+), 10 deletions(-)
 rename drivers/{net => common}/ark/ark_ddm.c (98%)
 rename drivers/{net => common}/ark/ark_ddm.h (100%)
 rename drivers/{net => common}/ark/ark_mpu.c (99%)
 rename drivers/{net => common}/ark/ark_mpu.h (100%)
 rename drivers/{net => common}/ark/ark_pktchkr.c (99%)
 rename drivers/{net => common}/ark/ark_pktchkr.h (88%)
 rename drivers/{net => common}/ark/ark_pktdir.c (95%)
 rename drivers/{net => common}/ark/ark_pktdir.h (89%)
 rename drivers/{net => common}/ark/ark_pktgen.c (99%)
 rename drivers/{net => common}/ark/ark_pktgen.h (86%)
 rename drivers/{net => common}/ark/ark_rqp.c (98%)
 rename drivers/{net => common}/ark/ark_rqp.h (100%)
 rename drivers/{net => common}/ark/ark_udm.c (99%)
 rename drivers/{net => common}/ark/ark_udm.h (95%)

diff --git a/MAINTAINERS b/MAINTAINERS
index 2bd4a55f1b..b2e192d001 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -642,6 +642,7 @@ M: Shepard Siegel 
 M: Ed Czeck 
 M: John Miller 
 F: drivers/net/ark/
+F: drivers/common/ark/
 F: doc/guides/nics/ark.rst
 F: doc/guides/nics/features/ark.ini
 
diff --git a/drivers/net/ark/ark_ddm.c b/drivers/common/ark/ark_ddm.c
similarity index 98%
rename from drivers/net/ark/ark_ddm.c
rename to drivers/common/ark/ark_ddm.c
index eb88349b7b..7875a7cabb 100644
--- a/drivers/net/ark/ark_ddm.c
+++ b/drivers/common/ark/ark_ddm.c
@@ -4,7 +4,7 @@
 
 #include 
 
-#include "ark_logs.h"
+#include "ark_common.h"
 #include "ark_ddm.h"
 
 static_assert(sizeof(union ark_tx_meta) == 8, "Unexpected struct size 
ark_tx_meta");
diff --git a/drivers/net/ark/ark_ddm.h b/drivers/common/ark/ark_ddm.h
similarity index 100%
rename from drivers/net/ark/ark_ddm.h
rename to drivers/common/ark/ark_ddm.h
diff --git a/drivers/net/ark/ark_mpu.c b/drivers/common/ark/ark_mpu.c
similarity index 99%
rename from drivers/net/ark/ark_mpu.c
rename to drivers/common/ark/ark_mpu.c
index 9d5ee7841b..1112974e59 100644
--- a/drivers/net/ark/ark_mpu.c
+++ b/drivers/common/ark/ark_mpu.c
@@ -4,7 +4,7 @@
 
 #include 
 
-#include "ark_logs.h"
+#include "ark_common.h"
 #include "ark_mpu.h"
 
 uint16_t
diff --git a/drivers/net/ark/ark_mpu.h b/drivers/common/ark/ark_mpu.h
similarity index 100%
rename from drivers/net/ark/ark_mpu.h
rename to drivers/common/ark/ark_mpu.h
diff --git a/drivers/net/ark/ark_pktchkr.c b/drivers/common/ark/ark_pktchkr.c
similarity index 99%
rename from drivers/net/ark/ark_pktchkr.c
rename to drivers/common/ark/ark_pktchkr.c
index e1f336c73c..18454e66f0 100644
--- a/drivers/net/ark/ark_pktchkr.c
+++ b/drivers/common/ark/ark_pktchkr.c
@@ -9,7 +9,7 @@
 #include 
 
 #include "ark_pktchkr.h"
-#include "ark_logs.h"
+#include "ark_common.h"
 
 static int set_arg(char *arg, char *val);
 static int ark_pktchkr_is_gen_forever(ark_pkt_chkr_t handle);
diff --git a/drivers/net/ark/ark_pktchkr.h b/drivers/common/ark/ark_pktchkr.h
similarity index 88%
rename from drivers/net/ark/ark_pktchkr.h
rename to drivers/common/ark/ark_pktchkr.h
index b362281776..a166f98586 100644
--- a/drivers/net/ark/ark_pktchkr.h
+++ b/drivers/common/ark/ark_pktchkr.h
@@ -5,6 +5,8 @@
 #ifndef _ARK_PKTCHKR_H_
 #define _ARK_PKTCHKR_H_
 
+#include 
+#include 
 #include 
 #include 
 
@@ -64,25 +66,45 @@ struct ark_pkt_chkr_inst {
 };
 
 /*  packet checker functions */
+__rte_internal
 ark_pkt_chkr_t ark_pktchkr_init(void *addr, int ord, int l2_mode);
+__rte_internal
 void ark_pktchkr_uninit(ark_pkt_chkr_t handle);
+__rte_internal
 void ark_pktchkr_run(ark_pkt_chkr_t handle);
+__rte_internal
 int ark_pktchkr_stopped(ark_pkt_chkr_t handle);
+__rte_internal
 void ark_pktchkr_stop(ark_pkt_chkr_t handle);
+__rte_internal
 int ark_pktchkr_is_running(ark_pkt_chkr_t handle);
+__rte_internal
 int ark_pktchkr_get_pkts_sent(ark_pkt_chkr_t handle);
+__rte_internal
 void ark_pktchkr_set_payload_byte(ark_pkt_chkr_t handle, uint32_t b);
+__rte_internal
 void ark_pktchkr_set_pkt_size_min(ark_pkt_chkr_t handle, uint32_t x);
+__rte_internal
 void ark_pktchkr_set_pkt_size_max(ark_pkt_chkr_t handle, uint32_t x);
+__rte_internal
 v

[PATCH 05/14] net/ark: remove build files moved to common

2022-10-26 Thread John Miller
Remove build files moved to common.

Signed-off-by: John Miller 
---
 drivers/net/ark/meson.build | 12 +---
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ark/meson.build b/drivers/net/ark/meson.build
index 8d87744c22..c48044b8ee 100644
--- a/drivers/net/ark/meson.build
+++ b/drivers/net/ark/meson.build
@@ -7,15 +7,13 @@ if is_windows
 subdir_done()
 endif
 
+deps += ['common_ark']
+
 sources = files(
-'ark_ddm.c',
 'ark_ethdev.c',
 'ark_ethdev_rx.c',
 'ark_ethdev_tx.c',
-'ark_mpu.c',
-'ark_pktchkr.c',
-'ark_pktdir.c',
-'ark_pktgen.c',
-'ark_rqp.c',
-'ark_udm.c',
+'ark_ethdev_logs.c',
 )
+
+includes += include_directories('../../common/ark')
-- 
2.25.1



[PATCH 04/14] common/meson.build:

2022-10-26 Thread John Miller
Add common ark to build system.

Signed-off-by: John Miller 
---
 drivers/common/meson.build | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/common/meson.build b/drivers/common/meson.build
index ea261dd70a..5514f4ba83 100644
--- a/drivers/common/meson.build
+++ b/drivers/common/meson.build
@@ -3,6 +3,7 @@
 
 std_deps = ['eal']
 drivers = [
+'ark',
 'cpt',
 'dpaax',
 'iavf',
-- 
2.25.1



[PATCH 07/14] common/ark: avoid exporting internal functions

2022-10-26 Thread John Miller
Add __rte_internal to all internal functions


Signed-off-by: John Miller 
---
 drivers/common/ark/ark_ddm.h | 8 
 drivers/common/ark/ark_mpu.h | 8 +++-
 drivers/common/ark/ark_rqp.h | 3 +++
 3 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/common/ark/ark_ddm.h b/drivers/common/ark/ark_ddm.h
index 84beeb063a..9d10c131a7 100644
--- a/drivers/common/ark/ark_ddm.h
+++ b/drivers/common/ark/ark_ddm.h
@@ -103,13 +103,21 @@ struct ark_ddm_t {
 };
 
 /* DDM function prototype */
+__rte_internal
 int ark_ddm_verify(struct ark_ddm_t *ddm);
+__rte_internal
 void ark_ddm_stats_reset(struct ark_ddm_t *ddm);
+__rte_internal
 void ark_ddm_queue_setup(struct ark_ddm_t *ddm, rte_iova_t cons_addr);
+__rte_internal
 void ark_ddm_dump_stats(struct ark_ddm_t *ddm, const char *msg);
+__rte_internal
 uint64_t ark_ddm_queue_byte_count(struct ark_ddm_t *ddm);
+__rte_internal
 uint64_t ark_ddm_queue_pkt_count(struct ark_ddm_t *ddm);
+__rte_internal
 void ark_ddm_queue_reset_stats(struct ark_ddm_t *ddm);
+__rte_internal
 void ark_ddm_queue_enable(struct ark_ddm_t *ddm, int enable);
 
 #endif
diff --git a/drivers/common/ark/ark_mpu.h b/drivers/common/ark/ark_mpu.h
index 9d2b70d35f..04824db080 100644
--- a/drivers/common/ark/ark_mpu.h
+++ b/drivers/common/ark/ark_mpu.h
@@ -80,14 +80,20 @@ struct ark_mpu_t {
 uint16_t ark_api_num_queues(struct ark_mpu_t *mpu);
 uint16_t ark_api_num_queues_per_port(struct ark_mpu_t *mpu,
 uint16_t ark_ports);
+__rte_internal
 int ark_mpu_verify(struct ark_mpu_t *mpu, uint32_t obj_size);
+__rte_internal
 void ark_mpu_stop(struct ark_mpu_t *mpu);
+__rte_internal
 void ark_mpu_start(struct ark_mpu_t *mpu);
+__rte_internal
 int ark_mpu_reset(struct ark_mpu_t *mpu);
+__rte_internal
 int ark_mpu_configure(struct ark_mpu_t *mpu, rte_iova_t ring,
  uint32_t ring_size, int is_tx);
-
+__rte_internal
 void ark_mpu_dump(struct ark_mpu_t *mpu, const char *msg, uint16_t idx);
+__rte_internal
 void ark_mpu_dump_setup(struct ark_mpu_t *mpu, uint16_t qid);
 
 /*  this action is in a performance critical path */
diff --git a/drivers/common/ark/ark_rqp.h b/drivers/common/ark/ark_rqp.h
index d09f242e1e..c3d2ba739b 100644
--- a/drivers/common/ark/ark_rqp.h
+++ b/drivers/common/ark/ark_rqp.h
@@ -52,7 +52,10 @@ struct ark_rqpace_t {
volatile uint32_t cmpl_errors;
 };
 
+__rte_internal
 void ark_rqp_dump(struct ark_rqpace_t *rqp);
+__rte_internal
 void ark_rqp_stats_reset(struct ark_rqpace_t *rqp);
+__rte_internal
 int ark_rqp_lasped(struct ark_rqpace_t *rqp);
 #endif
-- 
2.25.1



[PATCH 06/14] common/ark: update version map file

2022-10-26 Thread John Miller
Update the version map file with new common functions.

Signed-off-by: John Miller 
---
 drivers/common/ark/version.map | 13 -
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/drivers/common/ark/version.map b/drivers/common/ark/version.map
index 74d9f4b668..64d78cff24 100644
--- a/drivers/common/ark/version.map
+++ b/drivers/common/ark/version.map
@@ -1,18 +1,21 @@
-DPDK_22 {
-   local: *;
-};
-
-INTERNAL {
+EXTERNAL {
global:
 
ark_api_num_queues;
ark_api_num_queues_per_port;
 
+};
+
+INTERNAL {
+   global:
+
ark_ddm_dump_stats;
ark_ddm_queue_byte_count;
ark_ddm_queue_pkt_count;
ark_ddm_queue_reset_stats;
ark_ddm_stats_reset;
+   ark_ddm_queue_setup;
+   ark_ddm_queue_enable;
ark_ddm_verify;
 
ark_mpu_configure;
-- 
2.25.1


