Re: [PATCH v5 1/1] build: add libarchive to external deps

2023-11-06 Thread Bruce Richardson
On Sun, Nov 05, 2023 at 08:12:43PM -0800, Srikanth Yalavarthi wrote:
> In order to avoid linking with Libs.private, libarchive
> is not added to ext_deps during the meson setup stage.
> 
> Since libarchive is not added to ext_deps, cross-compilation
> or native compilation with libarchive installed in a non-standard
> location fails with errors such as "cannot find -larchive"
> or "archive.h: No such file or directory". To fix these build
> failures, the user is required to define 'c_args' and
> 'c_link_args' with the relevant '-I' and '-L' flags.
> 
> This patch adds libarchive to ext_deps, so that setting c_args and
> c_link_args externally is no longer required.
> 
> Fixes: 40edb9c0d36b ("eal: handle compressed firmware")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Srikanth Yalavarthi 

Acked-by: Bruce Richardson 
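
For context, the workaround the commit message refers to, needed before this
patch when libarchive is installed in a non-standard prefix, would look
roughly like this (the install path below is purely illustrative):

    meson setup build -Dc_args="-I/opt/libarchive/include" \
        -Dc_link_args="-L/opt/libarchive/lib"

With libarchive in ext_deps, meson can pick up the include and link flags
from pkg-config instead.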



Re: [PATCH] doc: add prog action into default ini

2023-11-06 Thread Thomas Monjalon
06/11/2023 10:14, Qi Zhang:
> Added prog action into nic feature default.ini.
> 
> Fixes: 8f1953f1914d ("ethdev: add flow API for P4-programmable devices")
> 
> Signed-off-by: Qi Zhang 

Applied, thanks.




Re: [PATCH v2 7/7] doc: testpmd support event handling section

2023-11-06 Thread lihuisong (C)



On 2023/10/20 18:07, Chengwen Feng wrote:

Add a new section on event handling, which documents the ethdev and
device events.

Signed-off-by: Chengwen Feng 
---
  doc/guides/testpmd_app_ug/event_handling.rst | 80 
  doc/guides/testpmd_app_ug/index.rst  |  1 +
  2 files changed, 81 insertions(+)
  create mode 100644 doc/guides/testpmd_app_ug/event_handling.rst

diff --git a/doc/guides/testpmd_app_ug/event_handling.rst b/doc/guides/testpmd_app_ug/event_handling.rst
new file mode 100644
index 00..c116753ad0
--- /dev/null
+++ b/doc/guides/testpmd_app_ug/event_handling.rst
@@ -0,0 +1,80 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+Copyright(c) 2023 HiSilicon Limited.
+
+Event Handling
+==============
+
+The ``testpmd`` application supports the following two types of event handling:
+
+ethdev events
+-------------
+
+The ``testpmd`` application provides the "--print-event" and "--mask-event"
+options to control whether a message such as "Port x y event" is displayed
+when event "y" is received on port "x". This is named the default processing.
+
+This section details the supported events. Unless otherwise specified, only
+the default processing is supported.
+
+- ``RTE_ETH_EVENT_INTR_LSC``:
+  If the device is started with LSC enabled, the PMD will launch this event
+  when it detects a link status change.
+
+- ``RTE_ETH_EVENT_QUEUE_STATE``:
+  Used only within vhost PMD to report vring whether enabled.

Used only within vhost PMD? It seems that this is only used by vhost,
but the ethdev lib says:
/** queue state event (enabled/disabled) */
    RTE_ETH_EVENT_QUEUE_STATE,
testpmd is also a demo for users, so I suggest changing this comment to
avoid confusion.

+
+- ``RTE_ETH_EVENT_INTR_RESET``:
+  Used to report that a reset interrupt happened. This event is only reported
+  when the PMD supports ``RTE_ETH_ERROR_HANDLE_MODE_PASSIVE``.
+
+- ``RTE_ETH_EVENT_VF_MBOX``:
+  Used by the PF to process mailbox messages from the VFs that belong to it.
+
+- ``RTE_ETH_EVENT_INTR_RMV``:
+  Used to report a device removal event. ``testpmd`` will remove the port
+  later.
+
+- ``RTE_ETH_EVENT_NEW``:
+  Used to report that a port was probed. ``testpmd`` will set up the port
+  later.
+
+- ``RTE_ETH_EVENT_DESTROY``:
+  Used to report that a port was released. ``testpmd`` will change the
+  port's status.
+
+- ``RTE_ETH_EVENT_MACSEC``:
+  Used to report MACsec offload related event.
+
+- ``RTE_ETH_EVENT_IPSEC``:
+  Used to report IPsec offload related event.
+
+- ``RTE_ETH_EVENT_FLOW_AGED``:
+  Used to report that new aged-out flows were detected. Only valid with the mlx5 PMD.
+
+- ``RTE_ETH_EVENT_RX_AVAIL_THRESH``:
+  Used to report that the number of available Rx descriptors dropped below
+  the threshold. Only valid with the mlx5 PMD.
+
+- ``RTE_ETH_EVENT_ERR_RECOVERING``:
+  Used to report that an error happened and that the PMD will recover after
+  reporting this event. ``testpmd`` will stop packet forwarding when the
+  event is received.
+
+- ``RTE_ETH_EVENT_RECOVERY_SUCCESS``:
+  Used to report that error recovery succeeded. ``testpmd`` will restart
+  packet forwarding when the event is received.
+
+- ``RTE_ETH_EVENT_RECOVERY_FAILED``:
+  Used to report that error recovery failed. ``testpmd`` will display a
+  message showing which ports failed.
+
+.. note::
+
+   The ``RTE_ETH_EVENT_ERR_RECOVERING``, ``RTE_ETH_EVENT_RECOVERY_SUCCESS`` and
+   ``RTE_ETH_EVENT_RECOVERY_FAILED`` events are only reported when the PMD
+   supports ``RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE``.
+
+device events
+-------------
+
+This covers the two events ``RTE_DEV_EVENT_ADD`` and ``RTE_DEV_EVENT_REMOVE``,
+enabled only when ``testpmd`` is started with the "--hot-plug" option.
diff --git a/doc/guides/testpmd_app_ug/index.rst b/doc/guides/testpmd_app_ug/index.rst
index 1ac0d25d57..3c09448c4e 100644
--- a/doc/guides/testpmd_app_ug/index.rst
+++ b/doc/guides/testpmd_app_ug/index.rst
@@ -14,3 +14,4 @@ Testpmd Application User Guide
  build_app
  run_app
  testpmd_funcs
+event_handling
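
As background for readers of the new section: an application receives these
ethdev events by registering a callback with the ethdev library. A minimal
sketch of that standard API usage (not part of the patch) follows; testpmd
installs a similar callback internally.

    #include <stdio.h>
    #include <rte_ethdev.h>

    static int
    link_event_cb(uint16_t port_id, enum rte_eth_event_type event,
                  void *cb_arg, void *ret_param)
    {
        RTE_SET_USED(cb_arg);
        RTE_SET_USED(ret_param);
        if (event == RTE_ETH_EVENT_INTR_LSC)
            printf("Port %u: link status changed\n", port_id);
        return 0;
    }

    /* register for LSC events on all ports */
    rte_eth_dev_callback_register(RTE_ETH_ALL, RTE_ETH_EVENT_INTR_LSC,
                                  link_event_cb, NULL);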


Re: [RFC] mempool: CPU cache aligning mempool driver accesses

2023-11-06 Thread Bruce Richardson
On Sat, Nov 04, 2023 at 06:29:40PM +0100, Morten Brørup wrote:
> I tried a little experiment, which gave a 25 % improvement in mempool
> perf tests for long bursts (n_get_bulk=32 n_put_bulk=32 n_keep=512
> constant_n=0) on a Xeon E5-2620 v4 based system.
> 
> This is the concept:
> 
> If all accesses to the mempool driver go through the mempool cache,
> we can ensure that these bulk load/stores are always CPU cache aligned,
> by using cache->size when loading/storing to the mempool driver.
> 
> Furthermore, it is rumored that most applications use the default
> mempool cache size, so if the driver tests for that specific value,
> it can use rte_memcpy(src,dst,N) with N known at build time, allowing
> optimal performance for copying the array of objects.
> 
> Unfortunately, I need to change the flush threshold from 1.5 to 2 to
> be able to always use cache->size when loading/storing to the mempool
> driver.
> 
> What do you think?
> 
> PS: If we can't get rid of the mempool cache size threshold factor,
> we really need to expose it through public APIs. A job for another day.
> 
> Signed-off-by: Morten Brørup 
> ---
Interesting, thanks.

Out of interest, is there any difference in performance you observe if using
regular libc memcpy vs rte_memcpy for the ring copies? Since the copy
amount is constant, a regular memcpy call should be expanded by the
compiler itself, and so should be pretty efficient.

/Bruce


[PATCH] net/cpfl: fix coverity issues

2023-11-06 Thread wenjing . qiao
From: Wenjing Qiao 

Fix integer handling issues, tainted_scalar issues, uninit issues,
overrun issues and control flow issues reported by coverity scan.

Coverity issue: 403259
Coverity issue: 403261
Coverity issue: 403266
Coverity issue: 403267
Coverity issue: 403271
Coverity issue: 403274
Fixes: db042ef09d26 ("net/cpfl: implement FXP rule creation and destroying")
Fixes: 03f976012304 ("net/cpfl: adapt FXP to flow engine")

Signed-off-by: Wenjing Qiao 
---
 drivers/net/cpfl/cpfl_ethdev.c  |  2 +-
 drivers/net/cpfl/cpfl_flow_engine_fxp.c | 12 +++-
 drivers/net/cpfl/cpfl_fxp_rule.c|  6 +++---
 drivers/net/cpfl/cpfl_rules.c   |  3 +--
 4 files changed, 8 insertions(+), 15 deletions(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index eb168eee51..7697aea0ce 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -2478,7 +2478,7 @@ cpfl_alloc_dma_mem_batch(struct idpf_dma_mem *orig_dma, struct idpf_dma_mem *dma
 {
int i;
 
-   if (!idpf_alloc_dma_mem(NULL, orig_dma, size * (1 + batch_size))) {
+   if (!idpf_alloc_dma_mem(NULL, orig_dma, (uint64_t)size * (1 + batch_size))) {
PMD_INIT_LOG(ERR, "Could not alloc dma memory");
return -ENOMEM;
}
diff --git a/drivers/net/cpfl/cpfl_flow_engine_fxp.c b/drivers/net/cpfl/cpfl_flow_engine_fxp.c
index ddede2f553..4d3cdf813e 100644
--- a/drivers/net/cpfl/cpfl_flow_engine_fxp.c
+++ b/drivers/net/cpfl/cpfl_flow_engine_fxp.c
@@ -107,13 +107,6 @@ cpfl_fxp_create(struct rte_eth_dev *dev,
return ret;
 }
 
-static inline void
-cpfl_fxp_rule_free(struct rte_flow *flow)
-{
-   rte_free(flow->rule);
-   flow->rule = NULL;
-}
-
 static int
 cpfl_fxp_destroy(struct rte_eth_dev *dev,
 struct rte_flow *flow,
@@ -128,7 +121,7 @@ cpfl_fxp_destroy(struct rte_eth_dev *dev,
struct cpfl_vport *vport;
struct cpfl_repr *repr;
 
-   rim = flow->rule;
+   rim = (struct cpfl_rule_info_meta *)flow->rule;
if (!rim) {
rte_flow_error_set(error, EINVAL,
   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
@@ -164,7 +157,8 @@ cpfl_fxp_destroy(struct rte_eth_dev *dev,
for (i = rim->pr_num; i < rim->rule_num; i++)
cpfl_fxp_mod_idx_free(ad, rim->rules[i].mod.mod_index);
 err:
-   cpfl_fxp_rule_free(flow);
+   rte_free(rim);
+   flow->rule = NULL;
return ret;
 }
 
diff --git a/drivers/net/cpfl/cpfl_fxp_rule.c b/drivers/net/cpfl/cpfl_fxp_rule.c
index ea65e20507..ba3a036e7a 100644
--- a/drivers/net/cpfl/cpfl_fxp_rule.c
+++ b/drivers/net/cpfl/cpfl_fxp_rule.c
@@ -76,8 +76,8 @@ cpfl_receive_ctlq_msg(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 num_q_m
rte_delay_us_sleep(10);
ret = cpfl_vport_ctlq_recv(cq, &num_q_msg, &q_msg[0]);
 
-   if (ret && ret != CPFL_ERR_CTLQ_NO_WORK &&
-   ret != CPFL_ERR_CTLQ_ERROR) {
+   if (ret && ret != CPFL_ERR_CTLQ_NO_WORK && ret != CPFL_ERR_CTLQ_ERROR &&
+   ret != CPFL_ERR_CTLQ_EMPTY) {
PMD_INIT_LOG(ERR, "failed to recv ctrlq msg. err: 0x%4x\n", ret);
retries++;
continue;
@@ -165,7 +165,7 @@ cpfl_default_rule_pack(struct cpfl_rule_info *rinfo, struct idpf_dma_mem *dma,
 {
union cpfl_rule_cfg_pkt_record *blob = NULL;
enum cpfl_ctlq_rule_cfg_opc opc;
-   struct cpfl_rule_cfg_data cfg;
+   struct cpfl_rule_cfg_data cfg = {0};
uint16_t cfg_ctrl;
 
if (!dma->va) {
diff --git a/drivers/net/cpfl/cpfl_rules.c b/drivers/net/cpfl/cpfl_rules.c
index 3d259d3da8..6c0e435b1d 100644
--- a/drivers/net/cpfl/cpfl_rules.c
+++ b/drivers/net/cpfl/cpfl_rules.c
@@ -116,8 +116,7 @@ cpfl_prep_sem_rule_blob(const uint8_t *key,
uint32_t i;
 
idpf_memset(rule_blob, 0, sizeof(*rule_blob), IDPF_DMA_MEM);
-   idpf_memcpy(rule_blob->sem_rule.key, key, key_byte_len,
-   CPFL_NONDMA_TO_DMA);
+   memcpy(rule_blob->sem_rule.key, key, key_byte_len);
 
for (i = 0; i < act_byte_len / sizeof(uint32_t); i++)
*act_dst++ = CPU_TO_LE32(*act_src++);
-- 
2.34.1



Re: [PATCH v2] app/testpmd: fix UDP cksum error for UFO enable

2023-11-06 Thread Ferruh Yigit
On 11/6/2023 4:13 AM, lihuisong (C) wrote:
> 
> On 2023/11/3 18:42, Ferruh Yigit wrote:
>> On 11/3/2023 9:09 AM, lihuisong (C) wrote:
>>> Hi Ferruh,
>>>
>>> Thanks for you review.
>>>
>>>
>>> On 2023/11/3 9:31, Ferruh Yigit wrote:
 On 8/2/2023 3:55 AM, Huisong Li wrote:
> The command "tso set  " is used to enable UFO,
> please
> see commit ce8e6e742807 ("app/testpmd: support UFO in checksum
> engine")
>
> The above patch configures the RTE_MBUF_F_TX_UDP_SEG to enable UFO
> only if
> tso_segsz is set.
>
 "The above patch sets the RTE_MBUF_F_TX_UDP_SEG in mbuf ol_flags, only
 by checking if 'tso_segsz' is set, but missing check if UFO offload
 (RTE_ETH_TX_OFFLOAD_UDP_TSO) supported by device."
>>> Ack

> Then tx_prepare() may call rte_net_intel_cksum_prepare()
> to compute the pseudo header checksum (because some PMDs may support
> TSO).
>
 Not sure what you mean by '(because some PMDs may support TSO)'?

 Do you mean something like following:
 "RTE_MBUF_F_TX_UDP_SEG flag causes driver that supports TSO/UFO to
 compute pseudo header checksum."
>>> Ack

> As a result, if the peer sends UDP packets, all packets with UDP
> checksum
> error are received for the PMDs only supported TSO.
>
 "As a result, if device only supports TSO, but not UFO, UDP packet
 checksum will be wrong."
>>> Ack

> So enabling UFO also depends on whether the driver has the
> RTE_ETH_TX_OFFLOAD_UDP_TSO
> capability. Similarly, TSO also needs to be handled like this.
>
> In addition, this patch also fixes cmd_tso_set_parsed() for UFO to
> make
> it better support TSO and UFO.
>
> Fixes: ce8e6e742807 ("app/testpmd: support UFO in checksum engine")
>
> Signed-off-by: Huisong Li 
> ---
>    v2: add handle for tunnel TSO offload in process_inner_cksums
>
> ---
>    app/test-pmd/cmdline.c  | 47
> +
>    app/test-pmd/csumonly.c | 11 --
>    2 files changed, 33 insertions(+), 25 deletions(-)
>
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index 0d0723f659..8be593d405 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -4906,6 +4906,7 @@ cmd_tso_set_parsed(void *parsed_result,
>    {
>    struct cmd_tso_set_result *res = parsed_result;
>    struct rte_eth_dev_info dev_info;
> +    uint64_t offloads;
>    int ret;
>      if (port_id_is_invalid(res->port_id, ENABLED_WARN))
> @@ -4922,37 +4923,37 @@ cmd_tso_set_parsed(void *parsed_result,
>    if (ret != 0)
>    return;
>    -    if ((ports[res->port_id].tso_segsz != 0) &&
> -    (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) ==
> 0) {
> -    fprintf(stderr, "Error: TSO is not supported by port %d\n",
> -    res->port_id);
> -    return;
> +    if (ports[res->port_id].tso_segsz != 0) {
> +    if ((dev_info.tx_offload_capa & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
> +    RTE_ETH_TX_OFFLOAD_UDP_TSO)) == 0) {
> +    fprintf(stderr, "Error: both TSO and UFO are not
> supported by port %d\n",
> +    res->port_id);
> +    return;
> +    }
> +    /* display warnings if configuration is not supported by the
> NIC */
> +    if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO)
> == 0)
> +    fprintf(stderr, "Warning: port %d doesn't support TSO\n",
> +    res->port_id);
> +    if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_TSO)
> == 0)
> +    fprintf(stderr, "Warning: port %d doesn't support UFO\n",
> +    res->port_id);
>
 Requesting TSO/UFO by setting 'tso_segsz', but device capability
 missing
 is an error case, so OK to have first message.

 But only supporting TSO or UFO is not an error case, so I'm not sure about
 logging this. But even if it is logged, I think it shouldn't go to stderr
 or say "Warning: "; regular logging can be done.
>>> All right, will fix it in next version.

>    }
>      if (ports[res->port_id].tso_segsz == 0) {
>    ports[res->port_id].dev_conf.txmode.offloads &=
> -    ~RTE_ETH_TX_OFFLOAD_TCP_TSO;
> -    printf("TSO for non-tunneled packets is disabled\n");
> +    ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
> RTE_ETH_TX_OFFLOAD_UDP_TSO);
> +    printf("TSO and UFO for non-tunneled packets is disabled\n");
>    } else {
> -    ports[res->port_id].dev_conf.txmode.offloads |=
> -    RTE_ETH_TX_OFFLOAD_TCP_TSO;
> -    printf("TSO segment size for non-tunneled packets is %d\n",
> +    offloads = (dev_info.tx_offload_capa &
> RTE_ETH_TX_OFFLOAD_
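
The pattern under discussion in the truncated hunk above -- gating the mbuf
flag on the device's advertised capability -- is roughly the following
(a sketch of the idea, not the submitted patch; dev_info is assumed to come
from rte_eth_dev_info_get() and tso_segsz from the testpmd port config):

    /* Request UDP segmentation only when the device advertises UFO,
     * so drivers without the capability never see the flag. */
    if (tso_segsz != 0 &&
        (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_TSO) != 0)
        ol_flags |= RTE_MBUF_F_TX_UDP_SEG;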

[PATCH v5] net/ice: fix crash on closing representor ports

2023-11-06 Thread Mingjin Ye
The data resource in struct rte_eth_dev is cleared and points to NULL
when the DCF port is closed.

If the DCF representor port is closed after the DCF port is closed,
a segmentation fault occurs because the representor port accesses the
data resource released by the DCF port.

This patch fixes this issue by synchronizing the state of the DCF port and
its representor ports to each other in real time when that state changes.

Fixes: 5674465a32c8 ("net/ice: add DCF VLAN handling")
Fixes: da9cdcd1f372 ("net/ice: fix crash on representor port closing")
Fixes: 7564d5509611 ("net/ice: add DCF hardware initialization")
Fixes: c7e1a1a3bfeb ("net/ice: refactor DCF VLAN handling")
Fixes: 1a86f4dbdf42 ("net/ice: support DCF device reset")
Cc: sta...@dpdk.org

Signed-off-by: Mingjin Ye 
---
v2: Reformat code to remove unneeded fixlines.
---
v3: New solution.
---
v4: Optimize v2 patch.
---
v5: optimization.
---
 drivers/net/ice/ice_dcf_ethdev.c | 30 --
 drivers/net/ice/ice_dcf_ethdev.h |  3 ++
 drivers/net/ice/ice_dcf_vf_representor.c | 50 ++--
 3 files changed, 77 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 065ec728c2..eea24ee3a9 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1618,6 +1618,26 @@ ice_dcf_free_repr_info(struct ice_dcf_adapter *dcf_adapter)
}
 }
 
+int
+ice_dcf_handle_vf_repr_close(struct ice_dcf_adapter *dcf_adapter,
+   uint16_t vf_id)
+{
+   struct ice_dcf_repr_info *vf_rep_info;
+
+   if (dcf_adapter->num_reprs >= vf_id) {
+   PMD_DRV_LOG(ERR, "Invalid VF id: %d", vf_id);
+   return -1;
+   }
+
+   if (!dcf_adapter->repr_infos)
+   return 0;
+
+   vf_rep_info = &dcf_adapter->repr_infos[vf_id];
+   vf_rep_info->vf_rep_eth_dev = NULL;
+
+   return 0;
+}
+
 static int
 ice_dcf_init_repr_info(struct ice_dcf_adapter *dcf_adapter)
 {
@@ -1641,11 +1661,10 @@ ice_dcf_dev_close(struct rte_eth_dev *dev)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
 
+   ice_dcf_vf_repr_notify_all(adapter, false);
(void)ice_dcf_dev_stop(dev);
 
ice_free_queues(dev);
-
-   ice_dcf_free_repr_info(adapter);
ice_dcf_uninit_parent_adapter(dev);
ice_dcf_uninit_hw(dev, &adapter->real_hw);
 
@@ -1835,7 +1854,7 @@ ice_dcf_dev_reset(struct rte_eth_dev *dev)
ice_dcf_reset_hw(dev, hw);
}
 
-   ret = ice_dcf_dev_uninit(dev);
+   ret = ice_dcf_dev_close(dev);
if (ret)
return ret;
 
@@ -1938,12 +1957,17 @@ ice_dcf_dev_init(struct rte_eth_dev *eth_dev)
}
 
dcf_config_promisc(adapter, false, false);
+   ice_dcf_vf_repr_notify_all(adapter, true);
+
return 0;
 }
 
 static int
 ice_dcf_dev_uninit(struct rte_eth_dev *eth_dev)
 {
+   struct ice_dcf_adapter *adapter = eth_dev->data->dev_private;
+
+   ice_dcf_free_repr_info(adapter);
ice_dcf_dev_close(eth_dev);
 
return 0;
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 4baaec4b8b..6dcbaac5eb 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -60,6 +60,7 @@ struct ice_dcf_vf_repr {
struct rte_ether_addr mac_addr;
uint16_t switch_domain_id;
uint16_t vf_id;
+   bool dcf_valid;
 
struct ice_dcf_vlan outer_vlan_info; /* DCF always handle outer VLAN */
 };
@@ -80,6 +81,8 @@ int ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, 
void *init_param);
 int ice_dcf_vf_repr_uninit(struct rte_eth_dev *vf_rep_eth_dev);
 int ice_dcf_vf_repr_init_vlan(struct rte_eth_dev *vf_rep_eth_dev);
 void ice_dcf_vf_repr_stop_all(struct ice_dcf_adapter *dcf_adapter);
+void ice_dcf_vf_repr_notify_all(struct ice_dcf_adapter *dcf_adapter, bool 
valid);
+int ice_dcf_handle_vf_repr_close(struct ice_dcf_adapter *dcf_adapter, uint16_t 
vf_id);
 bool ice_dcf_adminq_need_retry(struct ice_adapter *ad);
 
 #endif /* _ICE_DCF_ETHDEV_H_ */
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index b9fcfc80ad..6c342798ac 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -50,9 +50,32 @@ ice_dcf_vf_repr_dev_stop(struct rte_eth_dev *dev)
return 0;
 }
 
+static bool
+ice_dcf_vf_repr_set_dcf_valid(struct rte_eth_dev *dev, bool valid)
+{
+   struct ice_dcf_vf_repr *repr = dev->data->dev_private;
+
+   if (!repr)
+   return false;
+
+   repr->dcf_valid = valid;
+
+   return true;
+}
+
 static int
 ice_dcf_vf_repr_dev_close(struct rte_eth_dev *dev)
 {
+   struct ice_dcf_vf_repr *repr = dev->data->dev_private;
+   struct ice_dcf_adapter *dcf_adapter;
+
+   if (repr->dcf_valid) {
+   dcf_adapter = repr->dcf_eth_dev->data->dev_private;
+  

[PATCH] net/iavf: MDD fault diagnostics support on TX paths

2023-11-06 Thread Mingjin Ye
When an MDD packet is detected, the hardware will shut down the queue.
In a Tx path troubleshooting scenario, modifying the application
code to reselect the Tx path is the only way to enable the mbuf
legitimacy check, which makes troubleshooting difficult.

In this patch, the devargs option "mbuf_check" is introduced and the
corresponding diagnostic function is enabled by configuring MDD cases.

Argument format: mbuf_check=generic,,
Currently supported MDD cases: generic, segment, offload, careful.
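
For example, assuming the option takes a comma-separated list of the cases
named above, enabling two of the checks could look like this (the PCI
address is illustrative):

    dpdk-testpmd -a 0000:3b:01.0,mbuf_check=generic,offload -- -i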

Signed-off-by: Mingjin Ye 
---
 drivers/net/iavf/iavf.h|  26 +
 drivers/net/iavf/iavf_ethdev.c |  99 ++
 drivers/net/iavf/iavf_rxtx.c   | 182 +
 drivers/net/iavf/iavf_rxtx.h   |   4 +
 4 files changed, 311 insertions(+)

diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 04774ce124..ad46fdeddd 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -113,9 +113,15 @@ struct iavf_ipsec_crypto_stats {
} ierrors;
 };
 
+struct iavf_mdd_stats {
+   uint64_t mdd_mbuf_err_count;
+   uint64_t mdd_pkt_err_count;
+};
+
 struct iavf_eth_xstats {
struct virtchnl_eth_stats eth_stats;
struct iavf_ipsec_crypto_stats ips_stats;
+   struct iavf_mdd_stats mdd_stats;
 };
 
 /* Structure that defines a VSI, associated with a adapter. */
@@ -299,6 +305,13 @@ enum iavf_proto_xtr_type {
IAVF_PROTO_XTR_MAX,
 };
 
+enum iavf_mdd_check_type {
+   IAVF_MDD_CHECK_GENERAL,
+   IAVF_MDD_CHECK_SEGMENT,
+   IAVF_MDD_CHECK_OFFLOAD,
+   IAVF_MDD_CHECK_CAREFUL,
+};
+
 /**
  * Cache devargs parse result.
  */
@@ -308,10 +321,21 @@ struct iavf_devargs {
uint16_t quanta_size;
uint32_t watchdog_period;
uint8_t  auto_reset;
+   uint16_t mbuf_check;
 };
 
 struct iavf_security_ctx;
 
+struct iavf_tx_burst_element {
+   TAILQ_ENTRY(iavf_tx_burst_element) next;
+   eth_tx_burst_t tx_pkt_burst;
+};
+
+#define IAVF_MDD_CHECK_F_TX_GENERAL (1ULL << 0)
+#define IAVF_MDD_CHECK_F_TX_SEGMENT (1ULL << 1)
+#define IAVF_MDD_CHECK_F_TX_OFFLOAD (1ULL << 2)
+#define IAVF_MDD_CHECK_F_TX_CAREFUL (1ULL << 3)
+
 /* Structure to store private data for each VF instance. */
 struct iavf_adapter {
struct iavf_hw hw;
@@ -326,6 +350,8 @@ struct iavf_adapter {
uint32_t ptype_tbl[IAVF_MAX_PKT_TYPE] __rte_cache_min_aligned;
bool stopped;
bool closed;
+   uint64_t mc_flags; /* mdd check flags. */
+   TAILQ_HEAD(tx_pkt_burst_list, iavf_tx_burst_element) list_tx_pkt_burst;
uint16_t fdir_ref_cnt;
struct iavf_devargs devargs;
 };
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 5b2634a4e3..cb0b7491e5 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -37,6 +37,7 @@
 #define IAVF_PROTO_XTR_ARG "proto_xtr"
 #define IAVF_QUANTA_SIZE_ARG   "quanta_size"
 #define IAVF_RESET_WATCHDOG_ARG"watchdog_period"
+#define IAVF_MDD_CHECK_ARG   "mbuf_check"
 #define IAVF_ENABLE_AUTO_RESET_ARG "auto_reset"
 
 uint64_t iavf_timestamp_dynflag;
@@ -46,6 +47,7 @@ static const char * const iavf_valid_args[] = {
IAVF_PROTO_XTR_ARG,
IAVF_QUANTA_SIZE_ARG,
IAVF_RESET_WATCHDOG_ARG,
+   IAVF_MDD_CHECK_ARG,
IAVF_ENABLE_AUTO_RESET_ARG,
NULL
 };
@@ -187,6 +189,8 @@ static const struct rte_iavf_xstats_name_off rte_iavf_stats_strings[] = {
_OFF_OF(ips_stats.ierrors.ipsec_length)},
{"inline_ipsec_crypto_ierrors_misc",
_OFF_OF(ips_stats.ierrors.misc)},
+   {"mdd_mbuf_error_packets", _OFF_OF(mdd_stats.mdd_mbuf_err_count)},
+   {"mdd_pkt_error_packets", _OFF_OF(mdd_stats.mdd_pkt_err_count)},
 };
 #undef _OFF_OF
 
@@ -1878,6 +1882,9 @@ static int iavf_dev_xstats_get(struct rte_eth_dev *dev,
 {
int ret;
unsigned int i;
+   struct iavf_tx_queue *txq;
+   uint64_t mdd_mbuf_err_count = 0;
+   uint64_t mdd_pkt_err_count = 0;
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
@@ -1901,6 +1908,17 @@ static int iavf_dev_xstats_get(struct rte_eth_dev *dev,
if (iavf_ipsec_crypto_supported(adapter))
iavf_dev_update_ipsec_xstats(dev, &iavf_xtats.ips_stats);
 
+
+   if (adapter->devargs.mbuf_check) {
+   for (i = 0; i < dev->data->nb_tx_queues; i++) {
+   txq = dev->data->tx_queues[i];
+   mdd_mbuf_err_count += txq->mdd_mbuf_err_count;
+   mdd_pkt_err_count += txq->mdd_pkt_err_count;
+   }
+   iavf_xtats.mdd_stats.mdd_mbuf_err_count = mdd_mbuf_err_count;
+   iavf_xtats.mdd_stats.mdd_pkt_err_count = mdd_pkt_err_count;
+   }
+
/* loop over xstats array and values from pstats */
for (i = 0; i < IAVF_NB_XST

Re: [PATCH v5 2/5] net/sfc: use new API to parse kvargs

2023-11-06 Thread Andrew Rybchenko

On 11/6/23 10:31, Chengwen Feng wrote:

The sfc_kvargs_process() and sfc_efx_dev_class_get() functions can
handle both key=value and only-key forms, so they should use
rte_kvargs_process_opt() instead of rte_kvargs_process() to parse them.

Signed-off-by: Chengwen Feng 
---
  drivers/common/sfc_efx/sfc_efx.c | 4 ++--
  drivers/net/sfc/sfc_kvargs.c | 2 +-
  2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/common/sfc_efx/sfc_efx.c b/drivers/common/sfc_efx/sfc_efx.c
index 2dc5545760..3ebac909f1 100644
--- a/drivers/common/sfc_efx/sfc_efx.c
+++ b/drivers/common/sfc_efx/sfc_efx.c
@@ -52,8 +52,8 @@ sfc_efx_dev_class_get(struct rte_devargs *devargs)
return dev_class;
  
  	if (rte_kvargs_count(kvargs, RTE_DEVARGS_KEY_CLASS) != 0) {

-   rte_kvargs_process(kvargs, RTE_DEVARGS_KEY_CLASS,
-  sfc_efx_kvarg_dev_class_handler, &dev_class);
+   rte_kvargs_process_opt(kvargs, RTE_DEVARGS_KEY_CLASS,
+  sfc_efx_kvarg_dev_class_handler, &dev_class);


LGTM from the code point of view, but I'm not sure that I understand the
idea behind handling a NULL value in sfc_efx_kvarg_dev_class_handler().

Cc: Vijay


}
  
  	rte_kvargs_free(kvargs);

diff --git a/drivers/net/sfc/sfc_kvargs.c b/drivers/net/sfc/sfc_kvargs.c
index 783cb43ae6..24bb896179 100644
--- a/drivers/net/sfc/sfc_kvargs.c
+++ b/drivers/net/sfc/sfc_kvargs.c
@@ -70,7 +70,7 @@ sfc_kvargs_process(struct sfc_adapter *sa, const char *key_match,
if (sa->kvargs == NULL)
return 0;
  
-	return -rte_kvargs_process(sa->kvargs, key_match, handler, opaque_arg);

+   return -rte_kvargs_process_opt(sa->kvargs, key_match, handler, opaque_arg);


It looks wrong to me since many handlers do not handle a NULL string
gracefully. As I understand it, some handlers were fixed to avoid crashes,
and the correct fix would be to keep rte_kvargs_process() and remove the
unnecessary checks for a NULL string value.


  }
  
  int
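
For reference, the behavioural difference under discussion is that with
only-key syntax the handler receives a NULL value string. A handler that
tolerates this would look roughly like the following (an illustrative
sketch, not driver code):

    #include <stdbool.h>
    #include <string.h>

    static int
    bool_kvarg_handler(const char *key __rte_unused,
                       const char *value, void *opaque)
    {
        bool *enabled = opaque;

        /* rte_kvargs_process_opt() passes value == NULL for only-key args */
        if (value == NULL || strcmp(value, "1") == 0)
            *enabled = true;
        return 0;
    }

Andrew's concern is the opposite case: handlers that do not check for NULL
would crash when parsed with the _opt variant.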




Re: [PATCH 24.03 v2] build: track mandatory rather than optional libs

2023-11-06 Thread Bruce Richardson
On Fri, Nov 03, 2023 at 09:19:53PM +0100, Morten Brørup wrote:
> > From: Bruce Richardson [mailto:bruce.richard...@intel.com]
> > Sent: Friday, 3 November 2023 19.09
> > 
> > On Fri, Nov 03, 2023 at 06:31:30PM +0100, Morten Brørup wrote:
> > > > From: Bruce Richardson [mailto:bruce.richard...@intel.com]
> > > > Sent: Friday, 3 November 2023 17.52
> > > >
> > > > DPDK now has more optional libraries than mandatory ones, so invert
> > the
> > > > list stored in the meson.build file from the optional ones to the
> > > > "always_enable" ones. As well as being a shorter list:
> > > >
> > > > * we can remove the loop building up the "always_enable" list
> > > >   dynamically from the optional list
> > > > * it better aligns with the drivers/meson.build file which
> > maintains an
> > > >   always_enable list.
> > > >
> > > > Signed-off-by: Bruce Richardson 
> > >
> > > Excellent!
> > >
> > > It really shows how bloated DPDK CORE still is. I would like to see
> > these go optional:
> > >
> > 
> > For some I agree, but we need to decide what optional really means. :-)
> > 
> > For my mind, there are 3 (maybe 4) key components that need to be built
> > for
> > me to consider a build to be a valid DPDK one:
> > * EAL obviously,
> > * testpmd, because everyone seems to use it
> > * l3fwd, because it's the most commonly referenced example and used for
> >   benchmarking, and build testing in test-meson-builds. (There are others,
> >   but they are pretty likely to build if l3fwd does!)
> > * dpdk-test - I feel this should always be buildable, but for me it's
> > the
> >   optional 4th component.
> > 
> > Now, the obvious one to relax here is l3fwd, since it is just an
> > example,
> > but I wonder if that may cause some heartache.
> 
> I don't consider any DPDK lib CORE just because the lib is used by testpmd 
> and/or l3fwd. I agree that all libs should be included by default, so you can 
> run testpmd, l3fwd, and other apps and examples.
> 
> However, many libs are not needed for *all* DPDK applications, so I would 
> like other apps to be able to build DPDK without superfluous libs.
> 
> E.g. our StraightShaper CSP appliance is deployed at Layer 2, and doesn't use 
> any of DPDK's L3 libs, so why should the DPDK L3 libs be considered CORE and 
> thus included in our application? I suppose other companies are also using 
> DPDK for other purposes than L3 routing, and don't need the DPDK L3 libs.
> 
> Furthermore, I suppose that some Layer 3 applications use their own 
> RIB/FIB/LPM libraries. Does OVS use DPDK's rib/fib/lpm libraries?
> 



> > Overall, if we want to make more libs optional, I would start looking
> > at
> > l3fwd and making it a bit more modular. I previously made its support
> > for
> > eventdev optional, we should do the same for lpm and fib. Beyond that,
> > we
> > need to decide what core really means.
> 
> Yes - defining CORE is the key to setting the goal here.
> 
> In my mind, CORE is the minimum requirement for running an absolutely minimal
> DPDK application.
> 
> A primary DPDK application would probably need to do some packet I/O; but it 
> might be a simple layer two bridge, not using any of the L3 libs.
> 
> And a secondary DPDK application might attach to a primary DPDK application 
> only to work on its data structures, e.g. to collect statistics, but not do 
> any packet processing, so that application doesn't need any of those libs 
> (not even the ethdev lib).
> 
> In reality, DPDK applications would probably need to build more libs than 
> just CORE. But some application might need CORE + lib A, and some other 
> application might need CORE + lib B. In essence, I don't want application A 
> to drag around some unused lib B, and application B to drag around some 
> unused lib A.
> 
> It's an optimization only available at build time. Distros should continue
> providing all DPDK libs.
> 
> There's also system testing and system attack surface to consider... all that 
> bloat makes production systems more fragile and vulnerable.
> 

I largely agree, though I do think that trying to split primary-secondary
as having different builds could lead to some headaches, so I'd push any
work around that further down the road.

Some thoughts on next steps:
* From the looks of my original list above, it appears the low-hanging fruit is
  largely gone, in terms of being able to turn off libs that have few
  dependencies, timer being one possible exception
* I think it's worth looking into making l3fwd more modular so it can be
  built only with backend x or y or z in it. However, if agreeable, we can
  just start marking lpm and rib/fib libs as optional directly and have
  l3fwd not buildable in those cases.
* For libs that depend on other libs for bits of functionality, we are
  getting into the realm of using ifdefs to start selectively removing
  bits. This is the not-so-nice bit as:

  - it makes it a lot harder to do proper build testing, as we now have to
test with individual bits on 

RE: [RFC] mempool: CPU cache aligning mempool driver accesses

2023-11-06 Thread Morten Brørup
> From: Bruce Richardson [mailto:bruce.richard...@intel.com]
> Sent: Monday, 6 November 2023 10.45
> 
> On Sat, Nov 04, 2023 at 06:29:40PM +0100, Morten Brørup wrote:
> > I tried a little experiment, which gave a 25 % improvement in mempool
> > perf tests for long bursts (n_get_bulk=32 n_put_bulk=32 n_keep=512
> > constant_n=0) on a Xeon E5-2620 v4 based system.
> >
> > This is the concept:
> >
> > If all accesses to the mempool driver go through the mempool cache,
> > we can ensure that these bulk load/stores are always CPU cache
> aligned,
> > by using cache->size when loading/storing to the mempool driver.
> >
> > Furthermore, it is rumored that most applications use the default
> > mempool cache size, so if the driver tests for that specific value,
> > it can use rte_memcpy(src,dst,N) with N known at build time, allowing
> > optimal performance for copying the array of objects.
> >
> > Unfortunately, I need to change the flush threshold from 1.5 to 2 to
> > be able to always use cache->size when loading/storing to the mempool
> > driver.
> >
> > What do you think?
> >
> > PS: If we can't get rid of the mempool cache size threshold factor,
> > we really need to expose it through public APIs. A job for another
> day.
> >
> > Signed-off-by: Morten Brørup 
> > ---
> Interesting, thanks.
> 
> Out of interest, is there any difference in performance you observe if
> using
> regular libc memcpy vs rte_memcpy for the ring copies? Since the copy
> amount is constant, a regular memcpy call should be expanded by the
> compiler itself, and so should be pretty efficient.

I ran some tests without patching rte_ring_elem_pvt.h, i.e. without introducing 
the constant-size copy loop. I got the majority of the performance gain at this 
point.

At this point, both pointers are CPU cache aligned when refilling the mempool 
cache, and the destination pointer is CPU cache aligned when draining the 
mempool cache.

In other words: When refilling the mempool cache, it is both loading and 
storing entire CPU cache lines. And when draining, it is storing entire CPU 
cache lines.


Adding the fixed-size copy loop provided an additional performance gain. I 
didn't test other constant-size copy methods than rte_memcpy.

rte_memcpy should have optimal conditions in this patch, because N is known to 
be 512 * 8 = 4 KiB at build time. Furthermore, both pointers are CPU cache 
aligned when refilling the mempool cache, and the destination pointer is CPU 
cache aligned when draining the mempool cache. I don't recall if pointer 
alignment matters for rte_memcpy, though.

The memcpy in libc (or more correctly: intrinsic to the compiler) will do 
non-temporal copying for large sizes, and I don't know what that threshold is, 
so I think rte_memcpy is the safe bet here. Especially if someone builds DPDK 
with a larger mempool cache size than 512 objects.

On the other hand, non-temporal access to the objects in the ring might be 
beneficial if the ring is so large that they go cold before the application 
loads them from the ring again.
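
The constant-size specialisation described above could be sketched like this
(a hypothetical helper, not the RFC patch; 512 stands in for the assumed
default cache size, giving the 4 KiB constant mentioned earlier):

    /* Copy n object pointers; when n matches the build-time constant,
     * rte_memcpy() sees a fixed 4 KiB size and can expand optimally. */
    static inline void
    cache_copy(void **dst, void * const *src, unsigned int n)
    {
        if (n == 512)
            rte_memcpy(dst, src, 512 * sizeof(void *)); /* constant N */
        else
            rte_memcpy(dst, src, n * sizeof(void *));
    }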



Re: [PATCH v9 7/9] ethdev: add API to get RSS algorithm names

2023-11-06 Thread Andrew Rybchenko

On 11/2/23 11:20, Jie Hai wrote:

This patch adds a new API, rte_eth_dev_rss_algo_name(), to get the
name of an RSS algorithm, and documents it.

Signed-off-by: Jie Hai 
Acked-by: Huisong Li 
Acked-by: Chengwen Feng 





@@ -4791,6 +4802,20 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
return ret;
  }
  
+const char *

+rte_eth_dev_rss_algo_name(enum rte_eth_hash_function rss_algo)
+{
+   const char *name = "Unknown function";
+   unsigned int i;
+
+   for (i = 0; i < RTE_DIM(rte_eth_dev_rss_algo_names); i++) {
+   if (rss_algo == rte_eth_dev_rss_algo_names[i].algo)
+   return rte_eth_dev_rss_algo_names[i].name;
+   }
+
+   return name;


My 2c:

IMHO, usage of the name variable here just complicates reading and forces
the reader to find out which value 'name' has here. Just return
"Unknown function".



RE: [PATCH v2 1/1] ml/cnxk: fix updating internal I/O info

2023-11-06 Thread Anup Prabhu


> -Original Message-
> From: Srikanth Yalavarthi 
> Sent: Friday, November 3, 2023 10:10 PM
> To: Srikanth Yalavarthi 
> Cc: dev@dpdk.org; Shivah Shankar Shankar Narayan Rao
> ; Anup Prabhu ;
> Prince Takkar ; Jerin Jacob Kollanukkaran
> 
> Subject: [PATCH v2 1/1] ml/cnxk: fix updating internal I/O info
> 
> Update scale factor in IO info of TVM models from metadata.
> 
> Fixes: 35c3e790b4a0 ("ml/cnxk: update internal info for TVM model")
> 
> Signed-off-by: Srikanth Yalavarthi 
Acked-by: Anup Prabhu 

Re: [PATCH v9 1/9] ethdev: overwrite some comment related to RSS

2023-11-06 Thread Andrew Rybchenko

On 11/2/23 11:20, Jie Hai wrote:

In rte_eth_dev_rss_hash_conf_get(), the "rss_key_len" should be
greater than or equal to the "hash_key_size" obtained from the
rte_eth_dev_info_get() API. And the "rss_key" should contain at
least "hash_key_size" bytes. If these requirements are not met,
the query is unreliable.

In rte_eth_dev_rss_hash_update() or rte_eth_dev_configure(), the
"rss_key_len" indicates the length in bytes of the array pointed
to by "rss_key"; it should be equal to the
"hash_key_size" if "rss_key" is not NULL.

This patch overwrites the comments of fields of "rte_eth_rss_conf"
and "RTE_ETH_HASH_FUNCTION_DEFAULT", checks "rss_key_len" in
ethdev level, and documents these changes.

Signed-off-by: Jie Hai 
Acked-by: Huisong Li 
Acked-by: Chengwen Feng 






@@ -4712,6 +4730,7 @@ int
  rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
  struct rte_eth_rss_conf *rss_conf)
  {
+   struct rte_eth_dev_info dev_info = { 0 };


There is no point in initializing dev_info here. The get function does it anyway.
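
For callers, the documented requirement translates into usage roughly like
this (a sketch, not patch code; the 64-byte buffer is an assumption that
should be checked against hash_key_size):

    struct rte_eth_dev_info dev_info;
    struct rte_eth_rss_conf rss_conf = { 0 };
    uint8_t key[64];

    rte_eth_dev_info_get(port_id, &dev_info);
    rss_conf.rss_key = key;
    /* rss_key_len must be >= hash_key_size, and the buffer that large */
    rss_conf.rss_key_len = dev_info.hash_key_size;
    rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf);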



RE: [PATCH 24.03 v2] build: track mandatory rather than optional libs

2023-11-06 Thread Morten Brørup
> From: Bruce Richardson [mailto:bruce.richard...@intel.com]
> Sent: Monday, 6 November 2023 11.29
> 
> On Fri, Nov 03, 2023 at 09:19:53PM +0100, Morten Brørup wrote:
> > > From: Bruce Richardson [mailto:bruce.richard...@intel.com]
> > > Sent: Friday, 3 November 2023 19.09
> > >
> > > On Fri, Nov 03, 2023 at 06:31:30PM +0100, Morten Brørup wrote:
> > > > > From: Bruce Richardson [mailto:bruce.richard...@intel.com]
> > > > > Sent: Friday, 3 November 2023 17.52
> > > > >
> > > > > DPDK now has more optional libraries than mandatory ones, so
> invert
> > > the
> > > > > list stored in the meson.build file from the optional ones to
> the
> > > > > "always_enable" ones. As well as being a shorter list:
> > > > >
> > > > > * we can remove the loop building up the "always_enable" list
> > > > >   dynamically from the optional list
> > > > > * it better aligns with the drivers/meson.build file which
> > > maintains an
> > > > >   always_enable list.
> > > > >
> > > > > Signed-off-by: Bruce Richardson 
> > > >
> > > > Excellent!
> > > >
> > > > It really shows how bloated DPDK CORE still is. I would like to
> see
> > > these go optional:
> > > >
> > >
> > > For some I agree, but we need to decide what optional really means.
> :-)
> > >
> > > For my mind, there are 3 (maybe 4) key components that need to be
> built
> > > for
> > > me to consider a build to be a valid DPDK one:
> > > * EAL obviously,
> > > * testpmd, because everyone seems to use it
> > > * l3fwd, because it's the most commonly referenced example and used for
> > >   benchmarking, and build testing in test-meson-builds. (There are others,
> > >   but they are pretty likely to build if l3fwd does!)
> > > * dpdk-test - I feel this should always be buildable, but for me
> it's
> > > the
> > >   optional 4th component.
> > >
> > > Now, the obvious one to relax here is l3fwd, since it is just an
> > > example,
> > > but I wonder if that may cause some heartache.
> >
> > I don't consider any DPDK lib CORE just because the lib is used by
> testpmd and/or l3fwd. I agree that all libs should be included by
> default, so you can run testpmd, l3fwd, and other apps and examples.
> >
> > However, many libs are not needed for *all* DPDK applications, so I
> would like other apps to be able to build DPDK without superfluous
> libs.
> >
> > E.g. our StraightShaper CSP appliance is deployed at Layer 2, and
> doesn't use any of DPDK's L3 libs, so why should the DPDK L3 libs be
> considered CORE and thus included in our application? I suppose other
> companies are also using DPDK for other purposes than L3 routing, and
> don't need the DPDK L3 libs.
> >
> > Furthermore, I suppose that some Layer 3 applications use their own
> RIB/FIB/LPM libraries. Does OVS use DPDK's rib/fib/lpm libraries?
> >
> 
> 
> 
> > > Overall, if we want to make more libs optional, I would start
> looking
> > > at
> > > l3fwd and making it a bit more modular. I previously made its
> support
> > > for
> > > eventdev optional, we should do the same for lpm and fib. Beyond
> that,
> > > we
> > > need to decide what core really means.
> >
> > Yes - defining CORE is the key to setting the goal here.
> >
> > In my mind, CORE is the minimum requirement for running an absolutely
> minimal DPDK application.
> >
> > A primary DPDK application would probably need to do some packet I/O;
> but it might be a simple layer two bridge, not using any of the L3
> libs.
> >
> > And a secondary DPDK application might attach to a primary DPDK
> application only to work on its data structures, e.g. to collect
> statistics, but not do any packet processing, so that application
> doesn't need any of those libs (not even the ethdev lib).
> >
> > In reality, DPDK applications would probably need to build more libs
> than just CORE. But some application might need CORE + lib A, and some
> other application might need CORE + lib B. In essence, I don't want
> application A to drag around some unused lib B, and application B to
> drag around some unused lib A.
> >
> > It's an optimization only available at build time. Distros should
> continue providing all DPDK libs.
> >
> > There's also system testing and system attack surface to consider...
> all that bloat makes production systems more fragile and vulnerable.
> >
> 
> I largely agree, though I do think that trying to split primary-
> secondary
> as having different builds could lead to some headaches, so I'd push
> any
> work around that further down the road.

You are probably right that running a secondary process built differently than 
the primary process will cause an avalanche of new challenges, so I strongly 
agree to pushing it further down the road. I don't even know if there is any 
demand for such a secondary process. (We considered something like this for our 
application, but did something else instead.) Starting the secondary process 
with some additional run-time parameters will have to suffice.

> 
> Some thoughts on next steps:
> * From looks of 

Re: [PATCH 24.03 v2] build: track mandatory rather than optional libs

2023-11-06 Thread Bruce Richardson
On Mon, Nov 06, 2023 at 12:22:57PM +0100, Morten Brørup wrote:
> > From: Bruce Richardson [mailto:bruce.richard...@intel.com]
> > Sent: Monday, 6 November 2023 11.29
> > 
> > On Fri, Nov 03, 2023 at 09:19:53PM +0100, Morten Brørup wrote:
> > > > From: Bruce Richardson [mailto:bruce.richard...@intel.com]
> > > > Sent: Friday, 3 November 2023 19.09
> > > >
> > > > On Fri, Nov 03, 2023 at 06:31:30PM +0100, Morten Brørup wrote:
> > > > > > From: Bruce Richardson [mailto:bruce.richard...@intel.com]
> > > > > > Sent: Friday, 3 November 2023 17.52
> > > > > >
> > > > > > DPDK now has more optional libraries than mandatory ones, so
> > invert
> > > > the
> > > > > > list stored in the meson.build file from the optional ones to
> > the
> > > > > > "always_enable" ones. As well as being a shorter list:
> > > > > >
> > > > > > * we can remove the loop building up the "always_enable" list
> > > > > >   dynamically from the optional list
> > > > > > * it better aligns with the drivers/meson.build file which
> > > > maintains an
> > > > > >   always_enable list.
> > > > > >
> > > > > > Signed-off-by: Bruce Richardson 
> > > > >
> > > > > Excellent!
> > > > >
> > > > > It really shows how bloated DPDK CORE still is. I would like to
> > see
> > > > these go optional:
> > > > >
> > > >
> > > > For some I agree, but we need to decide what optional really means.
> > :-)
> > > >
> > > > For my mind, there are 3 (maybe 4) key components that need to be
> > built
> > > > for
> > > > me to consider a build to be a valid DPDK one:
> > > > * EAL obviously,
> > > > * testpmd, because everyone seems to use it
> > > > * l3fwd, because it's the most commonly referenced example and used for
> > > >   benchmarking, and build testing in test-meson-builds. (There are others,
> > > >   but they are pretty likely to build if l3fwd does!)
> > > > * dpdk-test - I feel this should always be buildable, but for me
> > it's
> > > > the
> > > >   optional 4th component.
> > > >
> > > > Now, the obvious one to relax here is l3fwd, since it is just an
> > > > example,
> > > > but I wonder if that may cause some heartache.
> > >
> > > I don't consider any DPDK lib CORE just because the lib is used by
> > testpmd and/or l3fwd. I agree that all libs should be included by
> > default, so you can run testpmd, l3fwd, and other apps and examples.
> > >
> > > However, many libs are not needed for *all* DPDK applications, so I
> > would like other apps to be able to build DPDK without superfluous
> > libs.
> > >
> > > E.g. our StraightShaper CSP appliance is deployed at Layer 2, and
> > doesn't use any of DPDK's L3 libs, so why should the DPDK L3 libs be
> > considered CORE and thus included in our application? I suppose other
> > companies are also using DPDK for other purposes than L3 routing, and
> > don't need the DPDK L3 libs.
> > >
> > > Furthermore, I suppose that some Layer 3 applications use their own
> > RIB/FIB/LPM libraries. Does OVS use DPDK's rib/fib/lpm libraries?
> > >
> > 
> > 
> > 
> > > > Overall, if we want to make more libs optional, I would start
> > looking
> > > > at
> > > > l3fwd and making it a bit more modular. I previously made its
> > support
> > > > for
> > > > eventdev optional, we should do the same for lpm and fib. Beyond
> > that,
> > > > we
> > > > need to decide what core really means.
> > >
> > > Yes - defining CORE is the key to setting the goal here.
> > >
> > > In my mind, CORE is the minimum requirement for running an absolutely
> > minimal DPDK application.
> > >
> > > A primary DPDK application would probably need to do some packet I/O;
> > but it might be a simple layer two bridge, not using any of the L3
> > libs.
> > >
> > > And a secondary DPDK application might attach to a primary DPDK
> > application only to work on its data structures, e.g. to collect
> > statistics, but not do any packet processing, so that application
> > doesn't need any of those libs (not even the ethdev lib).
> > >
> > > In reality, DPDK applications would probably need to build more libs
> > than just CORE. But some application might need CORE + lib A, and some
> > other application might need CORE + lib B. In essence, I don't want
> > application A to drag around some unused lib B, and application B to
> > drag around some unused lib A.
> > >
> > > It's an optimization only available at build time. Distros should
> > continue providing all DPDK libs.
> > >
> > > There's also system testing and system attack surface to consider...
> > all that bloat makes production systems more fragile and vulnerable.
> > >
> > 
> > I largely agree, though I do think that trying to split primary-
> > secondary
> > as having different builds could lead to some headaches, so I'd push
> > any
> > work around that further down the road.
> 
> You are probably right that running a secondary process built differently 
> than the primary process will cause an avalanche of new challenges, so I 
> strongly agree to pushing it further down the road. I

Re: [PATCH] doc/contributing: update RST text-wrapping guidelines

2023-11-06 Thread Thomas Monjalon
03/11/2023 14:42, Ferruh Yigit:
> On 11/3/2023 1:29 PM, Bruce Richardson wrote:
> > Update and clarify the guidelines on how to wrap lines in our RST docs.
> > We no longer limit lines to just 80 characters, and what is more
> > important than line length is the wrapping of sentences, starting a new
> > sentence on a new line, and wrapping at punctuation.
> > 
> > Signed-off-by: Bruce Richardson 
> 
> Acked-by: Ferruh Yigit 

I'm doing a lot of such minor improvements when merging patches.
I believe it makes the doc more comfortable to read and update.

Acked-by: Thomas Monjalon 




RE: [PATCH 24.03 v2] build: track mandatory rather than optional libs

2023-11-06 Thread Morten Brørup
> From: Bruce Richardson [mailto:bruce.richard...@intel.com]
> Sent: Monday, 6 November 2023 12.27
> 
> On Mon, Nov 06, 2023 at 12:22:57PM +0100, Morten Brørup wrote:
> > > From: Bruce Richardson [mailto:bruce.richard...@intel.com]
> > > Sent: Monday, 6 November 2023 11.29
> > >
> > > On Fri, Nov 03, 2023 at 09:19:53PM +0100, Morten Brørup wrote:
> > > > > From: Bruce Richardson [mailto:bruce.richard...@intel.com]
> > > > > Sent: Friday, 3 November 2023 19.09
> > > > >
> > > > > On Fri, Nov 03, 2023 at 06:31:30PM +0100, Morten Brørup wrote:
> > > > > > > From: Bruce Richardson [mailto:bruce.richard...@intel.com]
> > > > > > > Sent: Friday, 3 November 2023 17.52
> > > > > > >
> > > > > > > DPDK now has more optional libraries than mandatory ones,
> so
> > > invert
> > > > > the
> > > > > > > list stored in the meson.build file from the optional ones
> to
> > > the
> > > > > > > "always_enable" ones. As well as being a shorter list:
> > > > > > >
> > > > > > > * we can remove the loop building up the "always_enable"
> list
> > > > > > >   dynamically from the optional list
> > > > > > > * it better aligns with the drivers/meson.build file which
> > > > > maintains an
> > > > > > >   always_enable list.
> > > > > > >
> > > > > > > Signed-off-by: Bruce Richardson
> 
> > > > > >
> > > > > > Excellent!
> > > > > >
> > > > > > It really shows how bloated DPDK CORE still is. I would like
> to
> > > see
> > > > > these go optional:
> > > > > >
> > > > >
> > > > > For some I agree, but we need to decide what optional really
> means.
> > > :-)
> > > > >
> > > > > For my mind, there are 3 (maybe 4) key components that need to
> be
> > > built
> > > > > for
> > > > > me to consider a build to be a valid DPDK one:
> > > > > * EAL obviously,
> > > > > * testpmd, because everyone seems to use it
> > > > > * l3fwd, because it's the most commonly referenced example and used for
> > > > >   benchmarking, and build testing in test-meson-builds. (There are others,
> > > > >   but they are pretty likely to build if l3fwd does!)
> > > > > * dpdk-test - I feel this should always be buildable, but for
> me
> > > it's
> > > > > the
> > > > >   optional 4th component.
> > > > >
> > > > > Now, the obvious one to relax here is l3fwd, since it is just an
> > > > > example,
> > > > > but I wonder if that may cause some heartache.
> > > >
> > > > I don't consider any DPDK lib CORE just because the lib is used
> by
> > > testpmd and/or l3fwd. I agree that all libs should be included by
> > > default, so you can run testpmd, l3fwd, and other apps and
> examples.
> > > >
> > > > However, many libs are not needed for *all* DPDK applications, so
> I
> > > would like other apps to be able to build DPDK without superfluous
> > > libs.
> > > >
> > > > E.g. our StraightShaper CSP appliance is deployed at Layer 2, and
> > > doesn't use any of DPDK's L3 libs, so why should the DPDK L3 libs
> be
> > > considered CORE and thus included in our application? I suppose
> other
> > > companies are also using DPDK for other purposes than L3 routing,
> and
> > > don't need the DPDK L3 libs.
> > > >
> > > > Furthermore, I suppose that some Layer 3 applications use their
> own
> > > RIB/FIB/LPM libraries. Does OVS use DPDK's rib/fib/lpm libraries?
> > > >
> > >
> > > 
> > >
> > > > > Overall, if we want to make more libs optional, I would start
> > > looking
> > > > > at
> > > > > l3fwd and making it a bit more modular. I previously made its
> > > support
> > > > > for
> > > > > eventdev optional, we should do the same for lpm and fib.
> Beyond
> > > that,
> > > > > we
> > > > > need to decide what core really means.
> > > >
> > > > Yes - defining CORE is the key to setting the goal here.
> > > >
> > > > In my mind, CORE is the minimum requirement for running an absolutely
> > > minimal DPDK application.
> > > >
> > > > A primary DPDK application would probably need to do some packet
> I/O;
> > > but it might be a simple layer two bridge, not using any of the L3
> > > libs.
> > > >
> > > > And a secondary DPDK application might attach to a primary DPDK
> > > application only to work on its data structures, e.g. to collect
> > > statistics, but not do any packet processing, so that application
> > > doesn't need any of those libs (not even the ethdev lib).
> > > >
> > > > In reality, DPDK applications would probably need to build more
> libs
> > > than just CORE. But some application might need CORE + lib A, and
> some
> > > other application might need CORE + lib B. In essence, I don't want
> > > application A to drag around some unused lib B, and application B
> to
> > > drag around some unused lib A.
> > > >
> > > > It's an optimization only available at build time. Distros should
> > > continue providing all DPDK libs.
> > > >
> > > > There's also system testing and system attack surface to
> consider...
> > > all that bloat makes production systems more fragile and
> vulnerable.
> > > >
> > >
> > > I largely agree, though I do thin

Re: [PATCH] net/sfc: fix null dereference in syslog

2023-11-06 Thread Ferruh Yigit
On 11/4/2023 7:37 AM, Weiguo Li wrote:
> When ctx->sa is null, sfc_err(ctx->sa, ...) will trigger a null
> dereference inside the sfc_err macro. Use SFC_GENERIC_LOG(ERR, ...)
> to avoid that.
> 
> Fixes: 44db08d53be3 ("net/sfc: maintain controller to EFX interface mapping")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Weiguo Li 
>

Reviewed-by: Ferruh Yigit 

Applied to dpdk-next-net/main, thanks.
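
The shape of the applied fix, for context (a paraphrased sketch; the log
message is illustrative, not the actual diff):

    /* ctx->sa may be NULL here and sfc_err() dereferences it inside
     * the macro, so fall back to the generic logger in that case. */
    if (ctx->sa != NULL)
        sfc_err(ctx->sa, "controller/EFX interface mapping failed");
    else
        SFC_GENERIC_LOG(ERR, "controller/EFX interface mapping failed");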


RE: [PATCH v5] net/ice: fix crash on closing representor ports

2023-11-06 Thread Zhang, Qi Z



> -Original Message-
> From: Ye, MingjinX 
> Sent: Monday, November 6, 2023 6:00 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming ; Zhou, YidingX
> ; Ye, MingjinX ;
> sta...@dpdk.org; Zhang, Qi Z 
> Subject: [PATCH v5] net/ice: fix crash on closing representor ports
> 
> The data resource in struct rte_eth_dev is cleared and points to NULL when
> the DCF port is closed.
> 
> If the DCF representor port is closed after the DCF port is closed, a
> segmentation fault occurs because the representor port accesses the data
> resource released by the DCF port.
> 
> This patch fixes this issue by synchronizing the state of the DCF port and
> its representor ports to each other in real time when that state changes.
> 
> Fixes: 5674465a32c8 ("net/ice: add DCF VLAN handling")
> Fixes: da9cdcd1f372 ("net/ice: fix crash on representor port closing")
> Fixes: 7564d5509611 ("net/ice: add DCF hardware initialization")
> Fixes: c7e1a1a3bfeb ("net/ice: refactor DCF VLAN handling")
> Fixes: 1a86f4dbdf42 ("net/ice: support DCF device reset")

These fixlines do not make sense to me; I believe the issue dates from when
we enabled the port representor for DCF.
A patch that exposes an issue does not imply it caused the issue.

> Cc: sta...@dpdk.org
> 
> Signed-off-by: Mingjin Ye 
> ---
> v2: Reformat code to remove unneeded fixlines.
> ---
> v3: New solution.
> ---
> v4: Optimize v2 patch.
> ---
> v5: optimization.
> ---
>  drivers/net/ice/ice_dcf_ethdev.c | 30 --
>  drivers/net/ice/ice_dcf_ethdev.h |  3 ++
>  drivers/net/ice/ice_dcf_vf_representor.c | 50 ++--
>  3 files changed, 77 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
> index 065ec728c2..eea24ee3a9 100644
> --- a/drivers/net/ice/ice_dcf_ethdev.c
> +++ b/drivers/net/ice/ice_dcf_ethdev.c
> @@ -1618,6 +1618,26 @@ ice_dcf_free_repr_info(struct ice_dcf_adapter *dcf_adapter)
>   }
>  }
> 
> +int
> +ice_dcf_handle_vf_repr_close(struct ice_dcf_adapter *dcf_adapter,
> + uint16_t vf_id)
> +{
> + struct ice_dcf_repr_info *vf_rep_info;
> +
> + if (dcf_adapter->num_reprs >= vf_id) {
> + PMD_DRV_LOG(ERR, "Invalid VF id: %d", vf_id);
> + return -1;
> + }
> +
> + if (!dcf_adapter->repr_infos)
> + return 0;
> +
> + vf_rep_info = &dcf_adapter->repr_infos[vf_id];
> + vf_rep_info->vf_rep_eth_dev = NULL;
> +
> + return 0;
> +}
> +
>  static int
>  ice_dcf_init_repr_info(struct ice_dcf_adapter *dcf_adapter)
>  {
> @@ -1641,11 +1661,10 @@ ice_dcf_dev_close(struct rte_eth_dev *dev)
>   if (rte_eal_process_type() != RTE_PROC_PRIMARY)
>   return 0;
> 
> + ice_dcf_vf_repr_notify_all(adapter, false);
>   (void)ice_dcf_dev_stop(dev);
> 
>   ice_free_queues(dev);
> -
> - ice_dcf_free_repr_info(adapter);
>   ice_dcf_uninit_parent_adapter(dev);
>   ice_dcf_uninit_hw(dev, &adapter->real_hw);
> 
> @@ -1835,7 +1854,7 @@ ice_dcf_dev_reset(struct rte_eth_dev *dev)
>   ice_dcf_reset_hw(dev, hw);
>   }
> 
> - ret = ice_dcf_dev_uninit(dev);
> + ret = ice_dcf_dev_close(dev);
>   if (ret)
>   return ret;
> 
> @@ -1938,12 +1957,17 @@ ice_dcf_dev_init(struct rte_eth_dev *eth_dev)
>   }
> 
>   dcf_config_promisc(adapter, false, false);
> + ice_dcf_vf_repr_notify_all(adapter, true);
> +
>   return 0;
>  }
> 
>  static int
>  ice_dcf_dev_uninit(struct rte_eth_dev *eth_dev)
>  {
> + struct ice_dcf_adapter *adapter = eth_dev->data->dev_private;
> +
> + ice_dcf_free_repr_info(adapter);
>   ice_dcf_dev_close(eth_dev);
> 
>   return 0;
> diff --git a/drivers/net/ice/ice_dcf_ethdev.h
> b/drivers/net/ice/ice_dcf_ethdev.h
> index 4baaec4b8b..6dcbaac5eb 100644
> --- a/drivers/net/ice/ice_dcf_ethdev.h
> +++ b/drivers/net/ice/ice_dcf_ethdev.h
> @@ -60,6 +60,7 @@ struct ice_dcf_vf_repr {
>   struct rte_ether_addr mac_addr;
>   uint16_t switch_domain_id;
>   uint16_t vf_id;
> + bool dcf_valid;
> 
>   struct ice_dcf_vlan outer_vlan_info; /* DCF always handle outer VLAN
> */  }; @@ -80,6 +81,8 @@ int ice_dcf_vf_repr_init(struct rte_eth_dev
> *vf_rep_eth_dev, void *init_param);  int ice_dcf_vf_repr_uninit(struct
> rte_eth_dev *vf_rep_eth_dev);  int ice_dcf_vf_repr_init_vlan(struct
> rte_eth_dev *vf_rep_eth_dev);  void ice_dcf_vf_repr_stop_all(struct
> ice_dcf_adapter *dcf_adapter);
> +void ice_dcf_vf_repr_notify_all(struct ice_dcf_adapter *dcf_adapter,
> +bool valid); int ice_dcf_handle_vf_repr_close(struct ice_dcf_adapter
> +*dcf_adapter, uint16_t vf_id);
>  bool ice_dcf_adminq_need_retry(struct ice_adapter *ad);
> 
>  #endif /* _ICE_DCF_ETHDEV_H_ */
> diff --git a/drivers/net/ice/ice_dcf_vf_representor.c
> b/drivers/net/ice/ice_dcf_vf_representor.c
> index b9fcfc80ad..6c342798ac 100644
> --- a/drivers/net/ice/ice_dcf_vf_representor.c
> +++ b/drivers/net/ice/ice_dcf_vf_representor.c
> @@ -50,9 +50,32 

Re: [PATCH v2 1/2] mempool: fix internal function documentation

2023-11-06 Thread Andrew Rybchenko

On 10/23/23 12:38, Ferruh Yigit wrote:

static function `rte_mempool_do_generic_get()` returns zero on success,
not >=0 as its function comment documents.

Since this function is called by public APIs, the comment causes confusion
about the public API return value.

Fixing the internal function documentation for return value.

Fixes: af75078fece3 ("first public release")
Cc: sta...@dpdk.org

Reported-by: Mahesh Adulla 
Signed-off-by: Ferruh Yigit 
Reviewed-by: Morten Brørup 
Acked-by: Huisong Li 
---
  .mailmap  | 1 +
  lib/mempool/rte_mempool.h | 2 +-
  2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/.mailmap b/.mailmap
index 3f5bab26a81f..bfe451980f1c 100644
--- a/.mailmap
+++ b/.mailmap
@@ -836,6 +836,7 @@ Maciej Rabeda 
  Maciej Szwed 
  Madhu Chittim 
  Madhuker Mythri 
+Mahesh Adulla 
  Mahipal Challa 
  Mah Yock Gen 
  Mairtin o Loingsigh 
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index f70bf36080fb..86598bc639e6 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -1484,7 +1484,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
   * @param cache
   *   A pointer to a mempool cache structure. May be NULL if not needed.
   * @return
- *   - >=0: Success; number of objects supplied.
+ *   - 0: Success; number of objects supplied.


I think "number of objects supplied" does not make sense here any more.


   *   - <0: Error; code of driver dequeue function.
   */
  static __rte_always_inline int
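
For reference, a minimal usage sketch of the public wrapper built on this
helper, showing the all-or-nothing contract the corrected comment describes
(error handling is illustrative):

#include <rte_mempool.h>

/* rte_mempool_get_bulk() returns 0 only when all n objects were
 * dequeued; on failure nothing is taken from the pool. */
static int
use_pool(struct rte_mempool *mp)
{
	void *objs[32];

	if (rte_mempool_get_bulk(mp, objs, 32) != 0)
		return -1;	/* <0: no objects were dequeued */
	/* ... all 32 objects are valid here ... */
	rte_mempool_put_bulk(mp, objs, 32);
	return 0;
}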




Re: [PATCH v5 2/5] net/sfc: use new API to parse kvargs

2023-11-06 Thread fengchengwen
Hi Andrew,

On 2023/11/6 18:28, Andrew Rybchenko wrote:
> On 11/6/23 10:31, Chengwen Feng wrote:
>> The sfc_kvargs_process() and sfc_efx_dev_class_get() function could
>> handle both key=value and only-key, so they should use
>> rte_kvargs_process_opt() instead of rte_kvargs_process() to parse.
>>
>> Signed-off-by: Chengwen Feng 
>> ---
>>   drivers/common/sfc_efx/sfc_efx.c | 4 ++--
>>   drivers/net/sfc/sfc_kvargs.c | 2 +-
>>   2 files changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/common/sfc_efx/sfc_efx.c 
>> b/drivers/common/sfc_efx/sfc_efx.c
>> index 2dc5545760..3ebac909f1 100644
>> --- a/drivers/common/sfc_efx/sfc_efx.c
>> +++ b/drivers/common/sfc_efx/sfc_efx.c
>> @@ -52,8 +52,8 @@ sfc_efx_dev_class_get(struct rte_devargs *devargs)
>>   return dev_class;
>>     if (rte_kvargs_count(kvargs, RTE_DEVARGS_KEY_CLASS) != 0) {
>> -    rte_kvargs_process(kvargs, RTE_DEVARGS_KEY_CLASS,
>> -   sfc_efx_kvarg_dev_class_handler, &dev_class);
>> +    rte_kvargs_process_opt(kvargs, RTE_DEVARGS_KEY_CLASS,
>> +   sfc_efx_kvarg_dev_class_handler, &dev_class);
> 
> LGTM from code point of view, but I'm not sure that I understand the
> idea behind handling NULL value in sfc_efx_kvarg_dev_class_handler().
> 
> Cc: Vijay
> 
>>   }
>>     rte_kvargs_free(kvargs);
>> diff --git a/drivers/net/sfc/sfc_kvargs.c b/drivers/net/sfc/sfc_kvargs.c
>> index 783cb43ae6..24bb896179 100644
>> --- a/drivers/net/sfc/sfc_kvargs.c
>> +++ b/drivers/net/sfc/sfc_kvargs.c
>> @@ -70,7 +70,7 @@ sfc_kvargs_process(struct sfc_adapter *sa, const char 
>> *key_match,
>>   if (sa->kvargs == NULL)
>>   return 0;
>>   -    return -rte_kvargs_process(sa->kvargs, key_match, handler, 
>> opaque_arg);
>> +    return -rte_kvargs_process_opt(sa->kvargs, key_match, handler, 
>> opaque_arg);
> 
> It looks wrong to me since many handlers do not handle NULL strings
> gracefully. As I understand, some handlers were fixed to avoid crashes,
> and the correct fix would be to keep rte_kvargs_process() and remove
> unnecessary checks for NULL string values.

The scope is large; I suggest creating a new patchset later which removes
unnecessary checks for NULL string values.
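
To illustrate the difference under discussion: with rte_kvargs_process_opt(),
a handler must tolerate value == NULL for the only-key form. A minimal sketch
(handler name and "on" semantics are illustrative):

#include <stdbool.h>
#include <string.h>

/* value may be NULL when the user passed only the key,
 * e.g. "foo" instead of "foo=on". */
static int
flag_kvarg_handler(const char *key, const char *value, void *opaque)
{
	bool *flag = opaque;

	(void)key;
	if (value == NULL) {
		*flag = true;	/* bare key: treat as enabled */
		return 0;
	}
	*flag = strcmp(value, "on") == 0;
	return 0;
}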

> 
>>   }
>>     int
> 
> .


Re: [PATCH v2] app/testpmd: fix UDP cksum error for UFO enable

2023-11-06 Thread lihuisong (C)



在 2023/11/6 18:09, Ferruh Yigit 写道:

On 11/6/2023 4:13 AM, lihuisong (C) wrote:

在 2023/11/3 18:42, Ferruh Yigit 写道:

On 11/3/2023 9:09 AM, lihuisong (C) wrote:

Hi Ferruh,

Thanks for you review.


在 2023/11/3 9:31, Ferruh Yigit 写道:

On 8/2/2023 3:55 AM, Huisong Li wrote:

The command "tso set  " is used to enable UFO,
please
see commit ce8e6e742807 ("app/testpmd: support UFO in checksum
engine")

The above patch configures the RTE_MBUF_F_TX_UDP_SEG to enable UFO
only if
tso_segsz is set.


"The above patch sets the RTE_MBUF_F_TX_UDP_SEG in mbuf ol_flags, only
by checking if 'tso_segsz' is set, but missing check if UFO offload
(RTE_ETH_TX_OFFLOAD_UDP_TSO) supported by device."

Ack

Then tx_prepare() may call rte_net_intel_cksum_prepare()
to compute pseudo header checksum (because some PMDs may supports
TSO).


Not sure what do you mean by '(because some PMDs may supports TSO)'?

Do you mean something like following:
"RTE_MBUF_F_TX_UDP_SEG flag causes driver that supports TSO/UFO to
compute pseudo header checksum."

Ack

As a result, if the peer sends UDP packets, all packets with UDP
checksum
error are received for the PMDs only supported TSO.


"As a result, if device only supports TSO, but not UFO, UDP packet
checksum will be wrong."

Ack

So enabling UFO also depends on if driver has
RTE_ETH_TX_OFFLOAD_UDP_TSO
capability. Similarly, TSO also need to do like this.

In addition, this patch also fixes cmd_tso_set_parsed() for UFO to
make
it better to support TSO and UFO.

Fixes: ce8e6e742807 ("app/testpmd: support UFO in checksum engine")

Signed-off-by: Huisong Li 
---
    v2: add handle for tunnel TSO offload in process_inner_cksums

---
    app/test-pmd/cmdline.c  | 47
+
    app/test-pmd/csumonly.c | 11 --
    2 files changed, 33 insertions(+), 25 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 0d0723f659..8be593d405 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -4906,6 +4906,7 @@ cmd_tso_set_parsed(void *parsed_result,
    {
    struct cmd_tso_set_result *res = parsed_result;
    struct rte_eth_dev_info dev_info;
+    uint64_t offloads;
    int ret;
      if (port_id_is_invalid(res->port_id, ENABLED_WARN))
@@ -4922,37 +4923,37 @@ cmd_tso_set_parsed(void *parsed_result,
    if (ret != 0)
    return;
    -    if ((ports[res->port_id].tso_segsz != 0) &&
-    (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) ==
0) {
-    fprintf(stderr, "Error: TSO is not supported by port %d\n",
-    res->port_id);
-    return;
+    if (ports[res->port_id].tso_segsz != 0) {
+    if ((dev_info.tx_offload_capa & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+    RTE_ETH_TX_OFFLOAD_UDP_TSO)) == 0) {
+    fprintf(stderr, "Error: both TSO and UFO are not
supported by port %d\n",
+    res->port_id);
+    return;
+    }
+    /* display warnings if configuration is not supported by the
NIC */
+    if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO)
== 0)
+    fprintf(stderr, "Warning: port %d doesn't support TSO\n",
+    res->port_id);
+    if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_TSO)
== 0)
+    fprintf(stderr, "Warning: port %d doesn't support UFO\n",
+    res->port_id);


Requesting TSO/UFO by setting 'tso_segsz', but device capability
missing
is an error case, so OK to have first message.

But only supporting TSO or UFO is not an error case, not sure about
logging this. But even it is logged, I think it shouldn't be to stderr
or it should say "Warning: ", a regular logging can be done.

All right, will fix it in next version.

    }
      if (ports[res->port_id].tso_segsz == 0) {
    ports[res->port_id].dev_conf.txmode.offloads &=
-    ~RTE_ETH_TX_OFFLOAD_TCP_TSO;
-    printf("TSO for non-tunneled packets is disabled\n");
+    ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
RTE_ETH_TX_OFFLOAD_UDP_TSO);
+    printf("TSO and UFO for non-tunneled packets is disabled\n");
    } else {
-    ports[res->port_id].dev_conf.txmode.offloads |=
-    RTE_ETH_TX_OFFLOAD_TCP_TSO;
-    printf("TSO segment size for non-tunneled packets is %d\n",
+    offloads = (dev_info.tx_offload_capa &
RTE_ETH_TX_OFFLOAD_TCP_TSO) ?
+    RTE_ETH_TX_OFFLOAD_TCP_TSO : 0;
+    offloads |= (dev_info.tx_offload_capa &
RTE_ETH_TX_OFFLOAD_UDP_TSO) ?
+    RTE_ETH_TX_OFFLOAD_UDP_TSO : 0;
+    ports[res->port_id].dev_conf.txmode.offloads |= offloads;
+    printf("segment size for non-tunneled packets is %d\n",
    ports[res->port_id].tso_segsz);
    }
-    cmd_config_queue_tx_offloads(&ports[res->port_id]);
-
-    /* display warnings if configuration is not supported by the
NIC */
-    ret = eth_dev_info_get_print_err(res->port_id, &dev_info);
-    if (ret != 0)
-    

Re: [PATCH v2 7/7] doc: testpmd support event handling section

2023-11-06 Thread fengchengwen
Hi Huisong,

On 2023/11/6 17:28, lihuisong (C) wrote:
> 
> 在 2023/10/20 18:07, Chengwen Feng 写道:
>> Add new section of event handling, which documented the ethdev and
>> device events.
>>
>> Signed-off-by: Chengwen Feng 
>> ---
>>   doc/guides/testpmd_app_ug/event_handling.rst | 80 
>>   doc/guides/testpmd_app_ug/index.rst  |  1 +
>>   2 files changed, 81 insertions(+)
>>   create mode 100644 doc/guides/testpmd_app_ug/event_handling.rst
>>
>> diff --git a/doc/guides/testpmd_app_ug/event_handling.rst 
>> b/doc/guides/testpmd_app_ug/event_handling.rst
>> new file mode 100644
>> index 00..c116753ad0
>> --- /dev/null
>> +++ b/doc/guides/testpmd_app_ug/event_handling.rst
>> @@ -0,0 +1,80 @@
>> +..  SPDX-License-Identifier: BSD-3-Clause
>> +    Copyright(c) 2023 HiSilicon Limited.
>> +
>> +Event Handling
>> +==
>> +
>> +The ``testpmd`` application supports following two type event handling:
>> +
>> +ethdev events
>> +-
>> +
>> +The ``testpmd`` provide options "--print-event" and "--mask-event" to 
>> control
>> +whether display such as "Port x y event" when received "y" event on port 
>> "x".
>> +This is named as default processing.
>> +
>> +This section details the support events, unless otherwise specified, only 
>> the
>> +default processing is support.
>> +
>> +- ``RTE_ETH_EVENT_INTR_LSC``:
>> +  If device started with lsc enabled, the PMD will launch this event when it
>> +  detect link status changes.
>> +
>> +- ``RTE_ETH_EVENT_QUEUE_STATE``:
>> +  Used only within vhost PMD to report vring whether enabled.
> Used only within vhost PMD? it seems that this is only used by vhost.
> but ethdev lib says:
> /** queue state event (enabled/disabled) */
>     RTE_ETH_EVENT_QUEUE_STATE,
> testpmd is also a demo for user, so suggest that change this commnts to avoid 
> the confuesed by that.

OK, I think vhost could serve as an example, e.g.:
Used to notify that a queue's state has changed; for example, the vhost PMD
uses this event to report whether a vring is enabled.
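
A minimal application-side sketch of consuming this event (illustrative only;
testpmd itself just prints the event):

#include <stdio.h>
#include <rte_ethdev.h>

static int
queue_state_cb(uint16_t port_id, enum rte_eth_event_type type,
	       void *cb_arg, void *ret_param)
{
	(void)cb_arg;
	(void)ret_param;
	if (type == RTE_ETH_EVENT_QUEUE_STATE)
		printf("Port %u: queue state changed\n", port_id);
	return 0;
}

static void
register_queue_state_cb(uint16_t port_id)
{
	rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_QUEUE_STATE,
				      queue_state_cb, NULL);
}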

Thanks
Chengwen

>> +
>> +- ``RTE_ETH_EVENT_INTR_RESET``:
>> +  Used to report reset interrupt happened, this event only reported when the
>> +  PMD supports ``RTE_ETH_ERROR_HANDLE_MODE_PASSIVE``.
>> +
>> +- ``RTE_ETH_EVENT_VF_MBOX``:
>> +  Used as a PF to process mailbox messages of the VFs to which the PF 
>> belongs.
>> +
>> +- ``RTE_ETH_EVENT_INTR_RMV``:
>> +  Used to report device removal event. The ``testpmd`` will remove the port
>> +  later.
>> +
>> +- ``RTE_ETH_EVENT_NEW``:
>> +  Used to report port was probed event. The ``testpmd`` will setup the port
>> +  later.
>> +
>> +- ``RTE_ETH_EVENT_DESTROY``:
>> +  Used to report port was released event. The ``testpmd`` will changes the
>> +  port's status.
>> +
>> +- ``RTE_ETH_EVENT_MACSEC``:
>> +  Used to report MACsec offload related event.
>> +
>> +- ``RTE_ETH_EVENT_IPSEC``:
>> +  Used to report IPsec offload related event.
>> +
>> +- ``RTE_ETH_EVENT_FLOW_AGED``:
>> +  Used to report new aged-out flows was detected. Only valid with mlx5 PMD.
>> +
>> +- ``RTE_ETH_EVENT_RX_AVAIL_THRESH``:
>> +  Used to report available Rx descriptors was smaller than the threshold. 
>> Only
>> +  valid with mlx5 PMD.
>> +
>> +- ``RTE_ETH_EVENT_ERR_RECOVERING``:
>> +  Used to report error happened, and PMD will do recover after report this
>> +  event. The ``testpmd`` will stop packet forwarding when received the 
>> event.
>> +
>> +- ``RTE_ETH_EVENT_RECOVERY_SUCCESS``:
>> +  Used to report error recovery success. The ``testpmd`` will restart packet
>> +  forwarding when received the event.
>> +
>> +- ``RTE_ETH_EVENT_RECOVERY_FAILED``:
>> +  Used to report error recovery failed. The ``testpmd`` will display one
>> +  message to show which ports failed.
>> +
>> +.. note::
>> +
>> +   The ``RTE_ETH_EVENT_ERR_RECOVERING``, ``RTE_ETH_EVENT_RECOVERY_SUCCESS`` 
>> and
>> +   ``RTE_ETH_EVENT_RECOVERY_FAILED`` only reported when the PMD supports
>> +   ``RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE``.
>> +
>> +device events
>> +-
>> +
>> +Including two events ``RTE_DEV_EVENT_ADD`` and ``RTE_DEV_EVENT_ADD``, and
>> +enabled only when the ``testpmd`` stated with options "--hot-plug".
>> diff --git a/doc/guides/testpmd_app_ug/index.rst 
>> b/doc/guides/testpmd_app_ug/index.rst
>> index 1ac0d25d57..3c09448c4e 100644
>> --- a/doc/guides/testpmd_app_ug/index.rst
>> +++ b/doc/guides/testpmd_app_ug/index.rst
>> @@ -14,3 +14,4 @@ Testpmd Application User Guide
>>   build_app
>>   run_app
>>   testpmd_funcs
>> +    event_handling
> .


Re: [PATCH v2 5/7] app/testpmd: add error recovery usage demo

2023-11-06 Thread fengchengwen
Hi Huisong,

On 2023/11/1 12:08, lihuisong (C) wrote:
> 
> 在 2023/10/20 18:07, Chengwen Feng 写道:
>> This patch adds error recovery usage demo which will:
>> 1. stop packet forwarding when the RTE_ETH_EVENT_ERR_RECOVERING event
>>     is received.
>> 2. restart packet forwarding when the RTE_ETH_EVENT_RECOVERY_SUCCESS
>>     event is received.
>> 3. prompt the ports that fail to recovery and need to be removed when
>>     the RTE_ETH_EVENT_RECOVERY_FAILED event is received.
> Why not suggest that try to call dev_reset() or other way to recovery?

This was already discussed many times, which is the reason the
RTE_ETH_EVENT_RECOVERY_XXX events were introduced; please refer to the previous thread.

>>
>> In addition, a message is added to the printed information, requiring
>> no command to be executed during the error recovery.
>>
>> Signed-off-by: Chengwen Feng 
>> Acked-by: Konstantin Ananyev 
>> ---
>>   app/test-pmd/testpmd.c | 80 ++
>>   app/test-pmd/testpmd.h |  4 ++-
>>   2 files changed, 83 insertions(+), 1 deletion(-)
>>
>> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
>> index 595b77748c..39a25238e5 100644
>> --- a/app/test-pmd/testpmd.c
>> +++ b/app/test-pmd/testpmd.c
>> @@ -3942,6 +3942,77 @@ rmv_port_callback(void *arg)
>>   start_packet_forwarding(0);
>>   }
>>   +static int need_start_when_recovery_over;
>> +
>> +static bool
>> +has_port_in_err_recovering(void)
>> +{
>> +    struct rte_port *port;
>> +    portid_t pid;
>> +
>> +    RTE_ETH_FOREACH_DEV(pid) {
>> +    port = &ports[pid];
>> +    if (port->err_recovering)
>> +    return true;
>> +    }
>> +
>> +    return false;
>> +}
>> +
>> +static void
>> +err_recovering_callback(portid_t port_id)
>> +{
>> +    if (!has_port_in_err_recovering())
>> +    printf("Please stop executing any commands until recovery result 
>> events are received!\n");
>> +
>> +    ports[port_id].err_recovering = 1;
>> +    ports[port_id].recover_failed = 0;
>> +
>> +    /* To simplify implementation, stop forwarding regardless of whether 
>> the port is used. */
>> +    if (!test_done) {
>> +    printf("Stop packet forwarding because some ports are in error 
>> recovering!\n");
>> +    stop_packet_forwarding();
>> +    need_start_when_recovery_over = 1;
>> +    }
>> +}
>> +
>> +static void
>> +recover_success_callback(portid_t port_id)
>> +{
>> +    ports[port_id].err_recovering = 0;
>> +    if (has_port_in_err_recovering())
>> +    return;
>> +
>> +    if (need_start_when_recovery_over) {
>> +    printf("Recovery success! Restart packet forwarding!\n");
>> +    start_packet_forwarding(0);
> s/start_packet_forwarding(0)/start_packet_forwarding() ?

start_packet_forwarding() takes one parameter; 0 is the proper value here.

Thanks
Chengwen

>> +    need_start_when_recovery_over = 0;
>> +    } else {
>> +    printf("Recovery success!\n");
>> +    }
>> +}
>> +
>> +static void
>> +recover_failed_callback(portid_t port_id)
>> +{
>> +    struct rte_port *port;
>> +    portid_t pid;
>> +
>> +    ports[port_id].err_recovering = 0;
>> +    ports[port_id].recover_failed = 1;
>> +    if (has_port_in_err_recovering())
>> +    return;
>> +
>> +    need_start_when_recovery_over = 0;
>> +    printf("The ports:");
>> +    RTE_ETH_FOREACH_DEV(pid) {
>> +    port = &ports[pid];
>> +    if (port->recover_failed)
>> +    printf(" %u", pid);
>> +    }
>> +    printf(" recovery failed! Please remove them!\n");
>> +}
>> +
>>   /* This function is used by the interrupt thread */
>>   static int
>>   eth_event_callback(portid_t port_id, enum rte_eth_event_type type, void 
>> *param,
>> @@ -3997,6 +4068,15 @@ eth_event_callback(portid_t port_id, enum 
>> rte_eth_event_type type, void *param,
>>   }
>>   break;
>>   }
>> +    case RTE_ETH_EVENT_ERR_RECOVERING:
>> +    err_recovering_callback(port_id);
>> +    break;
>> +    case RTE_ETH_EVENT_RECOVERY_SUCCESS:
>> +    recover_success_callback(port_id);
>> +    break;
>> +    case RTE_ETH_EVENT_RECOVERY_FAILED:
>> +    recover_failed_callback(port_id);
>> +    break;
>>   default:
>>   break;
>>   }
>> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
>> index 09a36b90b8..42782d5a05 100644
>> --- a/app/test-pmd/testpmd.h
>> +++ b/app/test-pmd/testpmd.h
>> @@ -342,7 +342,9 @@ struct rte_port {
>>   uint8_t member_flag : 1, /**< bonding member port */
>>   bond_flag : 1, /**< port is bond device */
>>   fwd_mac_swap : 1, /**< swap packet MAC before forward */
>> -    update_conf : 1; /**< need to update bonding device 
>> configuration */
>> +    update_conf : 1, /**< need to update bonding device 
>> configuration */
>> +    err_recovering : 1, /**< port is in error recovering */
>> +    recover_failed : 1; /**< port recover failed */
>>   struct port_

[PATCH v3 3/7] net/bnxt: fix race-condition when report error recovery

2023-11-06 Thread Chengwen Feng
If the data path functions are set to dummy functions before the error
recovering event is reported, there may be a race condition with data
path threads. This patch fixes it by setting the data path functions to
dummy functions only after reporting the event.

Fixes: e11052f3a46f ("net/bnxt: support proactive error handling mode")
Cc: sta...@dpdk.org

Signed-off-by: Chengwen Feng 
Acked-by: Konstantin Ananyev 
Acked-by: Ajit Khaparde 
---
 drivers/net/bnxt/bnxt_cpr.c| 13 +++--
 drivers/net/bnxt/bnxt_ethdev.c |  4 ++--
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_cpr.c b/drivers/net/bnxt/bnxt_cpr.c
index 0733cf4df2..d8947d5b5f 100644
--- a/drivers/net/bnxt/bnxt_cpr.c
+++ b/drivers/net/bnxt/bnxt_cpr.c
@@ -168,14 +168,9 @@ void bnxt_handle_async_event(struct bnxt *bp,
PMD_DRV_LOG(INFO, "Port conn async event\n");
break;
case HWRM_ASYNC_EVENT_CMPL_EVENT_ID_RESET_NOTIFY:
-   /*
-* Avoid any rx/tx packet processing during firmware reset
-* operation.
-*/
-   bnxt_stop_rxtx(bp->eth_dev);
-
/* Ignore reset notify async events when stopping the port */
if (!bp->eth_dev->data->dev_started) {
+   bnxt_stop_rxtx(bp->eth_dev);
bp->flags |= BNXT_FLAG_FATAL_ERROR;
return;
}
@@ -184,6 +179,12 @@ void bnxt_handle_async_event(struct bnxt *bp,
 RTE_ETH_EVENT_ERR_RECOVERING,
 NULL);
 
+   /*
+* Avoid any rx/tx packet processing during firmware reset
+* operation.
+*/
+   bnxt_stop_rxtx(bp->eth_dev);
+
pthread_mutex_lock(&bp->err_recovery_lock);
event_data = data1;
/* timestamp_lo/hi values are in units of 100ms */
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 5c4d96d4b1..003a6eec11 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4616,14 +4616,14 @@ static void bnxt_check_fw_health(void *arg)
bp->flags |= BNXT_FLAG_FATAL_ERROR;
bp->flags |= BNXT_FLAG_FW_RESET;
 
-   bnxt_stop_rxtx(bp->eth_dev);
-
PMD_DRV_LOG(ERR, "Detected FW dead condition\n");
 
rte_eth_dev_callback_process(bp->eth_dev,
 RTE_ETH_EVENT_ERR_RECOVERING,
 NULL);
 
+   bnxt_stop_rxtx(bp->eth_dev);
+
if (bnxt_is_primary_func(bp))
wait_msec = info->primary_func_wait_period;
else
-- 
2.17.1



[PATCH v3 0/7] fix race-condition of proactive error handling mode

2023-11-06 Thread Chengwen Feng
This patchset fixes the race condition in the proactive error handling
mode; see the discussion thread [1].

[1] 
http://patchwork.dpdk.org/project/dpdk/patch/20230220060839.1267349-2-ashok.k.kal...@intel.com/

Chengwen Feng (7):
  ethdev: fix race-condition of proactive error handling mode
  net/hns3: replace fp ops config function
  net/bnxt: fix race-condition when report error recovery
  net/bnxt: use fp ops setup function
  app/testpmd: add error recovery usage demo
  app/testpmd: extract event handling to event.c
  doc: testpmd support event handling section

---
v3:
- adjust the usage of RTE_ETH_EVENT_QUEUE_STATE in 7/7 commit.
- add ack-by from Konstantin Ananyev, Ajit Khaparde and Huisong Li.
v2:
- extract event handling to event.c and document it, which address
  Ferruh's comment.
- add ack-by from Konstantin Ananyev and Dongdong Liu.

 app/test-pmd/event.c | 390 +++
 app/test-pmd/meson.build |   1 +
 app/test-pmd/parameters.c|  36 +-
 app/test-pmd/testpmd.c   | 247 +---
 app/test-pmd/testpmd.h   |  10 +-
 doc/guides/prog_guide/poll_mode_drv.rst  |  20 +-
 doc/guides/testpmd_app_ug/event_handling.rst |  81 
 doc/guides/testpmd_app_ug/index.rst  |   1 +
 drivers/net/bnxt/bnxt_cpr.c  |  18 +-
 drivers/net/bnxt/bnxt_ethdev.c   |   9 +-
 drivers/net/hns3/hns3_rxtx.c |  21 +-
 lib/ethdev/ethdev_driver.c   |   8 +
 lib/ethdev/ethdev_driver.h   |  10 +
 lib/ethdev/rte_ethdev.h  |  32 +-
 lib/ethdev/version.map   |   1 +
 15 files changed, 552 insertions(+), 333 deletions(-)
 create mode 100644 app/test-pmd/event.c
 create mode 100644 doc/guides/testpmd_app_ug/event_handling.rst

-- 
2.17.1



[PATCH v3 2/7] net/hns3: replace fp ops config function

2023-11-06 Thread Chengwen Feng
This patch replaces hns3_eth_dev_fp_ops_config() with
rte_eth_fp_ops_setup().

Cc: sta...@dpdk.org

Signed-off-by: Chengwen Feng 
Acked-by: Dongdong Liu 
Acked-by: Konstantin Ananyev 
Acked-by: Huisong Li 
---
 drivers/net/hns3/hns3_rxtx.c | 21 +++--
 1 file changed, 3 insertions(+), 18 deletions(-)

diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 09b7e90c70..ecee74cf11 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4443,21 +4443,6 @@ hns3_trace_rxtx_function(struct rte_eth_dev *dev)
 rx_mode.info, tx_mode.info);
 }
 
-static void
-hns3_eth_dev_fp_ops_config(const struct rte_eth_dev *dev)
-{
-   struct rte_eth_fp_ops *fpo = rte_eth_fp_ops;
-   uint16_t port_id = dev->data->port_id;
-
-   fpo[port_id].rx_pkt_burst = dev->rx_pkt_burst;
-   fpo[port_id].tx_pkt_burst = dev->tx_pkt_burst;
-   fpo[port_id].tx_pkt_prepare = dev->tx_pkt_prepare;
-   fpo[port_id].rx_descriptor_status = dev->rx_descriptor_status;
-   fpo[port_id].tx_descriptor_status = dev->tx_descriptor_status;
-   fpo[port_id].rxq.data = dev->data->rx_queues;
-   fpo[port_id].txq.data = dev->data->tx_queues;
-}
-
 void
 hns3_set_rxtx_function(struct rte_eth_dev *eth_dev)
 {
@@ -4480,7 +4465,7 @@ hns3_set_rxtx_function(struct rte_eth_dev *eth_dev)
}
 
hns3_trace_rxtx_function(eth_dev);
-   hns3_eth_dev_fp_ops_config(eth_dev);
+   rte_eth_fp_ops_setup(eth_dev);
 }
 
 void
@@ -4833,7 +4818,7 @@ hns3_stop_tx_datapath(struct rte_eth_dev *dev)
 {
dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
dev->tx_pkt_prepare = NULL;
-   hns3_eth_dev_fp_ops_config(dev);
+   rte_eth_fp_ops_setup(dev);
 
if (rte_eal_process_type() == RTE_PROC_SECONDARY)
return;
@@ -4850,7 +4835,7 @@ hns3_start_tx_datapath(struct rte_eth_dev *dev)
 {
dev->tx_pkt_burst = hns3_get_tx_function(dev);
dev->tx_pkt_prepare = hns3_get_tx_prepare(dev);
-   hns3_eth_dev_fp_ops_config(dev);
+   rte_eth_fp_ops_setup(dev);
 
if (rte_eal_process_type() == RTE_PROC_SECONDARY)
return;
-- 
2.17.1



[PATCH v3 4/7] net/bnxt: use fp ops setup function

2023-11-06 Thread Chengwen Feng
Use rte_eth_fp_ops_setup() instead of directly manipulating
rte_eth_fp_ops variable.

Cc: sta...@dpdk.org

Signed-off-by: Chengwen Feng 
Acked-by: Konstantin Ananyev 
Acked-by: Ajit Khaparde 
Acked-by: Huisong Li 
---
 drivers/net/bnxt/bnxt_cpr.c| 5 +
 drivers/net/bnxt/bnxt_ethdev.c | 5 +
 2 files changed, 2 insertions(+), 8 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_cpr.c b/drivers/net/bnxt/bnxt_cpr.c
index d8947d5b5f..3a08028331 100644
--- a/drivers/net/bnxt/bnxt_cpr.c
+++ b/drivers/net/bnxt/bnxt_cpr.c
@@ -416,10 +416,7 @@ void bnxt_stop_rxtx(struct rte_eth_dev *eth_dev)
eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
 
-   rte_eth_fp_ops[eth_dev->data->port_id].rx_pkt_burst =
-   eth_dev->rx_pkt_burst;
-   rte_eth_fp_ops[eth_dev->data->port_id].tx_pkt_burst =
-   eth_dev->tx_pkt_burst;
+   rte_eth_fp_ops_setup(eth_dev);
rte_mb();
 
/* Allow time for threads to exit the real burst functions. */
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 003a6eec11..9d9b9ae8cf 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4428,10 +4428,7 @@ static void bnxt_dev_recover(void *arg)
if (rc)
goto err_start;
 
-   rte_eth_fp_ops[bp->eth_dev->data->port_id].rx_pkt_burst =
-   bp->eth_dev->rx_pkt_burst;
-   rte_eth_fp_ops[bp->eth_dev->data->port_id].tx_pkt_burst =
-   bp->eth_dev->tx_pkt_burst;
+   rte_eth_fp_ops_setup(bp->eth_dev);
rte_mb();
 
PMD_DRV_LOG(INFO, "Port: %u Recovered from FW reset\n",
-- 
2.17.1



[PATCH v3 6/7] app/testpmd: extract event handling to event.c

2023-11-06 Thread Chengwen Feng
This patch extracts the event handling (including eth-events and dev-events)
into a new file, 'event.c'.

Signed-off-by: Chengwen Feng 
Acked-by: Huisong Li 
---
 app/test-pmd/event.c  | 390 ++
 app/test-pmd/meson.build  |   1 +
 app/test-pmd/parameters.c |  36 +---
 app/test-pmd/testpmd.c| 327 +---
 app/test-pmd/testpmd.h|   6 +
 5 files changed, 407 insertions(+), 353 deletions(-)
 create mode 100644 app/test-pmd/event.c

diff --git a/app/test-pmd/event.c b/app/test-pmd/event.c
new file mode 100644
index 00..8393e105d7
--- /dev/null
+++ b/app/test-pmd/event.c
@@ -0,0 +1,390 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 HiSilicon Limited
+ */
+
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#ifdef RTE_NET_MLX5
+#include "mlx5_testpmd.h"
+#endif
+
+#include "testpmd.h"
+
+/* Pretty printing of ethdev events */
+static const char * const eth_event_desc[] = {
+   [RTE_ETH_EVENT_UNKNOWN] = "unknown",
+   [RTE_ETH_EVENT_INTR_LSC] = "link state change",
+   [RTE_ETH_EVENT_QUEUE_STATE] = "queue state",
+   [RTE_ETH_EVENT_INTR_RESET] = "reset",
+   [RTE_ETH_EVENT_VF_MBOX] = "VF mbox",
+   [RTE_ETH_EVENT_IPSEC] = "IPsec",
+   [RTE_ETH_EVENT_MACSEC] = "MACsec",
+   [RTE_ETH_EVENT_INTR_RMV] = "device removal",
+   [RTE_ETH_EVENT_NEW] = "device probed",
+   [RTE_ETH_EVENT_DESTROY] = "device released",
+   [RTE_ETH_EVENT_FLOW_AGED] = "flow aged",
+   [RTE_ETH_EVENT_RX_AVAIL_THRESH] = "RxQ available descriptors threshold 
reached",
+   [RTE_ETH_EVENT_ERR_RECOVERING] = "error recovering",
+   [RTE_ETH_EVENT_RECOVERY_SUCCESS] = "error recovery successful",
+   [RTE_ETH_EVENT_RECOVERY_FAILED] = "error recovery failed",
+   [RTE_ETH_EVENT_MAX] = NULL,
+};
+
+/*
+ * Display or mask ether events
+ * Default to all events except VF_MBOX
+ */
+uint32_t event_print_mask = (UINT32_C(1) << RTE_ETH_EVENT_UNKNOWN) |
+   (UINT32_C(1) << RTE_ETH_EVENT_INTR_LSC) |
+   (UINT32_C(1) << RTE_ETH_EVENT_QUEUE_STATE) |
+   (UINT32_C(1) << RTE_ETH_EVENT_INTR_RESET) |
+   (UINT32_C(1) << RTE_ETH_EVENT_IPSEC) |
+   (UINT32_C(1) << RTE_ETH_EVENT_MACSEC) |
+   (UINT32_C(1) << RTE_ETH_EVENT_INTR_RMV) |
+   (UINT32_C(1) << RTE_ETH_EVENT_FLOW_AGED) |
+   (UINT32_C(1) << RTE_ETH_EVENT_ERR_RECOVERING) |
+   (UINT32_C(1) << RTE_ETH_EVENT_RECOVERY_SUCCESS) |
+   (UINT32_C(1) << RTE_ETH_EVENT_RECOVERY_FAILED);
+
+int
+get_event_name_mask(const char *name, uint32_t *mask)
+{
+   if (!strcmp(name, "unknown"))
+   *mask = UINT32_C(1) << RTE_ETH_EVENT_UNKNOWN;
+   else if (!strcmp(name, "intr_lsc"))
+   *mask = UINT32_C(1) << RTE_ETH_EVENT_INTR_LSC;
+   else if (!strcmp(name, "queue_state"))
+   *mask = UINT32_C(1) << RTE_ETH_EVENT_QUEUE_STATE;
+   else if (!strcmp(name, "intr_reset"))
+   *mask = UINT32_C(1) << RTE_ETH_EVENT_INTR_RESET;
+   else if (!strcmp(name, "vf_mbox"))
+   *mask = UINT32_C(1) << RTE_ETH_EVENT_VF_MBOX;
+   else if (!strcmp(name, "ipsec"))
+   *mask = UINT32_C(1) << RTE_ETH_EVENT_IPSEC;
+   else if (!strcmp(name, "macsec"))
+   *mask = UINT32_C(1) << RTE_ETH_EVENT_MACSEC;
+   else if (!strcmp(name, "intr_rmv"))
+   *mask = UINT32_C(1) << RTE_ETH_EVENT_INTR_RMV;
+   else if (!strcmp(name, "dev_probed"))
+   *mask = UINT32_C(1) << RTE_ETH_EVENT_NEW;
+   else if (!strcmp(name, "dev_released"))
+   *mask = UINT32_C(1) << RTE_ETH_EVENT_DESTROY;
+   else if (!strcmp(name, "flow_aged"))
+   *mask = UINT32_C(1) << RTE_ETH_EVENT_FLOW_AGED;
+   else if (!strcmp(name, "err_recovering"))
+   *mask = UINT32_C(1) << RTE_ETH_EVENT_ERR_RECOVERING;
+   else if (!strcmp(name, "recovery_success"))
+   *mask = UINT32_C(1) << RTE_ETH_EVENT_RECOVERY_SUCCESS;
+   else if (!strcmp(name, "recovery_failed"))
+   *mask = UINT32_C(1) << RTE_ETH_EVENT_RECOVERY_FAILED;
+   else if (!strcmp(name, "all"))
+   *mask = ~UINT32_C(0);
+   else
+   return -1;
+
+   return 0;
+}
+
+static void
+rmv_port_callback(void *arg)
+{
+   int need_to_start = 0;
+   int org_no_link_check = no_link_check;
+   portid_t port_id = (intptr_t)arg;
+   struct rte_eth_dev_info dev_info;
+   int ret;
+
+   RTE_ETH_VALID_PORTID_OR_RET(port_id);
+
+   if (!test_done && port_is_forwarding(port_id)) {
+   need_to_start = 1;
+   stop_packet_forwarding();
+   }
+   no_link_check = 1;
+   stop_port(port_id);
+   no_link_check = org_no_link_check;
+
+   ret = eth_dev_info_

[PATCH v3 1/7] ethdev: fix race-condition of proactive error handling mode

2023-11-06 Thread Chengwen Feng
In the proactive error handling mode, the PMD will set the data path
pointers to dummy functions and then try recovery; in this period the
application may still be invoking data path APIs. This introduces a
race condition with the data path which may lead to a crash [1].

Although the PMD added a delay after setting the data path pointers to
cover the above race condition, that only reduces the probability; it
doesn't solve the problem.

To solve the race condition fundamentally, the following requirements
are added (a minimal application-side sketch follows the list):
1. The PMD should set the data path pointers to dummy functions after
   reporting the RTE_ETH_EVENT_ERR_RECOVERING event.
2. The application should stop data path API invocation when processing
   the RTE_ETH_EVENT_ERR_RECOVERING event.
3. The PMD should set the data path pointers to valid functions before
   reporting the RTE_ETH_EVENT_RECOVERY_SUCCESS event.
4. The application should enable data path API invocation when processing
   the RTE_ETH_EVENT_RECOVERY_SUCCESS event.
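
A minimal application-side sketch of requirements 2 and 4 (the
pause/resume helpers are hypothetical, application-specific code):

#include <rte_ethdev.h>

/* hypothetical application helpers, assumed to exist */
void app_pause_datapath(uint16_t port_id);
void app_resume_datapath(uint16_t port_id);

static int
recovery_event_cb(uint16_t port_id, enum rte_eth_event_type type,
		  void *cb_arg, void *ret_param)
{
	(void)cb_arg;
	(void)ret_param;
	switch (type) {
	case RTE_ETH_EVENT_ERR_RECOVERING:
		/* requirement 2: stop calling Rx/Tx burst on this port */
		app_pause_datapath(port_id);
		break;
	case RTE_ETH_EVENT_RECOVERY_SUCCESS:
		/* requirement 4: data path API may be invoked again */
		app_resume_datapath(port_id);
		break;
	default:
		break;
	}
	return 0;
}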

Also, this patch introduces a driver-internal function,
rte_eth_fp_ops_setup(), which is used as a helper function by PMDs.

[1] 
http://patchwork.dpdk.org/project/dpdk/patch/20230220060839.1267349-2-ashok.k.kal...@intel.com/

Fixes: eb0d471a8941 ("ethdev: add proactive error handling mode")
Cc: sta...@dpdk.org

Signed-off-by: Chengwen Feng 
Acked-by: Konstantin Ananyev 
Acked-by: Huisong Li 
---
 doc/guides/prog_guide/poll_mode_drv.rst | 20 +++-
 lib/ethdev/ethdev_driver.c  |  8 +++
 lib/ethdev/ethdev_driver.h  | 10 
 lib/ethdev/rte_ethdev.h | 32 +++--
 lib/ethdev/version.map  |  1 +
 5 files changed, 46 insertions(+), 25 deletions(-)

diff --git a/doc/guides/prog_guide/poll_mode_drv.rst 
b/doc/guides/prog_guide/poll_mode_drv.rst
index c145a9066c..e380ff135a 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -638,14 +638,9 @@ different from the application invokes recovery in PASSIVE 
mode,
 the PMD automatically recovers from error in PROACTIVE mode,
 and only a small amount of work is required for the application.
 
-During error detection and automatic recovery,
-the PMD sets the data path pointers to dummy functions
-(which will prevent the crash),
-and also make sure the control path operations fail with a return code 
``-EBUSY``.
-
-Because the PMD recovers automatically,
-the application can only sense that the data flow is disconnected for a while
-and the control API returns an error in this period.
+During error detection and automatic recovery, the PMD sets the data path
+pointers to dummy functions and also makes sure the control path operations
+fail with a return code ``-EBUSY``.
 
 In order to sense the error happening/recovering,
 as well as to restore some additional configuration,
@@ -653,9 +648,9 @@ three events are available:
 
 ``RTE_ETH_EVENT_ERR_RECOVERING``
Notify the application that an error is detected
-   and the recovery is being started.
+   and the recovery is about to start.
Upon receiving the event, the application should not invoke
-   any control path function until receiving
+   any control and data path API until receiving
``RTE_ETH_EVENT_RECOVERY_SUCCESS`` or ``RTE_ETH_EVENT_RECOVERY_FAILED`` 
event.
 
 .. note::
@@ -666,8 +661,9 @@ three events are available:
 
 ``RTE_ETH_EVENT_RECOVERY_SUCCESS``
Notify the application that the recovery from error is successful,
-   the PMD already re-configures the port,
-   and the effect is the same as a restart operation.
+   the PMD already re-configures the port.
+   The application should restore some additional configuration, and then
+   enable data path API invocation.
 
 ``RTE_ETH_EVENT_RECOVERY_FAILED``
Notify the application that the recovery from error failed,
diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c
index fff4b7b4cd..65ead7b910 100644
--- a/lib/ethdev/ethdev_driver.c
+++ b/lib/ethdev/ethdev_driver.c
@@ -537,6 +537,14 @@ rte_eth_dma_zone_free(const struct rte_eth_dev *dev, const 
char *ring_name,
return rc;
 }
 
+void
+rte_eth_fp_ops_setup(struct rte_eth_dev *dev)
+{
+   if (dev == NULL)
+   return;
+   eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
+}
+
 const struct rte_memzone *
 rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name,
 uint16_t queue_id, size_t size, unsigned int align,
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index b482cd12bb..eaf2c9ca6d 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1636,6 +1636,16 @@ int
 rte_eth_dma_zone_free(const struct rte_eth_dev *eth_dev, const char *name,
 uint16_t queue_id);
 
+/**
+ * @internal
+ * Setup eth fast-path API to ethdev values.
+ *
+ * @param dev
+ *  Pointer to struct rte_eth_dev.
+ */
+__rte_internal
+void rte_eth_fp_ops_setup(struct rte_eth_dev *

[PATCH v3 5/7] app/testpmd: add error recovery usage demo

2023-11-06 Thread Chengwen Feng
This patch adds an error recovery usage demo which will:
1. stop packet forwarding when the RTE_ETH_EVENT_ERR_RECOVERING event
   is received.
2. restart packet forwarding when the RTE_ETH_EVENT_RECOVERY_SUCCESS
   event is received.
3. prompt for the ports that failed to recover and need to be removed when
   the RTE_ETH_EVENT_RECOVERY_FAILED event is received.

In addition, a message is added to the printed information, requesting
that no command be executed during the error recovery.

Signed-off-by: Chengwen Feng 
Acked-by: Konstantin Ananyev 
---
 app/test-pmd/testpmd.c | 80 ++
 app/test-pmd/testpmd.h |  4 ++-
 2 files changed, 83 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 9e4e99e53b..a45c411398 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -3941,6 +3941,77 @@ rmv_port_callback(void *arg)
start_packet_forwarding(0);
 }
 
+static int need_start_when_recovery_over;
+
+static bool
+has_port_in_err_recovering(void)
+{
+   struct rte_port *port;
+   portid_t pid;
+
+   RTE_ETH_FOREACH_DEV(pid) {
+   port = &ports[pid];
+   if (port->err_recovering)
+   return true;
+   }
+
+   return false;
+}
+
+static void
+err_recovering_callback(portid_t port_id)
+{
+   if (!has_port_in_err_recovering())
+   printf("Please stop executing any commands until recovery 
result events are received!\n");
+
+   ports[port_id].err_recovering = 1;
+   ports[port_id].recover_failed = 0;
+
+   /* To simplify implementation, stop forwarding regardless of whether 
the port is used. */
+   if (!test_done) {
+   printf("Stop packet forwarding because some ports are in error 
recovering!\n");
+   stop_packet_forwarding();
+   need_start_when_recovery_over = 1;
+   }
+}
+
+static void
+recover_success_callback(portid_t port_id)
+{
+   ports[port_id].err_recovering = 0;
+   if (has_port_in_err_recovering())
+   return;
+
+   if (need_start_when_recovery_over) {
+   printf("Recovery success! Restart packet forwarding!\n");
+   start_packet_forwarding(0);
+   need_start_when_recovery_over = 0;
+   } else {
+   printf("Recovery success!\n");
+   }
+}
+
+static void
+recover_failed_callback(portid_t port_id)
+{
+   struct rte_port *port;
+   portid_t pid;
+
+   ports[port_id].err_recovering = 0;
+   ports[port_id].recover_failed = 1;
+   if (has_port_in_err_recovering())
+   return;
+
+   need_start_when_recovery_over = 0;
+   printf("The ports:");
+   RTE_ETH_FOREACH_DEV(pid) {
+   port = &ports[pid];
+   if (port->recover_failed)
+   printf(" %u", pid);
+   }
+   printf(" recovery failed! Please remove them!\n");
+}
+
 /* This function is used by the interrupt thread */
 static int
 eth_event_callback(portid_t port_id, enum rte_eth_event_type type, void *param,
@@ -3996,6 +4067,15 @@ eth_event_callback(portid_t port_id, enum 
rte_eth_event_type type, void *param,
}
break;
}
+   case RTE_ETH_EVENT_ERR_RECOVERING:
+   err_recovering_callback(port_id);
+   break;
+   case RTE_ETH_EVENT_RECOVERY_SUCCESS:
+   recover_success_callback(port_id);
+   break;
+   case RTE_ETH_EVENT_RECOVERY_FAILED:
+   recover_failed_callback(port_id);
+   break;
default:
break;
}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 9b10a9ea1c..b8a0a4715a 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -342,7 +342,9 @@ struct rte_port {
uint8_t member_flag : 1, /**< bonding member port */
bond_flag : 1, /**< port is bond device */
fwd_mac_swap : 1, /**< swap packet MAC before 
forward */
-   update_conf : 1; /**< need to update bonding 
device configuration */
+   update_conf : 1, /**< need to update bonding 
device configuration */
+   err_recovering : 1, /**< port is in error 
recovering */
+   recover_failed : 1; /**< port recover failed */
struct port_template*pattern_templ_list; /**< Pattern templates. */
struct port_template*actions_templ_list; /**< Actions templates. */
struct port_table   *table_list; /**< Flow tables. */
-- 
2.17.1



[PATCH v3 7/7] doc: testpmd support event handling section

2023-11-06 Thread Chengwen Feng
Add new section of event handling, which documented the ethdev and
device events.

Signed-off-by: Chengwen Feng 
---
 doc/guides/testpmd_app_ug/event_handling.rst | 81 
 doc/guides/testpmd_app_ug/index.rst  |  1 +
 2 files changed, 82 insertions(+)
 create mode 100644 doc/guides/testpmd_app_ug/event_handling.rst

diff --git a/doc/guides/testpmd_app_ug/event_handling.rst 
b/doc/guides/testpmd_app_ug/event_handling.rst
new file mode 100644
index 00..1c39e0c486
--- /dev/null
+++ b/doc/guides/testpmd_app_ug/event_handling.rst
@@ -0,0 +1,81 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+Copyright(c) 2023 HiSilicon Limited.
+
+Event Handling
+==
+
+The ``testpmd`` application supports the following two types of event handling:
+
+ethdev events
+-
+
+The ``testpmd`` provides the options "--print-event" and "--mask-event" to
+control whether a message such as "Port x y event" is displayed when event "y"
+is received on port "x". This is named the default processing.
+
+This section details the supported events; unless otherwise specified, only the
+default processing is supported.
+
+- ``RTE_ETH_EVENT_INTR_LSC``:
+  If the device was started with lsc enabled, the PMD will launch this event
+  when it detects link status changes.
+
+- ``RTE_ETH_EVENT_QUEUE_STATE``:
+  Used to notify that a queue's state has changed; for example, the vhost PMD
+  uses this event to report whether a vring is enabled.
+
+- ``RTE_ETH_EVENT_INTR_RESET``:
+  Used to report that a reset interrupt happened; this event is only reported
+  when the PMD supports ``RTE_ETH_ERROR_HANDLE_MODE_PASSIVE``.
+
+- ``RTE_ETH_EVENT_VF_MBOX``:
+  Used as a PF to process mailbox messages of the VFs to which the PF belongs.
+
+- ``RTE_ETH_EVENT_INTR_RMV``:
+  Used to report device removal event. The ``testpmd`` will remove the port
+  later.
+
+- ``RTE_ETH_EVENT_NEW``:
+  Used to report that a port was probed. The ``testpmd`` will set up the port
+  later.
+
+- ``RTE_ETH_EVENT_DESTROY``:
+  Used to report that a port was released. The ``testpmd`` will change the
+  port's status.
+
+- ``RTE_ETH_EVENT_MACSEC``:
+  Used to report MACsec offload related event.
+
+- ``RTE_ETH_EVENT_IPSEC``:
+  Used to report IPsec offload related event.
+
+- ``RTE_ETH_EVENT_FLOW_AGED``:
+  Used to report that new aged-out flows were detected. Only valid with the mlx5 PMD.
+
+- ``RTE_ETH_EVENT_RX_AVAIL_THRESH``:
+  Used to report that the number of available Rx descriptors dropped below the
+  threshold. Only valid with the mlx5 PMD.
+
+- ``RTE_ETH_EVENT_ERR_RECOVERING``:
+  Used to report that an error happened and the PMD will recover after reporting
+  this event. The ``testpmd`` stops packet forwarding when the event is received.
+
+- ``RTE_ETH_EVENT_RECOVERY_SUCCESS``:
+  Used to report that error recovery succeeded. The ``testpmd`` will restart
+  packet forwarding when the event is received.
+
+- ``RTE_ETH_EVENT_RECOVERY_FAILED``:
+  Used to report that error recovery failed. The ``testpmd`` will display a
+  message showing which ports failed.
+
+.. note::
+
+   The ``RTE_ETH_EVENT_ERR_RECOVERING``, ``RTE_ETH_EVENT_RECOVERY_SUCCESS`` and
+   ``RTE_ETH_EVENT_RECOVERY_FAILED`` events are only reported when the PMD supports
+   ``RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE``.
+
+device events
+-
+
+Includes two events, ``RTE_DEV_EVENT_ADD`` and ``RTE_DEV_EVENT_REMOVE``, which are
+enabled only when ``testpmd`` is started with the option "--hot-plug".
diff --git a/doc/guides/testpmd_app_ug/index.rst 
b/doc/guides/testpmd_app_ug/index.rst
index 1ac0d25d57..3c09448c4e 100644
--- a/doc/guides/testpmd_app_ug/index.rst
+++ b/doc/guides/testpmd_app_ug/index.rst
@@ -14,3 +14,4 @@ Testpmd Application User Guide
 build_app
 run_app
 testpmd_funcs
+event_handling
-- 
2.17.1



Re: [PATCH] eal/linux: verify mmu type for DPDK support (ppc64le)

2023-11-06 Thread Thomas Monjalon
23/10/2023 23:59, David Christensen:
> 
> On 10/17/23 5:39 AM, Thomas Monjalon wrote:
> > I feel this function should not be implemented in the common EAL.
> > What about adding a new function in lib/eal/ppc/ ?
> > And add the "return true" for other architectures?
> 
> Would it be more appropriate in the lib/eal/common level or 
> lib/eal/linux only?  I would expect the MMU requirement should apply to 
> FreeBSD on ppc64le as well but IBM doesn't support or test FreeBSD 
> internally.

Even if you are not testing it, I don't think you should restrict
the code change to Linux.




Re: [PATCH v3] eal/linux: verify mmu type for DPDK support (ppc64le)

2023-11-06 Thread Thomas Monjalon
24/10/2023 19:43, David Christensen:
> IBM POWER systems support more than one type of memory management unit
> (MMU).  The Power ISA 3.0 specification, which applies to P9 and later
> CPUs, defined a new Radix MMU which, among other things, allows an
> anonymous memory page mapping to be converted into a hugepage mapping
> at a specific address. This is a required feature in DPDK so we need
> to test the MMU type when POWER systems are used and provide a more
> useful error message for the user when running on an unsupported
> system.
> 
> Bugzilla ID: 1221
> 
> Suggested-by: Thomas Monjalon 
> Signed-off-by: David Christensen 
> ---
> --- a/lib/eal/linux/eal.c
> +++ b/lib/eal/linux/eal.c
> + /* verify if DPDK supported on architecture MMU */
> + if (!eal_mmu_supported_linux_arch()) {
> + rte_eal_init_alert("unsupported MMU type.");
> + rte_errno = ENOTSUP;
> + return -1;
> + }

I don't think we should restrict the MMU check to Linux.
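
For reference, a sketch of the probe such an arch-level helper could wrap
(Linux-specific parsing shown; this assumes the "MMU : Radix" line that
ppc64 Linux exposes in /proc/cpuinfo, and the function name is illustrative):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool
ppc_radix_mmu_enabled(void)
{
	char line[128];
	bool radix = false;
	FILE *f = fopen("/proc/cpuinfo", "r");

	if (f == NULL)
		return false;
	while (fgets(line, sizeof(line), f) != NULL) {
		if (strncmp(line, "MMU", 3) == 0) {
			radix = strstr(line, "Radix") != NULL;
			break;
		}
	}
	fclose(f);
	return radix;
}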




Re: [PATCH v1] config/arm: correct cpu arch for cross build

2023-11-06 Thread Thomas Monjalon
18/10/2023 07:40, Joyce Kong:
> > From: Thomas Monjalon 
> > 22/08/2023 09:47, Joyce Kong:
> > > The cn10k cross build file sets cpu to 'armv8.6-a' while
> > > N2 is armv8.5-a arch.
> > > The cpu field in the cross file doesn't take effect as
> > > config/arm/meson.build controls machine_args for march.
> > > Then correct the value in arm cross files to 'auto'.
> > 
> > I don't get it.
> > Why setting a value if it has no impact?
> > Looks like something is overcomplicated.
> > 
> We still have to declare them here because meson would check the 'cpu' line 
> in the config file, otherwise it would report missing {'cpu'}.

OK

Then why not set all cross files to auto?




Re: [PATCH] config/arm: add cortex-A55 part number

2023-11-06 Thread Thomas Monjalon
30/10/2023 16:51, Hemant Agrawal:
> This patch adds the part number for Cortex-A55 ARM Cores
> A55 is used in NXP-i.mx93 SoCs.
> 
> Signed-off-by: Hemant Agrawal 

Applied, thanks.





[PATCH 1/2] crypto/qat: fix block cipher misalignment for AES CBC and 3DES CBC

2023-11-06 Thread Sivaramakrishnan Venkat
Check the cipher length alignment for 3DES CBC and AES CBC and
change the request to a NULL op on buffer misalignment.

Fixes: a815a04cea05 ("crypto/qat: support symmetric build op request")
Fixes: 85fec6fd9674 ("crypto/qat: unify raw data path functions")
Fixes: def38073ac90 ("crypto/qat: check cipher buffer alignment")
Cc: kai...@intel.com
Cc: sta...@dpdk.org

Signed-off-by: Sivaramakrishnan Venkat 
---
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 35 +++-
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c| 12 +++
 drivers/crypto/qat/qat_sym.h |  9 +
 3 files changed, 35 insertions(+), 21 deletions(-)
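
For context, the block-size rule being enforced, as a standalone sketch
(this is an illustrative predicate, not the driver's actual
AES_OR_3DES_MISALIGNED macro):

#include <stdint.h>
#include <rte_crypto_sym.h>

/* CBC requires the cipher length to be a multiple of the block size. */
static inline int
cbc_len_misaligned(enum rte_crypto_cipher_algorithm algo, uint32_t len)
{
	if (algo == RTE_CRYPTO_CIPHER_AES_CBC)
		return (len & 0xf) != 0;	/* AES block: 16 bytes */
	if (algo == RTE_CRYPTO_CIPHER_3DES_CBC)
		return (len & 0x7) != 0;	/* 3DES block: 8 bytes */
	return 0;
}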

diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h 
b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
index 37647374d5..49053e662e 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -616,7 +616,8 @@ static __rte_always_inline void
 enqueue_one_cipher_job_gen1(struct qat_sym_session *ctx,
struct icp_qat_fw_la_bulk_req *req,
struct rte_crypto_va_iova_ptr *iv,
-   union rte_crypto_sym_ofs ofs, uint32_t data_len)
+   union rte_crypto_sym_ofs ofs, uint32_t data_len,
+   struct qat_sym_op_cookie *cookie)
 {
struct icp_qat_fw_la_cipher_req_params *cipher_param;
 
@@ -627,6 +628,15 @@ enqueue_one_cipher_job_gen1(struct qat_sym_session *ctx,
cipher_param->cipher_offset = ofs.ofs.cipher.head;
cipher_param->cipher_length = data_len - ofs.ofs.cipher.head -
ofs.ofs.cipher.tail;
+
+   if (AES_OR_3DES_MISALIGNED) {
+   QAT_LOG(DEBUG,
+ "Input cipher buffer misalignment detected and change job as NULL 
operation");
+   struct icp_qat_fw_comn_req_hdr *header = &req->comn_hdr;
+   header->service_type = ICP_QAT_FW_COMN_REQ_NULL;
+   header->service_cmd_id = ICP_QAT_FW_NULL_REQ_SERV_ID;
+   cookie->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+   }
 }
 
 static __rte_always_inline void
@@ -683,7 +693,8 @@ enqueue_one_chain_job_gen1(struct qat_sym_session *ctx,
struct rte_crypto_va_iova_ptr *cipher_iv,
struct rte_crypto_va_iova_ptr *digest,
struct rte_crypto_va_iova_ptr *auth_iv,
-   union rte_crypto_sym_ofs ofs, uint32_t data_len)
+   union rte_crypto_sym_ofs ofs, uint32_t data_len,
+   struct qat_sym_op_cookie *cookie)
 {
struct icp_qat_fw_la_cipher_req_params *cipher_param;
struct icp_qat_fw_la_auth_req_params *auth_param;
@@ -711,20 +722,14 @@ enqueue_one_chain_job_gen1(struct qat_sym_session *ctx,
auth_param->auth_off = ofs.ofs.auth.head;
auth_param->auth_len = auth_len;
auth_param->auth_res_addr = digest->iova;
-   /* Input cipher length alignment requirement for 3DES-CBC and AES-CBC.
-* For 3DES-CBC cipher algo, ESP Payload size requires 8 Byte aligned.
-* For AES-CBC cipher algo, ESP Payload size requires 16 Byte aligned.
-* The alignment should be guaranteed by the ESP package padding field
-* according to the RFC4303. Under this condition, QAT will pass through
-* chain job as NULL cipher and NULL auth operation and report 
misalignment
-* error detected.
-*/
if (AES_OR_3DES_MISALIGNED) {
-   QAT_LOG(ERR, "Input cipher length alignment error detected.\n");
-   ctx->qat_cipher_alg = ICP_QAT_HW_CIPHER_ALGO_NULL;
-   ctx->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_NULL;
-   cipher_param->cipher_length = 0;
-   auth_param->auth_len = 0;
+   QAT_LOG(DEBUG,
+ "Input cipher buffer misalignment detected and change job as NULL 
operation");
+   struct icp_qat_fw_comn_req_hdr *header = &req->comn_hdr;
+   header->service_type = ICP_QAT_FW_COMN_REQ_NULL;
+   header->service_cmd_id = ICP_QAT_FW_NULL_REQ_SERV_ID;
+   cookie->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+   return -1;
}
 
switch (ctx->qat_hash_alg) {
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c 
b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
index e4bcfa59e7..208b7e0ba6 100644
--- a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
@@ -248,7 +248,7 @@ qat_sym_build_op_cipher_gen1(void *in_op, struct 
qat_sym_session *ctx,
return -EINVAL;
}
 
-   enqueue_one_cipher_job_gen1(ctx, req, &cipher_iv, ofs, total_len);
+   enqueue_one_cipher_job_gen1(ctx, req, &cipher_iv, ofs, total_len, 
op_cookie);
 
qat_sym_debug_log_dump(req, ctx, in_sgl.vec, in_sgl.num, &cipher_iv,
NULL, NULL, NULL);
@@ -383,7 +383,7 @@ qat_sym_build_op_chain_gen1(void *in_op, struct 
qat_sym_session *ctx,
 
enqueue_one_chain_job_gen1(ctx, req, in_sgl.vec, in_sgl.num,
out_sgl.vec, out_sgl.num, &cipher_iv, &digest, &auth_iv,
-  

[PATCH 2/2] test/crypto: add negative test cases for cipher buffer alignment

2023-11-06 Thread Sivaramakrishnan Venkat
Add negative test cases for the 3DES CBC and AES CBC
cipher algorithms covering buffer misalignment.

Signed-off-by: Sivaramakrishnan Venkat 
---
 app/test/test_cryptodev.c  | 321 -
 app/test/test_cryptodev_aes_test_vectors.h | 119 
 app/test/test_cryptodev_blockcipher.c  |  20 +-
 app/test/test_cryptodev_blockcipher.h  |   1 +
 app/test/test_cryptodev_des_test_vectors.h |  38 +++
 5 files changed, 491 insertions(+), 8 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index d2c4c6f8b5..12e0cf8044 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -1371,6 +1371,42 @@ negative_hmac_sha1_testsuite_setup(void)
return 0;
 }
 
+static int
+negative_input_buffer_misalignment_testsuite_setup(void)
+{
+   struct crypto_testsuite_params *ts_params = &testsuite_params;
+   uint8_t dev_id = ts_params->valid_devs[0];
+   struct rte_cryptodev_info dev_info;
+   const enum rte_crypto_cipher_algorithm ciphers[] = {
+   RTE_CRYPTO_CIPHER_3DES_CBC,
+   RTE_CRYPTO_CIPHER_AES_CBC
+   };
+   const enum rte_crypto_auth_algorithm auths[] = {
+   RTE_CRYPTO_AUTH_SHA256,
+   RTE_CRYPTO_AUTH_SHA256,
+   };
+
+   rte_cryptodev_info_get(dev_id, &dev_info);
+
+   if (!(dev_info.feature_flags & RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO) ||
+   ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+   !(dev_info.feature_flags & 
RTE_CRYPTODEV_FF_SYM_RAW_DP))) {
+   RTE_LOG(INFO, USER1, "Feature flag requirements for Negative "
+   "Input Buffer misalignment testsuite not 
met\n");
+   return TEST_SKIPPED;
+   }
+
+   if (check_cipher_capabilities_supported(ciphers, RTE_DIM(ciphers)) != 0
+   && check_auth_capabilities_supported(auths,
+   RTE_DIM(auths)) != 0) {
+   RTE_LOG(INFO, USER1, "Capability requirements for Negative "
+   "Input Buffer misalignment testsuite not 
met\n");
+   return TEST_SKIPPED;
+   }
+
+   return 0;
+}
+
 static int
 dev_configure_and_start(uint64_t ff_disable)
 {
@@ -14469,6 +14505,192 @@ aes128cbc_hmac_sha1_test_vector = {
}
 };
 
+static const struct test_crypto_vector
+aes128cbc_sha256_misalign_test_vector = {
+   .crypto_algo = RTE_CRYPTO_CIPHER_AES_CBC,
+   .cipher_offset = 0,
+   .cipher_len = 511,
+   .cipher_key = {
+   .data = {
+   0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+   0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A
+   },
+   .len = 16
+   },
+   .iv = {
+   .data = {
+   0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+   0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F
+   },
+   .len = 16
+   },
+   .plaintext = {
+   .data = plaintext_aes_common,
+   .len = 511
+   },
+   .ciphertext = {
+   .data = ciphertext512_aes128cbc,
+   .len = 511
+   },
+   .auth_algo = RTE_CRYPTO_AUTH_SHA256,
+   .auth_offset = 0,
+   .auth_key = {
+   .data = {
+   0x42, 0x1A, 0x7D, 0x3D, 0xF5, 0x82, 0x80, 0xF1,
+   0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+   0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+   0x9A, 0x4F, 0x88, 0x1B, 0xB6, 0x8F, 0xD8, 0x60
+   },
+   .len = 32
+   },
+   .digest = {
+   .data = {
+   0xA8, 0xBC, 0xDB, 0x99, 0xAA, 0x45, 0x91, 0xA3,
+   0x2D, 0x75, 0x41, 0x92, 0x28, 0x01, 0x87, 0x5D,
+   0x45, 0xED, 0x49, 0x05, 0xD3, 0xAE, 0x32, 0x57,
+   0xB7, 0x79, 0x65, 0xFC, 0xFA, 0x6C, 0xFA, 0xDF
+   },
+   .len = 32
+   }
+};
+
+static const struct test_crypto_vector
+aes192cbc_sha256_misalign_test_vector = {
+   .crypto_algo = RTE_CRYPTO_CIPHER_AES_CBC,
+   .cipher_offset = 0,
+   .cipher_len = 511,
+   .cipher_key = {
+   .data = {
+   0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+   0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A,
+   0xD4, 0xC3, 0xA3, 0xAA, 0x33, 0x62, 0x61, 0xE0
+   },
+   .len = 24
+   },
+   .iv = {
+   .data = {
+   0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+   0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F
+   },
+   .len = 16
+   },
+   .plaintext = {
+   .data = plaintext_hash,
+   .len = 511
+   },
+   .ciphertext = {
+   .data 

[PATCH] net/mlx5: fix null dereference in vmwa release

2023-11-06 Thread Weiguo Li
Pointer 'vmwa' is dereferenced and then compared to NULL.
Move the dereference after the NULL test to fix this issue.

Fixes: 7af10d29a4a0 ("net/mlx5/linux: refactor VLAN")
Cc: sta...@dpdk.org

Signed-off-by: Weiguo Li 
---
 drivers/net/mlx5/linux/mlx5_vlan_os.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/linux/mlx5_vlan_os.c 
b/drivers/net/mlx5/linux/mlx5_vlan_os.c
index 81611a8d3f..391c9ce832 100644
--- a/drivers/net/mlx5/linux/mlx5_vlan_os.c
+++ b/drivers/net/mlx5/linux/mlx5_vlan_os.c
@@ -37,12 +37,13 @@ mlx5_vlan_vmwa_release(struct rte_eth_dev *dev,
 {
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_nl_vlan_vmwa_context *vmwa = priv->vmwa_context;
-   struct mlx5_nl_vlan_dev *vlan_dev = &vmwa->vlan_dev[0];
+   struct mlx5_nl_vlan_dev *vlan_dev;
 
MLX5_ASSERT(vlan->created);
MLX5_ASSERT(priv->vmwa_context);
if (!vlan->created || !vmwa)
return;
+   vlan_dev = &vmwa->vlan_dev[0];
vlan->created = 0;
rte_spinlock_lock(&vmwa->sl);
MLX5_ASSERT(vlan_dev[vlan->tag].refcnt);
-- 
2.34.1



Re: configuration of memseg lists number

2023-11-06 Thread Avi Kivity
Thanks, it makes sense. I'll get around to it "eventually".

On Thu, 2023-11-02 at 11:04 +0100, Thomas Monjalon wrote:
> Hello,
> 
> While looking at Seastar, I see it uses this patch on top of DPDK:
> 
> build: add meson options of max_memseg_lists
> 
> RTE_MAX_MEMSEG_LISTS = 128 is not enough for high-memory
> machines,
> in our case, we need to increase it to 8192.
> so add an option so user can override it.
> 
> https://github.com/scylladb/dpdk/commit/cafaa3cf457584de
> 
> I think we could allow to configure this at runtime,
> as we did already for RTE_MAX_MEMZONE:
> we've added rte_memzone_max_set() / rte_memzone_max_get().
> 
> Opinions, comments, volunteers?
> 
> 
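
A minimal sketch of what such a runtime knob could look like, modeled on the
existing rte_memzone_max_set()/rte_memzone_max_get() pair. The names
rte_memseg_list_max_set()/rte_memseg_list_max_get() are hypothetical, not an
existing DPDK API:

#include <stdbool.h>
#include <stdint.h>

static uint32_t memseg_list_max = 128; /* today's RTE_MAX_MEMSEG_LISTS default */
static bool memory_initialized;

/* Hypothetical setter: only valid before rte_eal_init() allocates the lists. */
int
rte_memseg_list_max_set(uint32_t max)
{
	if (memory_initialized)
		return -1; /* too late, the memseg lists already exist */
	memseg_list_max = max;
	return 0;
}

/* Hypothetical getter: EAL memory init would use this instead of the macro. */
uint32_t
rte_memseg_list_max_get(void)
{
	return memseg_list_max;
}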



Re: [PATCH v3] config/arm: update aarch32 build with gcc13

2023-11-06 Thread Thomas Monjalon
01/11/2023 13:57, Paul Szczepanek:
> 
> On 25/10/2023 13:57, Juraj Linkeš wrote:
> > The aarch32 with gcc13 fails with:
> >
> > Compiler for C supports arguments -march=armv8-a: NO
> >
> > ../config/arm/meson.build:714:12: ERROR: Problem encountered: No
> > suitable armv8 march version found.
> >
> > This is because we test -march=armv8-a alone (without the -mpfu option),
> > which is no longer supported in gcc13 aarch32 builds.
> >
> > The most recent recommendation from the compiler team is to build with
> > -march=armv8-a+simd -mfpu=auto, which should work for compilers old and
> > new. The suggestion is to first check -march=armv8-a+simd and only then
> > check -mfpu=auto.
> >
> > To address this, add a way to force the architecture (the value of
> > the -march option).
> >
> > Signed-off-by: Juraj Linkeš 
> > ---
> >   config/arm/meson.build | 40 +++-
> >   1 file changed, 23 insertions(+), 17 deletions(-)
> >
> > diff --git a/config/arm/meson.build b/config/arm/meson.build
> > index 3f22d8a2fc..c3f763764a 100644
> > --- a/config/arm/meson.build
> > +++ b/config/arm/meson.build
> > @@ -43,7 +43,9 @@ implementer_generic = {
> >   },
> >   'generic_aarch32': {
> >   'march': 'armv8-a',
> > -'compiler_options': ['-mfpu=neon'],
> > +'force_march': true,
> > +'march_features': ['simd'],
> > +'compiler_options': ['-mfpu=auto'],
> >   'flags': [
> >   ['RTE_ARCH_ARM_NEON_MEMCPY', false],
> >   ['RTE_ARCH_STRICT_ALIGN', true],
> > @@ -695,21 +697,25 @@ if update_flags
> >   # probe supported archs and their features
> >   candidate_march = ''
> >   if part_number_config.has_key('march')
> > -supported_marchs = ['armv8.6-a', 'armv8.5-a', 'armv8.4-a', 
> > 'armv8.3-a',
> > -'armv8.2-a', 'armv8.1-a', 'armv8-a']
> > -check_compiler_support = false
> > -foreach supported_march: supported_marchs
> > -if supported_march == part_number_config['march']
> > -# start checking from this version downwards
> > -check_compiler_support = true
> > -endif
> > -if (check_compiler_support and
> > -cc.has_argument('-march=' + supported_march))
> > -candidate_march = supported_march
> > -# highest supported march version found
> > -break
> > -endif
> > -endforeach
> > +if part_number_config.get('force_march', false)
> > +candidate_march = part_number_config['march']
> > +else
> > +supported_marchs = ['armv8.6-a', 'armv8.5-a', 'armv8.4-a', 
> > 'armv8.3-a',
> > +'armv8.2-a', 'armv8.1-a', 'armv8-a']
> > +check_compiler_support = false
> > +foreach supported_march: supported_marchs
> > +if supported_march == part_number_config['march']
> > +# start checking from this version downwards
> > +check_compiler_support = true
> > +endif
> > +if (check_compiler_support and
> > +cc.has_argument('-march=' + supported_march))
> > +candidate_march = supported_march
> > +# highest supported march version found
> > +break
> > +endif
> > +endforeach
> > +endif
> >   if candidate_march == ''
> >   error('No suitable armv8 march version found.')
> >   endif
> > @@ -741,7 +747,7 @@ if update_flags
> >   # apply supported compiler options
> >   if part_number_config.has_key('compiler_options')
> >   foreach flag: part_number_config['compiler_options']
> > -if cc.has_argument(flag)
> > +if cc.has_multi_arguments(machine_args + [flag])
> >   machine_args += flag
> >   else
> >   warning('Configuration compiler option ' +
> 
> 
> Reviewed-by: Paul Szczepanek 
> 
> 

Applied with Cc: sta...@dpdk.org, thanks.





Re: [PATCH v2] config: verify machine arch flag

2023-11-06 Thread Thomas Monjalon
26/10/2023 20:13, Sivaprasad Tummala:
> Added additional checks for compiler support of specific cpu arch
> flags to fix incorrect error reporting.
> 
> Without this patch, meson build reports '__SSE4_2__' not defined
> error for x86 builds when the compiler does not support the specified
> cpu_instruction_set (or) machine argument.
> 
> Signed-off-by: Sivaprasad Tummala 
> Acked-by: Bruce Richardson 

Applied, thanks.




RE: [PATCH 1/2] pipeline: fix calloc parameters

2023-11-06 Thread Dumitrescu, Cristian


> -Original Message-
> From: Ferruh Yigit 
> Sent: Thursday, November 2, 2023 1:09 PM
> To: Dumitrescu, Cristian ; R, Kamalakannan
> 
> Cc: dev@dpdk.org; sta...@dpdk.org
> Subject: [PATCH 1/2] pipeline: fix calloc parameters
> 
> gcc [1] generates warning [2] about calloc usage, because calloc
> parameter order is wrong, fixing it by replacing parameters.
> 
> [1]
> gcc (GCC) 14.0.0 20231102 (experimental)
> 
> [2]
>  Compiling C object .../pipeline_rte_swx_pipeline_spec.c.o
> .../rte_swx_pipeline_spec.c: In function ‘pipeline_spec_parse’:
> ../lib/pipeline/rte_swx_pipeline_spec.c:2893:11:
>   warning: allocation of insufficient size ‘1’ for type
>‘struct pipeline_spec’ with size ‘144’ [-Walloc-size]
>  2893 | s = calloc(sizeof(struct pipeline_spec), 1);
>   |   ^
> 
> .../rte_swx_pipeline_spec.c: In function ‘pipeline_iospec_parse’:
> ../lib/pipeline/rte_swx_pipeline_spec.c:4244:11:
>   warning: allocation of insufficient size ‘1’ for type
>‘struct pipeline_iospec’ with size ‘64’ [-Walloc-size]
>  4244 | s = calloc(sizeof(struct pipeline_iospec), 1);
>   |   ^
> 
> Fixes: 30c4abb90942 ("pipeline: rework specification file-based pipeline
> build")
> Fixes: 54cae37ef4ef ("pipeline: support I/O specification")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Ferruh Yigit 
> ---
Acked-by: Cristian Dumitrescu 
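
For reference, calloc's prototype is calloc(size_t nmemb, size_t size): the
element count comes first. A minimal sketch of the fix, where struct
example_spec is a stand-in for struct pipeline_spec:

#include <stdlib.h>

struct example_spec { int n_fields; }; /* stand-in for struct pipeline_spec */

static struct example_spec *
spec_alloc(void)
{
	/* was: calloc(sizeof(struct example_spec), 1); the swapped order is
	 * what gcc 14's -Walloc-size warns about */
	return calloc(1, sizeof(struct example_spec));
}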



RE: [PATCH v1] config/arm: correct cpu arch for cross build

2023-11-06 Thread Joyce Kong
> -Original Message-
> From: Thomas Monjalon 
> Sent: Monday, November 6, 2023 10:10 PM
> To: Ruifeng Wang ; Joyce Kong
> 
> Cc: dev@dpdk.org; Bruce Richardson ;
> dev@dpdk.org; nd ; Paul Szczepanek
> 
> Subject: Re: [PATCH v1] config/arm: correct cpu arch for cross build
> 
> 18/10/2023 07:40, Joyce Kong:
> > > From: Thomas Monjalon 
> > > 22/08/2023 09:47, Joyce Kong:
> > > > The cn10k cross build file sets cpu to 'armv8.6-a' while
> > > > N2 is armv8.5-a arch.
> > > > The cpu field in the cross file doesn't take effect as
> > > > config/arm/meson.build controls machine_args for march.
> > > > Then correct the value in arm cross files to 'auto'.
> > >
> > > I don't get it.
> > > Why setting a value if it has no impact?
> > > Looks like something is overcomplicated.
> > >
> > We still have to declare them here because meson would check the 'cpu'
> line in the config file, otherwise it would report missing {'cpu'}.
> 
> OK
> 
> Then why not all cross files are set to auto?
> 
Actually, I set all the Arm cross files to auto in this patch. Maybe I should
call that out explicitly in the commit log?


RE: [PATCH v1 1/1] ml/cnxk: fix updating internal I/O info

2023-11-06 Thread Shivah Shankar Shankar Narayan Rao
> -Original Message-
> From: Srikanth Yalavarthi 
> Sent: Sunday, October 29, 2023 6:55 PM
> To: Srikanth Yalavarthi 
> Cc: dev@dpdk.org; Shivah Shankar Shankar Narayan Rao
> ; Anup Prabhu ;
> Prince Takkar ; Jerin Jacob Kollanukkaran
> 
> Subject: [PATCH v1 1/1] ml/cnxk: fix updating internal I/O info
> 
> Update scale factor in IO info of TVM models from metadata.
> 
> Fixes: 35c3e790b4a0 ("ml/cnxk: update internal info for TVM model")
> 
> Signed-off-by: Srikanth Yalavarthi 
Acked-by: Shivah Shankar S 

Re: [PATCH v5 1/1] build: add libarchive to external deps

2023-11-06 Thread Thomas Monjalon
06/11/2023 09:53, Bruce Richardson:
> On Sun, Nov 05, 2023 at 08:12:43PM -0800, Srikanth Yalavarthi wrote:
> > In order to avoid linking with Libs.private, libarchive
> > is not added to ext_deps during the meson setup stage.
> > 
> > Since libarchive is not added to ext_deps, cross-compilation
> > or native compilation with libarchive installed in non-standard
> > location fails with errors related to "cannot find -larchive"
> > or "archive.h: No such file or directory". In order to fix the
> > build failures, user is required to define the 'c_args' and
> > 'c_link_args' with '-I' and '-L'.
> > 
> > This patch adds libarchive to ext_deps and further would not
> > require setting c_args and c_link_args externally.
> > 
> > Fixes: 40edb9c0d36b ("eal: handle compressed firmware")
> > Cc: sta...@dpdk.org
> > 
> > Signed-off-by: Srikanth Yalavarthi 
> 
> Acked-by: Bruce Richardson 

I'm not sure I understand which new failure will happen.
Was there a problem solved in newer libarchive packages?




Re: [PATCH v5 1/1] build: add libarchive to external deps

2023-11-06 Thread Thomas Monjalon
06/11/2023 16:24, Thomas Monjalon:
> 06/11/2023 09:53, Bruce Richardson:
> > On Sun, Nov 05, 2023 at 08:12:43PM -0800, Srikanth Yalavarthi wrote:
> > > In order to avoid linking with Libs.private, libarchive
> > > is not added to ext_deps during the meson setup stage.
> > > 
> > > Since libarchive is not added to ext_deps, cross-compilation
> > > or native compilation with libarchive installed in non-standard
> > > location fails with errors related to "cannot find -larchive"
> > > or "archive.h: No such file or directory". In order to fix the
> > > build failures, user is required to define the 'c_args' and
> > > 'c_link_args' with '-I' and '-L'.
> > > 
> > > This patch adds libarchive to ext_deps and further would not
> > > require setting c_args and c_link_args externally.
> > > 
> > > Fixes: 40edb9c0d36b ("eal: handle compressed firmware")
> > > Cc: sta...@dpdk.org
> > > 
> > > Signed-off-by: Srikanth Yalavarthi 
> > 
> > Acked-by: Bruce Richardson 
> 
> I'm not sure I understand which new failure will happen.
> Was there a problem solved in newer libarchive packages?

BTW applied as it fixes an obvious problem.





Re: [PATCH v1] config/arm: correct cpu arch for cross build

2023-11-06 Thread Thomas Monjalon
06/11/2023 15:31, Joyce Kong:
> > -Original Message-
> > From: Thomas Monjalon 
> > Sent: Monday, November 6, 2023 10:10 PM
> > To: Ruifeng Wang ; Joyce Kong
> > 
> > Cc: dev@dpdk.org; Bruce Richardson ;
> > dev@dpdk.org; nd ; Paul Szczepanek
> > 
> > Subject: Re: [PATCH v1] config/arm: correct cpu arch for cross build
> > 
> > 18/10/2023 07:40, Joyce Kong:
> > > > From: Thomas Monjalon 
> > > > 22/08/2023 09:47, Joyce Kong:
> > > > > The cn10k cross build file sets cpu to 'armv8.6-a' while
> > > > > N2 is armv8.5-a arch.
> > > > > The cpu field in the cross file doesn't take effect as
> > > > > config/arm/meson.build controls machine_args for march.
> > > > > Then correct the value in arm cross files to 'auto'.
> > > >
> > > > I don't get it.
> > > > Why setting a value if it has no impact?
> > > > Looks like something is overcomplicated.
> > > >
> > > We still have to declare them here because meson would check the 'cpu'
> > line in the config file, otherwise it would report missing {'cpu'}.
> > 
> > OK
> > 
> > Then why not all cross files are set to auto?
> > 
> Actually, I set all the Arm cross files to auto in this patch. Maybe I should
> call that out explicitly in the commit log?

What about these ones?

git grep 'cpu = ' config/arm | grep -v auto

config/arm/arm64_altra_linux_gcc:cpu = 'armv8.2-a'
config/arm/arm64_ampereone_linux_gcc:cpu = 'armv8.6-a'
config/arm/arm64_bluefield3_linux_gcc:cpu = 'armv8.4-a'
config/arm/arm64_cdx_linux_gcc:cpu = 'armv8-a'
config/arm/arm64_hip10_linux_gcc:cpu = 'armv8-a'





RE: [PATCH v6 1/2] bus/pci: add function to enable/disable PASID

2023-11-06 Thread Sevincer, Abdullah

>+Is PASID now part of the PCIe spec? Should these APIs work for both x86 and Arm?
>+Not sure Arm is OK with the naming; previously they called it more of a
>+Sub Stream ID (SSID).
For reference, look for the PASID definitions in the PCIe spec.
The API takes in an offset, which might be different for other devices. That is
part of the reason why we defined the API this way.

>+Aligning with the old definitions will look better. Using TAB?
I will align.

For v7 I will use the name rte_pci_pasid_set_state; is everyone okay with that?

Thanks.
Abdullah.



Re: [PATCH] remove unnecessary null check before free/rte_free

2023-11-06 Thread Thomas Monjalon
25/10/2023 00:58, Stephen Hemminger:
> This is the latest round of places that check for a NULL
> pointer before calling free or rte_free. It is the result of applying
> the nullfree.cocci script.
> 
> Signed-off-by: Stephen Hemminger 

Applied and re-run with more fixes in new ml/cnxk code, thanks.
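
For reference, a minimal sketch of the pattern the script rewrites; free(NULL),
like rte_free(NULL), is a defined no-op, so the check adds nothing:

#include <stdlib.h>

/* Pattern flagged by nullfree.cocci: the NULL check is redundant. */
static void
cleanup_before(char *buf)
{
	if (buf != NULL)
		free(buf);
}

/* Rewritten form: call free() unconditionally. */
static void
cleanup_after(char *buf)
{
	free(buf);
}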




Re: [PATCH v5 1/1] build: add libarchive to external deps

2023-11-06 Thread David Marchand
On Mon, Nov 6, 2023 at 5:12 AM Srikanth Yalavarthi
 wrote:
>
> In order to avoid linking with Libs.private, libarchive
> is not added to ext_deps during the meson setup stage.
>
> Since libarchive is not added to ext_deps, cross-compilation
> or native compilation with libarchive installed in non-standard
> location fails with errors related to "cannot find -larchive"
> or "archive.h: No such file or directory". In order to fix the
> build failures, user is required to define the 'c_args' and
> 'c_link_args' with '-I' and '-L'.
>
> This patch adds libarchive to ext_deps and further would not
> require setting c_args and c_link_args externally.
>
> Fixes: 40edb9c0d36b ("eal: handle compressed firmware")
> Cc: sta...@dpdk.org
>
> Signed-off-by: Srikanth Yalavarthi 

This breaks static compilation of applications.
This can be reproduced with test-meson-builds.sh and in GHA (which was
not linking examples statically, I added a patch in my github repo):
https://github.com/david-marchand/dpdk/actions/runs/6772879600/job/18406442129#step:19:19572


-- 
David Marchand



Re: [PATCH v4 1/5] kvargs: add one new process API

2023-11-06 Thread Stephen Hemminger
On Mon, 6 Nov 2023 15:13:35 +0800
fengchengwen  wrote:

> >> +  
> > 
> > Looks good but may I suggest some alternatives.
> > 
> > Since this is an API and ABI change that was not announced, it may be a little
> > late in the process for this release. And since it is unlikely to go into 23.11,
> > we need to do something better in 24.03.
> > 
> > What about changing the args to rte_kvargs_process() to add an additional 
> > default
> > value. Most callers don't have a default (use key-value) but the ones that 
> > take only-key
> > would pass the default value.  
> 
> The API definition changed; it may require modifying most drivers.
> 
> Although it's a little late, it's better to continue with the current approach.
> 
> Thanks
> Chengwen

Looking ahead, I would like to replace all of the EAL args and kvargs processing
with something more like the Python argparse library. The API is cleaner and
incorporating the help with the arg parsing is a real benefit. Thomas also
suggested integrating help in the arg parsing.

Something like: https://github.com/cofyc/argparse
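
As background, a minimal sketch of how drivers call rte_kvargs_process() today;
the "max_queues" key and its handler are made up for illustration. Handlers
receive a NULL value for key-only arguments (a bare "max_queues" with no '='),
which is the case the series and the proposed default-value argument aim to
handle more gracefully:

#include <stdlib.h>
#include <rte_kvargs.h>

/* Illustrative handler; the key name is hypothetical. */
static int
handle_max_queues(const char *key, const char *value, void *opaque)
{
	int *max_queues = opaque;

	(void)key;
	if (value == NULL)	/* key-only argument, nothing to parse */
		return -1;
	*max_queues = atoi(value);
	return 0;
}

static int
parse_devargs(const char *args)	/* e.g. "max_queues=4" */
{
	int max_queues = 1;	/* fallback when the key is absent */
	struct rte_kvargs *kvlist = rte_kvargs_parse(args, NULL);
	int ret;

	if (kvlist == NULL)
		return -1;
	ret = rte_kvargs_process(kvlist, "max_queues",
			handle_max_queues, &max_queues);
	rte_kvargs_free(kvlist);
	return ret;
}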


Re: [PATCH v5 1/1] build: add libarchive to external deps

2023-11-06 Thread Bruce Richardson
On Mon, Nov 06, 2023 at 05:03:10PM +0100, David Marchand wrote:
> On Mon, Nov 6, 2023 at 5:12 AM Srikanth Yalavarthi
>  wrote:
> >
> > In order to avoid linking with Libs.private, libarchive is not added to
> > ext_deps during the meson setup stage.
> >
> > Since libarchive is not added to ext_deps, cross-compilation or native
> > compilation with libarchive installed in non-standard location fails
> > with errors related to "cannot find -larchive" or "archive.h: No such
> > file or directory". In order to fix the build failures, user is
> > required to define the 'c_args' and 'c_link_args' with '-I'
> > and '-L'.
> >
> > This patch adds libarchive to ext_deps and further would not require
> > setting c_args and c_link_args externally.
> >
> > Fixes: 40edb9c0d36b ("eal: handle compressed firmware") Cc:
> > sta...@dpdk.org
> >
> > Signed-off-by: Srikanth Yalavarthi 
> 
> This breaks static compilation of applications.  This can be reproduced
> with test-meson-builds.sh and in GHA (which was not linking examples
> statically, I added a patch in my github repo):
> https://github.com/david-marchand/dpdk/actions/runs/6772879600/job/18406442129#step:19:19572
> 
The libarchive-dev Ubuntu package does not install all its needed
dependencies for static linking. The errors can be resolved by manually
installing the 3 missing -dev packages.

It's less than ideal, but to my mind, DPDK is behaving correctly with this
fix - it is marking that it requires libarchive as a dependency. The fact
that the libarchive.pc file lists static libraries that aren't installed is
outside of our control. The previous implementation hacked around this by
just passing -larchive in all cases, rather than using pkg-config
information. This then caused other issues that the patch submitter hit.

/Bruce
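
To illustrate the point, an abbreviated libarchive.pc; the exact Libs.private
list varies by distro and build options. A static link pulls in everything
under Libs.private, so the corresponding static libraries must be installed
even though libarchive-dev does not depend on their -dev packages:

# abbreviated, illustrative libarchive.pc
prefix=/usr
libdir=${prefix}/lib/x86_64-linux-gnu

Name: libarchive
Description: library that can create and read several streaming archive formats
Version: 3.6.0
Libs: -L${libdir} -larchive
Libs.private: -lacl -llzma -lzstd -llz4 -lbz2 -lz -lxml2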


Re: rte_timer_reset related issues in DPDK 20.05

2023-11-06 Thread Stephen Hemminger
On Mon, 6 Nov 2023 10:31:20 +
Nagma Meraj  wrote:

> TCS Confidential
> 
> Hi,
> 
> We are working on a Front Haul Library which uses DPDK internally for data
> acceleration.
> We are facing the following issues with it:
> 1. In the Front Haul Library, one of the threads, xran_timing_source_thread(),
> uses DPDK rte_timer_reset(), where timer_add() is printing negative values in
> timer_get_prev_entries().

Do you have threads fighting over timer?
What context is rte_timer_reset() being called in?


> 2. That is, prev[lvl]->sl_next[lvl] = -459511634 takes a negative value in a
> while loop, as in the code snippet below, and ends up in an infinite loop,
> because of which the above thread gets stuck there and fails.
> Can we get the possible scenarios in which this type of error generally occurs?
> 
> [inline image: code snippet showing the while loop; not preserved in the archive]
> 
> 
> Thanks & Regards,
> Nagma Meraj
> 
> 
> TCS Confidential
> =-=-=
> Notice: The information contained in this e-mail
> message and/or attachments to it may contain 
> confidential or privileged information. If you are 
> not the intended recipient, any dissemination, use, 
> review, distribution, printing or copying of the 
> information contained in this e-mail message 
> and/or attachments to it are strictly prohibited. If 
> you have received this communication in error, 
> please notify us by reply e-mail or telephone and 
> immediately and permanently delete the message 
> and any attachments. Thank you

Please don't put this on mailing list mail.
If lawyers are watching, in theory we should ignore your mail.
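
On the timer question above: negative values in the skiplist's sl_next entries
usually mean the rte_timer struct was corrupted, for example it was never passed
through rte_timer_init() before the first rte_timer_reset(), its memory was
freed or overwritten while the timer was still pending, or it was manipulated
concurrently from a non-EAL thread. For reference, a minimal sketch of the
expected usage pattern:

#include <rte_common.h>
#include <rte_cycles.h>
#include <rte_lcore.h>
#include <rte_timer.h>

static struct rte_timer tim;	/* must stay valid while the timer is pending */

static void
timer_cb(struct rte_timer *t, void *arg)
{
	RTE_SET_USED(t);
	RTE_SET_USED(arg);
	/* periodic work goes here */
}

/* Call once from an EAL thread after rte_eal_init(). */
static void
setup_timer(void)
{
	rte_timer_subsystem_init();
	rte_timer_init(&tim);	/* resetting an uninitialized timer corrupts the list */
	/* fire once per second on the current lcore */
	rte_timer_reset(&tim, rte_get_timer_hz(), PERIODICAL,
			rte_lcore_id(), timer_cb, NULL);
}

/* Main loop of the lcore that owns the timer. */
static void
lcore_loop(void)
{
	for (;;)
		rte_timer_manage();	/* runs expired callbacks on this lcore */
}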


Re: [PATCH v3] eal: support lcore usage ratio

2023-11-06 Thread Thomas Monjalon
23/10/2023 14:51, Chengwen Feng:
> Currently, the lcore usage only displays two key fields: busy_cycles and
> total_cycles, which makes it inconvenient to obtain the usage ratio
> immediately. So add an lcore usage ratio field.
> 
> Signed-off-by: Chengwen Feng 
> Acked-by: Morten Brørup 
Acked-by: Huisong Li 

Added "telemetry" in the title,
and applied, thanks.
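
For context, the new field is derived from the two existing ones as
busy_cycles * 100 / total_cycles. A minimal sketch of how an application feeds
these counters to telemetry, assuming the lcore usage callback API that the
lcore telemetry support is built on; the counter arrays are illustrative:

#include <stdint.h>
#include <rte_lcore.h>

static uint64_t total_cycles[RTE_MAX_LCORE];
static uint64_t busy_cycles[RTE_MAX_LCORE];

static int
lcore_usage_cb(unsigned int lcore_id, struct rte_lcore_usage *usage)
{
	/* the application accounts cycles itself, e.g. in its main loop */
	usage->total_cycles = total_cycles[lcore_id];
	usage->busy_cycles = busy_cycles[lcore_id];
	return 0;
}

/* Register once at startup; /eal/lcore/info then reports the cycle counts,
 * and with this patch the usage ratio as well. */
static void
register_usage(void)
{
	rte_lcore_register_usage_cb(lcore_usage_cb);
}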




Re: [PATCH] eal: provide trace point register macro for MSVC

2023-11-06 Thread Thomas Monjalon
01/11/2023 23:47, Tyler Retzlaff:
> Provide an alternate RTE_TRACE_POINT_REGISTER macro when building with
> MSVC that allocates segments for the trace point using MSVC-specific
> features.

Could you please elaborate on what the improvement is?

> +#define RTE_TRACE_POINT_REGISTER(trace, name) \
> +rte_trace_point_t \
> +__pragma(data_seg("__rte_trace_point")) \
> +__declspec(allocate("__rte_trace_point")) \
> +__##trace; \
> +static const char __##trace##_name[] = RTE_STR(name); \
> +RTE_INIT(trace##_init) \
> +{ \
> + __rte_trace_point_register(&__##trace, __##trace##_name, \
> + (void (*)(void)) trace); \
> +}
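
Some background on the improvement: GCC and Clang place each trace point handle
in a dedicated ELF section via __attribute__((section(...))), which MSVC does
not support; __pragma(data_seg) plus __declspec(allocate) is the MSVC equivalent
for the same section placement. For comparison, the existing GCC/Clang variant
looks roughly like this (simplified sketch of rte_trace_point_register.h):

#define RTE_TRACE_POINT_REGISTER(trace, name) \
rte_trace_point_t \
__attribute__((section("__rte_trace_point"))) \
__##trace; \
static const char __##trace##_name[] = RTE_STR(name); \
RTE_INIT(trace##_init) \
{ \
	__rte_trace_point_register(&__##trace, __##trace##_name, \
		(void (*)(void)) trace); \
}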





Re: [PATCH] eal: add missing extension to statement expression

2023-11-06 Thread Thomas Monjalon
01/11/2023 23:07, Tyler Retzlaff:
> Add the missing __extension__ keyword to the RTE_ALIGN_MUL_NEAR statement
> expression, to be consistent with other macros using statement
> expressions.
> 
> Signed-off-by: Tyler Retzlaff 

Applied, thanks.
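
For reference, the macro in question is a GNU statement expression; without
__extension__, pedantic builds warn about the non-ISO construct. Its definition
is roughly (from rte_common.h):

#define RTE_ALIGN_MUL_NEAR(v, mul) \
	__extension__ ({ \
		typeof(v) ceil = RTE_ALIGN_MUL_CEIL(v, mul); \
		typeof(v) floor = RTE_ALIGN_MUL_FLOOR(v, mul); \
		(ceil - (v)) > ((v) - floor) ? floor : ceil; \
	})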





Re: [PATCH] eal: stop iteration after lcore info is processed

2023-11-06 Thread Thomas Monjalon
> > Telemetry iterates on lcore ID to collect info of a specific lcore.
> > Since only one lcore is processed at a time, the iteration can stop
> > when a matching lcore is found.
> > 
> > Fixes: f2b852d909f9 ("eal: add lcore info in telemetry")
> > Cc: rja...@redhat.com
> > Cc: sta...@dpdk.org
> > 
> > Signed-off-by: Ruifeng Wang 
> 
> Looks like a good optimization. Not sure it needs to go to stable.
> 
> Acked-by: Stephen Hemminger 

Applied without "Cc:stable", thanks.
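
A minimal sketch of the pattern, assuming the rte_lcore_iterate() callback
convention where a non-zero return stops the iteration; the struct and handler
names are illustrative, and the actual change lives in EAL's lcore telemetry
handler:

#include <rte_lcore.h>

struct lcore_match {
	unsigned int wanted;
};

static int
lcore_info_cb(unsigned int lcore_id, void *arg)
{
	struct lcore_match *m = arg;

	if (lcore_id != m->wanted)
		return 0;	/* not this one, keep iterating */
	/* ... collect the telemetry info for the matching lcore ... */
	return 1;	/* match processed, stop the iteration early */
}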





[PATCH v7 0/2] *** Disable PASID for DLB Device ***

2023-11-06 Thread Abdullah Sevincer
This series implements an internal API to disable
PASID and calls that API to disable PASID in the event/dlb2 device.

Abdullah Sevincer (2):
  bus/pci: support PASID control
  event/dlb2: fix disable PASID

 drivers/bus/pci/pci_common.c  |  7 +++
 drivers/bus/pci/rte_bus_pci.h | 13 +
 drivers/bus/pci/version.map   |  1 +
 drivers/event/dlb2/pf/dlb2_main.c | 11 +++
 lib/pci/rte_pci.h |  4 
 5 files changed, 36 insertions(+)

-- 
2.25.1



Re: [PATCH v2 1/2] bus/cdx: add support for devices without MSI

2023-11-06 Thread Gupta, Nipun




On 11/3/2023 4:50 PM, Shubham Rohila wrote:

From: Nikhil Agarwal 

Update the cleanup routine for the cdx device to support
devices without MSI. Also, set vfio_dev_fd for such devices;
this fd can be used for BME reload operations.

Signed-off-by: Nikhil Agarwal 
Signed-off-by: Shubham Rohila 
---
  v2
  - New patch in the series
  drivers/bus/cdx/cdx.c  |  2 +-
  drivers/bus/cdx/cdx_vfio.c | 19 +--
  2 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/drivers/bus/cdx/cdx.c b/drivers/bus/cdx/cdx.c
index 541aae76c3..62b108e082 100644
--- a/drivers/bus/cdx/cdx.c
+++ b/drivers/bus/cdx/cdx.c
@@ -405,9 +405,9 @@ cdx_probe_one_driver(struct rte_cdx_driver *dr,
return ret;
  
  error_probe:

+   cdx_vfio_unmap_resource(dev);
rte_intr_instance_free(dev->intr_handle);
dev->intr_handle = NULL;
-   cdx_vfio_unmap_resource(dev);
  error_map_device:
return ret;
  }
diff --git a/drivers/bus/cdx/cdx_vfio.c b/drivers/bus/cdx/cdx_vfio.c
index 8a3ac0b995..8cac79782e 100644
--- a/drivers/bus/cdx/cdx_vfio.c
+++ b/drivers/bus/cdx/cdx_vfio.c
@@ -101,13 +101,12 @@ cdx_vfio_unmap_resource_primary(struct rte_cdx_device 
*dev)
struct mapped_cdx_res_list *vfio_res_list;
int ret, vfio_dev_fd;
  
-	if (rte_intr_fd_get(dev->intr_handle) < 0)

-   return -1;


Why is this check removed? If the VFIO fd is not there, we may not be able to
proceed with the other VFIO cleanup?


[PATCH v7 1/2] bus/pci: support PASID control

2023-11-06 Thread Abdullah Sevincer
Add an internal API to control PASID for a given PCIe device.

On kernels where PASID is enabled by default, it breaks DLB functionality;
hence disabling PASID is required for DLB to function properly.

The PASID capability is not exposed to users, hence its offset cannot be
retrieved by the rte_pci_find_ext_capability() API. Therefore, the API
implemented in this commit accepts an offset for PASID along with an enable
flag which is used to enable/disable PASID.

Signed-off-by: Abdullah Sevincer 
---
 drivers/bus/pci/pci_common.c  |  7 +++
 drivers/bus/pci/rte_bus_pci.h | 13 +
 drivers/bus/pci/version.map   |  1 +
 lib/pci/rte_pci.h |  4 
 4 files changed, 25 insertions(+)

diff --git a/drivers/bus/pci/pci_common.c b/drivers/bus/pci/pci_common.c
index 921d957bf6..ecf080c5d7 100644
--- a/drivers/bus/pci/pci_common.c
+++ b/drivers/bus/pci/pci_common.c
@@ -938,6 +938,13 @@ rte_pci_set_bus_master(const struct rte_pci_device *dev, 
bool enable)
return 0;
 }
 
+int
+rte_pci_pasid_set_state(const struct rte_pci_device *dev, off_t offset, bool 
enable)
+{
+   uint16_t pasid = enable;
+   return rte_pci_write_config(dev, &pasid, sizeof(pasid), offset) < 0 ? 
-1 : 0;
+}
+
 struct rte_pci_bus rte_pci_bus = {
.bus = {
.scan = rte_pci_scan,
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 21e234abf0..6d836e771a 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -295,6 +295,19 @@ void rte_pci_ioport_read(struct rte_pci_ioport *p,
 void rte_pci_ioport_write(struct rte_pci_ioport *p,
const void *data, size_t len, off_t offset);
 
+/**
+ * Enable/Disable PASID.
+ *
+ * @param dev
+ *   A pointer to a rte_pci_device structure.
+ * @param offset
+ *   Offset of the PASID external capability.
+ * @param enable
+ *   Flag to enable or disable PASID.
+ */
+__rte_internal
+int rte_pci_pasid_set_state(const struct rte_pci_device *dev, off_t offset, 
bool enable);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/bus/pci/version.map b/drivers/bus/pci/version.map
index 74c5b075d5..9fad086bdf 100644
--- a/drivers/bus/pci/version.map
+++ b/drivers/bus/pci/version.map
@@ -37,5 +37,6 @@ INTERNAL {
 
rte_pci_get_sysfs_path;
rte_pci_register;
+   rte_pci_pasid_set_state;
rte_pci_unregister;
 };
diff --git a/lib/pci/rte_pci.h b/lib/pci/rte_pci.h
index 69e932d910..d195f01950 100644
--- a/lib/pci/rte_pci.h
+++ b/lib/pci/rte_pci.h
@@ -101,6 +101,10 @@ extern "C" {
 #define RTE_PCI_EXT_CAP_ID_ACS 0x0d/* Access Control Services */
 #define RTE_PCI_EXT_CAP_ID_SRIOV   0x10/* SR-IOV */
 #define RTE_PCI_EXT_CAP_ID_PRI 0x13/* Page Request Interface */
+#define RTE_PCI_EXT_CAP_ID_PASID0x1B/* Process Address Space ID */
+
+/* Process Address Space ID */
+#define RTE_PCI_PASID_CTRL 0x06/* PASID control register */
 
 /* Advanced Error Reporting (RTE_PCI_EXT_CAP_ID_ERR) */
 #define RTE_PCI_ERR_UNCOR_STATUS   0x04/* Uncorrectable Error Status */
-- 
2.25.1



[PATCH v7 2/2] event/dlb2: fix disable PASID

2023-11-06 Thread Abdullah Sevincer
In the vfio-pci driver, when PASID is enabled by default, the hardware puts
the DLB in SIOV mode. This breaks the DLB PF-PMD mode. For the DLB PF-PMD
mode to function properly, PASID needs to be disabled.

This commit addresses the issue: PASID is disabled by writing
a zero to the PASID control register.

Fixes: 5433956d5185 ("event/dlb2: add eventdev probe")
Cc: sta...@dpdk.org

Signed-off-by: Abdullah Sevincer 
---
 drivers/event/dlb2/pf/dlb2_main.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/drivers/event/dlb2/pf/dlb2_main.c 
b/drivers/event/dlb2/pf/dlb2_main.c
index aa03e4c311..61a7b39eef 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -26,6 +26,7 @@
 #define PF_ID_ZERO 0   /* PF ONLY! */
 #define NO_OWNER_VF 0  /* PF ONLY! */
 #define NOT_VF_REQ false /* PF ONLY! */
+#define DLB2_PCI_PASID_CAP_OFFSET0x148   /* PASID capability offset */
 
 static int
 dlb2_pf_init_driver_state(struct dlb2_dev *dlb2_dev)
@@ -514,6 +515,16 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
}
}
 
+   /* Disable PASID if it is enabled by default, which
+* breaks the DLB if enabled.
+*/
+   off = DLB2_PCI_PASID_CAP_OFFSET + RTE_PCI_PASID_CTRL;
+   if (rte_pci_pasid_set_state(pdev, off, false)) {
+   DLB2_LOG_ERR("[%s()] failed to write the pcie config space at 
offset %d\n",
+   __func__, (int)off);
+   return -1;
+   }
+
return 0;
 }
 
-- 
2.25.1



[PATCH v5 00/23] dts: add dts api docs

2023-11-06 Thread Juraj Linkeš
The commits can be split into groups.

The first commit makes changes to the code. These code changes mainly
change the structure of the code so that the actual API docs generation
works. There are also some code changes which get reflected in the
documentation, such as making functions/methods/attributes private or
public.

The second set of commits (2-21) deals with the actual docstring
documentation (from which the API docs are generated). The format of
docstrings is the Google format [0] with PEP257 [1] and some guidelines
captured in the last commit of this group covering what the Google
format doesn't.
The docstring updates are split into many commits to make review
possible. When accepted, these may be squashed (commits 4-21).
The docstrings have been composed in anticipation of [2], adhering to
maximum line length of 100. We don't have a tool for automatic docstring
formatting, hence the usage of 100 right away to save time.

NOTE: The logger.py module is not fully documented, as it's being
refactored and the refactor will be submitted in the near future.
Documenting it now seems unnecessary.

The last two commits comprise the final group, enabling the actual
generation of documentation.
The generation is done with Sphinx, which DPDK already uses, with
slightly modified configuration (the sidebar: unlimited depth and better
collapsing - I need comment on this).

The first two groups are the most important to merge, as future
development can't proceed without them. The third group may be
finished/accepted at a later date, as it's fairly independent.

The build requires the same Python version and dependencies as DTS,
because Sphinx imports the Python modules. The modules are imported
individually, requiring the code refactoring mentioned above.
Dependencies are installed
using Poetry from the dts directory:

poetry install --with docs

After installing, enter the Poetry shell:

poetry shell

And then run the build:
ninja -C  dts-doc

[0] https://google.github.io/styleguide/pyguide.html#s3.8.4-comments-in-classes
[1] https://peps.python.org/pep-0257/
[2] https://patches.dpdk.org/project/dpdk/list/?series=29844

Juraj Linkeš (23):
  dts: code adjustments for doc generation
  dts: add docstring checker
  dts: add basic developer docs
  dts: exceptions docstring update
  dts: settings docstring update
  dts: logger and settings docstring update
  dts: dts runner and main docstring update
  dts: test suite docstring update
  dts: test result docstring update
  dts: config docstring update
  dts: remote session docstring update
  dts: interactive remote session docstring update
  dts: port and virtual device docstring update
  dts: cpu docstring update
  dts: os session docstring update
  dts: posix and linux sessions docstring update
  dts: node docstring update
  dts: sut and tg nodes docstring update
  dts: base traffic generators docstring update
  dts: scapy tg docstring update
  dts: test suites docstring update
  dts: add doc generation dependencies
  dts: add doc generation

 buildtools/call-sphinx-build.py   |  29 +-
 doc/api/meson.build   |   1 +
 doc/guides/conf.py|  34 +-
 doc/guides/meson.build|   1 +
 doc/guides/tools/dts.rst  | 103 
 dts/doc/conf_yaml_schema.json |   1 +
 dts/doc/index.rst |  17 +
 dts/doc/meson.build   |  49 ++
 dts/framework/__init__.py |  12 +-
 dts/framework/config/__init__.py  | 379 ++---
 dts/framework/config/types.py | 132 +
 dts/framework/dts.py  | 161 +-
 dts/framework/exception.py| 156 +++---
 dts/framework/logger.py   |  72 ++-
 dts/framework/remote_session/__init__.py  |  80 ++-
 .../interactive_remote_session.py |  36 +-
 .../remote_session/interactive_shell.py   | 152 ++
 dts/framework/remote_session/os_session.py| 284 --
 dts/framework/remote_session/python_shell.py  |  32 ++
 .../remote_session/remote/__init__.py |  27 -
 .../remote/interactive_shell.py   | 133 -
 .../remote_session/remote/python_shell.py |  12 -
 .../remote_session/remote/remote_session.py   | 172 --
 .../remote_session/remote/testpmd_shell.py|  49 --
 .../remote_session/remote_session.py  | 232 
 .../{remote => }/ssh_session.py   |  28 +-
 dts/framework/remote_session/testpmd_shell.py |  86 +++
 dts/framework/settings.py | 188 +--
 dts/framework/test_result.py  | 296 +++---
 dts/framework/test_suite.py   | 230 ++--
 dts/framework/testbed_model/__init__.py   |  28 +-
 dts/framework/testbed_model/{hw => }/cpu.py   | 209 +--
 dts/framework/testbed_model/hw/__init__.py|  27 -
 dts/framework/testbed_model/hw/port.py 

[PATCH v5 01/23] dts: code adjustments for doc generation

2023-11-06 Thread Juraj Linkeš
The standard Python tool for generating API documentation, Sphinx,
imports modules one-by-one when generating the documentation. This
requires code changes:
* properly guarding argument parsing in the if __name__ == '__main__'
  block,
* the logger used by DTS runner underwent the same treatment so that it
  doesn't create log files outside of a DTS run,
* however, DTS uses the arguments to construct an object holding global
  variables. The defaults for the global variables needed to be moved
  from argument parsing elsewhere,
* importing the remote_session module from framework resulted in
  circular imports because of one module trying to import another
  module. This is fixed by reorganizing the code,
* some code reorganization was done because the resulting structure
  makes more sense, improving documentation clarity.

There are some other changes which are documentation related:
* added missing type annotations so they appear in the generated docs,
* reordered arguments in some methods,
* removed superfluous arguments and attributes,
* changed public functions/methods/attributes to private and vice-versa.

All of the above appear in the generated documentation and, with them,
the documentation is improved.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/config/__init__.py  | 10 ++-
 dts/framework/dts.py  | 33 +--
 dts/framework/exception.py| 54 +---
 dts/framework/remote_session/__init__.py  | 41 -
 .../interactive_remote_session.py |  0
 .../{remote => }/interactive_shell.py |  0
 .../{remote => }/python_shell.py  |  0
 .../remote_session/remote/__init__.py | 27 --
 .../{remote => }/remote_session.py|  0
 .../{remote => }/ssh_session.py   | 12 +--
 .../{remote => }/testpmd_shell.py |  0
 dts/framework/settings.py | 87 +++
 dts/framework/test_result.py  |  4 +-
 dts/framework/test_suite.py   |  7 +-
 dts/framework/testbed_model/__init__.py   | 12 +--
 dts/framework/testbed_model/{hw => }/cpu.py   | 13 +++
 dts/framework/testbed_model/hw/__init__.py| 27 --
 .../linux_session.py  |  6 +-
 dts/framework/testbed_model/node.py   | 26 --
 .../os_session.py | 22 ++---
 dts/framework/testbed_model/{hw => }/port.py  |  0
 .../posix_session.py  |  4 +-
 dts/framework/testbed_model/sut_node.py   |  8 +-
 dts/framework/testbed_model/tg_node.py| 30 +--
 .../traffic_generator/__init__.py | 24 +
 .../capturing_traffic_generator.py|  6 +-
 .../{ => traffic_generator}/scapy.py  | 23 ++---
 .../traffic_generator.py  | 16 +++-
 .../testbed_model/{hw => }/virtual_device.py  |  0
 dts/framework/utils.py| 46 +++---
 dts/main.py   |  9 +-
 31 files changed, 259 insertions(+), 288 deletions(-)
 rename dts/framework/remote_session/{remote => }/interactive_remote_session.py 
(100%)
 rename dts/framework/remote_session/{remote => }/interactive_shell.py (100%)
 rename dts/framework/remote_session/{remote => }/python_shell.py (100%)
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/ssh_session.py (91%)
 rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%)
 rename dts/framework/testbed_model/{hw => }/cpu.py (95%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 rename dts/framework/{remote_session => testbed_model}/linux_session.py (97%)
 rename dts/framework/{remote_session => testbed_model}/os_session.py (95%)
 rename dts/framework/testbed_model/{hw => }/port.py (100%)
 rename dts/framework/{remote_session => testbed_model}/posix_session.py (98%)
 create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
 rename dts/framework/testbed_model/{ => 
traffic_generator}/capturing_traffic_generator.py (96%)
 rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (95%)
 rename dts/framework/testbed_model/{ => 
traffic_generator}/traffic_generator.py (80%)
 rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index cb7e00ba34..2044c82611 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -17,6 +17,7 @@
 import warlock  # type: ignore[import]
 import yaml
 
+from framework.exception import ConfigurationError
 from framework.settings import SETTINGS
 from framework.utils import StrEnum
 
@@ -89,7 +90,7 @@ class TrafficGeneratorConfig:
 traffic_generator_type: TrafficGeneratorType
 
 @staticmethod
-def from_dict(d: dict):
+def from_dic

[PATCH v5 03/23] dts: add basic developer docs

2023-11-06 Thread Juraj Linkeš
Expand the framework contribution guidelines and add how to document the
code with Python docstrings.

Signed-off-by: Juraj Linkeš 
---
 doc/guides/tools/dts.rst | 73 
 1 file changed, 73 insertions(+)

diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index 32c18ee472..b1e99107c3 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -264,6 +264,65 @@ which be changed with the ``--output-dir`` command line 
argument.
 The results contain basic statistics of passed/failed test cases and DPDK 
version.
 
 
+Contributing to DTS
+---
+
+There are two areas of contribution: The DTS framework and DTS test suites.
+
+The framework contains the logic needed to run test cases, such as connecting 
to nodes,
+running DPDK apps and collecting results.
+
+The test cases call APIs from the framework to test their scenarios. Adding 
test cases may
+require adding code to the framework as well.
+
+
+Framework Coding Guidelines
+~~~
+
+When adding code to the DTS framework, pay attention to the rest of the code
+and try not to divert much from it. The :ref:`DTS developer tools 
` will issue
+warnings when some of the basics are not met.
+
+The code must be properly documented with docstrings. The style must conform to
+the `Google style 
`_.
+See an example of the style
+`here 
`_.
+For cases which are not covered by the Google style, refer
+to `PEP 257 `_. There are some cases which 
are not covered by
+the two style guides, where we deviate or where some additional clarification 
is helpful:
+
+   * The __init__() methods of classes are documented separately from the 
docstring of the class
+ itself.
+   * The docstrings of implemented abstract methods should refer to the 
superclass's definition
+ if there's no deviation.
+   * Instance variables/attributes should be documented in the docstring of 
the class
+ in the ``Attributes:`` section.
+   * The dataclass.dataclass decorator changes how the attributes are 
processed. The dataclass
+ attributes which result in instance variables/attributes should also be 
recorded
+ in the ``Attributes:`` section.
+   * Class variables/attributes, on the other hand, should be documented with 
``#:`` above
+ the type annotated line. The description may be omitted if the meaning is 
obvious.
+   * The Enum and TypedDict also process the attributes in particular ways and 
should be documented
+ with ``#:`` as well. This is mainly so that the autogenerated docs 
contain the assigned value.
+   * When referencing a parameter of a function or a method in their 
docstring, don't use
+ any articles and put the parameter into single backticks. This mimics the 
style of
+ `Python's documentation `_.
+   * When specifying a value, use double backticks::
+
+def foo(greet: bool) -> None:
+"""Demonstration of single and double backticks.
+
+`greet` controls whether ``Hello World`` is printed.
+
+Args:
+   greet: Whether to print the ``Hello World`` message.
+"""
+if greet:
+   print(f"Hello World")
+
+   * The docstring maximum line length is the same as the code maximum line 
length.
+
+
 How To Write a Test Suite
 -
 
@@ -293,6 +352,18 @@ There are four types of methods that comprise a test suite:
| These methods don't need to be implemented if there's no need for them in 
a test suite.
 In that case, nothing will happen when they are executed.
 
+#. **Configuration, traffic and other logic**
+
+   The ``TestSuite`` class contains a variety of methods for anything that
+   a test suite setup or teardown or a test case may need to do.
+
+   The test suites also frequently use a DPDK app, such as testpmd, in 
interactive mode
+   and use the interactive shell instances directly.
+
+   These are the two main ways to call the framework logic in test suites. If 
there's any
+   functionality or logic missing from the framework, it should be implemented 
so that
+   the test suites can use one of these two ways.
+
 #. **Test case verification**
 
Test case verification should be done with the ``verify`` method, which 
records the result.
@@ -308,6 +379,8 @@ There are four types of methods that comprise a test suite:
and used by the test suite via the ``sut_node`` field.
 
 
+.. _dts_dev_tools:
+
 DTS Developer Tools
 ---
 
-- 
2.34.1



[PATCH v5 02/23] dts: add docstring checker

2023-11-06 Thread Juraj Linkeš
Python docstrings are the in-code way to document the code. The
docstring checker of choice is pydocstyle which we're executing from
Pylama, but the current latest versions are not compatible due to [0],
so pin the pydocstyle version to the latest working version.

[0] https://github.com/klen/pylama/issues/232

Signed-off-by: Juraj Linkeš 
---
 dts/poetry.lock| 12 ++--
 dts/pyproject.toml |  6 +-
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index f7b3b6d602..a734fa71f0 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -489,20 +489,20 @@ files = [
 
 [[package]]
 name = "pydocstyle"
-version = "6.3.0"
+version = "6.1.1"
 description = "Python docstring style checker"
 optional = false
 python-versions = ">=3.6"
 files = [
-{file = "pydocstyle-6.3.0-py3-none-any.whl", hash = 
"sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"},
-{file = "pydocstyle-6.3.0.tar.gz", hash = 
"sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"},
+{file = "pydocstyle-6.1.1-py3-none-any.whl", hash = 
"sha256:6987826d6775056839940041beef5c08cc7e3d71d63149b48e36727f70144dc4"},
+{file = "pydocstyle-6.1.1.tar.gz", hash = 
"sha256:1d41b7c459ba0ee6c345f2eb9ae827cab14a7533a88c5c6f7e94923f72df92dc"},
 ]
 
 [package.dependencies]
-snowballstemmer = ">=2.2.0"
+snowballstemmer = "*"
 
 [package.extras]
-toml = ["tomli (>=1.2.3)"]
+toml = ["toml"]
 
 [[package]]
 name = "pyflakes"
@@ -837,4 +837,4 @@ jsonschema = ">=4,<5"
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = 
"0b1e4a1cb8323e17e5ee5951c97e74bde6e60d0413d7b25b1803d5b2bab39639"
+content-hash = 
"3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 6762edfa6b..3943c87c87 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -25,6 +25,7 @@ PyYAML = "^6.0"
 types-PyYAML = "^6.0.8"
 fabric = "^2.7.1"
 scapy = "^2.5.0"
+pydocstyle = "6.1.1"
 
 [tool.poetry.group.dev.dependencies]
 mypy = "^0.961"
@@ -39,10 +40,13 @@ requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"
 
 [tool.pylama]
-linters = "mccabe,pycodestyle,pyflakes"
+linters = "mccabe,pycodestyle,pydocstyle,pyflakes"
 format = "pylint"
 max_line_length = 88 # 
https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#line-length
 
+[tool.pylama.linter.pydocstyle]
+convention = "google"
+
 [tool.mypy]
 python_version = "3.10"
 enable_error_code = ["ignore-without-code"]
-- 
2.34.1



[PATCH v5 04/23] dts: exceptions docstring update

2023-11-06 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/__init__.py  |  12 -
 dts/framework/exception.py | 106 +
 2 files changed, 83 insertions(+), 35 deletions(-)

diff --git a/dts/framework/__init__.py b/dts/framework/__init__.py
index d551ad4bf0..662e6ccad2 100644
--- a/dts/framework/__init__.py
+++ b/dts/framework/__init__.py
@@ -1,3 +1,13 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
+
+"""Libraries and utilities for running DPDK Test Suite (DTS).
+
+The various modules in the DTS framework offer:
+
+* Connections to nodes, both interactive and non-interactive,
+* A straightforward way to add support for different operating systems of 
remote nodes,
+* Test suite setup, execution and teardown, along with test case setup, 
execution and teardown,
+* Pre-test suite setup and post-test suite teardown.
+"""
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 7489c03570..ee1562c672 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -3,8 +3,10 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-User-defined exceptions used across the framework.
+"""DTS exceptions.
+
+The exceptions all have different severities expressed as an integer.
+The highest severity of all raised exception is used as the exit code of DTS.
 """
 
 from enum import IntEnum, unique
@@ -13,59 +15,79 @@
 
 @unique
 class ErrorSeverity(IntEnum):
-"""
-The severity of errors that occur during DTS execution.
+"""The severity of errors that occur during DTS execution.
+
 All exceptions are caught and the most severe error is used as return code.
 """
 
+#:
 NO_ERR = 0
+#:
 GENERIC_ERR = 1
+#:
 CONFIG_ERR = 2
+#:
 REMOTE_CMD_EXEC_ERR = 3
+#:
 SSH_ERR = 4
+#:
 DPDK_BUILD_ERR = 10
+#:
 TESTCASE_VERIFY_ERR = 20
+#:
 BLOCKING_TESTSUITE_ERR = 25
 
 
 class DTSError(Exception):
-"""
-The base exception from which all DTS exceptions are derived.
-Stores error severity.
+"""The base exception from which all DTS exceptions are subclassed.
+
+Do not use this exception, only use subclassed exceptions.
 """
 
+#:
 severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR
 
 
 class SSHTimeoutError(DTSError):
-"""
-Command execution timeout.
-"""
+"""The SSH execution of a command timed out."""
 
+#:
 severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
 _command: str
 
 def __init__(self, command: str):
+"""Define the meaning of the first argument.
+
+Args:
+command: The executed command.
+"""
 self._command = command
 
 def __str__(self) -> str:
-return f"TIMEOUT on {self._command}"
+"""Add some context to the string representation."""
+return f"{self._command} execution timed out."
 
 
 class SSHConnectionError(DTSError):
-"""
-SSH connection error.
-"""
+"""An unsuccessful SSH connection."""
 
+#:
 severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
 _host: str
 _errors: list[str]
 
 def __init__(self, host: str, errors: list[str] | None = None):
+"""Define the meaning of the first two arguments.
+
+Args:
+host: The hostname to which we're trying to connect.
+errors: Any errors that occurred during the connection attempt.
+"""
 self._host = host
 self._errors = [] if errors is None else errors
 
 def __str__(self) -> str:
+"""Include the errors in the string representation."""
 message = f"Error trying to connect with {self._host}."
 if self._errors:
 message += f" Errors encountered while retrying: {', 
'.join(self._errors)}"
@@ -74,43 +96,53 @@ def __str__(self) -> str:
 
 
 class SSHSessionDeadError(DTSError):
-"""
-SSH session is not alive.
-It can no longer be used.
-"""
+"""The SSH session is no longer alive."""
 
+#:
 severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
 _host: str
 
 def __init__(self, host: str):
+"""Define the meaning of the first argument.
+
+Args:
+host: The hostname of the disconnected node.
+"""
 self._host = host
 
 def __str__(self) -> str:
-return f"SSH session with {self._host} has died"
+"""Add some context to the string representation."""
+return f"SSH session with {self._host} has died."
 
 
 class ConfigurationError(DTSError):
-"""
-Raised when an invalid configuration is encountered.
-"""
+"""An invalid configuration."""
 
+#:
 severity: ClassVar[ErrorSeverity] = E

[PATCH v5 05/23] dts: settings docstring update

2023-11-06 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/settings.py | 101 +-
 1 file changed, 100 insertions(+), 1 deletion(-)

diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 7f5841d073..787db7c198 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -3,6 +3,70 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
 
+"""Environment variables and command line arguments parsing.
+
+This is a simple module utilizing the built-in argparse module to parse 
command line arguments,
+augment them with values from environment variables and make them available 
across the framework.
+
+The command line value takes precedence, followed by the environment variable 
value,
+followed by the default value defined in this module.
+
+The command line arguments along with the supported environment variables are:
+
+.. option:: --config-file
+.. envvar:: DTS_CFG_FILE
+
+The path to the YAML test run configuration file.
+
+.. option:: --output-dir, --output
+.. envvar:: DTS_OUTPUT_DIR
+
+The directory where DTS logs and results are saved.
+
+.. option:: --compile-timeout
+.. envvar:: DTS_COMPILE_TIMEOUT
+
+The timeout for compiling DPDK.
+
+.. option:: -t, --timeout
+.. envvar:: DTS_TIMEOUT
+
+The timeout for all DTS operation except for compiling DPDK.
+
+.. option:: -v, --verbose
+.. envvar:: DTS_VERBOSE
+
+Set to any value to enable logging everything to the console.
+
+.. option:: -s, --skip-setup
+.. envvar:: DTS_SKIP_SETUP
+
+Set to any value to skip building DPDK.
+
+.. option:: --tarball, --snapshot, --git-ref
+.. envvar:: DTS_DPDK_TARBALL
+
+The path to a DPDK tarball, git commit ID, tag ID or tree ID to test.
+
+.. option:: --test-cases
+.. envvar:: DTS_TESTCASES
+
+A comma-separated list of test cases to execute. Unknown test cases will 
be silently ignored.
+
+.. option:: --re-run, --re_run
+.. envvar:: DTS_RERUN
+
+Re-run each test case this many times in case of a failure.
+
+Attributes:
+SETTINGS: The module level variable storing framework-wide DTS settings.
+
+Typical usage example::
+
+  from framework.settings import SETTINGS
+  foo = SETTINGS.foo
+"""
+
 import argparse
 import os
 from collections.abc import Callable, Iterable, Sequence
@@ -16,6 +80,23 @@
 
 
 def _env_arg(env_var: str) -> Any:
+"""A helper method augmenting the argparse Action with environment 
variables.
+
+If the supplied environment variable is defined, then the default value
+of the argument is modified. This satisfies the priority order of
+command line argument > environment variable > default value.
+
+Arguments with no values (flags) should be defined using the const keyword 
argument
+(True or False). When the argument is specified, it will be set to const, 
if not specified,
+the default will be stored (possibly modified by the corresponding 
environment variable).
+
+Other arguments work the same as default argparse arguments, that is using
+the default 'store' action.
+
+Returns:
+  The modified argparse.Action.
+"""
+
 class _EnvironmentArgument(argparse.Action):
 def __init__(
 self,
@@ -68,14 +149,28 @@ def __call__(
 
 @dataclass(slots=True)
 class Settings:
+"""Default framework-wide user settings.
+
+The defaults may be modified at the start of the run.
+"""
+
+#:
 config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
+#:
 output_dir: str = "output"
+#:
 timeout: float = 15
+#:
 verbose: bool = False
+#:
 skip_setup: bool = False
+#:
 dpdk_tarball_path: Path | str = "dpdk.tar.xz"
+#:
 compile_timeout: float = 1200
+#:
 test_cases: list[str] = field(default_factory=list)
+#:
 re_run: int = 0
 
 
@@ -169,7 +264,7 @@ def _get_parser() -> argparse.ArgumentParser:
 action=_env_arg("DTS_RERUN"),
 default=SETTINGS.re_run,
 type=int,
-help="[DTS_RERUN] Re-run each test case the specified amount of times "
+help="[DTS_RERUN] Re-run each test case the specified number of times "
 "if a test failure occurs",
 )
 
@@ -177,6 +272,10 @@ def _get_parser() -> argparse.ArgumentParser:
 
 
 def get_settings() -> Settings:
+"""Create new settings with inputs from the user.
+
+The inputs are taken from the command line and from environment variables.
+"""
 parsed_args = _get_parser().parse_args()
 return Settings(
 config_file_path=parsed_args.config_file,
-- 
2.34.1



[PATCH v5 07/23] dts: dts runner and main docstring update

2023-11-06 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/dts.py | 128 ---
 dts/main.py  |   8 ++-
 2 files changed, 112 insertions(+), 24 deletions(-)

diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 4c7fb0c40a..331fed7dc4 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -3,6 +3,33 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+r"""Test suite runner module.
+
+A DTS run is split into stages:
+
+#. Execution stage,
+#. Build target stage,
+#. Test suite stage,
+#. Test case stage.
+
+The module is responsible for running tests on testbeds defined in the test 
run configuration.
+Each setup or teardown of each stage is recorded in a 
:class:`~framework.test_result.DTSResult` or
+one of its subclasses. The test case results are also recorded.
+
+If an error occurs, the current stage is aborted, the error is recorded and 
the run continues in
+the next iteration of the same stage. The return code is the highest 
`severity` of all
+:class:`~.framework.exception.DTSError`\s.
+
+Example:
+An error occurs in a build target setup. The current build target is 
aborted and the run
+continues with the next build target. If the errored build target was the 
last one in the given
+execution, the next execution begins.
+
+Attributes:
+dts_logger: The logger instance used in this module.
+result: The top level result used in the module.
+"""
+
 import sys
 
 from .config import (
@@ -23,9 +50,38 @@
 
 
 def run_all() -> None:
-"""
-The main process of DTS. Runs all build targets in all executions from the 
main
-config file.
+"""Run all build targets in all executions from the test run configuration.
+
+Before running test suites, executions and build targets are first set up.
+The executions and build targets defined in the test run configuration are 
iterated over.
+The executions define which tests to run and where to run them and build 
targets define
+the DPDK build setup.
+
+The tests suites are set up for each execution/build target tuple and each 
scheduled
+test case within the test suite is set up, executed and torn down. After 
all test cases
+have been executed, the test suite is torn down and the next build target 
will be tested.
+
+All the nested steps look like this:
+
+#. Execution setup
+
+#. Build target setup
+
+#. Test suite setup
+
+#. Test case setup
+#. Test case logic
+#. Test case teardown
+
+#. Test suite teardown
+
+#. Build target teardown
+
+#. Execution teardown
+
+The test cases are filtered according to the specification in the test run 
configuration and
+the :option:`--test-cases` command line argument or
+the :envvar:`DTS_TESTCASES` environment variable.
 """
 global dts_logger
 global result
@@ -87,6 +143,8 @@ def run_all() -> None:
 
 
 def _check_dts_python_version() -> None:
+"""Check the required Python version - v3.10."""
+
 def RED(text: str) -> str:
 return f"\u001B[31;1m{str(text)}\u001B[0m"
 
@@ -111,9 +169,16 @@ def _run_execution(
 execution: ExecutionConfiguration,
 result: DTSResult,
 ) -> None:
-"""
-Run the given execution. This involves running the execution setup as well 
as
-running all build targets in the given execution.
+"""Run the given execution.
+
+This involves running the execution setup as well as running all build 
targets
+in the given execution. After that, execution teardown is run.
+
+Args:
+sut_node: The execution's SUT node.
+tg_node: The execution's TG node.
+execution: An execution's test run configuration.
+result: The top level result object.
 """
 dts_logger.info(
 f"Running execution with SUT 
'{execution.system_under_test_node.name}'."
@@ -150,8 +215,18 @@ def _run_build_target(
 execution: ExecutionConfiguration,
 execution_result: ExecutionResult,
 ) -> None:
-"""
-Run the given build target.
+"""Run the given build target.
+
+This involves running the build target setup as well as running all test 
suites
+in the given execution the build target is defined in.
+After that, build target teardown is run.
+
+Args:
+sut_node: The execution's SUT node.
+tg_node: The execution's TG node.
+build_target: A build target's test run configuration.
+execution: The build target's execution's test run configuration.
+execution_result: The execution level result object associated with 
the execution.
 """
 dts_logger.info(f"Running build target '{build_target.name}'.")
 build_target_result = execution_result.add_build_target(build_targe

[PATCH v5 08/23] dts: test suite docstring update

2023-11-06 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/test_suite.py | 223 +++-
 1 file changed, 168 insertions(+), 55 deletions(-)

diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index d53553bf34..8daac35818 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -2,8 +2,19 @@
 # Copyright(c) 2010-2014 Intel Corporation
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-Base class for creating DTS test cases.
+"""Features common to all test suites.
+
+The module defines the :class:`TestSuite` class which doesn't contain any test 
cases, and as such
+must be extended by subclasses which add test cases. The :class:`TestSuite` 
contains the basics
+needed by subclasses:
+
+* Test suite and test case execution flow,
+* Testbed (SUT, TG) configuration,
+* Packet sending and verification,
+* Test case verification.
+
+The module also defines a function, :func:`get_test_suites`,
+for gathering test suites from a Python module.
 """
 
 import importlib
@@ -31,25 +42,44 @@
 
 
 class TestSuite(object):
-"""
-The base TestSuite class provides methods for handling basic flow of a 
test suite:
-* test case filtering and collection
-* test suite setup/cleanup
-* test setup/cleanup
-* test case execution
-* error handling and results storage
-Test cases are implemented by derived classes. Test cases are all methods
-starting with test_, further divided into performance test cases
-(starting with test_perf_) and functional test cases (all other test 
cases).
-By default, all test cases will be executed. A list of testcase str names
-may be specified in conf.yaml or on the command line
-to filter which test cases to run.
-The methods named [set_up|tear_down]_[suite|test_case] should be overridden
-in derived classes if the appropriate suite/test case fixtures are needed.
+"""The base class with methods for handling the basic flow of a test suite.
+
+* Test case filtering and collection,
+* Test suite setup/cleanup,
+* Test setup/cleanup,
+* Test case execution,
+* Error handling and results storage.
+
+Test cases are implemented by subclasses. Test cases are all methods 
starting with ``test_``,
+further divided into performance test cases (starting with ``test_perf_``)
+and functional test cases (all other test cases).
+
+By default, all test cases will be executed. A list of test case names may 
be specified
+in the YAML test run configuration file and in the :option:`--test-cases` 
command line argument
+or in the :envvar:`DTS_TESTCASES` environment variable to filter which 
test cases to run.
+The union of both lists will be used. Any unknown test cases from the 
latter lists
+will be silently ignored.
+
+If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` 
environment variable
+is set, in case of a test case failure, the test case will be executed 
again until it passes
+or it fails that many times in addition to the first failure.
+
+The methods named ``[set_up|tear_down]_[suite|test_case]`` should be 
overridden in subclasses
+if the appropriate test suite/test case fixtures are needed.
+
+The test suite is aware of the testbed (the SUT and TG) it's running on. 
From this, it can
+properly choose the IP addresses and other configuration that must be 
tailored to the testbed.
+
+Attributes:
+sut_node: The SUT node where the test suite is running.
+tg_node: The TG node where the test suite is running.
+is_blocking: Whether the test suite is blocking. A failure of a 
blocking test suite
+will block the execution of all subsequent test suites in the 
current build target.
 """
 
 sut_node: SutNode
-is_blocking = False
+tg_node: TGNode
+is_blocking: bool = False
 _logger: DTSLOG
 _test_cases_to_run: list[str]
 _func: bool
@@ -72,6 +102,19 @@ def __init__(
 func: bool,
 build_target_result: BuildTargetResult,
 ):
+"""Initialize the test suite testbed information and basic 
configuration.
+
+Process what test cases to run, create the associated 
:class:`TestSuiteResult`,
+find links between ports and set up default IP addresses to be used 
when configuring them.
+
+Args:
+sut_node: The SUT node where the test suite will run.
+tg_node: The TG node where the test suite will run.
+test_cases: The list of test cases to execute.
+If empty, all test cases will be executed.
+func: Whether to run functional tests.
+build_target_result: The build target result this test suite is 
run in.
+"""
 self.sut_node = sut_node
 self.tg_node = tg_node
 self._logger = getL
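
The ``test_`` naming convention described above lends itself to collection
via reflection. A hedged sketch of how such filtering could work (not the
exact DTS implementation):

    import inspect

    class ExampleSuite:
        def test_perf_throughput(self) -> None: ...
        def test_mac_filter(self) -> None: ...
        def helper(self) -> None: ...  # not collected

    def collect_test_cases(suite_class, allowed: set[str] | None = None):
        """Split 'test_*' methods into performance and functional cases."""
        perf, func = [], []
        for name, method in inspect.getmembers(suite_class, inspect.isfunction):
            if not name.startswith("test_") or (allowed and name not in allowed):
                continue
            (perf if name.startswith("test_perf_") else func).append(method)
        return perf, func

    perf, func = collect_test_cases(ExampleSuite)
    # perf holds test_perf_throughput, func holds test_mac_filter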

[PATCH v5 06/23] dts: logger and settings docstring update

2023-11-06 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/logger.py | 72 +--
 dts/framework/utils.py  | 96 ++---
 2 files changed, 121 insertions(+), 47 deletions(-)

diff --git a/dts/framework/logger.py b/dts/framework/logger.py
index bb2991e994..d3eb75a4e4 100644
--- a/dts/framework/logger.py
+++ b/dts/framework/logger.py
@@ -3,9 +3,9 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-DTS logger module with several log level. DTS framework and TestSuite logs
-are saved in different log files.
+"""DTS logger module.
+
+DTS framework and TestSuite logs are saved in different log files.
 """
 
 import logging
@@ -18,19 +18,21 @@
 stream_fmt = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
 
 
-class LoggerDictType(TypedDict):
-logger: "DTSLOG"
-name: str
-node: str
-
+class DTSLOG(logging.LoggerAdapter):
+"""DTS logger adapter class for framework and testsuites.
 
-# List for saving all using loggers
-Loggers: list[LoggerDictType] = []
+The :option:`--verbose` command line argument and the 
:envvar:`DTS_VERBOSE` environment
+variable control the verbosity of output. If enabled, all messages will be 
emitted to the
+console.
 
+The :option:`--output` command line argument and the 
:envvar:`DTS_OUTPUT_DIR` environment
+variable modify the directory where the logs will be stored.
 
-class DTSLOG(logging.LoggerAdapter):
-"""
-DTS log class for framework and testsuite.
+Attributes:
+node: The additional identifier. Currently unused.
+sh: The handler which emits logs to console.
+fh: The handler which emits logs to a file.
+verbose_fh: Just as fh, but logs with a different, more verbose, 
format.
 """
 
 _logger: logging.Logger
@@ -40,6 +42,15 @@ class DTSLOG(logging.LoggerAdapter):
 verbose_fh: logging.FileHandler
 
 def __init__(self, logger: logging.Logger, node: str = "suite"):
+"""Extend the constructor with additional handlers.
+
+One handler logs to the console, the other one to a file, with either 
a regular or verbose
+format.
+
+Args:
+logger: The logger from which to create the logger adapter.
+node: An additional identifier. Currently unused.
+"""
 self._logger = logger
 # 1 means log everything, this will be used by file handlers if their 
level
 # is not set
@@ -92,26 +103,43 @@ def __init__(self, logger: logging.Logger, node: str = 
"suite"):
 super(DTSLOG, self).__init__(self._logger, dict(node=self.node))
 
 def logger_exit(self) -> None:
-"""
-Remove stream handler and logfile handler.
-"""
+"""Remove the stream handler and the logfile handler."""
 for handler in (self.sh, self.fh, self.verbose_fh):
 handler.flush()
 self._logger.removeHandler(handler)
 
 
+class _LoggerDictType(TypedDict):
+logger: DTSLOG
+name: str
+node: str
+
+
+# List for saving all loggers in use
+_Loggers: list[_LoggerDictType] = []
+
+
 def getLogger(name: str, node: str = "suite") -> DTSLOG:
+"""Get DTS logger adapter identified by name and node.
+
+An existing logger will be returned if one with the exact name and node 
already exists.
+A new one will be created and stored otherwise.
+
+Args:
+name: The name of the logger.
+node: An additional identifier for the logger.
+
+Returns:
+A logger uniquely identified by both name and node.
 """
-Get logger handler and if there's no handler for specified Node will 
create one.
-"""
-global Loggers
+global _Loggers
 # return saved logger
-logger: LoggerDictType
-for logger in Loggers:
+logger: _LoggerDictType
+for logger in _Loggers:
 if logger["name"] == name and logger["node"] == node:
 return logger["logger"]
 
 # return new logger
 dts_logger: DTSLOG = DTSLOG(logging.getLogger(name), node)
-Loggers.append({"logger": dts_logger, "name": name, "node": node})
+_Loggers.append({"logger": dts_logger, "name": name, "node": node})
 return dts_logger
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index f0c916471c..0613adf7ad 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -3,6 +3,16 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+"""Various utility classes and functions.
+
+These are used in multiple modules across the framework. They're here because
+they provide some non-specific functionality, greatly simplify imports or just 
don't
+fit elsewhere.
+
+Attributes:
+REGEX_FOR_PCI_ADDRESS: The regex representing a PCI address, e.g. 
``0000:00:08.0``.
+"""
+
 import atexit
 import json
 import os
@@ -19,12
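
The caching behaviour of ``getLogger`` reduces to a small sketch (a plain
dictionary cache; the real implementation stores extra metadata):

    import logging

    _cache: dict[tuple[str, str], logging.LoggerAdapter] = {}

    def get_logger_sketch(name: str, node: str = "suite") -> logging.LoggerAdapter:
        """Return the cached adapter for (name, node), creating it on first use."""
        key = (name, node)
        if key not in _cache:
            _cache[key] = logging.LoggerAdapter(logging.getLogger(name), {"node": node})
        return _cache[key]

    # The same object is returned for the same (name, node) pair.
    assert get_logger_sketch("dts", "sut") is get_logger_sketch("dts", "sut")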

[PATCH v5 09/23] dts: test result docstring update

2023-11-06 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/test_result.py | 292 ---
 1 file changed, 234 insertions(+), 58 deletions(-)

diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index 603e18872c..f553948454 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -2,8 +2,25 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
-"""
-Generic result container and reporters
+r"""Record and process DTS results.
+
+The results are recorded in a hierarchical manner:
+
+* :class:`DTSResult` contains
+* :class:`ExecutionResult` contains
+* :class:`BuildTargetResult` contains
+* :class:`TestSuiteResult` contains
+* :class:`TestCaseResult`
+
+Each result may contain multiple lower level results, e.g. there are multiple
+:class:`TestSuiteResult`\s in a :class:`BuildTargetResult`.
+The results have common parts, such as setup and teardown results, captured in 
:class:`BaseResult`,
+which also defines some common behaviors in its methods.
+
+Each result class has its own idiosyncrasies which they implement in 
overridden methods.
+
+The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` 
environment
+variable modify the directory where the files with results will be stored.
 """
 
 import os.path
@@ -26,26 +43,34 @@
 
 
 class Result(Enum):
-"""
-An Enum defining the possible states that
-a setup, a teardown or a test case may end up in.
-"""
+"""The possible states that a setup, a teardown or a test case may end up 
in."""
 
+#:
 PASS = auto()
+#:
 FAIL = auto()
+#:
 ERROR = auto()
+#:
 SKIP = auto()
 
 def __bool__(self) -> bool:
+"""Only PASS is True."""
 return self is self.PASS
 
 
 class FixtureResult(object):
-"""
-A record that stored the result of a setup or a teardown.
-The default is FAIL because immediately after creating the object
-the setup of the corresponding stage will be executed, which also 
guarantees
-the execution of teardown.
+"""A record that stores the result of a setup or a teardown.
+
+FAIL is a sensible default since it prevents false positives
+(which could happen if the default was TRUE).
+
+Preventing false positives or other false results is preferable since a 
failure
+is most likely to be investigated (the other false results may not be 
investigated at all).
+
+Attributes:
+result: The associated result.
+error: The error in case of a failure.
 """
 
 result: Result
@@ -56,21 +81,32 @@ def __init__(
 result: Result = Result.FAIL,
 error: Exception | None = None,
 ):
+"""Initialize the constructor with the fixture result and store a 
possible error.
+
+Args:
+result: The result to store.
+error: The error which happened when a failure occurred.
+"""
 self.result = result
 self.error = error
 
 def __bool__(self) -> bool:
+"""A wrapper around the stored :class:`Result`."""
 return bool(self.result)
 
 
 class Statistics(dict):
-"""
-A helper class used to store the number of test cases by its result
-along a few other basic information.
-Using a dict provides a convenient way to format the data.
+"""How many test cases ended in which result state along some other basic 
information.
+
+Subclassing :class:`dict` provides a convenient way to format the data.
 """
 
 def __init__(self, dpdk_version: str | None):
+"""Extend the constructor with relevant keys.
+
+Args:
+dpdk_version: The version of tested DPDK.
+"""
 super(Statistics, self).__init__()
 for result in Result:
 self[result.name] = 0
@@ -78,8 +114,17 @@ def __init__(self, dpdk_version: str | None):
 self["DPDK VERSION"] = dpdk_version
 
 def __iadd__(self, other: Result) -> "Statistics":
-"""
-Add a Result to the final count.
+"""Add a Result to the final count.
+
+Example:
+stats: Statistics = Statistics()  # empty Statistics
+stats += Result.PASS  # add a Result to `stats`
+
+Args:
+other: The Result to add to this statistics object.
+
+Returns:
+The modified statistics object.
 """
 self[other.name] += 1
 self["PASS RATE"] = (
@@ -90,9 +135,7 @@ def __iadd__(self, other: Result) -> "Statistics":
 return self
 
 def __str__(self) -> str:
-"""
-Provide a string representation of the data.
-"""
+"""Each line contains the formatted key = value pair."""
 stats_str = ""
 for key, value in self.items():
 stats_str += f"{key:<12} = {value}\n"
@@ -102,10 +145,16 @@ d
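
The pass-rate bookkeeping in ``__iadd__`` boils down to this runnable sketch
(a trimmed ``Statistics`` with only two result states):

    from enum import Enum, auto

    class Result(Enum):
        PASS = auto()
        FAIL = auto()

    class StatsSketch(dict):
        def __init__(self) -> None:
            super().__init__({r.name: 0 for r in Result} | {"PASS RATE": 0.0})

        def __iadd__(self, other: Result) -> "StatsSketch":
            self[other.name] += 1  # count the result
            executed = sum(self[r.name] for r in Result)
            self["PASS RATE"] = self[Result.PASS.name] * 100 / executed
            return self

    stats = StatsSketch()
    stats += Result.PASS
    stats += Result.FAIL
    print(stats["PASS RATE"])  # 50.0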

[PATCH v5 10/23] dts: config docstring update

2023-11-06 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/config/__init__.py | 371 ++-
 dts/framework/config/types.py| 132 +++
 2 files changed, 446 insertions(+), 57 deletions(-)
 create mode 100644 dts/framework/config/types.py

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 2044c82611..0aa149a53d 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -3,8 +3,34 @@
 # Copyright(c) 2022-2023 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-Yaml config parsing methods
+"""Testbed configuration and test suite specification.
+
+This package offers classes that hold real-time information about the testbed 
and the test run
+configuration describing the tested testbed, plus a loader function, 
:func:`load_config`, which loads
+the YAML test run configuration file
+and validates it according to :download:`the schema `.
+
+The YAML test run configuration file is parsed into a dictionary, parts of 
which are used throughout
+this package. The allowed keys and types inside this dictionary are defined in
+the :doc:`types ` module.
+
+The test run configuration has two main sections:
+
+* The :class:`ExecutionConfiguration` which defines what tests are going 
to be run
+  and how DPDK will be built. It also references the testbed where these 
tests and DPDK
+  are going to be run,
+* The nodes of the testbed are defined in the other section,
+  a :class:`list` of :class:`NodeConfiguration` objects.
+
+The real-time information about the testbed is supposed to be gathered at runtime.
+
+The classes defined in this package make heavy use of :mod:`dataclasses`.
+All of them use slots and are frozen:
+
+* Slots enable some optimizations, by pre-allocating space for the defined
+  attributes in the underlying data structure,
+* Frozen makes the object immutable. This enables further optimizations,
+  and makes it thread safe should we ever want to move in that direction.
 """
 
 import json
@@ -12,11 +38,20 @@
 import pathlib
 from dataclasses import dataclass
 from enum import auto, unique
-from typing import Any, TypedDict, Union
+from typing import Union
 
 import warlock  # type: ignore[import]
 import yaml
 
+from framework.config.types import (
+BuildTargetConfigDict,
+ConfigurationDict,
+ExecutionConfigDict,
+NodeConfigDict,
+PortConfigDict,
+TestSuiteConfigDict,
+TrafficGeneratorConfigDict,
+)
 from framework.exception import ConfigurationError
 from framework.settings import SETTINGS
 from framework.utils import StrEnum
@@ -24,55 +59,97 @@
 
 @unique
 class Architecture(StrEnum):
+r"""The supported architectures of 
:class:`~framework.testbed_model.node.Node`\s."""
+
+#:
 i686 = auto()
+#:
 x86_64 = auto()
+#:
 x86_32 = auto()
+#:
 arm64 = auto()
+#:
 ppc64le = auto()
 
 
 @unique
 class OS(StrEnum):
+r"""The supported operating systems of 
:class:`~framework.testbed_model.node.Node`\s."""
+
+#:
 linux = auto()
+#:
 freebsd = auto()
+#:
 windows = auto()
 
 
 @unique
 class CPUType(StrEnum):
+r"""The supported CPUs of :class:`~framework.testbed_model.node.Node`\s."""
+
+#:
 native = auto()
+#:
 armv8a = auto()
+#:
 dpaa2 = auto()
+#:
 thunderx = auto()
+#:
 xgene1 = auto()
 
 
 @unique
 class Compiler(StrEnum):
+r"""The supported compilers of 
:class:`~framework.testbed_model.node.Node`\s."""
+
+#:
 gcc = auto()
+#:
 clang = auto()
+#:
 icc = auto()
+#:
 msvc = auto()
 
 
 @unique
 class TrafficGeneratorType(StrEnum):
+"""The supported traffic generators."""
+
+#:
 SCAPY = auto()
 
 
-# Slots enables some optimizations, by pre-allocating space for the defined
-# attributes in the underlying data structure.
-#
-# Frozen makes the object immutable. This enables further optimizations,
-# and makes it thread safe should we every want to move in that direction.
 @dataclass(slots=True, frozen=True)
 class HugepageConfiguration:
+r"""The hugepage configuration of 
:class:`~framework.testbed_model.node.Node`\s.
+
+Attributes:
+amount: The number of hugepages.
+force_first_numa: If :data:`True`, the hugepages will be configured on 
the first NUMA node.
+"""
+
 amount: int
 force_first_numa: bool
 
 
 @dataclass(slots=True, frozen=True)
 class PortConfig:
+r"""The port configuration of 
:class:`~framework.testbed_model.node.Node`\s.
+
+Attributes:
+node: The :class:`~framework.testbed_model.node.Node` where this port 
exists.
+pci: The PCI address of the port.
+os_driver_for_dpdk: The operating system driver name for use with DPDK.
+os_driver: The operating system driver name when the operating system 
controls the port.
+
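
The slots/frozen combination called out in the module docstring behaves as in
this self-contained example (requires Python 3.10+, which DTS already
mandates):

    from dataclasses import FrozenInstanceError, dataclass

    @dataclass(slots=True, frozen=True)
    class HugepageExample:
        amount: int
        force_first_numa: bool

    cfg = HugepageExample(amount=256, force_first_numa=False)
    try:
        cfg.amount = 512  # frozen: any assignment raises
    except FrozenInstanceError:
        pass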

[PATCH v5 11/23] dts: remote session docstring update

2023-11-06 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/remote_session/__init__.py  |  39 +-
 .../remote_session/remote_session.py  | 128 +-
 dts/framework/remote_session/ssh_session.py   |  16 +--
 3 files changed, 135 insertions(+), 48 deletions(-)

diff --git a/dts/framework/remote_session/__init__.py 
b/dts/framework/remote_session/__init__.py
index 5e7ddb2b05..51a01d6b5e 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -2,12 +2,14 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
-"""
-The package provides modules for managing remote connections to a remote host 
(node),
-differentiated by OS.
-The package provides a factory function, create_session, that returns the 
appropriate
-remote connection based on the passed configuration. The differences are in the
-underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux).
+"""Remote interactive and non-interactive sessions.
+
+This package provides modules for managing connections to a remote host 
(node).
+
+The non-interactive sessions send commands and return their output and exit 
code.
+
+The interactive sessions open an interactive shell which is continuously open,
+allowing it to send and receive data within that particular shell.
 """
 
 # pylama:ignore=W0611
@@ -26,10 +28,35 @@
 def create_remote_session(
 node_config: NodeConfiguration, name: str, logger: DTSLOG
 ) -> RemoteSession:
+"""Factory for non-interactive remote sessions.
+
+The function returns an SSH session, but will be extended if support
+for other protocols is added.
+
+Args:
+node_config: The test run configuration of the node to connect to.
+name: The name of the session.
+logger: The logger instance this session will use.
+
+Returns:
+The SSH remote session.
+"""
 return SSHSession(node_config, name, logger)
 
 
 def create_interactive_session(
 node_config: NodeConfiguration, logger: DTSLOG
 ) -> InteractiveRemoteSession:
+"""Factory for interactive remote sessions.
+
+The function returns an interactive SSH session, but will be extended if 
support
+for other protocols is added.
+
+Args:
+node_config: The test run configuration of the node to connect to.
+logger: The logger instance this session will use.
+
+Returns:
+The interactive SSH remote session.
+"""
 return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote_session.py 
b/dts/framework/remote_session/remote_session.py
index 0647d93de4..629c2d7b9c 100644
--- a/dts/framework/remote_session/remote_session.py
+++ b/dts/framework/remote_session/remote_session.py
@@ -3,6 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+"""Base remote session.
+
+This module contains the abstract base class for remote sessions and defines
+the structure of the result of a command execution.
+"""
+
+
 import dataclasses
 from abc import ABC, abstractmethod
 from pathlib import PurePath
@@ -15,8 +22,14 @@
 
 @dataclasses.dataclass(slots=True, frozen=True)
 class CommandResult:
-"""
-The result of remote execution of a command.
+"""The result of remote execution of a command.
+
+Attributes:
+name: The name of the session that executed the command.
+command: The executed command.
+stdout: The standard output the command produced.
+stderr: The standard error output the command produced.
+return_code: The return code the command exited with.
 """
 
 name: str
@@ -26,6 +39,7 @@ class CommandResult:
 return_code: int
 
 def __str__(self) -> str:
+"""Format the command outputs."""
 return (
 f"stdout: '{self.stdout}'\n"
 f"stderr: '{self.stderr}'\n"
@@ -34,13 +48,24 @@ def __str__(self) -> str:
 
 
 class RemoteSession(ABC):
-"""
-The base class for defining which methods must be implemented in order to 
connect
-to a remote host (node) and maintain a remote session. The derived classes 
are
-supposed to implement/use some underlying transport protocol (e.g. SSH) to
-implement the methods. On top of that, it provides some basic services 
common to
-all derived classes, such as keeping history and logging what's being 
executed
-on the remote node.
+"""Non-interactive remote session.
+
+The abstract methods must be implemented in order to connect to a remote 
host (node)
+and maintain a remote session.
+The subclasses must use (or implement) some underlying transport protocol 
(e.g. SSH)
+to implement the methods. On top of that, it provides some basic services 
common to all
+subclasses, such as keeping history and logging what's being executed on 
the 
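
The ``CommandResult`` dataclass shown above can be exercised standalone; a
sketch with placeholder values:

    from dataclasses import dataclass

    @dataclass(slots=True, frozen=True)
    class CommandResultSketch:
        name: str
        command: str
        stdout: str
        stderr: str
        return_code: int

        def __str__(self) -> str:
            # Mirrors the formatting in the diff above.
            return (
                f"stdout: '{self.stdout}'\n"
                f"stderr: '{self.stderr}'\n"
                f"return_code: '{self.return_code}'"
            )

    print(CommandResultSketch("main", "uname", "Linux", "", 0))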

[PATCH v5 12/23] dts: interactive remote session docstring update

2023-11-06 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 .../interactive_remote_session.py | 36 +++
 .../remote_session/interactive_shell.py   | 99 +++
 dts/framework/remote_session/python_shell.py  | 26 -
 dts/framework/remote_session/testpmd_shell.py | 61 +---
 4 files changed, 150 insertions(+), 72 deletions(-)

diff --git a/dts/framework/remote_session/interactive_remote_session.py 
b/dts/framework/remote_session/interactive_remote_session.py
index 9085a668e8..c1bf30ac61 100644
--- a/dts/framework/remote_session/interactive_remote_session.py
+++ b/dts/framework/remote_session/interactive_remote_session.py
@@ -22,27 +22,23 @@
 class InteractiveRemoteSession:
 """SSH connection dedicated to interactive applications.
 
-This connection is created using paramiko and is a persistent connection 
to the
-host. This class defines methods for connecting to the node and configures 
this
-connection to send "keep alive" packets every 30 seconds. Because paramiko 
attempts
-to use SSH keys to establish a connection first, providing a password is 
optional.
-This session is utilized by InteractiveShells and cannot be interacted with
-directly.
-
-Arguments:
-node_config: Configuration class for the node you are connecting to.
-_logger: Desired logger for this session to use.
+The connection is created using `paramiko 
`_
+and is a persistent connection to the host. This class defines the methods 
for connecting
+to the node and configures the connection to send "keep alive" packets 
every 30 seconds.
+Because paramiko attempts to use SSH keys to establish a connection first, 
providing
+a password is optional. This session is utilized by InteractiveShells
+and cannot be interacted with directly.
 
 Attributes:
-hostname: Hostname that will be used to initialize a connection to the 
node.
-ip: A subsection of hostname that removes the port for the connection 
if there
+hostname: The hostname that will be used to initialize a connection to 
the node.
+ip: A subsection of `hostname` that removes the port for the 
connection if there
 is one. If there is no port, this will be the same as hostname.
-port: Port to use for the ssh connection. This will be extracted from 
the
-hostname if there is a port included, otherwise it will default to 
22.
+port: Port to use for the ssh connection. This will be extracted from 
`hostname`
+if there is a port included, otherwise it will default to ``22``.
 username: User to connect to the node with.
 password: Password of the user connecting to the host. This will 
default to an
 empty string if a password is not provided.
-session: Underlying paramiko connection.
+session: The underlying paramiko connection.
 
 Raises:
 SSHConnectionError: There is an error creating the SSH connection.
@@ -58,9 +54,15 @@ class InteractiveRemoteSession:
 _node_config: NodeConfiguration
 _transport: Transport | None
 
-def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG) -> 
None:
+def __init__(self, node_config: NodeConfiguration, logger: DTSLOG) -> None:
+"""Connect to the node during initialization.
+
+Args:
+node_config: The test run configuration of the node to connect to.
+logger: The logger instance this session will use.
+"""
 self._node_config = node_config
-self._logger = _logger
+self._logger = logger
 self.hostname = node_config.hostname
 self.username = node_config.user
 self.password = node_config.password if node_config.password else ""
diff --git a/dts/framework/remote_session/interactive_shell.py 
b/dts/framework/remote_session/interactive_shell.py
index c24376b2a8..a98a822e91 100644
--- a/dts/framework/remote_session/interactive_shell.py
+++ b/dts/framework/remote_session/interactive_shell.py
@@ -3,18 +3,20 @@
 
 """Common functionality for interactive shell handling.
 
-This base class, InteractiveShell, is meant to be extended by other classes 
that
-contain functionality specific to that shell type. These derived classes will 
often
-modify things like the prompt to expect or the arguments to pass into the 
application,
-but still utilize the same method for sending a command and collecting output. 
How
-this output is handled however is often application specific. If an 
application needs
-elevated privileges to start it is expected that the method for gaining those
-privileges is provided when initializing the class.
+The base class, :class:`InteractiveShell`, is meant to be extended by 
subclasses that contain
+functionality specific to that shell type. These subclasses will often modify 
things like
+the 
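
The keep-alive behaviour described for InteractiveRemoteSession maps onto
paramiko roughly as follows; the host, user and password are placeholders:

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # SSH keys are tried first, so the password may be an empty string.
    client.connect("203.0.113.10", port=22, username="dts", password="")
    transport = client.get_transport()
    if transport is not None:
        transport.set_keepalive(30)  # a keep-alive packet every 30 seconds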

[PATCH v5 13/23] dts: port and virtual device docstring update

2023-11-06 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/testbed_model/__init__.py   | 16 --
 dts/framework/testbed_model/port.py   | 53 +++
 dts/framework/testbed_model/virtual_device.py | 17 +-
 3 files changed, 71 insertions(+), 15 deletions(-)

diff --git a/dts/framework/testbed_model/__init__.py 
b/dts/framework/testbed_model/__init__.py
index 8ced05653b..a02be1f2d9 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -2,9 +2,19 @@
 # Copyright(c) 2022-2023 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-This package contains the classes used to model the physical traffic generator,
-system under test and any other components that need to be interacted with.
+"""Testbed modelling.
+
+This package defines the testbed elements DTS works with:
+
+* A system under test node: :class:`SutNode`,
+* A traffic generator node: :class:`TGNode`,
+* The ports of network interface cards (NICs) present on nodes: 
:class:`Port`,
+* The logical cores of CPUs present on nodes: :class:`LogicalCore`,
+* The virtual devices that can be created on nodes: :class:`VirtualDevice`,
+* The operating systems running on nodes: :class:`LinuxSession` and 
:class:`PosixSession`.
+
+DTS needs to be able to connect to nodes and understand some of the hardware 
present on these nodes
+to properly build and test DPDK.
 """
 
 # pylama:ignore=W0611
diff --git a/dts/framework/testbed_model/port.py 
b/dts/framework/testbed_model/port.py
index 680c29bfe3..817405bea4 100644
--- a/dts/framework/testbed_model/port.py
+++ b/dts/framework/testbed_model/port.py
@@ -2,6 +2,13 @@
 # Copyright(c) 2022 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""NIC port model.
+
+Basic port information, such as location (the ports are identified by their PCI 
addresses on a node),
+drivers and address.
+"""
+
+
 from dataclasses import dataclass
 
 from framework.config import PortConfig
@@ -9,24 +16,35 @@
 
 @dataclass(slots=True, frozen=True)
 class PortIdentifier:
+"""The port identifier.
+
+Attributes:
+node: The node where the port resides.
+pci: The PCI address of the port on `node`.
+"""
+
 node: str
 pci: str
 
 
 @dataclass(slots=True)
 class Port:
-"""
-identifier: The PCI address of the port on a node.
-
-os_driver: The driver used by this port when the OS is controlling it.
-Example: i40e
-os_driver_for_dpdk: The driver the device must be bound to for DPDK to use 
it,
-Example: vfio-pci.
+"""Physical port on a node.
 
-Note: os_driver and os_driver_for_dpdk may be the same thing.
-Example: mlx5_core
+The ports are identified by the node they're on and their PCI addresses. 
The port on the other
+side of the connection is also captured here.
+Each port is serviced by a driver, which may be different for the 
operating system (`os_driver`)
+and for DPDK (`os_driver_for_dpdk`). For some devices, they are the same, 
e.g.: ``mlx5_core``.
 
-peer: The identifier of a port this port is connected with.
+Attributes:
+identifier: The node and PCI address identifying the port.
+os_driver: The operating system driver name when the operating system 
controls the port,
+e.g.: ``i40e``.
+os_driver_for_dpdk: The operating system driver name for use with 
DPDK, e.g.: ``vfio-pci``.
+peer: The identifier of a port this port is connected with.
+The `peer` is on a different node.
+mac_address: The MAC address of the port.
+logical_name: The logical name of the port. Must be discovered.
 """
 
 identifier: PortIdentifier
@@ -37,6 +55,12 @@ class Port:
 logical_name: str = ""
 
 def __init__(self, node_name: str, config: PortConfig):
+"""Initialize the port from `node_name` and `config`.
+
+Args:
+node_name: The name of the port's node.
+config: The test run configuration of the port.
+"""
 self.identifier = PortIdentifier(
 node=node_name,
 pci=config.pci,
@@ -47,14 +71,23 @@ def __init__(self, node_name: str, config: PortConfig):
 
 @property
 def node(self) -> str:
+"""The node where the port resides."""
 return self.identifier.node
 
 @property
 def pci(self) -> str:
+"""The PCI address of the port."""
 return self.identifier.pci
 
 
 @dataclass(slots=True, frozen=True)
 class PortLink:
+"""The physical, cabled connection between the ports.
+
+Attributes:
+sut_port: The port on the SUT node connected to `tg_port`.
+tg_port: The port on the TG node connected to `sut_port`.
+"""
+
 sut_port: Port
 tg_port: Port
diff --git a/dts/framework/testbed_model/virtual_device.py 
b/dts/framework/testbed_model/
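
The delegation from ``Port`` to its ``PortIdentifier`` seen in the properties
above can be reproduced in a few lines (simplified field set, hypothetical
names):

    from dataclasses import dataclass

    @dataclass(slots=True, frozen=True)
    class PortIdentifierSketch:
        node: str
        pci: str

    @dataclass(slots=True)
    class PortSketch:
        identifier: PortIdentifierSketch

        @property
        def node(self) -> str:
            return self.identifier.node

        @property
        def pci(self) -> str:
            return self.identifier.pci

    port = PortSketch(PortIdentifierSketch("sut1", "0000:00:08.0"))
    assert port.pci == "0000:00:08.0"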

[PATCH v5 14/23] dts: cpu docstring update

2023-11-06 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/testbed_model/cpu.py | 196 +
 1 file changed, 144 insertions(+), 52 deletions(-)

diff --git a/dts/framework/testbed_model/cpu.py 
b/dts/framework/testbed_model/cpu.py
index 8fe785dfe4..4edeb4a7c2 100644
--- a/dts/framework/testbed_model/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -1,6 +1,22 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""CPU core representation and filtering.
+
+This module provides a unified representation of logical CPU cores along
+with filtering capabilities.
+
+When simultaneous multithreading (SMT, or just multithreading) is enabled on a server,
+the physical CPU cores are split into logical CPU cores with different IDs.
+
+:class:`LogicalCoreCountFilter` filters by the number of logical cores. It's 
possible to specify
+the socket from which to filter the number of logical cores. It's also 
possible to not use all
+logical CPU cores from each physical core (e.g. only the first logical core of 
each physical core).
+
+:class:`LogicalCoreListFilter` filters by logical core IDs. This mostly checks 
that
+the logical cores are actually present on the server.
+"""
+
 import dataclasses
 from abc import ABC, abstractmethod
 from collections.abc import Iterable, ValuesView
@@ -11,9 +27,17 @@
 
 @dataclass(slots=True, frozen=True)
 class LogicalCore(object):
-"""
-Representation of a CPU core. A physical core is represented in OS
-by multiple logical cores (lcores) if CPU multithreading is enabled.
+"""Representation of a logical CPU core.
+
+A physical core is represented in the OS by multiple logical cores (lcores)
+if CPU multithreading is enabled. When multithreading is disabled, their 
IDs are the same.
+
+Attributes:
+lcore: The logical core ID of a CPU core. It's the same as `core` with
+disabled multithreading.
+core: The physical core ID of a CPU core.
+socket: The physical socket ID where the CPU resides.
+node: The NUMA node ID where the CPU resides.
 """
 
 lcore: int
@@ -22,27 +46,36 @@ class LogicalCore(object):
 node: int
 
 def __int__(self) -> int:
+"""The CPU is best represented by the logical core, as that's what we 
configure in EAL."""
 return self.lcore
 
 
 class LogicalCoreList(object):
-"""
-Convert these options into a list of logical core ids.
-lcore_list=[LogicalCore1, LogicalCore2] - a list of LogicalCores
-lcore_list=[0,1,2,3] - a list of int indices
-lcore_list=['0','1','2-3'] - a list of str indices; ranges are supported
-lcore_list='0,1,2-3' - a comma delimited str of indices; ranges are 
supported
-
-The class creates a unified format used across the framework and allows
-the user to use either a str representation (using str(instance) or 
directly
-in f-strings) or a list representation (by accessing instance.lcore_list).
-Empty lcore_list is allowed.
+r"""A unified way to store :class:`LogicalCore`\s.
+
+Create a unified format used across the framework and allow the user to use
+either a :class:`str` representation (using ``str(instance)`` or directly 
in f-strings)
+or a :class:`list` representation (by accessing the `lcore_list` property,
+which stores logical core IDs).
 """
 
 _lcore_list: list[int]
 _lcore_str: str
 
 def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | 
str):
+"""Process `lcore_list`, then sort.
+
+There are four supported logical core list formats::
+
+lcore_list=[LogicalCore1, LogicalCore2]  # a list of LogicalCores
+lcore_list=[0,1,2,3]# a list of int indices
+lcore_list=['0','1','2-3']  # a list of str indices; ranges are 
supported
+lcore_list='0,1,2-3'# a comma delimited str of indices; 
ranges are supported
+
+Args:
+lcore_list: Various ways to represent multiple logical cores.
+Empty `lcore_list` is allowed.
+"""
 self._lcore_list = []
 if isinstance(lcore_list, str):
 lcore_list = lcore_list.split(",")
@@ -60,6 +93,7 @@ def __init__(self, lcore_list: list[int] | list[str] | 
list[LogicalCore] | str):
 
 @property
 def lcore_list(self) -> list[int]:
+"""The logical core IDs."""
 return self._lcore_list
 
 def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> 
list[str]:
@@ -89,28 +123,30 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: 
list[int]) -> list[str]:
 return formatted_core_list
 
 def __str__(self) -> str:
+"""The consecutive ranges of logical core IDs."""
 return self._lcore_str
 
 
 @dataclasses.dataclass(slots=True, frozen=True)
 class LogicalCoreCount(object):
-"""
-Define the number of
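
The ``'0,1,2-3'`` style input accepted by ``LogicalCoreList`` expands as in
this sketch (range handling only; the real class also accepts the other
formats listed in the docstring):

    def expand_lcore_str(lcore_str: str) -> list[int]:
        """Expand '0,1,2-3' into [0, 1, 2, 3]; ranges are inclusive."""
        lcores: set[int] = set()
        for chunk in lcore_str.split(","):
            if "-" in chunk:
                start, end = map(int, chunk.split("-"))
                lcores.update(range(start, end + 1))
            else:
                lcores.add(int(chunk))
        return sorted(lcores)

    assert expand_lcore_str("0,1,2-3") == [0, 1, 2, 3]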

[PATCH v5 15/23] dts: os session docstring update

2023-11-06 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/testbed_model/os_session.py | 275 --
 1 file changed, 208 insertions(+), 67 deletions(-)

diff --git a/dts/framework/testbed_model/os_session.py 
b/dts/framework/testbed_model/os_session.py
index 76e595a518..bad75d52e7 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -2,6 +2,29 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""OS-aware remote session.
+
+DPDK supports multiple operating systems, meaning it can run on any of them.
+This module defines the common API that OS-unaware layers use and 
translates the API into
+OS-aware calls/utility usage.
+
+Note:
+Running commands with administrative privileges requires OS awareness. 
This is the only layer
+that's aware of OS differences, so this is where non-privileged command 
get converted
+to privileged commands.
+
+Example:
+A user wishes to remove a directory on
+a remote :class:`~framework.testbed_model.sut_node.SutNode`.
+The :class:`~framework.testbed_model.sut_node.SutNode` object isn't aware 
what OS the node
+is running - it delegates the OS translation logic
+to :attr:`~framework.testbed_model.node.Node.main_session`. The SUT node 
calls
+:meth:`~OSSession.remove_remote_dir` with a generic, OS-unaware path and
+the :attr:`~framework.testbed_model.node.Node.main_session` translates that
+to ``rm -rf`` if the node's OS is Linux and other commands for other OSs.
+It also translates the path to match the underlying OS.
+"""
+
 from abc import ABC, abstractmethod
 from collections.abc import Iterable
 from ipaddress import IPv4Interface, IPv6Interface
@@ -28,10 +51,16 @@
 
 
 class OSSession(ABC):
-"""
-The OS classes create a DTS node remote session and implement OS specific
+"""OS-unaware to OS-aware translation API definition.
+
+The OSSession classes create a remote session to a DTS node and implement 
OS specific
 behavior. There a few control methods implemented by the base class, the 
rest need
-to be implemented by derived classes.
+to be implemented by subclasses.
+
+Attributes:
+name: The name of the session.
+remote_session: The remote session maintaining the connection to the 
node.
+interactive_session: The interactive remote session maintaining the 
connection to the node.
 """
 
 _config: NodeConfiguration
@@ -46,6 +75,15 @@ def __init__(
 name: str,
 logger: DTSLOG,
 ):
+"""Initialize the OS-aware session.
+
+Connect to the node right away and also create an interactive remote 
session.
+
+Args:
+node_config: The test run configuration of the node to connect to.
+name: The name of the session.
+logger: The logger instance this session will use.
+"""
 self._config = node_config
 self.name = name
 self._logger = logger
@@ -53,15 +91,15 @@ def __init__(
 self.interactive_session = create_interactive_session(node_config, 
logger)
 
 def close(self, force: bool = False) -> None:
-"""
-Close the remote session.
+"""Close the underlying remote session.
+
+Args:
+force: Force the closure of the connection.
 """
 self.remote_session.close(force)
 
 def is_alive(self) -> bool:
-"""
-Check whether the remote session is still responding.
-"""
+"""Check whether the underlying remote session is still responding."""
 return self.remote_session.is_alive()
 
 def send_command(
@@ -72,10 +110,23 @@ def send_command(
 verify: bool = False,
 env: dict | None = None,
 ) -> CommandResult:
-"""
-An all-purpose API in case the command to be executed is already
-OS-agnostic, such as when the path to the executed command has been
-constructed beforehand.
+"""An all-purpose API for OS-agnostic commands.
+
+This can be used to run a portable command that's 
executed the same way
+on all operating systems, such as Python.
+
+The :option:`--timeout` command line argument and the 
:envvar:`DTS_TIMEOUT`
+environment variable configure the timeout of command execution.
+
+Args:
+command: The command to execute.
+timeout: Wait at most this long in seconds to execute the command.
+privileged: Whether to run the command with administrative 
privileges.
+verify: If True, will check the exit code of the command.
+env: A dictionary with environment variables to be used with the 
command execution.
+
+Raises:
+RemoteCommandExecutionError: If verify is True and the comman
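
The privilege translation mentioned in the module note can be pictured with a
minimal class pair; ``_get_privileged_command`` mirrors the Linux
implementation shown in the next patch, the rest is scaffolding:

    from abc import ABC, abstractmethod

    class OSSessionSketch(ABC):
        @staticmethod
        @abstractmethod
        def _get_privileged_command(command: str) -> str:
            ...

    class LinuxSessionSketch(OSSessionSketch):
        @staticmethod
        def _get_privileged_command(command: str) -> str:
            # Wrap the non-privileged command in a sudo shell invocation.
            return f"sudo -- sh -c '{command}'"

    print(LinuxSessionSketch._get_privileged_command("rm -rf /tmp/dpdk"))
    # sudo -- sh -c 'rm -rf /tmp/dpdk'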

[PATCH v5 16/23] dts: posix and linux sessions docstring update

2023-11-06 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/testbed_model/linux_session.py | 63 ++-
 dts/framework/testbed_model/posix_session.py | 81 +---
 2 files changed, 113 insertions(+), 31 deletions(-)

diff --git a/dts/framework/testbed_model/linux_session.py 
b/dts/framework/testbed_model/linux_session.py
index f472bb8f0f..279954ff63 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -2,6 +2,13 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""Linux OS translator.
+
+Translate OS-unaware calls into Linux calls/utilities. Most of Linux 
distributions are mostly
+compliant with POSIX standards, so this module only implements the parts that 
aren't.
+This intermediate module implements the common parts of mostly POSIX compliant 
distributions.
+"""
+
 import json
 from ipaddress import IPv4Interface, IPv6Interface
 from typing import TypedDict, Union
@@ -17,43 +24,51 @@
 
 
 class LshwConfigurationOutput(TypedDict):
+"""The relevant parts of ``lshw``'s ``configuration`` section."""
+
+#:
 link: str
 
 
 class LshwOutput(TypedDict):
-"""
-A model of the relevant information from json lshw output, e.g.:
-{
-...
-"businfo" : "pci@:08:00.0",
-"logicalname" : "enp8s0",
-"version" : "00",
-"serial" : "52:54:00:59:e1:ac",
-...
-"configuration" : {
-  ...
-  "link" : "yes",
-  ...
-},
-...
+"""A model of the relevant information from ``lshw``'s json output.
+
+e.g.::
+
+{
+...
+"businfo" : "pci@:08:00.0",
+"logicalname" : "enp8s0",
+"version" : "00",
+"serial" : "52:54:00:59:e1:ac",
+...
+"configuration" : {
+  ...
+  "link" : "yes",
+  ...
+},
+...
 """
 
+#:
 businfo: str
+#:
 logicalname: NotRequired[str]
+#:
 serial: NotRequired[str]
+#:
 configuration: LshwConfigurationOutput
 
 
 class LinuxSession(PosixSession):
-"""
-The implementation of non-Posix compliant parts of Linux remote sessions.
-"""
+"""The implementation of non-Posix compliant parts of Linux."""
 
 @staticmethod
 def _get_privileged_command(command: str) -> str:
 return f"sudo -- sh -c '{command}'"
 
 def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
+"""Overrides :meth:`~.os_session.OSSession.get_remote_cpus`."""
 cpu_info = self.send_command("lscpu -p=CPU,CORE,SOCKET,NODE|grep -v 
\\#").stdout
 lcores = []
 for cpu_line in cpu_info.splitlines():
@@ -65,18 +80,20 @@ def get_remote_cpus(self, use_first_core: bool) -> 
list[LogicalCore]:
 return lcores
 
 def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
+"""Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
 return dpdk_prefix
 
-def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> 
None:
+def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> 
None:
+"""Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
 self._logger.info("Getting Hugepage information.")
 hugepage_size = self._get_hugepage_size()
 hugepages_total = self._get_hugepages_total()
 self._numa_nodes = self._get_numa_nodes()
 
-if force_first_numa or hugepages_total != hugepage_amount:
+if force_first_numa or hugepages_total != hugepage_count:
 # when forcing numa, we need to clear existing hugepages regardless
 # of size, so they can be moved to the first numa node
-self._configure_huge_pages(hugepage_amount, hugepage_size, 
force_first_numa)
+self._configure_huge_pages(hugepage_count, hugepage_size, 
force_first_numa)
 else:
 self._logger.info("Hugepages already configured.")
 self._mount_huge_pages()
@@ -140,6 +157,7 @@ def _configure_huge_pages(
 )
 
 def update_ports(self, ports: list[Port]) -> None:
+"""Overrides :meth:`~.os_session.OSSession.update_ports`."""
 self._logger.debug("Gathering port info.")
 for port in ports:
 assert (
@@ -178,6 +196,7 @@ def _update_port_attr(
 )
 
 def configure_port_state(self, port: Port, enable: bool) -> None:
+"""Overrides :meth:`~.os_session.OSSession.configure_port_state`."""
 state = "up" if enable else "down"
 self.send_command(
 f"ip link set dev {port.logical_name} {state}", privileged=True
@@ -189,6 +208,7 @@ def configure_port_ip_address(
 port: Port,
 delete: bool,
 ) -> None:
+"""Overrides 
:meth:`~.os_session.OSSession.configure_port_ip_address`."""
 command = "del" if delete else "add"
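
The TypedDicts above only describe the shape of ``lshw -json`` output;
consuming them is plain dictionary access. A sketch with inline sample data
instead of a live host:

    import json
    from typing import TypedDict

    class ConfigurationSketch(TypedDict):
        link: str

    class LshwSketch(TypedDict, total=False):
        businfo: str
        logicalname: str
        configuration: ConfigurationSketch

    sample = (
        '[{"businfo": "pci@0000:08:00.0", "logicalname": "enp8s0",'
        ' "configuration": {"link": "yes"}}]'
    )
    devices: list[LshwSketch] = json.loads(sample)
    for dev in devices:
        print(dev["businfo"], dev.get("logicalname"), dev["configuration"]["link"])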
 

[PATCH v5 17/23] dts: node docstring update

2023-11-06 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/testbed_model/node.py | 191 +++-
 1 file changed, 131 insertions(+), 60 deletions(-)

diff --git a/dts/framework/testbed_model/node.py 
b/dts/framework/testbed_model/node.py
index 7571e7b98d..abf86793a7 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -3,8 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-A node is a generic host that DTS connects to and manages.
+"""Common functionality for node management.
+
+A node is any host/server DTS connects to.
+
+The base class, :class:`Node`, provides functionality common to all nodes and 
is supposed
+to be extended by subclasses with functionality specific to each node type.
+The decorator :func:`Node.skip_setup` can be used without subclassing.
 """
 
 from abc import ABC
@@ -35,10 +40,22 @@
 
 
 class Node(ABC):
-"""
-Basic class for node management. This class implements methods that
-manage a node, such as information gathering (of CPU/PCI/NIC) and
-environment setup.
+"""The base class for node management.
+
+It shouldn't be instantiated, but rather subclassed.
+It implements common methods to manage any node:
+
+* Connection to the node,
+* Hugepages setup.
+
+Attributes:
+main_session: The primary OS-aware remote session used to communicate 
with the node.
+config: The node configuration.
+name: The name of the node.
+lcores: The list of logical cores that DTS can use on the node.
+It's derived from logical cores present on the node and the test 
run configuration.
+ports: The ports of this node specified in the test run configuration.
+virtual_devices: The virtual devices used on the node.
 """
 
 main_session: OSSession
@@ -52,6 +69,17 @@ class Node(ABC):
 virtual_devices: list[VirtualDevice]
 
 def __init__(self, node_config: NodeConfiguration):
+"""Connect to the node and gather info during initialization.
+
+Extra gathered information:
+
+* The list of available logical CPUs. This is then filtered by
+  the ``lcores`` configuration in the YAML test run configuration file,
+* Information about ports from the YAML test run configuration file.
+
+Args:
+node_config: The node's test run configuration.
+"""
 self.config = node_config
 self.name = node_config.name
 self._logger = getLogger(self.name)
@@ -60,7 +88,7 @@ def __init__(self, node_config: NodeConfiguration):
 self._logger.info(f"Connected to node: {self.name}")
 
 self._get_remote_cpus()
-# filter the node lcores according to user config
+# filter the node lcores according to the test run configuration
 self.lcores = LogicalCoreListFilter(
 self.lcores, LogicalCoreList(self.config.lcores)
 ).filter()
@@ -77,9 +105,14 @@ def _init_ports(self) -> None:
 self.configure_port_state(port)
 
 def set_up_execution(self, execution_config: ExecutionConfiguration) -> 
None:
-"""
-Perform the execution setup that will be done for each execution
-this node is part of.
+"""Execution setup steps.
+
+Configure hugepages and call :meth:`_set_up_execution` where
+the rest of the configuration steps (if any) are implemented.
+
+Args:
+execution_config: The execution test run configuration according 
to which
+the setup steps will be taken.
 """
 self._setup_hugepages()
 self._set_up_execution(execution_config)
@@ -88,58 +121,74 @@ def set_up_execution(self, execution_config: 
ExecutionConfiguration) -> None:
 self.virtual_devices.append(VirtualDevice(vdev))
 
 def _set_up_execution(self, execution_config: ExecutionConfiguration) -> 
None:
-"""
-This method exists to be optionally overwritten by derived classes and
-is not decorated so that the derived class doesn't have to use the 
decorator.
+"""Optional additional execution setup steps for subclasses.
+
+Subclasses should override this if they need extra 
execution setup steps.
 """
 
 def tear_down_execution(self) -> None:
-"""
-Perform the execution teardown that will be done after each execution
-this node is part of concludes.
+"""Execution teardown steps.
+
+There are currently no execution teardown steps common to all 
DTS node types.
 """
 self.virtual_devices = []
 self._tear_down_execution()
 
 def _tear_down_execution(self) -> None:
-"""
-This method exists to be optionally overwritten by derived classes and
-is not decorated s
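
The ``skip_setup`` decorator referenced in the module docstring can be
approximated with a conditional no-op wrapper; this is a sketch, not the
framework's actual decorator, which reads the skip flag from the run
settings:

    from typing import Any, Callable

    def skip_setup(skip: bool) -> Callable[[Callable[..., None]], Callable[..., None]]:
        def decorator(func: Callable[..., None]) -> Callable[..., None]:
            def wrapper(*args: Any, **kwargs: Any) -> None:
                if not skip:
                    func(*args, **kwargs)
            return wrapper
        return decorator

    @skip_setup(skip=True)
    def set_up_build_target() -> None:
        print("building DPDK")

    set_up_build_target()  # prints nothing when skip=True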

[PATCH v5 18/23] dts: sut and tg nodes docstring update

2023-11-06 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/testbed_model/sut_node.py | 219 
 dts/framework/testbed_model/tg_node.py  |  42 +++--
 2 files changed, 170 insertions(+), 91 deletions(-)

diff --git a/dts/framework/testbed_model/sut_node.py 
b/dts/framework/testbed_model/sut_node.py
index 4e33cf02ea..b57d48fd31 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -3,6 +3,14 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""System under test (DPDK + hardware) node.
+
+A system under test (SUT) is the combination of DPDK
+and the hardware we're testing with DPDK (NICs, crypto and other devices).
+An SUT node is where this SUT runs.
+"""
+
+
 import os
 import tarfile
 import time
@@ -26,6 +34,11 @@
 
 
 class EalParameters(object):
+"""The environment abstraction layer parameters.
+
+The command-line string is created by converting the instance to a 
string (``str(instance)``).
+"""
+
 def __init__(
 self,
 lcore_list: LogicalCoreList,
@@ -35,21 +48,23 @@ def __init__(
 vdevs: list[VirtualDevice],
 other_eal_param: str,
 ):
-"""
-Generate eal parameters character string;
-:param lcore_list: the list of logical cores to use.
-:param memory_channels: the number of memory channels to use.
-:param prefix: set file prefix string, eg:
-prefix='vf'
-:param no_pci: switch of disable PCI bus eg:
-no_pci=True
-:param vdevs: virtual device list, eg:
-vdevs=[
-VirtualDevice('net_ring0'),
-VirtualDevice('net_ring1')
-]
-:param other_eal_param: user defined DPDK eal parameters, eg:
-other_eal_param='--single-file-segments'
+"""Initialize the parameters according to inputs.
+
+Process the parameters into the format used on the command line.
+
+Args:
+lcore_list: The list of logical cores to use.
+memory_channels: The number of memory channels to use.
+prefix: Set the file prefix string with which to start DPDK, e.g.: 
``prefix='vf'``.
+no_pci: Switch to disable the PCI bus, e.g.: ``no_pci=True``.
+vdevs: Virtual devices, e.g.::
+
+vdevs=[
+VirtualDevice('net_ring0'),
+VirtualDevice('net_ring1')
+]
+other_eal_param: User-defined DPDK EAL parameters, e.g.:
+``other_eal_param='--single-file-segments'``
 """
 self._lcore_list = f"-l {lcore_list}"
 self._memory_channels = f"-n {memory_channels}"
@@ -61,6 +76,7 @@ def __init__(
 self._other_eal_param = other_eal_param
 
 def __str__(self) -> str:
+"""Create the EAL string."""
 return (
 f"{self._lcore_list} "
 f"{self._memory_channels} "
@@ -72,11 +88,21 @@ def __str__(self) -> str:
 
 
 class SutNode(Node):
-"""
-A class for managing connections to the System under Test, providing
-methods that retrieve the necessary information about the node (such as
-CPU, memory and NIC details) and configuration capabilities.
-Another key capability is building DPDK according to given build target.
+"""The system under test node.
+
+The SUT node extends :class:`Node` with DPDK specific features:
+
+* DPDK build,
+* Gathering of DPDK build info,
+* The running of DPDK apps, interactively or one-time execution,
+* DPDK apps cleanup.
+
+The :option:`--tarball` command line argument and the 
:envvar:`DTS_DPDK_TARBALL`
+environment variable configure the path to the DPDK tarball
+or the git commit ID, tag ID or tree ID to test.
+
+Attributes:
+config: The SUT node configuration.
 """
 
 config: SutNodeConfiguration
@@ -93,6 +119,11 @@ class SutNode(Node):
 _compiler_version: str | None
 
 def __init__(self, node_config: SutNodeConfiguration):
+"""Extend the constructor with SUT node specifics.
+
+Args:
+node_config: The SUT node's test run configuration.
+"""
 super(SutNode, self).__init__(node_config)
 self._dpdk_prefix_list = []
 self._build_target_config = None
@@ -111,6 +142,12 @@ def __init__(self, node_config: SutNodeConfiguration):
 
 @property
 def _remote_dpdk_dir(self) -> PurePath:
+"""The remote DPDK dir.
+
+This internal property should be set after extracting the DPDK 
tarball. If it's not set,
+that implies the DPDK setup step has been skipped, in which case we 
can guess where
+a previous build was located.
+"""
 if self.__remote_dpdk_dir is None:
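
The string assembled by ``EalParameters.__str__`` can be previewed without
the class; the values below are illustrative:

    lcore_list = "0-3"
    memory_channels = 4
    prefix = "vf"
    vdevs = ["net_ring0", "net_ring1"]

    eal = (
        f"-l {lcore_list} "
        f"-n {memory_channels} "
        f"{'--file-prefix=' + prefix if prefix else ''} "
        f"{' '.join('--vdev ' + v for v in vdevs)}"
    )
    print(eal)  # -l 0-3 -n 4 --file-prefix=vf --vdev net_ring0 --vdev net_ring1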
 

[PATCH v5 19/23] dts: base traffic generators docstring update

2023-11-06 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 .../traffic_generator/__init__.py | 22 -
 .../capturing_traffic_generator.py| 46 +++
 .../traffic_generator/traffic_generator.py| 33 +++--
 3 files changed, 68 insertions(+), 33 deletions(-)

diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py 
b/dts/framework/testbed_model/traffic_generator/__init__.py
index 11bfa1ee0f..51cca77da4 100644
--- a/dts/framework/testbed_model/traffic_generator/__init__.py
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -1,6 +1,19 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""DTS traffic generators.
+
+A traffic generator is capable of generating traffic and then monitoring the 
returning traffic.
+A traffic generator may just count the number of received packets
+and it may additionally capture individual packets.
+
+A traffic generator may be software running on generic hardware or it could be 
specialized hardware.
+
+The traffic generators that only count the number of received packets are 
suitable only for
+performance testing. In functional testing, we need to be able to dissect each 
arrived packet
+and a capturing traffic generator is required.
+"""
+
 from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
 from framework.exception import ConfigurationError
 from framework.testbed_model.node import Node
@@ -12,8 +25,15 @@
 def create_traffic_generator(
 tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
 ) -> CapturingTrafficGenerator:
-"""A factory function for creating traffic generator object from user 
config."""
+"""The factory function for creating traffic generator objects from the 
test run configuration.
+
+Args:
+tg_node: The traffic generator node where the created traffic 
generator will be running.
+traffic_generator_config: The traffic generator config.
 
+Returns:
+A traffic generator capable of capturing received packets.
+"""
 match traffic_generator_config.traffic_generator_type:
 case TrafficGeneratorType.SCAPY:
 return ScapyTrafficGenerator(tg_node, traffic_generator_config)
diff --git 
a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py 
b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
index e521211ef0..b0a43ad003 100644
--- 
a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
+++ 
b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
@@ -23,19 +23,22 @@
 
 
 def _get_default_capture_name() -> str:
-"""
-This is the function used for the default implementation of capture names.
-"""
 return str(uuid.uuid4())
 
 
 class CapturingTrafficGenerator(TrafficGenerator):
 """Capture packets after sending traffic.
 
-A mixin interface which enables a packet generator to declare that it can 
capture
+The intermediary interface which enables a packet generator to declare 
that it can capture
 packets and return them to the user.
 
+Similarly to
+
:class:`~framework.testbed_model.traffic_generator.traffic_generator.TrafficGenerator`,
+this class exposes the public methods specific to capturing traffic 
generators and defines
+a private method that must implement the traffic generation and capturing 
logic in subclasses.
+
 The methods of capturing traffic generators obey the following workflow:
+
 1. send packets
 2. capture packets
 3. write the capture to a .pcap file
@@ -44,6 +47,7 @@ class CapturingTrafficGenerator(TrafficGenerator):
 
 @property
 def is_capturing(self) -> bool:
+"""This traffic generator can capture traffic."""
 return True
 
 def send_packet_and_capture(
@@ -54,11 +58,12 @@ def send_packet_and_capture(
 duration: float,
 capture_name: str = _get_default_capture_name(),
 ) -> list[Packet]:
-"""Send a packet, return received traffic.
+"""Send `packet` and capture received traffic.
+
+Send `packet` on `send_port` and then return all traffic captured
+on `receive_port` for the given `duration`.
 
-Send a packet on the send_port and then return all traffic captured
-on the receive_port for the given duration. Also record the captured 
traffic
-in a pcap file.
+The captured traffic is recorded in the `capture_name`.pcap file.
 
 Args:
 packet: The packet to send.
@@ -68,7 +73,7 @@ def send_packet_and_capture(
 capture_name: The name of the .pcap file where to store the 
capture.
 
 Returns:
- A list of received packets. May be empty if no packets are 
captured.
+ The received packets. May be empty if no packets are captured.
 ""

[PATCH v5 20/23] dts: scapy tg docstring update

2023-11-06 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 .../testbed_model/traffic_generator/scapy.py  | 91 +++
 1 file changed, 54 insertions(+), 37 deletions(-)

diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py 
b/dts/framework/testbed_model/traffic_generator/scapy.py
index 51864b6e6b..d0fe03055a 100644
--- a/dts/framework/testbed_model/traffic_generator/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -2,14 +2,15 @@
 # Copyright(c) 2022 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""Scapy traffic generator.
+"""The Scapy traffic generator.
 
-Traffic generator used for functional testing, implemented using the Scapy 
library.
+A traffic generator used for functional testing, implemented with
+`the Scapy library <https://scapy.net/>`_.
 The traffic generator uses an XML-RPC server to run Scapy on the remote TG 
node.
 
-The XML-RPC server runs in an interactive remote SSH session running Python 
console,
-where we start the server. The communication with the server is facilitated 
with
-a local server proxy.
+The traffic generator uses the :mod:`xmlrpc.server` module to run an XML-RPC 
server
+in an interactive remote Python SSH session. The communication with the server 
is facilitated
+with a local server proxy from the :mod:`xmlrpc.client` module.
 """
 
 import inspect
@@ -69,20 +70,20 @@ def scapy_send_packets_and_capture(
 recv_iface: str,
 duration: float,
 ) -> list[bytes]:
-"""RPC function to send and capture packets.
+"""The RPC function to send and capture packets.
 
-The function is meant to be executed on the remote TG node.
+The function is meant to be executed on the remote TG node via the server 
proxy.
 
 Args:
 xmlrpc_packets: The packets to send. These need to be converted to
-xmlrpc.client.Binary before sending to the remote server.
+:class:`~xmlrpc.client.Binary` objects before sending to the 
remote server.
 send_iface: The logical name of the egress interface.
 recv_iface: The logical name of the ingress interface.
 duration: Capture for this amount of time, in seconds.
 
 Returns:
 A list of bytes. Each item in the list represents one packet, which 
needs
-to be converted back upon transfer from the remote node.
+to be converted back upon transfer from the remote node.
 """
 scapy_packets = [scapy.all.Packet(packet.data) for packet in 
xmlrpc_packets]
 sniffer = scapy.all.AsyncSniffer(
@@ -98,19 +99,15 @@ def scapy_send_packets_and_capture(
 def scapy_send_packets(
 xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: str
 ) -> None:
-"""RPC function to send packets.
+"""The RPC function to send packets.
 
-The function is meant to be executed on the remote TG node.
-It doesn't return anything, only sends packets.
+The function is meant to be executed on the remote TG node via the server 
proxy.
+It only sends `xmlrpc_packets`, without capturing them.
 
 Args:
 xmlrpc_packets: The packets to send. These need to be converted to
-xmlrpc.client.Binary before sending to the remote server.
+:class:`~xmlrpc.client.Binary` objects before sending to the 
remote server.
 send_iface: The logical name of the egress interface.
-
-Returns:
-A list of bytes. Each item in the list represents one packet, which 
needs
-to be converted back upon transfer from the remote node.
 """
 scapy_packets = [scapy.all.Packet(packet.data) for packet in 
xmlrpc_packets]
 scapy.all.sendp(scapy_packets, iface=send_iface, realtime=True, 
verbose=True)
@@ -130,11 +127,19 @@ def scapy_send_packets(
 
 
 class QuittableXMLRPCServer(SimpleXMLRPCServer):
-"""Basic XML-RPC server that may be extended
-by functions serializable by the marshal module.
+r"""Basic XML-RPC server.
+
+The server may be augmented by functions serializable by the 
:mod:`marshal` module.
 """
 
 def __init__(self, *args, **kwargs):
+"""Extend the XML-RPC server initialization.
+
+Args:
+args: The positional arguments that will be passed to the 
superclass's constructor.
+kwargs: The keyword arguments that will be passed to the 
superclass's constructor.
+The `allow_none` argument will be set to ``True``.
+"""
 kwargs["allow_none"] = True
 super().__init__(*args, **kwargs)
 self.register_introspection_functions()
@@ -142,13 +147,12 @@ def __init__(self, *args, **kwargs):
 self.register_function(self.add_rpc_function)
 
 def quit(self) -> None:
+"""Quit the server."""
 self._BaseServer__shutdown_request = True
 return None
 
 def add_rpc_function(self, name: str, function_bytes: 
xmlrpc.client.Binary) -> 

[PATCH v5 21/23] dts: test suites docstring update

2023-11-06 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/tests/TestSuite_hello_world.py | 16 +
 dts/tests/TestSuite_os_udp.py  | 16 +
 dts/tests/TestSuite_smoke_tests.py | 53 +++---
 3 files changed, 68 insertions(+), 17 deletions(-)

diff --git a/dts/tests/TestSuite_hello_world.py 
b/dts/tests/TestSuite_hello_world.py
index 7e3d95c0cf..662a8f8726 100644
--- a/dts/tests/TestSuite_hello_world.py
+++ b/dts/tests/TestSuite_hello_world.py
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2010-2014 Intel Corporation
 
-"""
+"""The DPDK hello world app test suite.
+
 Run the helloworld example app and verify it prints a message for each used 
core.
 No other EAL parameters apart from cores are used.
 """
@@ -15,22 +16,25 @@
 
 
 class TestHelloWorld(TestSuite):
+"""DPDK hello world app test suite."""
+
 def set_up_suite(self) -> None:
-"""
+"""Set up the test suite.
+
 Setup:
 Build the app we're about to test - helloworld.
 """
 self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")
 
 def test_hello_world_single_core(self) -> None:
-"""
+"""Single core test case.
+
 Steps:
 Run the helloworld app on the first usable logical core.
 Verify:
 The app prints a message from the used core:
 "hello from core "
 """
-
 # get the first usable core
 lcore_amount = LogicalCoreCount(1, 1, 1)
 lcores = LogicalCoreCountFilter(self.sut_node.lcores, 
lcore_amount).filter()
@@ -44,14 +48,14 @@ def test_hello_world_single_core(self) -> None:
 )
 
 def test_hello_world_all_cores(self) -> None:
-"""
+"""All cores test case.
+
 Steps:
 Run the helloworld app on all usable logical cores.
 Verify:
 The app prints a message from all used cores:
 "hello from core "
 """
-
 # get the maximum logical core number
 eal_para = self.sut_node.create_eal_parameters(
 lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores)
diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py
index 9b5f39711d..f99c4d76e3 100644
--- a/dts/tests/TestSuite_os_udp.py
+++ b/dts/tests/TestSuite_os_udp.py
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
+"""Basic IPv4 OS routing test suite.
+
 Configure SUT node to route traffic from if1 to if2.
 Send a packet to the SUT node, verify it comes back on the second port on the 
TG node.
 """
@@ -13,22 +14,24 @@
 
 
 class TestOSUdp(TestSuite):
+"""IPv4 UDP OS routing test suite."""
+
 def set_up_suite(self) -> None:
-"""
+"""Set up the test suite.
+
 Setup:
 Configure SUT ports and SUT to route traffic from if1 to if2.
 """
-
 self.configure_testbed_ipv4()
 
 def test_os_udp(self) -> None:
-"""
+"""Basic UDP IPv4 traffic test case.
+
 Steps:
 Send a UDP packet.
 Verify:
 The packet with proper addresses arrives at the other TG port.
 """
-
 packet = Ether() / IP() / UDP()
 
 received_packets = self.send_packet_and_capture(packet)
@@ -38,7 +41,8 @@ def test_os_udp(self) -> None:
 self.verify_packets(expected_packet, received_packets)
 
 def tear_down_suite(self) -> None:
-"""
+"""Tear down the test suite.
+
 Teardown:
 Remove the SUT port configuration configured in setup.
 """
diff --git a/dts/tests/TestSuite_smoke_tests.py 
b/dts/tests/TestSuite_smoke_tests.py
index 4a269df75b..36ff10a862 100644
--- a/dts/tests/TestSuite_smoke_tests.py
+++ b/dts/tests/TestSuite_smoke_tests.py
@@ -1,6 +1,17 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 University of New Hampshire
 
+"""Smoke test suite.
+
+Smoke tests are a class of tests which are used for validating a minimal set 
of important features.
+These are the most important features without which (or when they're faulty) 
the software wouldn't
+work properly. Thus, if any failure occurs while testing these features,
+there isn't that much of a reason to continue testing, as the software is 
fundamentally broken.
+
+These tests don't have to include only DPDK tests, as the reason for failures 
could be
+in the infrastructure (a faulty link between NICs or a misconfiguration).
+"""
+
 import re
 
 from framework.config import PortConfig
@@ -11,13 +22,25 @@
 
 
 class SmokeTests(TestSuite):
+"""DPDK and infrastructure smoke test suite.
+
+The test cases validate the most basic DPDK functionality needed for all 
other test suites.
+The infrastructure also needs to be tested, as that is also used by all 
other test suites.
+
+Attributes:
+is_blocking: 

[PATCH v5 22/23] dts: add doc generation dependencies

2023-11-06 Thread Juraj Linkeš
Sphinx imports every Python module when generating documentation from
docstrings, meaning all dts dependencies, including the Python version,
must be satisfied.
By adding Sphinx to dts dependencies we make sure that the proper
Python version and dependencies are used when Sphinx is executed.

Signed-off-by: Juraj Linkeš 
---
 dts/poetry.lock| 499 -
 dts/pyproject.toml |   7 +
 2 files changed, 505 insertions(+), 1 deletion(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index a734fa71f0..dea98f6913 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -1,5 +1,16 @@
 # This file is automatically @generated by Poetry 1.5.1 and should not be 
changed by hand.
 
+[[package]]
+name = "alabaster"
+version = "0.7.13"
+description = "A configurable sidebar-enabled Sphinx theme"
+optional = false
+python-versions = ">=3.6"
+files = [
+{file = "alabaster-0.7.13-py3-none-any.whl", hash = 
"sha256:1ee19aca801bbabb5ba3f5f258e4422dfa86f82f3e9cefb0859b283cdd7f62a3"},
+{file = "alabaster-0.7.13.tar.gz", hash = 
"sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"},
+]
+
 [[package]]
 name = "attrs"
 version = "23.1.0"
@@ -18,6 +29,23 @@ docs = ["furo", "myst-parser", "sphinx", 
"sphinx-notfound-page", "sphinxcontrib-
 tests = ["attrs[tests-no-zope]", "zope-interface"]
 tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", 
"pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
 
+[[package]]
+name = "babel"
+version = "2.13.1"
+description = "Internationalization utilities"
+optional = false
+python-versions = ">=3.7"
+files = [
+{file = "Babel-2.13.1-py3-none-any.whl", hash = 
"sha256:7077a4984b02b6727ac10f1f7294484f737443d7e2e66c5e4380e41a3ae0b4ed"},
+{file = "Babel-2.13.1.tar.gz", hash = 
"sha256:33e0952d7dd6374af8dbf6768cc4ddf3ccfefc244f9986d4074704f2fbd18900"},
+]
+
+[package.dependencies]
+setuptools = {version = "*", markers = "python_version >= \"3.12\""}
+
+[package.extras]
+dev = ["freezegun (>=1.0,<2.0)", "pytest (>=6.0)", "pytest-cov"]
+
 [[package]]
 name = "bcrypt"
 version = "4.0.1"
@@ -86,6 +114,17 @@ d = ["aiohttp (>=3.7.4)"]
 jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
 uvloop = ["uvloop (>=0.15.2)"]
 
+[[package]]
+name = "certifi"
+version = "2023.7.22"
+description = "Python package for providing Mozilla's CA Bundle."
+optional = false
+python-versions = ">=3.6"
+files = [
+{file = "certifi-2023.7.22-py3-none-any.whl", hash = 
"sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"},
+{file = "certifi-2023.7.22.tar.gz", hash = 
"sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"},
+]
+
 [[package]]
 name = "cffi"
 version = "1.15.1"
@@ -162,6 +201,105 @@ files = [
 [package.dependencies]
 pycparser = "*"
 
+[[package]]
+name = "charset-normalizer"
+version = "3.3.2"
+description = "The Real First Universal Charset Detector. Open, modern and 
actively maintained alternative to Chardet."
+optional = false
+python-versions = ">=3.7.0"
+files = [
+{file = "charset-normalizer-3.3.2.tar.gz", hash = 
"sha256:f30c3cb33b24454a82faecaf01b19c18562b1e89558fb6c56de4d9118a032fd5"},
+{file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_universal2.whl", 
hash = 
"sha256:25baf083bf6f6b341f4121c2f3c548875ee6f5339300e08be3f2b2ba1721cdd3"},
+{file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_x86_64.whl", 
hash = 
"sha256:06435b539f889b1f6f4ac1758871aae42dc3a8c0e24ac9e60c2384973ad73027"},
+{file = "charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash 
= "sha256:9063e24fdb1e498ab71cb7419e24622516c4a04476b17a2dab57e8baa30d6e03"},
+{file = 
"charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
 hash = 
"sha256:6897af51655e3691ff853668779c7bad41579facacf5fd7253b0133308cf000d"},
+{file = 
"charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl",
 hash = 
"sha256:1d3193f4a680c64b4b6a9115943538edb896edc190f0b222e73761716519268e"},
+{file = 
"charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl",
 hash = 
"sha256:cd70574b12bb8a4d2aaa0094515df2463cb429d8536cfb6c7ce983246983e5a6"},
+{file = 
"charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
 hash = 
"sha256:8465322196c8b4d7ab6d1e049e4c5cb460d0394da4a27d23cc242fbf0034b6b5"},
+{file = 
"charset_normalizer-3.3.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl",
 hash = 
"sha256:a9a8e9031d613fd2009c182b69c7b2c1ef8239a0efb1df3f7c8da66d5dd3d537"},
+{file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_aarch64.whl", 
hash = 
"sha256:beb58fe5cdb101e3a055192ac291b7a21e3b7ef4f67fa1d74e331a7f2124341c"},
+{file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_i686.whl", 
hash = 
"sha256:e06ed3eb3218bc64786f7db41917d4e686cc4856944f53d5bdf83a6884432e12"},
+{file = "ch

[PATCH v5 23/23] dts: add doc generation

2023-11-06 Thread Juraj Linkeš
The tool used to generate developer docs is Sphinx, which is already
used in DPDK. The same configuration is used to preserve style, but it's
been augmented with doc-generating configuration and a change to how the
sidebar displays the content hierarchy.

Sphinx generates the documentation from Python docstrings. The docstring
format is the Google format [0] which requires the sphinx.ext.napoleon
extension. The other extension, sphinx.ext.intersphinx, enables linking
to objects in external documentation, such as the Python documentation.

There are two requirements for building DTS docs:
* The same Python version as DTS or higher, because Sphinx imports the
  code.
* Also the same Python packages as DTS, for the same reason.

[0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings

Signed-off-by: Juraj Linkeš 
---
 buildtools/call-sphinx-build.py | 29 ---
 doc/api/meson.build |  1 +
 doc/guides/conf.py  | 34 +++
 doc/guides/meson.build  |  1 +
 doc/guides/tools/dts.rst| 32 -
 dts/doc/conf_yaml_schema.json   |  1 +
 dts/doc/index.rst   | 17 
 dts/doc/meson.build | 49 +
 dts/meson.build | 16 +++
 meson.build |  1 +
 10 files changed, 165 insertions(+), 16 deletions(-)
 create mode 120000 dts/doc/conf_yaml_schema.json
 create mode 100644 dts/doc/index.rst
 create mode 100644 dts/doc/meson.build
 create mode 100644 dts/meson.build

diff --git a/buildtools/call-sphinx-build.py b/buildtools/call-sphinx-build.py
index 39a60d09fa..c2f3acfb1d 100755
--- a/buildtools/call-sphinx-build.py
+++ b/buildtools/call-sphinx-build.py
@@ -3,37 +3,46 @@
 # Copyright(c) 2019 Intel Corporation
 #
 
+import argparse
 import sys
 import os
 from os.path import join
 from subprocess import run, PIPE, STDOUT
 from packaging.version import Version
 
-# assign parameters to variables
-(sphinx, version, src, dst, *extra_args) = sys.argv[1:]
+parser = argparse.ArgumentParser()
+parser.add_argument('sphinx')
+parser.add_argument('version')
+parser.add_argument('src')
+parser.add_argument('dst')
+parser.add_argument('--dts-root', default='.')
+args, extra_args = parser.parse_known_args()
 
 # set the version in environment for sphinx to pick up
-os.environ['DPDK_VERSION'] = version
+os.environ['DPDK_VERSION'] = args.version
+os.environ['DTS_ROOT'] = args.dts_root
 
 # for sphinx version >= 1.7 add parallelism using "-j auto"
-ver = run([sphinx, '--version'], stdout=PIPE,
+ver = run([args.sphinx, '--version'], stdout=PIPE,
   stderr=STDOUT).stdout.decode().split()[-1]
-sphinx_cmd = [sphinx] + extra_args
+sphinx_cmd = [args.sphinx] + extra_args
 if Version(ver) >= Version('1.7'):
 sphinx_cmd += ['-j', 'auto']
 
 # find all the files sphinx will process so we can write them as dependencies
 srcfiles = []
-for root, dirs, files in os.walk(src):
+for root, dirs, files in os.walk(args.src):
 srcfiles.extend([join(root, f) for f in files])
 
 # run sphinx, putting the html output in a "html" directory
-with open(join(dst, 'sphinx_html.out'), 'w') as out:
-process = run(sphinx_cmd + ['-b', 'html', src, join(dst, 'html')],
-  stdout=out)
+with open(join(args.dst, 'sphinx_html.out'), 'w') as out:
+process = run(
+sphinx_cmd + ['-b', 'html', args.src, join(args.dst, 'html')],
+stdout=out
+)
 
 # create a gcc format .d file giving all the dependencies of this doc build
-with open(join(dst, '.html.d'), 'w') as d:
+with open(join(args.dst, '.html.d'), 'w') as d:
 d.write('html: ' + ' '.join(srcfiles) + '\n')
 
 sys.exit(process.returncode)
diff --git a/doc/api/meson.build b/doc/api/meson.build
index 5b50692df9..92fe10d9e7 100644
--- a/doc/api/meson.build
+++ b/doc/api/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Luca Boccassi 
 
+doc_api_build_dir = meson.current_build_dir()
 doxygen = find_program('doxygen', required: get_option('enable_docs'))
 
 if not doxygen.found()
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index 0f7ff5282d..169b1d24bc 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -7,10 +7,9 @@
 from sphinx import __version__ as sphinx_version
 from os import listdir
 from os import environ
-from os.path import basename
-from os.path import dirname
+from os.path import basename, dirname
 from os.path import join as path_join
-from sys import argv, stderr
+from sys import argv, stderr, path
 
 import configparser
 
@@ -24,6 +23,31 @@
   file=stderr)
 pass
 
+extensions = ['sphinx.ext.napoleon', 'sphinx.ext.intersphinx']
+
+# Python docstring options
+autodoc_default_options = {
+'members': True,
+'member-order': 'bysource',
+'show-inheritance': True,
+}
+autodoc_class_signature = 'separated'
+autodoc_typehints = 'both'
+autodoc_typehints_format = 'short'
+autodoc_ty

Re: [PATCH] eal: provide trace point register macro for MSVC

2023-11-06 Thread Tyler Retzlaff
On Mon, Nov 06, 2023 at 05:40:12PM +0100, Thomas Monjalon wrote:
> 01/11/2023 23:47, Tyler Retzlaff:
> > Provide an alternate RTE_TRACE_POINT_REGISTER macro when building with
> > MSVC that allocates segments for the trace point using MSVC specific
> > features
> 
> Please could you elaborate what is the improvement?

well, not intended to be an improvement; intended to align the msvc build
with the gcc/clang builds' placement of registered tracepoints in their
own section.

the alternate expansion for msvc is provided to place the trace point
being registered in its own section `__rte_trace_point'

msvc doesn't have __attribute__((section("name"))); instead, as an
alternative, we use msvc's data_seg pragma to create and place the trace
point into a named section.

i.e.
gcc/clang
T __attribute__((section("__rte_trace_point"))) __##trace;
msvc
T __pragma(data_seg("__rte_trace_point")) 
__declspec(allocate("__rte_trace_point")) __##trace;
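
for illustration, a minimal standalone sketch (hypothetical variable name,
not DPDK code) of the two placements side by side:

#include <stdint.h>

#ifdef _MSC_VER
/* MSVC: data_seg creates/opens the named section, allocate() places
 * the variable in it */
__pragma(data_seg("__rte_trace_point"))
__declspec(allocate("__rte_trace_point"))
uint64_t __my_trace;
#else
/* GCC/Clang: the section attribute places the variable directly */
uint64_t __attribute__((section("__rte_trace_point"))) __my_trace;
#endif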

> 
> > +#define RTE_TRACE_POINT_REGISTER(trace, name) \
> > +rte_trace_point_t \
> > +__pragma(data_seg("__rte_trace_point")) \
> > +__declspec(allocate("__rte_trace_point")) \
> > +__##trace; \
> > +static const char __##trace##_name[] = RTE_STR(name); \
> > +RTE_INIT(trace##_init) \
> > +{ \
> > +   __rte_trace_point_register(&__##trace, __##trace##_name, \
> > +   (void (*)(void)) trace); \
> > +}
> 
> 



Re: [PATCH v7 0/2] *** Disable PASID for DLB Device ***

2023-11-06 Thread Thomas Monjalon
06/11/2023 18:05, Abdullah Sevincer:
> This series implements an internal API to disable 
> PASID and calls the API to disable PASID in the event/dlb2 device.
> 
> Abdullah Sevincer (2):
>   bus/pci: support PASID control
>   event/dlb2: fix disable PASID

Moved things in the right place/order and added PASID definition in Doxygen.

Applied, thanks.

PS: please keep all versions in the same mail thread.




Re: [PATCH v5 1/1] build: add libarchive to external deps

2023-11-06 Thread David Marchand
On Mon, Nov 6, 2023 at 5:24 PM Bruce Richardson
 wrote:
>
> On Mon, Nov 06, 2023 at 05:03:10PM +0100, David Marchand wrote:
> > On Mon, Nov 6, 2023 at 5:12 AM Srikanth Yalavarthi
> >  wrote:
> > >
> > > In order to avoid linking with Libs.private, libarchive is not added to
> > > ext_deps during the meson setup stage.
> > >
> > > Since libarchive is not added to ext_deps, cross-compilation or native
> > > compilation with libarchive installed in non-standard location fails
> > > with errors related to "cannot find -larchive" or "archive.h: No such
> > > file or directory". In order to fix the build failures, user is
> > > required to define the 'c_args' and 'c_link_args' with '-I'
> > > and '-L'.
> > >
> > > This patch adds libarchive to ext_deps and further would not require
> > > setting c_args and c_link_args externally.
> > >
> > > Fixes: 40edb9c0d36b ("eal: handle compressed firmware") Cc:
> > > sta...@dpdk.org
> > >
> > > Signed-off-by: Srikanth Yalavarthi 
> >
> > This breaks static compilation of applications.  This can be reproduced
> > with test-meson-builds.sh and in GHA (which was not linking examples
> > statically, I added a patch in my github repo):
> > https://github.com/david-marchand/dpdk/actions/runs/6772879600/job/18406442129#step:19:19572
> >
> The libarchive-dev Ubuntu package does not install all its needed
> dependencies for static linking. The errors can be resolved by manually
> installing the 3 missing -dev packages.

Same with fedora.

>
> It's less than ideal, but to my mind, DPDK is behaving correctly with this
> fix - it is marking that it requires libarchive as a dependency. The fact
> that the libarchive.pc file lists static libraries that aren't installed is
> outside of our control. The previous implementation hacked around this by
> just passing -larchive in all cases, rather than using pkg-config
> information. This then caused other issues that the patch submitter hit.

Ok, I'll see how to workaround this on my side.


-- 
David Marchand



Re: [PATCH v7 1/2] bus/pci: support PASID control

2023-11-06 Thread David Marchand
On Mon, Nov 6, 2023 at 6:05 PM Abdullah Sevincer
 wrote:
>
> Add an internal API to control PASID for a given PCIe device.
>
> On kernels where PASID is enabled by default, it breaks DLB functionality;
> hence disabling PASID is required for DLB to function properly.
>
> The PASID capability is not exposed to users, hence the offset cannot be
> retrieved by the rte_pci_find_ext_capability() API. Therefore, the API
> implemented in this commit accepts an offset for PASID along with an
> enable flag which is used to enable/disable PASID.
>
> Signed-off-by: Abdullah Sevincer 
> ---
>  drivers/bus/pci/pci_common.c  |  7 +++
>  drivers/bus/pci/rte_bus_pci.h | 13 +
>  drivers/bus/pci/version.map   |  1 +
>  lib/pci/rte_pci.h |  4 
>  4 files changed, 25 insertions(+)
>
> diff --git a/drivers/bus/pci/pci_common.c b/drivers/bus/pci/pci_common.c
> index 921d957bf6..ecf080c5d7 100644
> --- a/drivers/bus/pci/pci_common.c
> +++ b/drivers/bus/pci/pci_common.c
> @@ -938,6 +938,13 @@ rte_pci_set_bus_master(const struct rte_pci_device *dev, 
> bool enable)
> return 0;
>  }
>
> +int
> +rte_pci_pasid_set_state(const struct rte_pci_device *dev, off_t offset, bool 
> enable)
> +{
> +   uint16_t pasid = enable;
> +   return rte_pci_write_config(dev, &pasid, sizeof(pasid), offset) < 0 ? 
> -1 : 0;
> +}

I don't see much point in providing a wrapper that does nothing more
than call rte_pci_write_config() and let the driver pass the right
offsets.

If anything, can't this wrapper find out about the pasid offset itself?
There is an extended capability for this, so I would expect it can be used.

Something like (only compile tested):

diff --git a/drivers/bus/pci/pci_common.c b/drivers/bus/pci/pci_common.c
index ba5e280d33..2ca28bd4d4 100644
--- a/drivers/bus/pci/pci_common.c
+++ b/drivers/bus/pci/pci_common.c
@@ -939,13 +939,18 @@ rte_pci_set_bus_master(const struct
rte_pci_device *dev, bool enable)
 }

 int
-rte_pci_pasid_set_state(const struct rte_pci_device *dev,
-   off_t offset, bool enable)
+rte_pci_pasid_set_state(const struct rte_pci_device *dev, bool enable)
 {
-   uint16_t pasid = enable;
-   return rte_pci_write_config(dev, &pasid, sizeof(pasid), offset) < 0
-   ? -1
-   : 0;
+   uint16_t state = enable;
+   off_t pasid_offset;
+   int ret = -1;
+
+   pasid_offset = rte_pci_find_ext_capability(dev,
RTE_PCI_EXT_CAP_ID_PASID);
+   if (pasid_offset >= 0 && rte_pci_write_config(dev, &state,
sizeof(state),
+   pasid_offset + RTE_PCI_PASID_CTRL) == sizeof(state))
+   ret = 0;
+
+   return ret;
 }

 struct rte_pci_bus rte_pci_bus = {
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index f07bf9b588..6d5dbc1d50 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -160,14 +160,14 @@ int rte_pci_set_bus_master(const struct
rte_pci_device *dev, bool enable);
  *
  * @param dev
  *   A pointer to a rte_pci_device structure.
- * @param offset
- *   Offset of the PASID external capability.
  * @param enable
  *   Flag to enable or disable PASID.
+ *
+ *  @return
+ *  0 on success, -1 on error in PCI config space read/write.
  */
 __rte_internal
-int rte_pci_pasid_set_state(const struct rte_pci_device *dev,
-   off_t offset, bool enable);
+int rte_pci_pasid_set_state(const struct rte_pci_device *dev, bool enable);

 /**
  * Read PCI config space.
diff --git a/drivers/event/dlb2/pf/dlb2_main.c
b/drivers/event/dlb2/pf/dlb2_main.c
index 61a7b39eef..bd1ee4af27 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -26,7 +26,6 @@
 #define PF_ID_ZERO 0   /* PF ONLY! */
 #define NO_OWNER_VF 0  /* PF ONLY! */
 #define NOT_VF_REQ false /* PF ONLY! */
-#define DLB2_PCI_PASID_CAP_OFFSET0x148   /* PASID capability offset */

 static int
 dlb2_pf_init_driver_state(struct dlb2_dev *dlb2_dev)
@@ -518,8 +517,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
/* Disable PASID if it is enabled by default, which
 * breaks the DLB if enabled.
 */
-   off = DLB2_PCI_PASID_CAP_OFFSET + RTE_PCI_PASID_CTRL;
-   if (rte_pci_pasid_set_state(pdev, off, false)) {
+   if (rte_pci_pasid_set_state(pdev, false)) {
DLB2_LOG_ERR("[%s()] failed to write the pcie config
space at offset %d\n",
__func__, (int)off);
return -1;


-- 
David Marchand



RE: [PATCH v7 1/2] bus/pci: support PASID control

2023-11-06 Thread Sevincer, Abdullah

>+I don't see much point in providing a wrapper that does nothing more than 
>+call rte_pci_write_config() and let the driver pass the right offsets.

>+If anything, can't this wrapper find out about the pasid offset itself?
>+There is an extended capability for this, so I would expect it can be used.

>+Something like (only compile tested):

>+[suggested diff snipped; quoted in full in David's mail above]

Hi David,
>++   pasid_offset = rte_pci_find_ext_capability(dev,
>+RTE_PCI_EXT_CAP_ID_PASID);

That rte_pci_find_ext_capability() API does not work for PASID, since PASID
is not exposed to userspace by the kernel.
So we cannot retrieve the offset. Instead we came up with a solution that
passes an offset to an internal function to disable PASID, and made the
function internal so we can change it later.
When the Linux limitation is lifted, we can rewrite the function to use the
rte_pci_find_ext_capability() API to retrieve the offset, and your
solution above can be adopted.
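
For reference, a minimal sketch of the interim pattern the series uses
(hypothetical wrapper name; the constants are taken from the quoted DLB2
diff, and rte_pci_pasid_set_state() is the internal API added by this
series):

#include <rte_bus_pci.h> /* rte_pci_pasid_set_state() (internal) */
#include <rte_pci.h>     /* RTE_PCI_PASID_CTRL (added by this series) */

#define DLB2_PCI_PASID_CAP_OFFSET 0x148 /* fixed offset, not discoverable */

static int
dlb2_disable_pasid(struct rte_pci_device *pdev)
{
        off_t off = DLB2_PCI_PASID_CAP_OFFSET + RTE_PCI_PASID_CTRL;

        /* write 0 to the PASID control word, i.e. clear the enable bit */
        return rte_pci_pasid_set_state(pdev, off, false);
}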



Re: Updating examples which use coremask parameters

2023-11-06 Thread Stephen Hemminger
On Thu, 2 Nov 2023 16:58:52 +
Bruce Richardson  wrote:

> On Thu, Nov 02, 2023 at 05:28:42PM +0100, Thomas Monjalon wrote:
> > 02/11/2023 15:56, Bruce Richardson:  
> > > Hi all,
> > > 
> > > looking to start a discussion and get some input here.
> > > 
> > > There are a number of our examples in DPDK which still track core usage 
> > > via
> > > a 64-bit bitmask, and, as such, cannot run on cores between 64 and
> > > RTE_MAX_LCORE. Two examples I have recently come across with this issue 
> > > are
> > > "eventdev_pipeline" and "qos_sched", but I am sure there are others. The
> > > former is a good example (or bad example depending on your viewpoint) of
> > > this as it takes multiple coremask parameters - for RX cores, for TX 
> > > cores,
> > > for worker cores and optionally for scheduler cores.
> > > 
> > > Now, the simple solution to this is to just expand the 64-bit bitmask to
> > > 128 bit or more, but I think that is just making things harder for the
> > > user, since dealing with long bitmasks is very awkward and unwieldy.
> > > Better instead to convert all examples from coremasks to core lists.
> > > 
> > > First step should be to take our EAL corelist processing code and refactor
> > > it into a function that can be made public, so that it can be used by all
> > > apps for parsing core lists. Simple enough!  
> > 
> > OK to add some command line parsing helpers.
> > It should probably be the start of a new library for command line.
> >   
> 
> Funnily enough, separate to this I had already been working on an
> "rte_args" library to have some functions for working on argc/argv
> parameters. I'm planning on pushing out an RFC for 24.03 fairly shortly.
> 
> However, pulling in functions for arg parsing is a different set of
> functionality to what I had in mind, so we may yet get two libraries out of
> this. [Merging may be tricky due to issues around circular dependencies
> with EAL. My arg management library is designed to "sit on top of" EAL,
> while any lib offering e.g. coremask, corelist parsing functions would need
> to "sit beneath" EAL, so EAL can re-use it's functions].
> 
> Let's postpone the details of both these to when we get some RFCs out
> though.
> 
> 
> > > The next part I'm looking for input on is - how do we switch the apps from
> > > coremasks to core lists? Some options:
> > > 
> > > 1. Add in new commandline parameters for each app to work with core lists.
> > >   This is what we did in the past with EAL, by adding -l as a replacement
> > >   for -c. The advantage is that we maintain backward compatibility, but 
> > > the
> > >   downside is that it becomes hard to find new suitable letter options for
> > >   the core lists. Taking eventdev_pipeline again, we would need "new"
> > >   options for "-r", "-t", "-w" and "-s" parameters. Using the capitalized
> > >   versions of these would be a simple alternative, but "-W" is already 
> > > used
> > >   as an app parameter so we can't do that.
> > > 
> > > 2. Just break backward compatibility and switch the apps to taking
> > >   core lists instead of masks. Advantage is that it gives us the cleanest
> > >   solution, but the downside is that any testing done using these 
> > > examples,
> > >   or any users who may have run them in the past, get different 
> > > behaviour.  
> > 
> > We don't need to offer backward compatibility in examples.
> > So this option is acceptable.
> >   
> 
> Glad to hear it. Makes life simpler.
> 
> > > 3. An interesting further alternative is to allow apps to take both
> > > coremasks and corelists and use heuristics to determine which is which.
> > > For example, anything starting with "0x" is a mask, anything containing
> > > "-" or "," is a list. There would be ambiguous values such as e.g. 2,
> > > which could be either, but we can always find ways to disambiguate
> > > these, e.g. allow trailing commas in lists, so that "0x2" is the
> > > coremask, and "2," is the corelist. [Could be other alternatives]. This
> > > largely keeps backward compatibility and also allows use of corelists.  
> > 
> > The option 3 can be interesting as well.
> >  
> Yep. If we start offering a library of arg-parsing functions, one of those
> could be a function using heuristics to identify core-mask, core-list or
> ambiguous values. Then each app can decide what to do in the latter case.
> Since we don't care about backward compatibility, the examples can just parse
> ambiguous values as core-lists. Users could then still use coremasks by
> prefixing them with "0x".

Noticed that lib/graph is using a 64-bit coremask internally.
Wonder if others have the same issue.
Would be good if DPDK had a library to handle cpusets better, something
like the Linux kernel cpuset, which uses a comma-separated list of cpu masks.
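
As a rough sketch of the disambiguation heuristic discussed above
(hypothetical helper, not an existing DPDK API): an explicit "0x" prefix
means a coremask, a '-' or ',' (including a trailing comma) means a
corelist, anything else is ambiguous and left to the app:

#include <string.h>

enum core_arg_kind { CORE_ARG_MASK, CORE_ARG_LIST, CORE_ARG_AMBIGUOUS };

static enum core_arg_kind
classify_core_arg(const char *arg)
{
        if (strncmp(arg, "0x", 2) == 0 || strncmp(arg, "0X", 2) == 0)
                return CORE_ARG_MASK;   /* explicit hex prefix => coremask */
        if (strchr(arg, '-') != NULL || strchr(arg, ',') != NULL)
                return CORE_ARG_LIST;   /* ranges or separators => corelist */
        return CORE_ARG_AMBIGUOUS;      /* e.g. "2": examples treat as list */
}

The EAL could keep parsing ambiguous values as coremasks for backward
compatibility, while the examples treat them as corelists.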


Re: [PATCH] dumpcap: fix mbuf pool ring type

2023-11-06 Thread Stephen Hemminger
On Mon, 2 Oct 2023 10:42:53 +0200
Morten Brørup  wrote:

> > Switching to rte_pktmbuf_pool_create() still leaves the user with the
> > possibility to shoot himself in the foot (I was thinking of setting
> > some --mbuf-pool-ops-name EAL option).
> > 
> > This application has explicit requirements in terms of concurrent
> > access (and I don't think the mempool library exposes per driver
> > capabilities in that regard).
> > The application was enforcing the use of mempool/ring so far.
> > 
> > I think it is safer to go with an explicit
> > rte_pktmbuf_pool_create_by_ops(... "ring_mp_mc").
> > WDYT?  
> 
> 
> Or perhaps one of "ring_mt_rts" or "ring_mt_hts", if any of those mbuf pool 
> drivers are specified on the command line; otherwise fall back to 
> "ring_mp_mc".
> 
> Actually, I prefer Stephen's suggestion of using the default mbuf pool 
> driver. The option is there for a reason.
> 
> However, David is right: We want to prevent the user from using a 
> thread-unsafe mempool driver in this use case.
> 
> And I guess there might be other use cases than this one, where a thread-safe 
> mempool driver is required. So adding a generalized function to get the 
> "upgraded" (i.e. thread safe) variant of a mempool driver would be nice.
> 

If the user overrides the default mbuf pool type, then it will need to be
thread safe for the general driver case as well (or they are running on a
single cpu).


I think the patch should be applied as is.
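
A sketch of the generalized "upgrade to a thread-safe variant" helper
Morten suggests above (hypothetical function, not an existing mempool API;
the ops names are those registered by the ring mempool driver):

#include <string.h>

static const char *
mempool_ops_thread_safe(const char *ops_name)
{
        /* map single-producer/single-consumer ring variants to the
         * multi-producer/multi-consumer equivalent; names that are
         * already thread safe pass through unchanged */
        if (strcmp(ops_name, "ring_sp_sc") == 0 ||
            strcmp(ops_name, "ring_mp_sc") == 0 ||
            strcmp(ops_name, "ring_sp_mc") == 0)
                return "ring_mp_mc";
        return ops_name; /* e.g. ring_mp_mc, ring_mt_rts, ring_mt_hts */
}

An application like dumpcap could then pass the result to
rte_pktmbuf_pool_create_by_ops() instead of hardcoding a ring type.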


Re: [PATCH] dumpcap: fix mbuf pool ring type

2023-11-06 Thread Stephen Hemminger
On Mon, 2 Oct 2023 09:33:50 +0200
David Marchand  wrote:

> Switching to rte_pktmbuf_pool_create() still leaves the user with the
> possibility to shoot himself in the foot (I was thinking of setting
> some --mbuf-pool-ops-name EAL option).
> 
> This application has explicit requirements in terms of concurrent
> access (and I don't think the mempool library exposes per driver
> capabilities in that regard).
> The application was enforcing the use of mempool/ring so far.
> 
> I think it is safer to go with an explicit
> rte_pktmbuf_pool_create_by_ops(... "ring_mp_mc").
> WDYT?

The dumpcap command does not have EAL command line arguments that can
be changed by the user.


[PATCH v2] dumpcap: fix mbuf pool ring type

2023-11-06 Thread Stephen Hemminger
The internal buffer pool used for copies of mbufs captured
needs to be thread safe.  If capturing on multiple interfaces
or multiple queues, the same pool will be used (consumers).
And if the capture ring gets full, the queues will need
to put back the capture buffer which leads to multiple producers.

Since this is the same use case as normal drivers, and the default pool
type cannot be overridden on the dumpcap command line, it is OK to use
the default pool type.

Bugzilla ID: 1271
Fixes: cbb44143be74 ("app/dumpcap: add new packet capture application")
Signed-off-by: Stephen Hemminger 
---
v2 - reword commit message

 app/dumpcap/main.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
index 64294bbfb3e6..991174e95022 100644
--- a/app/dumpcap/main.c
+++ b/app/dumpcap/main.c
@@ -691,10 +691,9 @@ static struct rte_mempool *create_mempool(void)
data_size = mbuf_size;
}
 
-   mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
-   MBUF_POOL_CACHE_SIZE, 0,
-   data_size,
-   rte_socket_id(), "ring_mp_sc");
+   mp = rte_pktmbuf_pool_create(pool_name, num_mbufs,
+MBUF_POOL_CACHE_SIZE, 0,
+data_size, rte_socket_id());
if (mp == NULL)
rte_exit(EXIT_FAILURE,
 "Mempool (%s) creation failed: %s\n", pool_name,
-- 
2.39.2



Re: [PATCH] dmadev: add tracepoints at control path APIs

2023-11-06 Thread Thomas Monjalon
20/10/2023 04:21, Chengwen Feng:
> Add tracepoints at control path APIs for tracing support.
> 
> Note: Fast path APIs don't support tracepoints because the APIs contain
> structs and enums; adding tracepoints would lead to chkincs failure.
> 
> Signed-off-by: Chengwen Feng 
> Acked-by: Morten Brørup 

Applied, thanks.





Re: [PATCH v4] dmadev: add tracepoints

2023-11-06 Thread Thomas Monjalon
11/10/2023 11:55, fengchengwen:
> Hi Thomas,
> 
>   Sorry for the late reply.
> 
> On 2023/8/14 22:16, Thomas Monjalon wrote:
> > jeudi 3 août 2023, fengchengwen:
> >> Hi Thomas,
> >>
> >> On 2023/7/31 20:48, Thomas Monjalon wrote:
> >>> 10/07/2023 09:50, fengchengwen:
>  Hi Thomas,
> 
>  On 2023/7/10 14:49, Thomas Monjalon wrote:
> > 09/07/2023 05:23, fengchengwen:
> >> Hi Thomas,
> >>
> >> On 2023/7/7 18:40, Thomas Monjalon wrote:
> >>> 26/05/2023 10:42, Chengwen Feng:
>  Add tracepoints at important APIs for tracing support.
> 
>  Signed-off-by: Chengwen Feng 
>  Acked-by: Morten Brørup 
> 
>  ---
>  v4: Fix asan smoke fail.
>  v3: Address Morten's comment:
>  Move stats_get and vchan_status and to trace_fp.h.
>  v2: Address Morten's comment:
>  Make stats_get as fast-path trace-points.
>  Place fast-path trace-point functions behind in version.map.
> >>>
> >>> There are more things to fix.
> >>> First you must export rte_dmadev_trace_fp.h as it is included by 
> >>> rte_dmadev.h.
> >>
> >> It was already included by rte_dmadev.h:
> >> diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
> >> index e61d71959e..e792b90ef8 100644
> >> --- a/lib/dmadev/rte_dmadev.h
> >> +++ b/lib/dmadev/rte_dmadev.h
> >> @@ -796,6 +796,7 @@ struct rte_dma_sge {
> >>  };
> >>
> >>  #include "rte_dmadev_core.h"
> >> +#include "rte_dmadev_trace_fp.h"
> >>
> >>
> >>> Note: you could have caught this if testing the example app for DMA.
> >>> Second, you must avoid structs and enum in this header file,
> >>
> >> Let me explain the #if #endif logic:
> >>
> >> For the function:
> >> uint16_t
> >> rte_dma_completed(int16_t dev_id, uint16_t vchan, const uint16_t 
> >> nb_cpls,
> >>  uint16_t *last_idx, bool *has_error)
> >>
> >> The common trace implementation:
> >> RTE_TRACE_POINT_FP(
> >>rte_dma_trace_completed,
> >>RTE_TRACE_POINT_ARGS(int16_t dev_id, uint16_t vchan,
> >> const uint16_t nb_cpls, uint16_t *last_idx,
> >> bool *has_error, uint16_t ret),
> >>rte_trace_point_emit_i16(dev_id);
> >>rte_trace_point_emit_u16(vchan);
> >>rte_trace_point_emit_u16(nb_cpls);
> >>rte_trace_point_emit_ptr(idx_val);
> >>rte_trace_point_emit_ptr(has_error);
> >>rte_trace_point_emit_u16(ret);
> >> )
> >>
> >> But it has a problem: for pointer parameters (e.g. last_idx and
> >> has_error), it only records the pointer value (i.e. the address value).
> >>
> >> I think the pointer value has no meaning (in particular, many of these
> >> pointers are stack variables); the value the pointer points to is what
> >> is meaningful.
> >>
> >> So I add the pointer reference like below (as V3 did):
> >> RTE_TRACE_POINT_FP(
> >>rte_dma_trace_completed,
> >>RTE_TRACE_POINT_ARGS(int16_t dev_id, uint16_t vchan,
> >> const uint16_t nb_cpls, uint16_t *last_idx,
> >> bool *has_error, uint16_t ret),
> >>int has_error_val = *has_error;// pointer reference
> >>int last_idx_val = *last_idx;  // pointer reference
> >>rte_trace_point_emit_i16(dev_id);
> >>rte_trace_point_emit_u16(vchan);
> >>rte_trace_point_emit_u16(nb_cpls);
> >>rte_trace_point_emit_int(last_idx_val);// record the value 
> >> of pointer
> >>rte_trace_point_emit_int(has_error_val);   // record the value 
> >> of pointer
> >>rte_trace_point_emit_u16(ret);
> >> )
> >>
> >> Unfortunately, the above leads to an ASan failure, because in:
> >> RTE_TRACE_POINT_REGISTER(rte_dma_trace_completed,
> >>lib.dmadev.completed)
> >> it will invoke rte_dma_trace_completed() with parameters that are
> >> undefined.
> >>
> >>
> >> To solve this problem, consider that rte_dmadev_trace_points.c will
> >> include rte_trace_point_register.h, and rte_trace_point_register.h
> >> will define the macro _RTE_TRACE_POINT_REGISTER_H_.
> >>
> >> So we update the trace points as follows (as V4 did):
> >> RTE_TRACE_POINT_FP(
> >>rte_dma_trace_completed,
> >>RTE_TRACE_POINT_ARGS(int16_t dev_id, uint16_t vchan,
> >> const uint16_t nb_cpls, uint16_t *last_idx,
> >> bool *has_error, uint16_t ret),
> >> #ifdef _RTE_TRACE_POINT_REGISTER_H_
> >>uint16_t __last_idx = 0;
> >>bool __has_error = false;
> >>last_idx = &__last_idx;  // make sure the 
> >> p

Re: [PATCH] config/x86: config support for AMD EPYC processors

2023-11-06 Thread Thomas Monjalon
17/10/2023 12:27, Morten Brørup:
> > >> From: Tummala, Sivaprasad 
> > >>> From: David Marchand 
> > >>> On Mon, Sep 25, 2023 at 5:11 PM Sivaprasad Tummala
> >  From: Sivaprasad Tummala 
> > 
> >  By default, max lcores are limited to 128 for x86 platforms.
> >  On AMD EPYC processors, this limit needs to be increased to
> > leverage
> >  all the cores.
> > 
> >  The patch adjusts the limit specifically for native compilation on
> >  AMD EPYC CPUs.
> > 
> >  Signed-off-by: Sivaprasad Tummala 
> > >>>
> > >>> This patch is a revamp of
> > >>>
> > >>
> > http://inbox.dpdk.org/dev/BY5PR12MB3681C3FC6676BC03F0B42CCC96789@BY5PR
> > >>> 12MB3681.namprd12.prod.outlook.com/
> > >>> for which a discussion at techboard is supposed to have taken place.
> > >>> But I didn't find a trace of it.
> > >>>
> > >>> One option that had been discussed in the previous thread was to
> > >>> increase the max number of cores for x86.
> > >>> I am unclear if this option has been properly evaluated/debatted.
> 
> Here are the minutes from the previous techboard discussions:
> [1]: http://inbox.dpdk.org/dev/YZ43U36bFWHYClAi@platinum/
> [2]: http://inbox.dpdk.org/dev/20211202112506.68acaa1a@hermes.local/
> 
> AFAIK, there has been no progress with dynamic max_lcores, so I guess the 
> techboard's conclusion still stands:
> 
> There is no identified use-case where a single application requires more than 
> 128 lcores. If a case a use-case exists for a single application that uses 
> more than 128 lcores, the TB is ok to update the default config value.
> 
> > >>>
> > >>> Can the topic be brought again at techboard?
> > >>
> > >> Hi David,
> > >>
> > >> The patch is intended to detect AMD platforms and enable all CPU
> > cores by default
> > >> on native builds.
> 
> This is done on native ARM builds, so why not on native X86 builds too?
> 
> > >>
> > >> As an optimization for memory footprint, users can override this by
> > >> specifying the "-Dmax_lcores" option based on the number of DPDK
> > >> lcores required for their use cases.
> > >>
> > >> Sure, will request to add this topic for discussion at techboard.

This is the summary of the techboard meeting:
(see https://mails.dpdk.org/archives/dev/2023-October/279672.html)

- There are some asks for more than 128 worker cores
- Discussion about generally increasing the default max core count and
trade-offs with memory consumption, but this is a longer-term issue
- Acceptance for the direction of this patch in the short term
- Details of whether it should be for EPYC only or x86 to be figured out
on the mailing list

So now let's figure out the details please.
Suggestions?




Re: [PATCH] maintainers: update for mempool library

2023-11-06 Thread Thomas Monjalon
01/11/2023 17:48, Andrew Rybchenko:
> 
> On November 1, 2023 19:20:29 Morten Brørup  wrote:
> 
> > Add co-maintainer for Memory pool.
> >
> > Suggested-by: Thomas Monjalon 
> > Signed-off-by: Morten Brørup 
> 
> Acked-by: Andrew Rybchenko 

Applied, thanks Morten for helping.




