[dpdk-dev] [PATCH 00/44] Marvell CNXK Ethdev Driver

2021-03-06 Thread Nithin Dabilpuram
This patchset adds support for the Marvell CN106XX SoC based on the
'common/cnxk' driver. In the future, CN9K (a.k.a. octeontx2) will also be
supported by the same driver once the code is ready, and 'net/octeontx2'
will then be deprecated.

Depends-on: series-15511 ("Add Marvell CNXK mempool driver")

Jerin Jacob (6):
  net/cnxk: add Rx support for cn9k
  net/cnxk: add Rx vector version for cn9k
  net/cnxk: add Tx support for cn9k
  net/cnxk: add Rx support for cn10k
  net/cnxk: add Rx vector version for cn10k
  net/cnxk: add Tx support for cn10k

Kiran Kumar K (2):
  net/cnxk: add support to configure npc
  net/cnxk: add initial version of rte flow support

Nithin Dabilpuram (17):
  net/cnxk: add build infra and common probe
  net/cnxk: add platform specific probe and remove
  net/cnxk: add common devargs parsing function
  net/cnxk: add common dev infos get support
  net/cnxk: add device configuration operation
  net/cnxk: add link status update support
  net/cnxk: add Rx queue setup and release
  net/cnxk: add Tx queue setup and release
  net/cnxk: add packet type support
  net/cnxk: add queue start and stop support
  net/cnxk: add Rx multi-segmented version for cn9k
  net/cnxk: add Tx multi-segment version for cn9k
  net/cnxk: add Tx vector version for cn9k
  net/cnxk: add Rx multi-segment version for cn10k
  net/cnxk: add Tx multi-segment version for cn10k
  net/cnxk: add Tx vector version for cn10k
  net/cnxk: add device start and stop operations

Satha Rao (5):
  net/cnxk: add port/queue stats
  net/cnxk: add xstats apis
  net/cnxk: add rxq/txq info get operations
  net/cnxk: add ethdev firmware version get
  net/cnxk: add get register operation

Satheesh Paul (1):
  net/cnxk: add filter ctrl operation

Sunil Kumar Kori (13):
  net/cnxk: add MAC address set ops
  net/cnxk: add MTU set device operation
  net/cnxk: add promiscuous mode enable and disable
  net/cnxk: add DMAC filter support
  net/cnxk: add all multicast enable/disable ethops
  net/cnxk: add Rx/Tx burst mode get ops
  net/cnxk: add flow ctrl set/get ops
  net/cnxk: add link up/down operations
  net/cnxk: add EEPROM module info get operations
  net/cnxk: add Rx queue interrupt enable/disable ops
  net/cnxk: add validation API for mempool ops
  net/cnxk: add device close and reset operations
  net/cnxk: add pending Tx mbuf cleanup operation

 MAINTAINERS|3 +
 doc/guides/nics/cnxk.rst   |  343 
 doc/guides/nics/features/cnxk.ini  |   44 +
 doc/guides/nics/features/cnxk_vec.ini  |   42 +
 doc/guides/nics/features/cnxk_vf.ini   |   39 +
 doc/guides/nics/index.rst  |1 +
 doc/guides/platform/cnxk.rst   |3 +
 drivers/common/cnxk/roc_npc.c  |2 +
 drivers/net/cnxk/cn10k_ethdev.c|  374 +
 drivers/net/cnxk/cn10k_ethdev.h|   39 +
 drivers/net/cnxk/cn10k_rx.c|  388 +
 drivers/net/cnxk/cn10k_rx.h|  212 +
 drivers/net/cnxk/cn10k_tx.c| 1284 
 drivers/net/cnxk/cn10k_tx.h|  442 ++
 drivers/net/cnxk/cn9k_ethdev.c |  404 +
 drivers/net/cnxk/cn9k_ethdev.h |   37 +
 drivers/net/cnxk/cn9k_rx.c |  388 +
 drivers/net/cnxk/cn9k_rx.h |  215 +
 drivers/net/cnxk/cn9k_tx.c | 1122 +
 drivers/net/cnxk/cn9k_tx.h |  475 +++
 drivers/net/cnxk/cnxk_ethdev.c | 1449 
 drivers/net/cnxk/cnxk_ethdev.h |  387 +
 drivers/net/cnxk/cnxk_ethdev_devargs.c |  169 
 drivers/net/cnxk/cnxk_ethdev_ops.c |  729 
 drivers/net/cnxk/cnxk_link.c   |  113 +++
 drivers/net/cnxk/cnxk_lookup.c |  326 +++
 drivers/net/cnxk/cnxk_rte_flow.c   |  280 ++
 drivers/net/cnxk/cnxk_rte_flow.h   |   69 ++
 drivers/net/cnxk/cnxk_stats.c  |  217 +
 drivers/net/cnxk/meson.build   |   36 +
 drivers/net/cnxk/version.map   |3 +
 drivers/net/meson.build|1 +
 32 files changed, 9636 insertions(+)
 create mode 100644 doc/guides/nics/cnxk.rst
 create mode 100644 doc/guides/nics/features/cnxk.ini
 create mode 100644 doc/guides/nics/features/cnxk_vec.ini
 create mode 100644 doc/guides/nics/features/cnxk_vf.ini
 create mode 100644 drivers/net/cnxk/cn10k_ethdev.c
 create mode 100644 drivers/net/cnxk/cn10k_ethdev.h
 create mode 100644 drivers/net/cnxk/cn10k_rx.c
 create mode 100644 drivers/net/cnxk/cn10k_rx.h
 create mode 100644 drivers/net/cnxk/cn10k_tx.c
 create mode 100644 drivers/net/cnxk/cn10k_tx.h
 create mode 100644 drivers/net/cnxk/cn9k_ethdev.c
 create mode 100644 drivers/net/cnxk/cn9k_ethdev.h
 create mode 100644 drivers/net/cnxk/cn9k_rx.c
 create mode 100644 drivers/net/cnxk/cn9k_rx.h
 create mode 100644 drivers/net/cnxk/cn9k_tx.c
 create mode 100644 drivers/net/cnxk/cn9k_tx.h
 create mode 100644 drivers/net/cnxk/cnxk_ethdev.c
 create mode 100644 drivers/

[dpdk-dev] [PATCH 01/44] net/cnxk: add build infra and common probe

2021-03-06 Thread Nithin Dabilpuram
Add build infrastructure and the common probe and remove routines
for the cnxk driver, which is used by both the CN9K and CN10K SoCs.

Signed-off-by: Nithin Dabilpuram 
---
 MAINTAINERS   |   3 +
 doc/guides/nics/cnxk.rst  |  29 +
 doc/guides/nics/features/cnxk.ini |   9 ++
 doc/guides/nics/features/cnxk_vec.ini |   9 ++
 doc/guides/nics/features/cnxk_vf.ini  |   9 ++
 doc/guides/nics/index.rst |   1 +
 doc/guides/platform/cnxk.rst  |   3 +
 drivers/net/cnxk/cnxk_ethdev.c| 219 ++
 drivers/net/cnxk/cnxk_ethdev.h|  57 +
 drivers/net/cnxk/meson.build  |  21 
 drivers/net/cnxk/version.map  |   3 +
 drivers/net/meson.build   |   1 +
 12 files changed, 364 insertions(+)
 create mode 100644 doc/guides/nics/cnxk.rst
 create mode 100644 doc/guides/nics/features/cnxk.ini
 create mode 100644 doc/guides/nics/features/cnxk_vec.ini
 create mode 100644 doc/guides/nics/features/cnxk_vf.ini
 create mode 100644 drivers/net/cnxk/cnxk_ethdev.c
 create mode 100644 drivers/net/cnxk/cnxk_ethdev.h
 create mode 100644 drivers/net/cnxk/meson.build
 create mode 100644 drivers/net/cnxk/version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 67c179f..efabc3c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -745,6 +745,9 @@ M: Sunil Kumar Kori 
 M: Satha Rao 
 T: git://dpdk.org/next/dpdk-next-net-mrvl
 F: drivers/common/cnxk/
+F: drivers/net/cnxk/
+F: doc/guides/nics/cnxk.rst
+F: doc/guides/nics/features/cnxk*.ini
 F: doc/guides/platform/cnxk.rst
 
 Marvell mvpp2
diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
new file mode 100644
index 000..ca21842
--- /dev/null
+++ b/doc/guides/nics/cnxk.rst
@@ -0,0 +1,29 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+Copyright(C) 2021 Marvell.
+
+CNXK Poll Mode driver
+=====================
+
+The CNXK ETHDEV PMD (**librte_net_cnxk**) provides poll mode ethdev driver
+support for the inbuilt network device found in **Marvell OCTEON CN9K/CN10K**
+SoC family as well as for their virtual functions (VF) in SR-IOV context.
+
+More information can be found at `Marvell Official Website
+`_.
+
+Features
+--------
+
+Features of the CNXK Ethdev PMD are:
+
+Prerequisites
+-------------
+
+See :doc:`../platform/cnxk` for setup information.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC 
`
+for details.
diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
new file mode 100644
index 000..2c23464
--- /dev/null
+++ b/doc/guides/nics/features/cnxk.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'cnxk' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux= Y
+ARMv8= Y
+Usage doc= Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
new file mode 100644
index 000..de78516
--- /dev/null
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'cnxk_vec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux= Y
+ARMv8= Y
+Usage doc= Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini 
b/doc/guides/nics/features/cnxk_vf.ini
new file mode 100644
index 000..9c96351
--- /dev/null
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'cnxk_vf' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux= Y
+ARMv8= Y
+Usage doc= Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 799697c..c1a04d9 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -19,6 +19,7 @@ Network Interface Controller Drivers
 axgbe
 bnx2x
 bnxt
+cnxk
 cxgbe
 dpaa
 dpaa2
diff --git a/doc/guides/platform/cnxk.rst b/doc/guides/platform/cnxk.rst
index 9bbba65..a9c050e 100644
--- a/doc/guides/platform/cnxk.rst
+++ b/doc/guides/platform/cnxk.rst
@@ -141,6 +141,9 @@ HW Offload Drivers
 
 This section lists dataplane H/W block(s) available in CNXK SoC.
 
+#. **Ethdev Driver**
+   See :doc:`../nics/cnxk` for NIX Ethdev driver information.
+
 #. **Mempool Driver**
See :doc:`../mempool/cnxk` for NPA mempool driver information.
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
new file mode 100644
index 000..6717410
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -0,0 +1,219 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#include 
+
+/* CNXK platform independent eth dev ops */
+struct eth_dev_ops cnxk_eth_dev_ops;
+
+static int
+cnxk_
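
For context, the common probe and remove pair that both platform drivers call
into follows the usual ethdev PCI pattern. The sketch below is only an
illustration built on DPDK's generic ethdev PCI helpers; the demo_* names and
the private structure are assumptions, not the actual cnxk implementation
(which additionally brings up the common roc_nix layer):

/* Illustrative sketch only; demo_* names are assumptions. */
#include <ethdev_pci.h> /* generic ethdev PCI helpers (rte_ethdev_pci.h in older releases) */

struct demo_eth_dev {                      /* hypothetical per-port private data */
	struct rte_pci_device *pci_dev;
};

static int
demo_eth_dev_init(struct rte_eth_dev *eth_dev)
{
	struct demo_eth_dev *dev = eth_dev->data->dev_private;

	dev->pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
	eth_dev->dev_ops = &cnxk_eth_dev_ops;  /* platform independent ops from this patch */
	return 0;
}

static int
demo_eth_dev_uninit(struct rte_eth_dev *eth_dev)
{
	RTE_SET_USED(eth_dev);
	return 0;
}

int
demo_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
{
	RTE_SET_USED(pci_drv);
	return rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct demo_eth_dev),
					     demo_eth_dev_init);
}

int
demo_nix_remove(struct rte_pci_device *pci_dev)
{
	return rte_eth_dev_pci_generic_remove(pci_dev, demo_eth_dev_uninit);
}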

[dpdk-dev] [PATCH 02/44] net/cnxk: add platform specific probe and remove

2021-03-06 Thread Nithin Dabilpuram
Add platform-specific probe and remove callbacks for CN9K
and CN10K, which use the common probe and remove functions.
Register the ethdev PCI driver for CN9K and CN10K.

Signed-off-by: Nithin Dabilpuram 
---
 drivers/net/cnxk/cn10k_ethdev.c | 64 
 drivers/net/cnxk/cn10k_ethdev.h |  9 +
 drivers/net/cnxk/cn9k_ethdev.c  | 82 +
 drivers/net/cnxk/cn9k_ethdev.h  |  9 +
 drivers/net/cnxk/cnxk_ethdev.c  | 42 +
 drivers/net/cnxk/cnxk_ethdev.h  | 19 ++
 drivers/net/cnxk/meson.build|  5 +++
 7 files changed, 230 insertions(+)
 create mode 100644 drivers/net/cnxk/cn10k_ethdev.c
 create mode 100644 drivers/net/cnxk/cn10k_ethdev.h
 create mode 100644 drivers/net/cnxk/cn9k_ethdev.c
 create mode 100644 drivers/net/cnxk/cn9k_ethdev.h

diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
new file mode 100644
index 000..54711ea
--- /dev/null
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#include "cn10k_ethdev.h"
+
+static int
+cn10k_nix_remove(struct rte_pci_device *pci_dev)
+{
+   return cnxk_nix_remove(pci_dev);
+}
+
+static int
+cn10k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+   struct rte_eth_dev *eth_dev;
+   int rc;
+
+   if (RTE_CACHE_LINE_SIZE != 64) {
+   plt_err("Driver not compiled for CN10K");
+   return -EFAULT;
+   }
+
+   rc = plt_init();
+   if (rc) {
+   plt_err("Failed to initialize platform model, rc=%d", rc);
+   return rc;
+   }
+
+   /* Common probe */
+   rc = cnxk_nix_probe(pci_drv, pci_dev);
+   if (rc)
+   return rc;
+
+   if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+   eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+   if (!eth_dev)
+   return -ENOENT;
+   }
+   return 0;
+}
+
+static const struct rte_pci_id cn10k_pci_nix_map[] = {
+   CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_PF),
+   CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_PF),
+   CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_VF),
+   CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_VF),
+   CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_AF_VF),
+   CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_AF_VF),
+   {
+   .vendor_id = 0,
+   },
+};
+
+static struct rte_pci_driver cn10k_pci_nix = {
+   .id_table = cn10k_pci_nix_map,
+   .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA |
+RTE_PCI_DRV_INTR_LSC,
+   .probe = cn10k_nix_probe,
+   .remove = cn10k_nix_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_cn10k, cn10k_pci_nix);
+RTE_PMD_REGISTER_PCI_TABLE(net_cn10k, cn10k_pci_nix_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_cn10k, "vfio-pci");
diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h
new file mode 100644
index 000..1bf4a65
--- /dev/null
+++ b/drivers/net/cnxk/cn10k_ethdev.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#ifndef __CN10K_ETHDEV_H__
+#define __CN10K_ETHDEV_H__
+
+#include 
+
+#endif /* __CN10K_ETHDEV_H__ */
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
new file mode 100644
index 000..bd97d5f
--- /dev/null
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#include "cn9k_ethdev.h"
+
+static int
+cn9k_nix_remove(struct rte_pci_device *pci_dev)
+{
+   return cnxk_nix_remove(pci_dev);
+}
+
+static int
+cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+   struct rte_eth_dev *eth_dev;
+   struct cnxk_eth_dev *dev;
+   int rc;
+
+   if (RTE_CACHE_LINE_SIZE != 128) {
+   plt_err("Driver not compiled for CN9K");
+   return -EFAULT;
+   }
+
+   rc = plt_init();
+   if (rc) {
+   plt_err("Failed to initialize platform model, rc=%d", rc);
+   return rc;
+   }
+
+   /* Common probe */
+   rc = cnxk_nix_probe(pci_drv, pci_dev);
+   if (rc)
+   return rc;
+
+   /* Find eth dev allocated */
+   eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+   if (!eth_dev)
+   return -ENOENT;
+
+   dev = cnxk_eth_pmd_priv(eth_dev);
+   /* Update capabilities already set for TSO.
+* TSO not supported for earlier chip revisions
+*/
+   if (roc_model_is_cn96_A0() || roc_model_is_cn95_A0())
+   dev->tx_offload_capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
+ DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+ DEV_TX_OFFLOAD_GE

[dpdk-dev] [PATCH 03/44] net/cnxk: add common devargs parsing function

2021-03-06 Thread Nithin Dabilpuram
Add parsing functions for the various devargs command line
arguments supported by CN9K and CN10K.

Signed-off-by: Nithin Dabilpuram 
---
 doc/guides/nics/cnxk.rst   |  94 +++
 drivers/net/cnxk/cnxk_ethdev.c |   7 ++
 drivers/net/cnxk/cnxk_ethdev.h |   9 ++
 drivers/net/cnxk/cnxk_ethdev_devargs.c | 166 +
 drivers/net/cnxk/meson.build   |   3 +-
 5 files changed, 278 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/cnxk/cnxk_ethdev_devargs.c

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index ca21842..611ffb4 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -27,3 +27,97 @@ Driver compilation and testing
 
 Refer to the document :ref:`compiling and testing a PMD for a NIC 
`
 for details.
+
+Runtime Config Options
+----------------------
+
+- ``Rx&Tx scalar mode enable`` (default ``0``)
+
+   The ethdev supports both scalar and vector modes; the mode may be selected
+   at runtime using the ``scalar_enable`` ``devargs`` parameter.
+
+- ``RSS reta size`` (default ``64``)
+
+   The RSS redirection table size may be configured at runtime using the
+   ``reta_size`` ``devargs`` parameter.
+
+   For example::
+
+  -a 0002:02:00.0,reta_size=256
+
+   With the above configuration, reta table of size 256 is populated.
+
+- ``Flow priority levels`` (default ``3``)
+
+   RTE Flow priority levels can be configured during runtime using
+   ``flow_max_priority`` ``devargs`` parameter.
+
+   For example::
+
+  -a 0002:02:00.0,flow_max_priority=10
+
+   With the above configuration, the number of priority levels is set to 10
+   (0-9). The maximum number of priority levels supported is 32.
+
+- ``Reserve Flow entries`` (default ``8``)
+
+   RTE flow entries can be pre-allocated, and the pre-allocation size can be
+   selected at runtime using the ``flow_prealloc_size`` ``devargs`` parameter.
+
+   For example::
+
+  -a 0002:02:00.0,flow_prealloc_size=4
+
+   With the above configuration, the pre-allocation size is set to 4. The
+   maximum pre-allocation size supported is 32.
+
+- ``Max SQB buffer count`` (default ``512``)
+
+   Send queue descriptor buffer count may be limited during runtime using
+   ``max_sqb_count`` ``devargs`` parameter.
+
+   For example::
+
+  -a 0002:02:00.0,max_sqb_count=64
+
+   With the above configuration, each send queue's descriptor buffer count is
+   limited to a maximum of 64 buffers.
+
+- ``Switch header enable`` (default ``none``)
+
+   A port can be configured to a specific switch header type by using
+   ``switch_header`` ``devargs`` parameter.
+
+   For example::
+
+  -a 0002:02:00.0,switch_header="higig2"
+
+   With the above configuration, higig2 will be enabled on that port and the
+   traffic on this port should be higig2 traffic only. Supported switch header
+   types are "higig2", "dsa", "chlen90b" and "chlen24b".
+
+- ``RSS tag as XOR`` (default ``0``)
+
+   The HW gives two options to configure the RSS adder, i.e.
+
+   * ``rss_adder<7:0> = flow_tag<7:0> ^ flow_tag<15:8> ^ flow_tag<23:16> ^ flow_tag<31:24>``
+
+   * ``rss_adder<7:0> = flow_tag<7:0>``
+
+   The latter aligns with standard NIC behavior, whereas the former is the
+   legacy RSS adder scheme used in OCTEON TX2 products.
+
+   By default, the driver runs in the latter mode.
+   Set this flag to 1 to select the legacy mode (a short worked example of the
+   two adder schemes is shown after this section).
+
+   For example, to select the legacy mode (RSS tag adder as XOR)::
+
+  -a 0002:02:00.0,tag_as_xor=1
+
+
+
+.. note::
+
+   The above devarg parameters are configurable per device; the user needs to
+   pass the parameters to all the PCIe devices if the application requires them
+   to be configured on all the ethdev ports.
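
A worked example of the two RSS adder options quoted above, as plain C (the
helper names are hypothetical; the formulas are the ones listed for the HW):

#include <stdint.h>

/* tag_as_xor=1: legacy OCTEON TX2 scheme, rss_adder<7:0> is the XOR of all tag bytes. */
static inline uint8_t
rss_adder_legacy_xor(uint32_t flow_tag)
{
	return (uint8_t)(flow_tag ^ (flow_tag >> 8) ^
			 (flow_tag >> 16) ^ (flow_tag >> 24));
}

/* Default mode: rss_adder<7:0> = flow_tag<7:0>. */
static inline uint8_t
rss_adder_default(uint32_t flow_tag)
{
	return (uint8_t)(flow_tag & 0xff);
}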
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index b836fc2..3a2309e 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -58,6 +58,13 @@ cnxk_eth_dev_init(struct rte_eth_dev *eth_dev)
rte_eth_copy_pci_info(eth_dev, pci_dev);
eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
+   /* Parse devargs string */
+   rc = cnxk_ethdev_parse_devargs(eth_dev->device->devargs, dev);
+   if (rc) {
+   plt_err("Failed to parse devargs rc=%d", rc);
+   goto error;
+   }
+
/* Initialize base roc nix */
nix->pci_dev = pci_dev;
rc = roc_nix_dev_init(nix);
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index ba2bfcd..97e3a15 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -9,11 +9,15 @@
 
 #include 
 #include 
+#include 
 
 #include "roc_api.h"
 
 #define CNXK_ETH_DEV_PMD_VERSION "1.0"
 
+/* Max supported SQB count */
+#define CNXK_NIX_TX_MAX_SQB 512
+
 #define CNXK_NIX_TX_OFFLOAD_CAPA   
\
(DEV_TX_OFFLOAD_MBUF_FAST_FREE | DEV_TX_OFFLOAD_MT_LOCKFREE |  \
 DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT | \
@@ -38,6 +42,7 @@ struct
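
As a sketch, per-device arguments such as ``reta_size`` described above are
typically handled with the rte_kvargs API; the helper below is an illustrative
assumption, not the exact cnxk parser:

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

#include <rte_common.h>
#include <rte_devargs.h>
#include <rte_kvargs.h>

#define DEMO_RETA_SIZE_ARG "reta_size"   /* key name taken from the documentation above */

static int
demo_parse_reta_size(const char *key, const char *value, void *extra_args)
{
	RTE_SET_USED(key);
	*(uint16_t *)extra_args = (uint16_t)atoi(value);
	return 0;
}

static int
demo_parse_devargs(struct rte_devargs *devargs, uint16_t *reta_size)
{
	struct rte_kvargs *kvlist;

	if (devargs == NULL)
		return 0;

	kvlist = rte_kvargs_parse(devargs->args, NULL);
	if (kvlist == NULL)
		return -EINVAL;

	rte_kvargs_process(kvlist, DEMO_RETA_SIZE_ARG, demo_parse_reta_size,
			   reta_size);
	rte_kvargs_free(kvlist);
	return 0;
}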

[dpdk-dev] [PATCH 04/44] net/cnxk: add common dev infos get support

2021-03-06 Thread Nithin Dabilpuram
Add support for the dev infos get operation for CN9K and CN10K.

Signed-off-by: Nithin Dabilpuram 
---
 doc/guides/nics/cnxk.rst  |  3 ++
 doc/guides/nics/features/cnxk.ini |  4 ++
 doc/guides/nics/features/cnxk_vec.ini |  4 ++
 doc/guides/nics/features/cnxk_vf.ini  |  3 ++
 drivers/net/cnxk/cnxk_ethdev.c|  4 +-
 drivers/net/cnxk/cnxk_ethdev.h| 33 
 drivers/net/cnxk/cnxk_ethdev_ops.c| 71 +++
 drivers/net/cnxk/meson.build  |  1 +
 8 files changed, 122 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/cnxk/cnxk_ethdev_ops.c

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 611ffb4..dfe2e7a 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -16,6 +16,9 @@ Features
 
 Features of the CNXK Ethdev PMD are:
 
+- SR-IOV VF
+- Lock-free Tx queue
+
 Prerequisites
 -
 
diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index 2c23464..b426340 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -4,6 +4,10 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = Y
+Lock-free Tx queue   = Y
+SR-IOV   = Y
+Multiprocess aware   = Y
 Linux= Y
 ARMv8= Y
 Usage doc= Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index de78516..292ac1e 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -4,6 +4,10 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = Y
+Lock-free Tx queue   = Y
+SR-IOV   = Y
+Multiprocess aware   = Y
 Linux= Y
 ARMv8= Y
 Usage doc= Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini 
b/doc/guides/nics/features/cnxk_vf.ini
index 9c96351..bc2eb8a 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -4,6 +4,9 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = Y
+Lock-free Tx queue   = Y
+Multiprocess aware   = Y
 Linux= Y
 ARMv8= Y
 Usage doc= Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 3a2309e..1567007 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -38,7 +38,9 @@ nix_get_speed_capa(struct cnxk_eth_dev *dev)
 }
 
 /* CNXK platform independent eth dev ops */
-struct eth_dev_ops cnxk_eth_dev_ops;
+struct eth_dev_ops cnxk_eth_dev_ops = {
+   .dev_infos_get = cnxk_nix_info_get,
+};
 
 static int
 cnxk_eth_dev_init(struct rte_eth_dev *eth_dev)
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 97e3a15..8d9a7e0 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -15,9 +15,40 @@
 
 #define CNXK_ETH_DEV_PMD_VERSION "1.0"
 
+/* VLAN tag inserted by NIX_TX_VTAG_ACTION.
+ * In Tx space is always reserved for this in FRS.
+ */
+#define CNXK_NIX_MAX_VTAG_INS 2
+#define CNXK_NIX_MAX_VTAG_ACT_SIZE (4 * CNXK_NIX_MAX_VTAG_INS)
+
+/* ETH_HLEN+ETH_FCS+2*VLAN_HLEN */
+#define CNXK_NIX_L2_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + 8)
+
+#define CNXK_NIX_RX_MIN_DESC   16
+#define CNXK_NIX_RX_MIN_DESC_ALIGN  16
+#define CNXK_NIX_RX_NB_SEG_MAX 6
+#define CNXK_NIX_RX_DEFAULT_RING_SZ 4096
 /* Max supported SQB count */
 #define CNXK_NIX_TX_MAX_SQB 512
 
+/* If PTP is enabled additional SEND MEM DESC is required which
+ * takes 2 words, hence a maximum of 7 iova addresses is possible
+ */
+#if defined(RTE_LIBRTE_IEEE1588)
+#define CNXK_NIX_TX_NB_SEG_MAX 7
+#else
+#define CNXK_NIX_TX_NB_SEG_MAX 9
+#endif
+
+#define CNXK_NIX_RSS_L3_L4_SRC_DST 
\
+   (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | ETH_RSS_L4_SRC_ONLY | \
+ETH_RSS_L4_DST_ONLY)
+
+#define CNXK_NIX_RSS_OFFLOAD   
\
+   (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP |   \
+ETH_RSS_SCTP | ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD |  \
+CNXK_NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | ETH_RSS_C_VLAN)
+
 #define CNXK_NIX_TX_OFFLOAD_CAPA   
\
(DEV_TX_OFFLOAD_MBUF_FAST_FREE | DEV_TX_OFFLOAD_MT_LOCKFREE |  \
 DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT | \
@@ -77,6 +108,8 @@ extern struct eth_dev_ops cnxk_eth_dev_ops;
 int cnxk_nix_probe(struct rte_pci_driver *pci_drv,
   struct rte_pci_device *pci_dev);
 int cnxk_nix_remove(struct rte_pci_device *pci_dev);
+int cnxk_nix_info_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_dev_info *dev_info);
 
 /* Devargs */
 int cnxk_ethdev_par
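
From the application side, the new op is reached through the standard ethdev
API; a minimal usage sketch (the port id and printed fields are arbitrary):

#include <inttypes.h>
#include <stdio.h>

#include <rte_ethdev.h>

static void
demo_print_port_caps(uint16_t port_id)
{
	struct rte_eth_dev_info info;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return;

	printf("port %u: max_rxq=%u max_txq=%u rx_offload_capa=0x%" PRIx64 "\n",
	       port_id, info.max_rx_queues, info.max_tx_queues,
	       info.rx_offload_capa);
}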

[dpdk-dev] [PATCH 05/44] net/cnxk: add device configuration operation

2021-03-06 Thread Nithin Dabilpuram
Add the device configuration op for CN9K and CN10K. Most of the
device configuration is common between the two platforms, except for
some of the supported offloads.

Signed-off-by: Nithin Dabilpuram 
---
 doc/guides/nics/cnxk.rst  |   2 +
 doc/guides/nics/features/cnxk.ini |   2 +
 doc/guides/nics/features/cnxk_vec.ini |   2 +
 doc/guides/nics/features/cnxk_vf.ini  |   2 +
 drivers/net/cnxk/cn10k_ethdev.c   |  34 +++
 drivers/net/cnxk/cn9k_ethdev.c|  45 +++
 drivers/net/cnxk/cnxk_ethdev.c| 521 ++
 drivers/net/cnxk/cnxk_ethdev.h|  70 +
 8 files changed, 678 insertions(+)

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index dfe2e7a..73eb62a 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -18,6 +18,8 @@ Features of the CNXK Ethdev PMD are:
 
 - SR-IOV VF
 - Lock-free Tx queue
+- Multiple queues for TX and RX
+- Receiver Side Scaling (RSS)
 
 Prerequisites
 -
diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index b426340..96dba2a 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -8,6 +8,8 @@ Speed capabilities   = Y
 Lock-free Tx queue   = Y
 SR-IOV   = Y
 Multiprocess aware   = Y
+RSS hash = Y
+Inner RSS= Y
 Linux= Y
 ARMv8= Y
 Usage doc= Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index 292ac1e..616991c 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -8,6 +8,8 @@ Speed capabilities   = Y
 Lock-free Tx queue   = Y
 SR-IOV   = Y
 Multiprocess aware   = Y
+RSS hash = Y
+Inner RSS= Y
 Linux= Y
 ARMv8= Y
 Usage doc= Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini 
b/doc/guides/nics/features/cnxk_vf.ini
index bc2eb8a..a0bd2f1 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -7,6 +7,8 @@
 Speed capabilities   = Y
 Lock-free Tx queue   = Y
 Multiprocess aware   = Y
+RSS hash = Y
+Inner RSS= Y
 Linux= Y
 ARMv8= Y
 Usage doc= Y
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 54711ea..9cf0f9e 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -4,6 +4,38 @@
 #include "cn10k_ethdev.h"
 
 static int
+cn10k_nix_configure(struct rte_eth_dev *eth_dev)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   int rc;
+
+   /* Common nix configure */
+   rc = cnxk_nix_configure(eth_dev);
+   if (rc)
+   return rc;
+
+   plt_nix_dbg("Configured port%d platform specific rx_offload_flags=%x"
+   " tx_offload_flags=0x%x",
+   eth_dev->data->port_id, dev->rx_offload_flags,
+   dev->tx_offload_flags);
+   return 0;
+}
+
+/* Update platform specific eth dev ops */
+static void
+nix_eth_dev_ops_override(void)
+{
+   static int init_once;
+
+   if (init_once)
+   return;
+   init_once = 1;
+
+   /* Update platform specific ops */
+   cnxk_eth_dev_ops.dev_configure = cn10k_nix_configure;
+}
+
+static int
 cn10k_nix_remove(struct rte_pci_device *pci_dev)
 {
return cnxk_nix_remove(pci_dev);
@@ -26,6 +58,8 @@ cn10k_nix_probe(struct rte_pci_driver *pci_drv, struct 
rte_pci_device *pci_dev)
return rc;
}
 
+   nix_eth_dev_ops_override();
+
/* Common probe */
rc = cnxk_nix_probe(pci_drv, pci_dev);
if (rc)
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index bd97d5f..4f50949 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -4,6 +4,49 @@
 #include "cn9k_ethdev.h"
 
 static int
+cn9k_nix_configure(struct rte_eth_dev *eth_dev)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct rte_eth_conf *conf = &eth_dev->data->dev_conf;
+   struct rte_eth_txmode *txmode = &conf->txmode;
+   int rc;
+
+   /* Platform specific checks */
+   if ((roc_model_is_cn96_A0() || roc_model_is_cn95_A0()) &&
+   (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
+   ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+(txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+   plt_err("Outer IP and SCTP checksum unsupported");
+   return -EINVAL;
+   }
+
+   /* Common nix configure */
+   rc = cnxk_nix_configure(eth_dev);
+   if (rc)
+   return rc;
+
+   plt_nix_dbg("Configured port%d platform specific rx_offload_flags=%x"
+   " tx_offload_flags=0x%x",
+   eth_dev->data->port_id, dev->rx_offload_flags,
+   dev->tx_off
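
For completeness, this op is exercised through rte_eth_dev_configure(); a
minimal application-side sketch enabling RSS (queue counts and hash fields are
placeholders):

#include <rte_ethdev.h>

static int
demo_configure_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {
		.rxmode = {
			.mq_mode = ETH_MQ_RX_RSS,
		},
		.rx_adv_conf = {
			.rss_conf = {
				.rss_hf = ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP,
			},
		},
	};

	/* Requested offloads must be a subset of what dev_info reports. */
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}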

[dpdk-dev] [PATCH 06/44] net/cnxk: add link status update support

2021-03-06 Thread Nithin Dabilpuram
Add link status update callback to get current
link status.

Signed-off-by: Nithin Dabilpuram 
---
 doc/guides/nics/cnxk.rst  |   1 +
 doc/guides/nics/features/cnxk.ini |   2 +
 doc/guides/nics/features/cnxk_vec.ini |   2 +
 doc/guides/nics/features/cnxk_vf.ini  |   2 +
 drivers/net/cnxk/cnxk_ethdev.c|   7 +++
 drivers/net/cnxk/cnxk_ethdev.h|   8 +++
 drivers/net/cnxk/cnxk_link.c  | 102 ++
 drivers/net/cnxk/meson.build  |   3 +-
 8 files changed, 126 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/cnxk/cnxk_link.c

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 73eb62a..a982450 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -20,6 +20,7 @@ Features of the CNXK Ethdev PMD are:
 - Lock-free Tx queue
 - Multiple queues for TX and RX
 - Receiver Side Scaling (RSS)
+- Link state information
 
 Prerequisites
 -
diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index 96dba2a..affbbd9 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -8,6 +8,8 @@ Speed capabilities   = Y
 Lock-free Tx queue   = Y
 SR-IOV   = Y
 Multiprocess aware   = Y
+Link status  = Y
+Link status event= Y
 RSS hash = Y
 Inner RSS= Y
 Linux= Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index 616991c..836cc9f 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -8,6 +8,8 @@ Speed capabilities   = Y
 Lock-free Tx queue   = Y
 SR-IOV   = Y
 Multiprocess aware   = Y
+Link status  = Y
+Link status event= Y
 RSS hash = Y
 Inner RSS= Y
 Linux= Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini 
b/doc/guides/nics/features/cnxk_vf.ini
index a0bd2f1..29bb24f 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -7,6 +7,8 @@
 Speed capabilities   = Y
 Lock-free Tx queue   = Y
 Multiprocess aware   = Y
+Link status  = Y
+Link status event= Y
 RSS hash = Y
 Inner RSS= Y
 Linux= Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index f141027..c07827c 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -554,6 +554,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 /* CNXK platform independent eth dev ops */
 struct eth_dev_ops cnxk_eth_dev_ops = {
.dev_infos_get = cnxk_nix_info_get,
+   .link_update = cnxk_nix_link_update,
 };
 
 static int
@@ -589,6 +590,9 @@ cnxk_eth_dev_init(struct rte_eth_dev *eth_dev)
goto error;
}
 
+   /* Register up msg callbacks */
+   roc_nix_mac_link_cb_register(nix, cnxk_eth_dev_link_status_cb);
+
dev->eth_dev = eth_dev;
dev->configured = 0;
 
@@ -677,6 +681,9 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool 
mbox_close)
 
roc_nix_npc_rx_ena_dis(nix, false);
 
+   /* Disable link status events */
+   roc_nix_mac_link_event_start_stop(nix, false);
+
/* Free up SQs */
for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
dev_ops->tx_queue_release(eth_dev->data->tx_queues[i]);
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 55da1da..6dad8ac 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -15,6 +15,9 @@
 
 #define CNXK_ETH_DEV_PMD_VERSION "1.0"
 
+/* Used for struct cnxk_eth_dev::flags */
+#define CNXK_LINK_CFG_IN_PROGRESS_F BIT_ULL(0)
+
 /* VLAN tag inserted by NIX_TX_VTAG_ACTION.
  * In Tx space is always reserved for this in FRS.
  */
@@ -181,6 +184,11 @@ int cnxk_nix_configure(struct rte_eth_dev *eth_dev);
 uint32_t cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
uint8_t rss_level);
 
+/* Link */
+void cnxk_eth_dev_link_status_cb(struct roc_nix *nix,
+struct roc_nix_link_info *link);
+int cnxk_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete);
+
 /* Devargs */
 int cnxk_ethdev_parse_devargs(struct rte_devargs *devargs,
  struct cnxk_eth_dev *dev);
diff --git a/drivers/net/cnxk/cnxk_link.c b/drivers/net/cnxk/cnxk_link.c
new file mode 100644
index 000..0223d68
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_link.c
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_ethdev.h"
+
+static inline int
+nix_wait_for_link_cfg(struct cnxk_eth_dev *dev)
+{
+   uint16_t wait = 1000;
+
+   do {
+   rte_rmb();
+   if (!(dev->flags & CNXK_LINK_CFG_IN_PROGRESS_F))
+   break;
+   wait--;
+   rte_delay_ms(1);
+
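
On the application side, link changes reported through this path can be
consumed by registering an LSC event callback (dev_conf.intr_conf.lsc must be
set before start); a short sketch with an arbitrary callback name:

#include <stdio.h>

#include <rte_ethdev.h>

static int
demo_on_link_change(uint16_t port_id, enum rte_eth_event_type event,
		    void *cb_arg, void *ret_param)
{
	struct rte_eth_link link;

	RTE_SET_USED(event);
	RTE_SET_USED(cb_arg);
	RTE_SET_USED(ret_param);

	if (rte_eth_link_get_nowait(port_id, &link) == 0)
		printf("port %u link is %s\n", port_id,
		       link.link_status ? "up" : "down");
	return 0;
}

/* Registration, typically done before rte_eth_dev_start():
 * rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
 *                               demo_on_link_change, NULL);
 */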

[dpdk-dev] [PATCH 07/44] net/cnxk: add Rx queue setup and release

2021-03-06 Thread Nithin Dabilpuram
Add Rx queue setup and release ops for the CN9K and CN10K
SoCs. Release is completely common, while setup is platform
dependent due to fast path Rx queue structure variation.
The fast path is platform dependent partly due to the core
cacheline size difference.

Signed-off-by: Nithin Dabilpuram 
---
 doc/guides/nics/features/cnxk.ini |   1 +
 doc/guides/nics/features/cnxk_vec.ini |   1 +
 doc/guides/nics/features/cnxk_vf.ini  |   1 +
 drivers/net/cnxk/cn10k_ethdev.c   |  44 +
 drivers/net/cnxk/cn10k_ethdev.h   |  14 +++
 drivers/net/cnxk/cn9k_ethdev.c|  44 +
 drivers/net/cnxk/cn9k_ethdev.h|  14 +++
 drivers/net/cnxk/cnxk_ethdev.c| 172 ++
 drivers/net/cnxk/cnxk_ethdev.h|   9 ++
 9 files changed, 300 insertions(+)

diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index affbbd9..a9d2b03 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -10,6 +10,7 @@ SR-IOV   = Y
 Multiprocess aware   = Y
 Link status  = Y
 Link status event= Y
+Runtime Rx queue setup = Y
 RSS hash = Y
 Inner RSS= Y
 Linux= Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index 836cc9f..6a8ca1f 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -10,6 +10,7 @@ SR-IOV   = Y
 Multiprocess aware   = Y
 Link status  = Y
 Link status event= Y
+Runtime Rx queue setup = Y
 RSS hash = Y
 Inner RSS= Y
 Linux= Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini 
b/doc/guides/nics/features/cnxk_vf.ini
index 29bb24f..f761638 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -9,6 +9,7 @@ Lock-free Tx queue   = Y
 Multiprocess aware   = Y
 Link status  = Y
 Link status event= Y
+Runtime Rx queue setup = Y
 RSS hash = Y
 Inner RSS= Y
 Linux= Y
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 9cf0f9e..f7e2f7b 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -4,6 +4,49 @@
 #include "cn10k_ethdev.h"
 
 static int
+cn10k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
+uint16_t nb_desc, unsigned int socket,
+const struct rte_eth_rxconf *rx_conf,
+struct rte_mempool *mp)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct cn10k_eth_rxq *rxq;
+   struct roc_nix_rq *rq;
+   struct roc_nix_cq *cq;
+   int rc;
+
+   RTE_SET_USED(socket);
+
+   /* CQ Errata needs min 4K ring */
+   if (dev->cq_min_4k && nb_desc < 4096)
+   nb_desc = 4096;
+
+   /* Common Rx queue setup */
+   rc = cnxk_nix_rx_queue_setup(eth_dev, qid, nb_desc,
+sizeof(struct cn10k_eth_rxq), rx_conf, mp);
+   if (rc)
+   return rc;
+
+   rq = &dev->rqs[qid];
+   cq = &dev->cqs[qid];
+
+   /* Update fast path queue */
+   rxq = eth_dev->data->rx_queues[qid];
+   rxq->rq = qid;
+   rxq->desc = (uintptr_t)cq->desc_base;
+   rxq->cq_door = cq->door;
+   rxq->cq_status = cq->status;
+   rxq->wdata = cq->wdata;
+   rxq->head = cq->head;
+   rxq->qmask = cq->qmask;
+
+   /* Data offset from data to start of mbuf is first_skip */
+   rxq->data_off = rq->first_skip;
+   rxq->mbuf_initializer = cnxk_nix_rxq_mbuf_setup(dev);
+   return 0;
+}
+
+static int
 cn10k_nix_configure(struct rte_eth_dev *eth_dev)
 {
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
@@ -33,6 +76,7 @@ nix_eth_dev_ops_override(void)
 
/* Update platform specific ops */
cnxk_eth_dev_ops.dev_configure = cn10k_nix_configure;
+   cnxk_eth_dev_ops.rx_queue_setup = cn10k_nix_rx_queue_setup;
 }
 
 static int
diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h
index 1bf4a65..08e11bb 100644
--- a/drivers/net/cnxk/cn10k_ethdev.h
+++ b/drivers/net/cnxk/cn10k_ethdev.h
@@ -6,4 +6,18 @@
 
 #include 
 
+struct cn10k_eth_rxq {
+   uint64_t mbuf_initializer;
+   uintptr_t desc;
+   void *lookup_mem;
+   uintptr_t cq_door;
+   uint64_t wdata;
+   int64_t *cq_status;
+   uint32_t head;
+   uint32_t qmask;
+   uint32_t available;
+   uint16_t data_off;
+   uint16_t rq;
+} __plt_cache_aligned;
+
 #endif /* __CN10K_ETHDEV_H__ */
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 4f50949..79c30aa 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -4,6 +4,49 @@
 #include "cn9k_ethdev.h"
 
 static int
+cn9k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
+   uint16_t nb_
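
Seen from the application, the common/platform split above is invisible; an Rx
queue is still created with the standard call. A minimal sketch (descriptor
count and mempool are placeholders):

#include <rte_ethdev.h>
#include <rte_mempool.h>

static int
demo_setup_rxq(uint16_t port_id, uint16_t qid, struct rte_mempool *mp)
{
	/* 4096 descriptors also satisfies the CQ min-4K errata path above. */
	return rte_eth_rx_queue_setup(port_id, qid, 4096,
				      rte_eth_dev_socket_id(port_id),
				      NULL /* default rte_eth_rxconf */, mp);
}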

[dpdk-dev] [PATCH 08/44] net/cnxk: add Tx queue setup and release

2021-03-06 Thread Nithin Dabilpuram
Add Tx queue setup and release for CN9K and CN10K.
Release is common while setup is platform dependent due
to differences in fast path Tx queue structures.

Signed-off-by: Nithin Dabilpuram 
---
 doc/guides/nics/features/cnxk.ini |  1 +
 doc/guides/nics/features/cnxk_vec.ini |  1 +
 doc/guides/nics/features/cnxk_vf.ini  |  1 +
 drivers/net/cnxk/cn10k_ethdev.c   | 71 +
 drivers/net/cnxk/cn10k_ethdev.h   | 12 +
 drivers/net/cnxk/cn10k_tx.h   | 13 +
 drivers/net/cnxk/cn9k_ethdev.c| 69 
 drivers/net/cnxk/cn9k_ethdev.h| 10 
 drivers/net/cnxk/cn9k_tx.h| 13 +
 drivers/net/cnxk/cnxk_ethdev.c| 98 +++
 drivers/net/cnxk/cnxk_ethdev.h|  3 ++
 11 files changed, 292 insertions(+)
 create mode 100644 drivers/net/cnxk/cn10k_tx.h
 create mode 100644 drivers/net/cnxk/cn9k_tx.h

diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index a9d2b03..462d7c4 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -11,6 +11,7 @@ Multiprocess aware   = Y
 Link status  = Y
 Link status event= Y
 Runtime Rx queue setup = Y
+Runtime Tx queue setup = Y
 RSS hash = Y
 Inner RSS= Y
 Linux= Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index 6a8ca1f..09e0d3a 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -11,6 +11,7 @@ Multiprocess aware   = Y
 Link status  = Y
 Link status event= Y
 Runtime Rx queue setup = Y
+Runtime Tx queue setup = Y
 RSS hash = Y
 Inner RSS= Y
 Linux= Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini 
b/doc/guides/nics/features/cnxk_vf.ini
index f761638..4a93a35 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -10,6 +10,7 @@ Multiprocess aware   = Y
 Link status  = Y
 Link status event= Y
 Runtime Rx queue setup = Y
+Runtime Tx queue setup = Y
 RSS hash = Y
 Inner RSS= Y
 Linux= Y
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index f7e2f7b..e194b13 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -2,6 +2,76 @@
  * Copyright(C) 2021 Marvell.
  */
 #include "cn10k_ethdev.h"
+#include "cn10k_tx.h"
+
+static void
+nix_form_default_desc(struct cnxk_eth_dev *dev, struct cn10k_eth_txq *txq,
+ uint16_t qid)
+{
+   struct nix_send_ext_s *send_hdr_ext;
+   union nix_send_hdr_w0_u send_hdr_w0;
+   union nix_send_sg_s sg_w0;
+
+   RTE_SET_USED(dev);
+
+   /* Initialize the fields based on basic single segment packet */
+   memset(&txq->cmd, 0, sizeof(txq->cmd));
+   send_hdr_w0.u = 0;
+   sg_w0.u = 0;
+
+   if (dev->tx_offload_flags & NIX_TX_NEED_EXT_HDR) {
+   /* 2(HDR) + 2(EXT_HDR) + 1(SG) + 1(IOVA) = 6/2 - 1 = 2 */
+   send_hdr_w0.sizem1 = 2;
+
+   send_hdr_ext = (struct nix_send_ext_s *)&txq->cmd[0];
+   send_hdr_ext->w0.subdc = NIX_SUBDC_EXT;
+   } else {
+   /* 2(HDR) + 1(SG) + 1(IOVA) = 4/2 - 1 = 1 */
+   send_hdr_w0.sizem1 = 1;
+   }
+
+   send_hdr_w0.sq = qid;
+   sg_w0.subdc = NIX_SUBDC_SG;
+   sg_w0.segs = 1;
+   sg_w0.ld_type = NIX_SENDLDTYPE_LDD;
+
+   txq->send_hdr_w0 = send_hdr_w0.u;
+   txq->sg_w0 = sg_w0.u;
+
+   rte_wmb();
+}
+
+static int
+cn10k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
+uint16_t nb_desc, unsigned int socket,
+const struct rte_eth_txconf *tx_conf)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct cn10k_eth_txq *txq;
+   struct roc_nix_sq *sq;
+   int rc;
+
+   RTE_SET_USED(socket);
+
+   /* Common Tx queue setup */
+   rc = cnxk_nix_tx_queue_setup(eth_dev, qid, nb_desc,
+sizeof(struct cn10k_eth_txq), tx_conf);
+   if (rc)
+   return rc;
+
+   sq = &dev->sqs[qid];
+   /* Update fast path queue */
+   txq = eth_dev->data->tx_queues[qid];
+   txq->fc_mem = sq->fc;
+   /* Store lmt base in tx queue for easy access */
+   txq->lmt_base = dev->nix.lmt_base;
+   txq->io_addr = sq->io_addr;
+   txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
+   txq->sqes_per_sqb_log2 = sq->sqes_per_sqb_log2;
+
+   nix_form_default_desc(dev, txq, qid);
+   return 0;
+}
 
 static int
 cn10k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
@@ -76,6 +146,7 @@ nix_eth_dev_ops_override(void)
 
/* Update platform specific ops */
cnxk_eth_dev_ops.dev_configure = cn10k_nix_configure;
+   cnxk_eth_dev_ops.tx_queue_setup = cn10k_nix_tx_queu
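
The sizem1 values set in nix_form_default_desc() just encode the descriptor
length in 16-byte units minus one; a restatement of that arithmetic (the
helper is illustrative, not driver code):

#include <stdint.h>

/* sizem1 = (number of 64-bit words in the command) / 2 - 1 */
static inline uint8_t
demo_nix_sizem1(unsigned int nb_dwords)
{
	return (uint8_t)(nb_dwords / 2 - 1);
}

/* With extension header:    2 (HDR) + 2 (EXT_HDR) + 1 (SG) + 1 (IOVA) = 6 -> sizem1 = 2
 * Without extension header: 2 (HDR) + 1 (SG) + 1 (IOVA)               = 4 -> sizem1 = 1
 */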

[dpdk-dev] [PATCH 09/44] net/cnxk: add packet type support

2021-03-06 Thread Nithin Dabilpuram
Add support for packet type lookup on Rx to translate HW-specific
types to RTE_PTYPE_* defines.

Signed-off-by: Nithin Dabilpuram 
---
 doc/guides/nics/cnxk.rst  |   1 +
 doc/guides/nics/features/cnxk.ini |   1 +
 doc/guides/nics/features/cnxk_vec.ini |   1 +
 doc/guides/nics/features/cnxk_vf.ini  |   1 +
 drivers/net/cnxk/cn10k_ethdev.c   |  21 +++
 drivers/net/cnxk/cn10k_rx.h   |  11 ++
 drivers/net/cnxk/cn9k_ethdev.c|  21 +++
 drivers/net/cnxk/cn9k_rx.h|  12 ++
 drivers/net/cnxk/cnxk_ethdev.c|   2 +
 drivers/net/cnxk/cnxk_ethdev.h|  14 ++
 drivers/net/cnxk/cnxk_lookup.c| 326 ++
 drivers/net/cnxk/meson.build  |   3 +-
 12 files changed, 413 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/cnxk/cn10k_rx.h
 create mode 100644 drivers/net/cnxk/cn9k_rx.h
 create mode 100644 drivers/net/cnxk/cnxk_lookup.c

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index a982450..4f1b58c 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -16,6 +16,7 @@ Features
 
 Features of the CNXK Ethdev PMD are:
 
+- Packet type information
 - SR-IOV VF
 - Lock-free Tx queue
 - Multiple queues for TX and RX
diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index 462d7c4..503582c 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -14,6 +14,7 @@ Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
 RSS hash = Y
 Inner RSS= Y
+Packet type parsing  = Y
 Linux= Y
 ARMv8= Y
 Usage doc= Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index 09e0d3a..9ad225a 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -14,6 +14,7 @@ Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
 RSS hash = Y
 Inner RSS= Y
+Packet type parsing  = Y
 Linux= Y
 ARMv8= Y
 Usage doc= Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini 
b/doc/guides/nics/features/cnxk_vf.ini
index 4a93a35..8c93ba7 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -13,6 +13,7 @@ Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
 RSS hash = Y
 Inner RSS= Y
+Packet type parsing  = Y
 Linux= Y
 ARMv8= Y
 Usage doc= Y
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index e194b13..efd5b67 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -2,8 +2,25 @@
  * Copyright(C) 2021 Marvell.
  */
 #include "cn10k_ethdev.h"
+#include "cn10k_rx.h"
 #include "cn10k_tx.h"
 
+static int
+cn10k_nix_ptypes_set(struct rte_eth_dev *eth_dev, uint32_t ptype_mask)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+
+   if (ptype_mask) {
+   dev->rx_offload_flags |= NIX_RX_OFFLOAD_PTYPE_F;
+   dev->ptype_disable = 0;
+   } else {
+   dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_PTYPE_F;
+   dev->ptype_disable = 1;
+   }
+
+   return 0;
+}
+
 static void
 nix_form_default_desc(struct cnxk_eth_dev *dev, struct cn10k_eth_txq *txq,
  uint16_t qid)
@@ -113,6 +130,9 @@ cn10k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, 
uint16_t qid,
/* Data offset from data to start of mbuf is first_skip */
rxq->data_off = rq->first_skip;
rxq->mbuf_initializer = cnxk_nix_rxq_mbuf_setup(dev);
+
+   /* Lookup mem */
+   rxq->lookup_mem = cnxk_nix_fastpath_lookup_mem_get();
return 0;
 }
 
@@ -148,6 +168,7 @@ nix_eth_dev_ops_override(void)
cnxk_eth_dev_ops.dev_configure = cn10k_nix_configure;
cnxk_eth_dev_ops.tx_queue_setup = cn10k_nix_tx_queue_setup;
cnxk_eth_dev_ops.rx_queue_setup = cn10k_nix_rx_queue_setup;
+   cnxk_eth_dev_ops.dev_ptypes_set = cn10k_nix_ptypes_set;
 }
 
 static int
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
new file mode 100644
index 000..d3d1661
--- /dev/null
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#ifndef __CN10K_RX_H__
+#define __CN10K_RX_H__
+
+#include 
+
+#define NIX_RX_OFFLOAD_PTYPE_F  BIT(1)
+
+#endif /* __CN10K_RX_H__ */
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index e97ce15..3f3de4f 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -2,8 +2,25 @@
  * Copyright(C) 2021 Marvell.
  */
 #include "cn9k_ethdev.h"
+#include "cn9k_rx.h"
 #include "cn9k_tx.h"
 
+static int
+cn9k_nix_ptypes_set(struct rte_eth_dev *eth_dev, uint32_t ptype_mask)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+
+   if
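
Applications that do not need classification can turn the lookup off through
the standard API, which ends up in the ptypes_set op above; a short sketch:

#include <rte_ethdev.h>
#include <rte_mbuf_ptype.h>

/* Passing RTE_PTYPE_UNKNOWN as the mask disables ptype parsing on the port. */
static int
demo_disable_ptype_parsing(uint16_t port_id)
{
	return rte_eth_dev_set_ptypes(port_id, RTE_PTYPE_UNKNOWN, NULL, 0);
}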

[dpdk-dev] [PATCH 10/44] net/cnxk: add queue start and stop support

2021-03-06 Thread Nithin Dabilpuram
Add Rx/Tx queue start and stop callbacks for
CN9K and CN10K.

Signed-off-by: Nithin Dabilpuram 
---
 doc/guides/nics/features/cnxk.ini |  1 +
 doc/guides/nics/features/cnxk_vec.ini |  1 +
 doc/guides/nics/features/cnxk_vf.ini  |  1 +
 drivers/net/cnxk/cn10k_ethdev.c   | 16 ++
 drivers/net/cnxk/cn9k_ethdev.c| 16 ++
 drivers/net/cnxk/cnxk_ethdev.c| 92 +++
 drivers/net/cnxk/cnxk_ethdev.h|  1 +
 7 files changed, 128 insertions(+)

diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index 503582c..712f8d5 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -12,6 +12,7 @@ Link status  = Y
 Link status event= Y
 Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
+Queue start/stop = Y
 RSS hash = Y
 Inner RSS= Y
 Packet type parsing  = Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index 9ad225a..82f2af0 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -12,6 +12,7 @@ Link status  = Y
 Link status event= Y
 Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
+Queue start/stop = Y
 RSS hash = Y
 Inner RSS= Y
 Packet type parsing  = Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini 
b/doc/guides/nics/features/cnxk_vf.ini
index 8c93ba7..61fed11 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -11,6 +11,7 @@ Link status  = Y
 Link status event= Y
 Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
+Queue start/stop = Y
 RSS hash = Y
 Inner RSS= Y
 Packet type parsing  = Y
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index efd5b67..1a9fcbb 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -137,6 +137,21 @@ cn10k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, 
uint16_t qid,
 }
 
 static int
+cn10k_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+   struct cn10k_eth_txq *txq = eth_dev->data->tx_queues[qidx];
+   int rc;
+
+   rc = cnxk_nix_tx_queue_stop(eth_dev, qidx);
+   if (rc)
+   return rc;
+
+   /* Clear fc cache pkts to trigger worker stop */
+   txq->fc_cache_pkts = 0;
+   return 0;
+}
+
+static int
 cn10k_nix_configure(struct rte_eth_dev *eth_dev)
 {
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
@@ -168,6 +183,7 @@ nix_eth_dev_ops_override(void)
cnxk_eth_dev_ops.dev_configure = cn10k_nix_configure;
cnxk_eth_dev_ops.tx_queue_setup = cn10k_nix_tx_queue_setup;
cnxk_eth_dev_ops.rx_queue_setup = cn10k_nix_rx_queue_setup;
+   cnxk_eth_dev_ops.tx_queue_stop = cn10k_nix_tx_queue_stop;
cnxk_eth_dev_ops.dev_ptypes_set = cn10k_nix_ptypes_set;
 }
 
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 3f3de4f..3561632 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -135,6 +135,21 @@ cn9k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, 
uint16_t qid,
 }
 
 static int
+cn9k_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+   struct cn9k_eth_txq *txq = eth_dev->data->tx_queues[qidx];
+   int rc;
+
+   rc = cnxk_nix_tx_queue_stop(eth_dev, qidx);
+   if (rc)
+   return rc;
+
+   /* Clear fc cache pkts to trigger worker stop */
+   txq->fc_cache_pkts = 0;
+   return 0;
+}
+
+static int
 cn9k_nix_configure(struct rte_eth_dev *eth_dev)
 {
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
@@ -177,6 +192,7 @@ nix_eth_dev_ops_override(void)
cnxk_eth_dev_ops.dev_configure = cn9k_nix_configure;
cnxk_eth_dev_ops.tx_queue_setup = cn9k_nix_tx_queue_setup;
cnxk_eth_dev_ops.rx_queue_setup = cn9k_nix_rx_queue_setup;
+   cnxk_eth_dev_ops.tx_queue_stop = cn9k_nix_tx_queue_stop;
cnxk_eth_dev_ops.dev_ptypes_set = cn9k_nix_ptypes_set;
 }
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 96acf90..f1ba04f 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -819,12 +819,104 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
return rc;
 }
 
+static int
+cnxk_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qid)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct rte_eth_dev_data *data = eth_dev->data;
+   struct roc_nix_sq *sq = &dev->sqs[qid];
+   int rc = -EINVAL;
+
+   if (data->tx_queue_state[qid] == RTE_ETH_QUEUE_STATE_STARTED)
+   return 0;
+
+   rc = roc_nix_tm_sq_aura_fc(sq, true);
+   if (rc) {
+   plt_err("Failed to enable sq aura fc, txq=%u, rc=%d", qid, rc);
+   goto done;
+   }
+
+   data->tx_queue_state[qid] = 
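
These callbacks back the per-queue start/stop APIs; a minimal application-side
sketch that stops and restarts one Tx queue:

#include <rte_ethdev.h>

static int
demo_restart_txq(uint16_t port_id, uint16_t qid)
{
	int rc;

	rc = rte_eth_dev_tx_queue_stop(port_id, qid);
	if (rc != 0)
		return rc;

	return rte_eth_dev_tx_queue_start(port_id, qid);
}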

[dpdk-dev] [PATCH 11/44] net/cnxk: add Rx support for cn9k

2021-03-06 Thread Nithin Dabilpuram
From: Jerin Jacob 

Add Rx burst scalar version for CN9K.

Signed-off-by: Jerin Jacob 
---
 drivers/net/cnxk/cn9k_ethdev.h |   3 +
 drivers/net/cnxk/cn9k_rx.c | 124 +
 drivers/net/cnxk/cn9k_rx.h | 152 +
 drivers/net/cnxk/cnxk_ethdev.h |   3 +
 drivers/net/cnxk/meson.build   |   3 +-
 5 files changed, 284 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/cnxk/cn9k_rx.c

diff --git a/drivers/net/cnxk/cn9k_ethdev.h b/drivers/net/cnxk/cn9k_ethdev.h
index 9ebf68f..84dcc2c 100644
--- a/drivers/net/cnxk/cn9k_ethdev.h
+++ b/drivers/net/cnxk/cn9k_ethdev.h
@@ -30,4 +30,7 @@ struct cn9k_eth_rxq {
uint16_t rq;
 } __plt_cache_aligned;
 
+/* Rx and Tx routines */
+void cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev);
+
 #endif /* __CN9K_ETHDEV_H__ */
diff --git a/drivers/net/cnxk/cn9k_rx.c b/drivers/net/cnxk/cn9k_rx.c
new file mode 100644
index 000..1c05cf3
--- /dev/null
+++ b/drivers/net/cnxk/cn9k_rx.c
@@ -0,0 +1,124 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cn9k_ethdev.h"
+#include "cn9k_rx.h"
+
+#define CNXK_NIX_CQ_ENTRY_SZ 128
+#define NIX_DESCS_PER_LOOP   4
+#define CQE_CAST(x) ((struct nix_cqe_hdr_s *)(x))
+#define CQE_SZ(x)   ((x) * CNXK_NIX_CQ_ENTRY_SZ)
+
+static inline uint16_t
+nix_rx_nb_pkts(struct cn9k_eth_rxq *rxq, const uint64_t wdata,
+  const uint16_t pkts, const uint32_t qmask)
+{
+   uint32_t available = rxq->available;
+
+   /* Update the available count if cached value is not enough */
+   if (unlikely(available < pkts)) {
+   uint64_t reg, head, tail;
+
+   /* Use LDADDA version to avoid reorder */
+   reg = roc_atomic64_add_sync(wdata, rxq->cq_status);
+   /* CQ_OP_STATUS operation error */
+   if (reg & BIT_ULL(NIX_CQ_OP_STAT_OP_ERR) ||
+   reg & BIT_ULL(NIX_CQ_OP_STAT_CQ_ERR))
+   return 0;
+
+   tail = reg & 0xFFFFF;
+   head = (reg >> 20) & 0xFFFFF;
+   if (tail < head)
+   available = tail - head + qmask + 1;
+   else
+   available = tail - head;
+
+   rxq->available = available;
+   }
+
+   return RTE_MIN(pkts, available);
+}
+
+static __rte_always_inline uint16_t
+nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
+ const uint16_t flags)
+{
+   struct cn9k_eth_rxq *rxq = rx_queue;
+   const uint64_t mbuf_init = rxq->mbuf_initializer;
+   const void *lookup_mem = rxq->lookup_mem;
+   const uint64_t data_off = rxq->data_off;
+   const uintptr_t desc = rxq->desc;
+   const uint64_t wdata = rxq->wdata;
+   const uint32_t qmask = rxq->qmask;
+   uint16_t packets = 0, nb_pkts;
+   uint32_t head = rxq->head;
+   struct nix_cqe_hdr_s *cq;
+   struct rte_mbuf *mbuf;
+
+   nb_pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
+
+   while (packets < nb_pkts) {
+   /* Prefetch N desc ahead */
+   rte_prefetch_non_temporal(
+   (void *)(desc + (CQE_SZ((head + 2) & qmask))));
+   cq = (struct nix_cqe_hdr_s *)(desc + CQE_SZ(head));
+
+   mbuf = nix_get_mbuf_from_cqe(cq, data_off);
+
+   cn9k_nix_cqe_to_mbuf(cq, cq->tag, mbuf, lookup_mem, mbuf_init,
+flags);
+   rx_pkts[packets++] = mbuf;
+   roc_prefetch_store_keep(mbuf);
+   head++;
+   head &= qmask;
+   }
+
+   rxq->head = head;
+   rxq->available -= nb_pkts;
+
+   /* Free all the CQs that we've processed */
+   plt_write64((wdata | nb_pkts), rxq->cq_door);
+
+   return nb_pkts;
+}
+
+#define R(name, f3, f2, f1, f0, flags)\
+   static uint16_t __rte_noinline __rte_hot cn9k_nix_recv_pkts_##name(\
+   void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)  \
+   {  \
+   return nix_recv_pkts(rx_queue, rx_pkts, pkts, (flags));\
+   }
+
+NIX_RX_FASTPATH_MODES
+#undef R
+
+static inline void
+pick_rx_func(struct rte_eth_dev *eth_dev,
+const eth_rx_burst_t rx_burst[2][2][2][2])
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+
+   /* [MARK] [CKSUM] [PTYPE] [RSS] */
+   eth_dev->rx_pkt_burst = rx_burst
+   [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+   [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_CHECKSUM_F)]
+   [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)]
+   [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_RSS_F)];
+}
+
+void
+cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
+{
+   const eth_rx_burst_t nix_eth_rx_burst[2][2][2
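
The head/tail handling in nix_rx_nb_pkts() is ordinary ring arithmetic on the
CQ_OP_STATUS fields; a small restatement with concrete numbers (qmask is ring
size minus one):

#include <stdint.h>

static inline uint32_t
demo_cq_available(uint32_t head, uint32_t tail, uint32_t qmask)
{
	if (tail < head)                 /* tail has wrapped past the end */
		return tail - head + qmask + 1;
	return tail - head;
}

/* Example with a 4096-entry CQ (qmask = 4095):
 *   head = 4090, tail = 10  -> 10 - 4090 + 4096 = 16 entries ready
 *   head = 100,  tail = 110 -> 10 entries ready
 */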

[dpdk-dev] [PATCH 12/44] net/cnxk: add Rx multi-segmented version for cn9k

2021-03-06 Thread Nithin Dabilpuram
Add Rx burst multi-segmented version for CN9K.

Signed-off-by: Nithin Dabilpuram 
Signed-off-by: Pavan Nikhilesh 
---
 drivers/net/cnxk/cn9k_rx.c | 26 
 drivers/net/cnxk/cn9k_rx.h | 55 --
 drivers/net/cnxk/cnxk_ethdev.h |  3 +++
 3 files changed, 82 insertions(+), 2 deletions(-)

diff --git a/drivers/net/cnxk/cn9k_rx.c b/drivers/net/cnxk/cn9k_rx.c
index 1c05cf3..5535735 100644
--- a/drivers/net/cnxk/cn9k_rx.c
+++ b/drivers/net/cnxk/cn9k_rx.c
@@ -88,6 +88,15 @@ nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, 
uint16_t pkts,
void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)  \
{  \
return nix_recv_pkts(rx_queue, rx_pkts, pkts, (flags));\
+   }  \
+  \
+   static uint16_t __rte_noinline __rte_hot   \
+   cn9k_nix_recv_pkts_mseg_##name(void *rx_queue, \
+  struct rte_mbuf **rx_pkts,  \
+  uint16_t pkts)  \
+   {  \
+   return nix_recv_pkts(rx_queue, rx_pkts, pkts,  \
+(flags) | NIX_RX_MULTI_SEG_F);\
}
 
 NIX_RX_FASTPATH_MODES
@@ -110,6 +119,8 @@ pick_rx_func(struct rte_eth_dev *eth_dev,
 void
 cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 {
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+
const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2] = {
 #define R(name, f3, f2, f1, f0, flags) \
[f3][f2][f1][f0] = cn9k_nix_recv_pkts_##name,
@@ -118,7 +129,22 @@ cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 #undef R
};
 
+   const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+   [f3][f2][f1][f0] = cn9k_nix_recv_pkts_mseg_##name,
+
+   NIX_RX_FASTPATH_MODES
+#undef R
+   };
+
pick_rx_func(eth_dev, nix_eth_rx_burst);
 
+   if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+   pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
+
+   /* Copy multi seg version with no offload for tear down sequence */
+   if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+   dev->rx_pkt_burst_no_offload =
+   nix_eth_rx_burst_mseg[0][0][0][0];
rte_mb();
 }
diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h
index 949fd95..a6b245f 100644
--- a/drivers/net/cnxk/cn9k_rx.h
+++ b/drivers/net/cnxk/cn9k_rx.h
@@ -99,6 +99,53 @@ nix_update_match_id(const uint16_t match_id, uint64_t 
ol_flags,
 }
 
 static __rte_always_inline void
+nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
+   uint64_t rearm)
+{
+   const rte_iova_t *iova_list;
+   struct rte_mbuf *head;
+   const rte_iova_t *eol;
+   uint8_t nb_segs;
+   uint64_t sg;
+
+   sg = *(const uint64_t *)(rx + 1);
+   nb_segs = (sg >> 48) & 0x3;
+   mbuf->nb_segs = nb_segs;
+   mbuf->data_len = sg & 0xFFFF;
+   sg = sg >> 16;
+
+   eol = ((const rte_iova_t *)(rx + 1) +
+  ((rx->cn9k.desc_sizem1 + 1) << 1));
+   /* Skip SG_S and first IOVA*/
+   iova_list = ((const rte_iova_t *)(rx + 1)) + 2;
+   nb_segs--;
+
+   rearm = rearm & ~0xFFFF;
+
+   head = mbuf;
+   while (nb_segs) {
+   mbuf->next = ((struct rte_mbuf *)*iova_list) - 1;
+   mbuf = mbuf->next;
+
+   __mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+
+   mbuf->data_len = sg & 0xFFFF;
+   sg = sg >> 16;
+   *(uint64_t *)(&mbuf->rearm_data) = rearm;
+   nb_segs--;
+   iova_list++;
+
+   if (!nb_segs && (iova_list + 1 < eol)) {
+   sg = *(const uint64_t *)(iova_list);
+   nb_segs = (sg >> 48) & 0x3;
+   head->nb_segs += nb_segs;
+   iova_list = (const rte_iova_t *)(iova_list + 1);
+   }
+   }
+   mbuf->next = NULL;
+}
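
nix_cqe_xtract_mseg() above walks the NIX_RX_SG_S descriptor: up to three
16-bit segment lengths packed above the 16-bit header word, plus a list of
buffer addresses, become an mbuf chain. A simplified model of that walk, with
illustrative types instead of rte_mbuf and ignoring the follow-on SG_S entries
the real code chases for more than three segments, is:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct seg {
        struct seg *next;
        uint16_t data_len;
};

static void chain_segs(struct seg *head, uint64_t sg_w, struct seg **seg_list)
{
        uint8_t nb_segs = (sg_w >> 48) & 0x3;   /* segment count, as in SG_S */
        uint64_t sg = sg_w;
        struct seg *cur = head;
        uint8_t i;

        cur->data_len = sg & 0xFFFF;            /* first length in bits 15:0 */
        sg >>= 16;

        for (i = 1; i < nb_segs; i++) {
                cur->next = seg_list[i - 1];    /* next buffer from the list */
                cur = cur->next;
                cur->data_len = sg & 0xFFFF;    /* next 16-bit length */
                sg >>= 16;
        }
        cur->next = NULL;
}

int main(void)
{
        struct seg s0 = {0}, s1 = {0}, s2 = {0};
        struct seg *list[] = { &s1, &s2 };
        /* three segments of 1500, 1500 and 520 bytes (example values) */
        uint64_t sg_w = (3ULL << 48) | (520ULL << 32) | (1500ULL << 16) | 1500ULL;

        chain_segs(&s0, sg_w, list);
        printf("%u %u %u\n", (unsigned)s0.data_len,
               (unsigned)s0.next->data_len, (unsigned)s0.next->next->data_len);
        return 0;
}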
+
+static __rte_always_inline void
 cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 struct rte_mbuf *mbuf, const void *lookup_mem,
 const uint64_t val, const uint16_t flag)
@@ -133,8 +180,12 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const 
uint32_t tag,
*(uint64_t *)(&mbuf->rearm_data) = val;
mbuf->pkt_len = len;
 
-   mbuf->data_len = len;
-   mbuf->next = NULL;
+   

[dpdk-dev] [PATCH 13/44] net/cnxk: add Rx vector version for cn9k

2021-03-06 Thread Nithin Dabilpuram
From: Jerin Jacob 

Add Rx burst vector version for CN9K.

Signed-off-by: Jerin Jacob 
Signed-off-by: Nithin Dabilpuram 
---
 drivers/net/cnxk/cn9k_rx.c | 240 -
 1 file changed, 239 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cnxk/cn9k_rx.c b/drivers/net/cnxk/cn9k_rx.c
index 5535735..391f1e2 100644
--- a/drivers/net/cnxk/cn9k_rx.c
+++ b/drivers/net/cnxk/cn9k_rx.c
@@ -2,6 +2,8 @@
  * Copyright(C) 2021 Marvell.
  */
 
+#include 
+
 #include "cn9k_ethdev.h"
 #include "cn9k_rx.h"
 
@@ -83,6 +85,223 @@ nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, 
uint16_t pkts,
return nb_pkts;
 }
 
+#if defined(RTE_ARCH_ARM64)
+
+static __rte_always_inline uint16_t
+nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
+const uint16_t flags)
+{
+   struct cn9k_eth_rxq *rxq = rx_queue;
+   uint16_t packets = 0;
+   uint64x2_t cq0_w8, cq1_w8, cq2_w8, cq3_w8, mbuf01, mbuf23;
+   const uint64_t mbuf_initializer = rxq->mbuf_initializer;
+   const uint64x2_t data_off = vdupq_n_u64(rxq->data_off);
+   uint64_t ol_flags0, ol_flags1, ol_flags2, ol_flags3;
+   uint64x2_t rearm0 = vdupq_n_u64(mbuf_initializer);
+   uint64x2_t rearm1 = vdupq_n_u64(mbuf_initializer);
+   uint64x2_t rearm2 = vdupq_n_u64(mbuf_initializer);
+   uint64x2_t rearm3 = vdupq_n_u64(mbuf_initializer);
+   struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+   const uint16_t *lookup_mem = rxq->lookup_mem;
+   const uint32_t qmask = rxq->qmask;
+   const uint64_t wdata = rxq->wdata;
+   const uintptr_t desc = rxq->desc;
+   uint8x16_t f0, f1, f2, f3;
+   uint32_t head = rxq->head;
+   uint16_t pkts_left;
+
+   pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
+   pkts_left = pkts & (NIX_DESCS_PER_LOOP - 1);
+
+   /* Packets has to be floor-aligned to NIX_DESCS_PER_LOOP */
+   pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
+
+   while (packets < pkts) {
+   /* Exit loop if head is about to wrap and become unaligned */
+   if (((head + NIX_DESCS_PER_LOOP - 1) & qmask) <
+   NIX_DESCS_PER_LOOP) {
+   pkts_left += (pkts - packets);
+   break;
+   }
+
+   const uintptr_t cq0 = desc + CQE_SZ(head);
+
+   /* Prefetch N desc ahead */
+   rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(8)));
+   rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(9)));
+   rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(10)));
+   rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(11)));
+
+   /* Get NIX_RX_SG_S for size and buffer pointer */
+   cq0_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(0) + 64));
+   cq1_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(1) + 64));
+   cq2_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(2) + 64));
+   cq3_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(3) + 64));
+
+   /* Extract mbuf from NIX_RX_SG_S */
+   mbuf01 = vzip2q_u64(cq0_w8, cq1_w8);
+   mbuf23 = vzip2q_u64(cq2_w8, cq3_w8);
+   mbuf01 = vqsubq_u64(mbuf01, data_off);
+   mbuf23 = vqsubq_u64(mbuf23, data_off);
+
+   /* Move mbufs to scalar registers for future use */
+   mbuf0 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 0);
+   mbuf1 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 1);
+   mbuf2 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 0);
+   mbuf3 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 1);
+
+   /* Mask to get packet len from NIX_RX_SG_S */
+   const uint8x16_t shuf_msk = {
+   0xFF, 0xFF, /* pkt_type set as unknown */
+   0xFF, 0xFF, /* pkt_type set as unknown */
+   0,1,/* octet 1~0, low 16 bits pkt_len */
+   0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
+   0,1,/* octet 1~0, 16 bits data_len */
+   0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF};
+
+   /* Form the rx_descriptor_fields1 with pkt_len and data_len */
+   f0 = vqtbl1q_u8(cq0_w8, shuf_msk);
+   f1 = vqtbl1q_u8(cq1_w8, shuf_msk);
+   f2 = vqtbl1q_u8(cq2_w8, shuf_msk);
+   f3 = vqtbl1q_u8(cq3_w8, shuf_msk);
+
+   /* Load CQE word0 and word 1 */
+   uint64_t cq0_w0 = ((uint64_t *)(cq0 + CQE_SZ(0)))[0];
+   uint64_t cq0_w1 = ((uint64_t *)(cq0 + CQE_SZ(0)))[1];
+   uint64_t cq1_w0 = ((uint64_t *)(cq0 + CQE_SZ(1)))[0];
+   uint64_t cq1_w1 = ((uint64_t *)(cq0 + CQE_SZ(1)))[1];
+   uint64_t cq2_w0 = ((uint64_t *)(cq0 + CQE_SZ(2)))[0];
+   uint64_t cq2_w1 = ((uint64_t *)(cq0 + CQE_SZ(2)))[1];
+   uint64_t cq3_w0 = (
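
The shuf_msk table lookup above does double duty: vqtbl1q_u8() copies the
selected CQE bytes into the rx_descriptor_fields1 layout, and the out-of-range
0xFF indices yield zero bytes, so the unknown pkt_type and the upper pkt_len
bits are cleared by the same instruction. A small self-contained AArch64
sketch of that trick, with made-up input values, is:

#include <arm_neon.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* Stand-in for NIX_RX_SG_S word 8: low 16 bits carry the length. */
        uint8_t cqe_w8[16] = {0x2a, 0x05, 0, 0, 0, 0, 0, 0,
                              0, 0, 0, 0, 0, 0, 0, 0};
        const uint8x16_t shuf_msk = {0xFF, 0xFF, 0xFF, 0xFF, /* pkt_type = 0 */
                                     0, 1, 0xFF, 0xFF,       /* pkt_len      */
                                     0, 1, 0xFF, 0xFF,       /* data_len     */
                                     0xFF, 0xFF, 0xFF, 0xFF};
        uint8x16_t w8 = vld1q_u8(cqe_w8);
        uint8x16_t fields = vqtbl1q_u8(w8, shuf_msk);
        uint8_t out[16];

        vst1q_u8(out, fields);
        printf("pkt_len=%u data_len=%u\n",
               (unsigned)(out[4] | out[5] << 8),
               (unsigned)(out[8] | out[9] << 8));
        return 0;
}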

[dpdk-dev] [PATCH 14/44] net/cnxk: add Tx support for cn9k

2021-03-06 Thread Nithin Dabilpuram
From: Jerin Jacob 

Add Tx burst scalar version for CN9K.

Signed-off-by: Jerin Jacob 
Signed-off-by: Nithin Dabilpuram 
Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Harman Kalra 
---
 drivers/net/cnxk/cn9k_ethdev.h |   1 +
 drivers/net/cnxk/cn9k_tx.c | 103 
 drivers/net/cnxk/cn9k_tx.h | 357 +
 drivers/net/cnxk/cnxk_ethdev.h |  71 
 drivers/net/cnxk/meson.build   |   3 +-
 5 files changed, 534 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/cnxk/cn9k_tx.c

diff --git a/drivers/net/cnxk/cn9k_ethdev.h b/drivers/net/cnxk/cn9k_ethdev.h
index 84dcc2c..cd0938f 100644
--- a/drivers/net/cnxk/cn9k_ethdev.h
+++ b/drivers/net/cnxk/cn9k_ethdev.h
@@ -32,5 +32,6 @@ struct cn9k_eth_rxq {
 
 /* Rx and Tx routines */
 void cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev);
+void cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev);
 
 #endif /* __CN9K_ETHDEV_H__ */
diff --git a/drivers/net/cnxk/cn9k_tx.c b/drivers/net/cnxk/cn9k_tx.c
new file mode 100644
index 000..06e9618
--- /dev/null
+++ b/drivers/net/cnxk/cn9k_tx.c
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cn9k_ethdev.h"
+#include "cn9k_tx.h"
+
+#define NIX_XMIT_FC_OR_RETURN(txq, pkts)                                      \
+   do {   \
+   /* Cached value is low, Update the fc_cache_pkts */\
+   if (unlikely((txq)->fc_cache_pkts < (pkts))) { \
+   /* Multiply with sqe_per_sqb to express in pkts */ \
+   (txq)->fc_cache_pkts = \
+   ((txq)->nb_sqb_bufs_adj - *(txq)->fc_mem)  \
+   << (txq)->sqes_per_sqb_log2;   \
+   /* Check it again for the room */  \
+   if (unlikely((txq)->fc_cache_pkts < (pkts)))   \
+   return 0;  \
+   }  \
+   } while (0)
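
NIX_XMIT_FC_OR_RETURN() refreshes the packet budget from the SQB-level
flow-control counter only when the cached budget runs low. A worked example of
that arithmetic with invented numbers, treating *fc_mem simply as the count of
SQBs currently consumed (an assumption made for illustration), is:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t nb_sqb_bufs_adj = 512;  /* SQBs backing this SQ (example) */
        uint64_t fc_mem = 500;           /* SQBs consumed so far (example) */
        uint16_t sqes_per_sqb_log2 = 5;  /* 32 SQEs per SQB (example) */

        /* 12 free SQBs * 32 SQEs each = room for 384 more packets */
        uint64_t fc_cache_pkts =
                (nb_sqb_bufs_adj - fc_mem) << sqes_per_sqb_log2;

        printf("can enqueue up to %llu packets\n",
               (unsigned long long)fc_cache_pkts);
        return 0;
}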
+
+static __rte_always_inline uint16_t
+nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
+ uint64_t *cmd, const uint16_t flags)
+{
+   struct cn9k_eth_txq *txq = tx_queue;
+   uint16_t i;
+   const rte_iova_t io_addr = txq->io_addr;
+   void *lmt_addr = txq->lmt_addr;
+
+   NIX_XMIT_FC_OR_RETURN(txq, pkts);
+
+   roc_lmt_mov(cmd, &txq->cmd[0], cn9k_nix_tx_ext_subs(flags));
+
+   /* Perform header writes before barrier for TSO */
+   if (flags & NIX_TX_OFFLOAD_TSO_F) {
+   for (i = 0; i < pkts; i++)
+   cn9k_nix_xmit_prepare_tso(tx_pkts[i], flags);
+   }
+
+   /* Lets commit any changes in the packet here as no further changes
+* to the packet will be done unless no fast free is enabled.
+*/
+   if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
+   rte_io_wmb();
+
+   for (i = 0; i < pkts; i++) {
+   cn9k_nix_xmit_prepare(tx_pkts[i], cmd, flags);
+   cn9k_nix_xmit_one(cmd, lmt_addr, io_addr, flags);
+   }
+
+   /* Reduce the cached count */
+   txq->fc_cache_pkts -= pkts;
+
+   return pkts;
+}
+
+#define T(name, f4, f3, f2, f1, f0, sz, flags)\
+   static uint16_t __rte_noinline __rte_hot cn9k_nix_xmit_pkts_##name(\
+   void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts)  \
+   {  \
+   uint64_t cmd[sz];  \
+  \
+   /* For TSO inner checksum is a must */ \
+   if (((flags) & NIX_TX_OFFLOAD_TSO_F) &&\
+   !((flags) & NIX_TX_OFFLOAD_L3_L4_CSUM_F))  \
+   return 0;  \
+   return nix_xmit_pkts(tx_queue, tx_pkts, pkts, cmd, flags); \
+   }
+
+NIX_TX_FASTPATH_MODES
+#undef T
+
+static inline void
+pick_tx_func(struct rte_eth_dev *eth_dev,
+const eth_tx_burst_t tx_burst[2][2][2][2][2])
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+
+   /* [TSO] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
+   eth_dev->tx_pkt_burst = tx_burst
+   [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSO_F)]
+   [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+   [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+   [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+   [!!(dev->tx_offload_flag

[dpdk-dev] [PATCH 15/44] net/cnxk: add Tx multi-segment version for cn9k

2021-03-06 Thread Nithin Dabilpuram
Add Tx burst multi-segment version for CN9K.

Signed-off-by: Nithin Dabilpuram 
Signed-off-by: Pavan Nikhilesh 
---
 drivers/net/cnxk/cn9k_tx.c |  70 +++
 drivers/net/cnxk/cn9k_tx.h | 105 +
 drivers/net/cnxk/cnxk_ethdev.h |   4 ++
 3 files changed, 179 insertions(+)

diff --git a/drivers/net/cnxk/cn9k_tx.c b/drivers/net/cnxk/cn9k_tx.c
index 06e9618..a474eb5 100644
--- a/drivers/net/cnxk/cn9k_tx.c
+++ b/drivers/net/cnxk/cn9k_tx.c
@@ -55,6 +55,44 @@ nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, 
uint16_t pkts,
return pkts;
 }
 
+static __rte_always_inline uint16_t
+nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
+  uint64_t *cmd, const uint16_t flags)
+{
+   struct cn9k_eth_txq *txq = tx_queue;
+   uint64_t i;
+   const rte_iova_t io_addr = txq->io_addr;
+   void *lmt_addr = txq->lmt_addr;
+   uint16_t segdw;
+
+   NIX_XMIT_FC_OR_RETURN(txq, pkts);
+
+   roc_lmt_mov(cmd, &txq->cmd[0], cn9k_nix_tx_ext_subs(flags));
+
+   /* Perform header writes before barrier for TSO */
+   if (flags & NIX_TX_OFFLOAD_TSO_F) {
+   for (i = 0; i < pkts; i++)
+   cn9k_nix_xmit_prepare_tso(tx_pkts[i], flags);
+   }
+
+   /* Lets commit any changes in the packet here as no further changes
+* to the packet will be done unless no fast free is enabled.
+*/
+   if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
+   rte_io_wmb();
+
+   for (i = 0; i < pkts; i++) {
+   cn9k_nix_xmit_prepare(tx_pkts[i], cmd, flags);
+   segdw = cn9k_nix_prepare_mseg(tx_pkts[i], cmd, flags);
+   cn9k_nix_xmit_mseg_one(cmd, lmt_addr, io_addr, segdw);
+   }
+
+   /* Reduce the cached count */
+   txq->fc_cache_pkts -= pkts;
+
+   return pkts;
+}
+
 #define T(name, f4, f3, f2, f1, f0, sz, flags)\
static uint16_t __rte_noinline __rte_hot cn9k_nix_xmit_pkts_##name(\
void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts)  \
@@ -71,6 +109,25 @@ nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, 
uint16_t pkts,
 NIX_TX_FASTPATH_MODES
 #undef T
 
+#define T(name, f4, f3, f2, f1, f0, sz, flags)\
+   static uint16_t __rte_noinline __rte_hot   \
+   cn9k_nix_xmit_pkts_mseg_##name(void *tx_queue, \
+  struct rte_mbuf **tx_pkts,  \
+  uint16_t pkts)  \
+   {  \
+   uint64_t cmd[(sz) + CNXK_NIX_TX_MSEG_SG_DWORDS - 2];   \
+  \
+   /* For TSO inner checksum is a must */ \
+   if (((flags) & NIX_TX_OFFLOAD_TSO_F) &&\
+   !((flags) & NIX_TX_OFFLOAD_L3_L4_CSUM_F))  \
+   return 0;  \
+   return nix_xmit_pkts_mseg(tx_queue, tx_pkts, pkts, cmd,\
+ (flags) | NIX_TX_MULTI_SEG_F);   \
+   }
+
+NIX_TX_FASTPATH_MODES
+#undef T
+
 static inline void
 pick_tx_func(struct rte_eth_dev *eth_dev,
 const eth_tx_burst_t tx_burst[2][2][2][2][2])
@@ -89,6 +146,8 @@ pick_tx_func(struct rte_eth_dev *eth_dev,
 void
 cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 {
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+
const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                \
[f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_##name,
@@ -97,7 +156,18 @@ cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 #undef T
};
 
+   const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                \
+   [f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_mseg_##name,
+
+   NIX_TX_FASTPATH_MODES
+#undef T
+   };
+
pick_tx_func(eth_dev, nix_eth_tx_burst);
 
+   if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+   pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
+
rte_mb();
 }
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index 5f915e8..d653b3c 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -294,6 +294,111 @@ cn9k_nix_xmit_submit_lmt_release(const rte_iova_t io_addr)
return roc_lmt_submit_ldeorl(io_addr);
 }
 
+static __rte_always_inline uint16_t
+cn9k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
+{
+   struct nix_send_

[dpdk-dev] [PATCH 16/44] net/cnxk: add Tx vector version for cn9k

2021-03-06 Thread Nithin Dabilpuram
Add Tx burst vector version for CN9K.

Signed-off-by: Nithin Dabilpuram 
Signed-off-by: Pavan Nikhilesh 
---
 drivers/net/cnxk/cn9k_tx.c | 951 -
 1 file changed, 950 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cnxk/cn9k_tx.c b/drivers/net/cnxk/cn9k_tx.c
index a474eb5..300ccd2 100644
--- a/drivers/net/cnxk/cn9k_tx.c
+++ b/drivers/net/cnxk/cn9k_tx.c
@@ -2,6 +2,8 @@
  * Copyright(C) 2021 Marvell.
  */
 
+#include 
+
 #include "cn9k_ethdev.h"
 #include "cn9k_tx.h"
 
@@ -93,6 +95,921 @@ nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf 
**tx_pkts, uint16_t pkts,
return pkts;
 }
 
+#if defined(RTE_ARCH_ARM64)
+
+#define NIX_DESCS_PER_LOOP 4
+static __rte_always_inline uint16_t
+nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
+uint64_t *cmd, const uint16_t flags)
+{
+   uint64x2_t dataoff_iova0, dataoff_iova1, dataoff_iova2, dataoff_iova3;
+   uint64x2_t len_olflags0, len_olflags1, len_olflags2, len_olflags3;
+   uint64_t *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+   uint64x2_t senddesc01_w0, senddesc23_w0;
+   uint64x2_t senddesc01_w1, senddesc23_w1;
+   uint64x2_t sgdesc01_w0, sgdesc23_w0;
+   uint64x2_t sgdesc01_w1, sgdesc23_w1;
+   struct cn9k_eth_txq *txq = tx_queue;
+   uint64_t *lmt_addr = txq->lmt_addr;
+   rte_iova_t io_addr = txq->io_addr;
+   uint64x2_t ltypes01, ltypes23;
+   uint64x2_t xtmp128, ytmp128;
+   uint64x2_t xmask01, xmask23;
+   uint64x2_t cmd00, cmd01;
+   uint64x2_t cmd10, cmd11;
+   uint64x2_t cmd20, cmd21;
+   uint64x2_t cmd30, cmd31;
+   uint64_t lmt_status, i;
+   uint16_t pkts_left;
+
+   NIX_XMIT_FC_OR_RETURN(txq, pkts);
+
+   pkts_left = pkts & (NIX_DESCS_PER_LOOP - 1);
+   pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
+
+   /* Reduce the cached count */
+   txq->fc_cache_pkts -= pkts;
+
+   /* Lets commit any changes in the packet here as no further changes
+* to the packet will be done unless no fast free is enabled.
+*/
+   if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
+   rte_io_wmb();
+
+   senddesc01_w0 = vld1q_dup_u64(&txq->cmd[0]);
+   senddesc23_w0 = senddesc01_w0;
+   senddesc01_w1 = vdupq_n_u64(0);
+   senddesc23_w1 = senddesc01_w1;
+   sgdesc01_w0 = vld1q_dup_u64(&txq->cmd[2]);
+   sgdesc23_w0 = sgdesc01_w0;
+
+   for (i = 0; i < pkts; i += NIX_DESCS_PER_LOOP) {
+   /* Clear lower 32bit of SEND_HDR_W0 and SEND_SG_W0 */
+   senddesc01_w0 =
+   vbicq_u64(senddesc01_w0, vdupq_n_u64(0xFFFFFFFF));
+   sgdesc01_w0 = vbicq_u64(sgdesc01_w0, vdupq_n_u64(0xFFFFFFFF));
+
+   senddesc23_w0 = senddesc01_w0;
+   sgdesc23_w0 = sgdesc01_w0;
+
+   /* Move mbufs to iova */
+   mbuf0 = (uint64_t *)tx_pkts[0];
+   mbuf1 = (uint64_t *)tx_pkts[1];
+   mbuf2 = (uint64_t *)tx_pkts[2];
+   mbuf3 = (uint64_t *)tx_pkts[3];
+
+   mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+offsetof(struct rte_mbuf, buf_iova));
+   mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+offsetof(struct rte_mbuf, buf_iova));
+   mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+offsetof(struct rte_mbuf, buf_iova));
+   mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+offsetof(struct rte_mbuf, buf_iova));
+   /*
+* Get mbuf's, olflags, iova, pktlen, dataoff
+* dataoff_iovaX.D[0] = iova,
+* dataoff_iovaX.D[1](15:0) = mbuf->dataoff
+* len_olflagsX.D[0] = ol_flags,
+* len_olflagsX.D[1](63:32) = mbuf->pkt_len
+*/
+   dataoff_iova0 = vld1q_u64(mbuf0);
+   len_olflags0 = vld1q_u64(mbuf0 + 2);
+   dataoff_iova1 = vld1q_u64(mbuf1);
+   len_olflags1 = vld1q_u64(mbuf1 + 2);
+   dataoff_iova2 = vld1q_u64(mbuf2);
+   len_olflags2 = vld1q_u64(mbuf2 + 2);
+   dataoff_iova3 = vld1q_u64(mbuf3);
+   len_olflags3 = vld1q_u64(mbuf3 + 2);
+
+   if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+   struct rte_mbuf *mbuf;
+   /* Set don't free bit if reference count > 1 */
+   xmask01 = vdupq_n_u64(0);
+   xmask23 = xmask01;
+
+   mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
+  offsetof(struct rte_mbuf,
+   buf_iova));
+
+   if (cnxk_nix_prefree_seg(mbuf))
+   vsetq_lane_u64(0x8, xmask01, 0);
+   else
+   

[dpdk-dev] [PATCH 17/44] net/cnxk: add Rx support for cn10k

2021-03-06 Thread Nithin Dabilpuram
From: Jerin Jacob 

Add Rx burst support for CN10K SoC.

Signed-off-by: Jerin Jacob 
Signed-off-by: Nithin Dabilpuram 
Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Harman Kalra 
---
 drivers/net/cnxk/cn10k_ethdev.h |   3 +
 drivers/net/cnxk/cn10k_rx.c | 123 
 drivers/net/cnxk/cn10k_rx.h | 151 
 drivers/net/cnxk/meson.build|   3 +-
 4 files changed, 279 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/cnxk/cn10k_rx.c

diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h
index 2157b16..e4332d3 100644
--- a/drivers/net/cnxk/cn10k_ethdev.h
+++ b/drivers/net/cnxk/cn10k_ethdev.h
@@ -32,4 +32,7 @@ struct cn10k_eth_rxq {
uint16_t rq;
 } __plt_cache_aligned;
 
+/* Rx and Tx routines */
+void cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev);
+
 #endif /* __CN10K_ETHDEV_H__ */
diff --git a/drivers/net/cnxk/cn10k_rx.c b/drivers/net/cnxk/cn10k_rx.c
new file mode 100644
index 000..1ff1b04
--- /dev/null
+++ b/drivers/net/cnxk/cn10k_rx.c
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cn10k_ethdev.h"
+#include "cn10k_rx.h"
+
+#define CNXK_NIX_CQ_ENTRY_SZ 128
+#define NIX_DESCS_PER_LOOP   4
+#define CQE_CAST(x) ((struct nix_cqe_hdr_s *)(x))
+#define CQE_SZ(x)   ((x) * CNXK_NIX_CQ_ENTRY_SZ)
+
+static inline uint16_t
+nix_rx_nb_pkts(struct cn10k_eth_rxq *rxq, const uint64_t wdata,
+  const uint16_t pkts, const uint32_t qmask)
+{
+   uint32_t available = rxq->available;
+
+   /* Update the available count if cached value is not enough */
+   if (unlikely(available < pkts)) {
+   uint64_t reg, head, tail;
+
+   /* Use LDADDA version to avoid reorder */
+   reg = roc_atomic64_add_sync(wdata, rxq->cq_status);
+   /* CQ_OP_STATUS operation error */
+   if (reg & BIT_ULL(NIX_CQ_OP_STAT_OP_ERR) ||
+   reg & BIT_ULL(NIX_CQ_OP_STAT_CQ_ERR))
+   return 0;
+
+   tail = reg & 0xFFFFF;
+   head = (reg >> 20) & 0xFFFFF;
+   if (tail < head)
+   available = tail - head + qmask + 1;
+   else
+   available = tail - head;
+
+   rxq->available = available;
+   }
+
+   return RTE_MIN(pkts, available);
+}
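
The head/tail arithmetic above is the usual occupancy computation for a
power-of-two ring: when the tail index has wrapped past zero, the distance is
corrected by the ring size (qmask + 1). In isolation, with example values:

#include <stdint.h>
#include <stdio.h>

static uint32_t cq_available(uint32_t head, uint32_t tail, uint32_t qmask)
{
        if (tail < head)
                return tail - head + qmask + 1;  /* tail wrapped past zero */
        return tail - head;
}

int main(void)
{
        uint32_t qmask = 1024 - 1;                     /* 1K-entry CQ (example) */

        printf("%u\n", cq_available(10, 74, qmask));   /* prints 64 */
        printf("%u\n", cq_available(1000, 40, qmask)); /* prints 64 */
        return 0;
}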
+
+static __rte_always_inline uint16_t
+nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
+ const uint16_t flags)
+{
+   struct cn10k_eth_rxq *rxq = rx_queue;
+   const uint64_t mbuf_init = rxq->mbuf_initializer;
+   const void *lookup_mem = rxq->lookup_mem;
+   const uint64_t data_off = rxq->data_off;
+   const uintptr_t desc = rxq->desc;
+   const uint64_t wdata = rxq->wdata;
+   const uint32_t qmask = rxq->qmask;
+   uint16_t packets = 0, nb_pkts;
+   uint32_t head = rxq->head;
+   struct nix_cqe_hdr_s *cq;
+   struct rte_mbuf *mbuf;
+
+   nb_pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
+
+   while (packets < nb_pkts) {
+   /* Prefetch N desc ahead */
+   rte_prefetch_non_temporal(
+   (void *)(desc + (CQE_SZ((head + 2) & qmask))));
+   cq = (struct nix_cqe_hdr_s *)(desc + CQE_SZ(head));
+
+   mbuf = nix_get_mbuf_from_cqe(cq, data_off);
+
+   cn10k_nix_cqe_to_mbuf(cq, cq->tag, mbuf, lookup_mem, mbuf_init,
+ flags);
+   rx_pkts[packets++] = mbuf;
+   roc_prefetch_store_keep(mbuf);
+   head++;
+   head &= qmask;
+   }
+
+   rxq->head = head;
+   rxq->available -= nb_pkts;
+
+   /* Free all the CQs that we've processed */
+   plt_write64((wdata | nb_pkts), rxq->cq_door);
+
+   return nb_pkts;
+}
+
+#define R(name, f3, f2, f1, f0, flags)\
+   static uint16_t __rte_noinline __rte_hot cn10k_nix_recv_pkts_##name(   \
+   void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)  \
+   {  \
+   return nix_recv_pkts(rx_queue, rx_pkts, pkts, (flags));\
+   }
+
+NIX_RX_FASTPATH_MODES
+#undef R
+
+static inline void
+pick_rx_func(struct rte_eth_dev *eth_dev,
+const eth_rx_burst_t rx_burst[2][2][2][2])
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+
+   /* [MARK] [CKSUM] [PTYPE] [RSS] */
+   eth_dev->rx_pkt_burst = rx_burst
+   [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+   [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_CHECKSUM_F)]
+   [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)]
+   [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_RSS_F)];
+}
+
+void
+cn10k_eth_set_rx_function(struct rte_e

[dpdk-dev] [PATCH 18/44] net/cnxk: add Rx multi-segment version for cn10k

2021-03-06 Thread Nithin Dabilpuram
Add Rx burst multi-segment version for CN10K.

Signed-off-by: Nithin Dabilpuram 
Signed-off-by: Pavan Nikhilesh 
---
 doc/guides/nics/cnxk.rst  |  2 ++
 doc/guides/nics/features/cnxk.ini |  2 ++
 doc/guides/nics/features/cnxk_vec.ini |  1 +
 doc/guides/nics/features/cnxk_vf.ini  |  2 ++
 drivers/net/cnxk/cn10k_rx.c   | 27 ++
 drivers/net/cnxk/cn10k_rx.h   | 54 +--
 6 files changed, 86 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 4f1b58c..789ec29 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -17,11 +17,13 @@ Features
 Features of the CNXK Ethdev PMD are:
 
 - Packet type information
+- Jumbo frames
 - SR-IOV VF
 - Lock-free Tx queue
 - Multiple queues for TX and RX
 - Receiver Side Scaling (RSS)
 - Link state information
+- Scatter-Gather IO support
 
 Prerequisites
 -------------
diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index 712f8d5..23564b7 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -15,6 +15,8 @@ Runtime Tx queue setup = Y
 Queue start/stop = Y
 RSS hash = Y
 Inner RSS= Y
+Jumbo frame  = Y
+Scattered Rx = Y
 Packet type parsing  = Y
 Linux= Y
 ARMv8= Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index 82f2af0..421048d 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -15,6 +15,7 @@ Runtime Tx queue setup = Y
 Queue start/stop = Y
 RSS hash = Y
 Inner RSS= Y
+Jumbo frame  = Y
 Packet type parsing  = Y
 Linux= Y
 ARMv8= Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini 
b/doc/guides/nics/features/cnxk_vf.ini
index 61fed11..e901fa2 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -14,6 +14,8 @@ Runtime Tx queue setup = Y
 Queue start/stop = Y
 RSS hash = Y
 Inner RSS= Y
+Jumbo frame  = Y
+Scattered Rx = Y
 Packet type parsing  = Y
 Linux= Y
 ARMv8= Y
diff --git a/drivers/net/cnxk/cn10k_rx.c b/drivers/net/cnxk/cn10k_rx.c
index 1ff1b04..b98e7a1 100644
--- a/drivers/net/cnxk/cn10k_rx.c
+++ b/drivers/net/cnxk/cn10k_rx.c
@@ -88,6 +88,15 @@ nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, 
uint16_t pkts,
void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)  \
{  \
return nix_recv_pkts(rx_queue, rx_pkts, pkts, (flags));\
+   }  \
+  \
+   static uint16_t __rte_noinline __rte_hot   \
+   cn10k_nix_recv_pkts_mseg_##name(void *rx_queue,\
+   struct rte_mbuf **rx_pkts, \
+   uint16_t pkts) \
+   {  \
+   return nix_recv_pkts(rx_queue, rx_pkts, pkts,  \
+(flags) | NIX_RX_MULTI_SEG_F);\
}
 
 NIX_RX_FASTPATH_MODES
@@ -110,6 +119,8 @@ pick_rx_func(struct rte_eth_dev *eth_dev,
 void
 cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 {
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+
const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2] = {
 #define R(name, f3, f2, f1, f0, flags)   \
[f3][f2][f1][f0] = cn10k_nix_recv_pkts_##name,
@@ -118,6 +129,22 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 #undef R
};
 
+   const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags)   \
+   [f3][f2][f1][f0] = cn10k_nix_recv_pkts_mseg_##name,
+
+   NIX_RX_FASTPATH_MODES
+#undef R
+   };
+
pick_rx_func(eth_dev, nix_eth_rx_burst);
+
+   if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+   pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
+
+   /* Copy multi seg version with no offload for tear down sequence */
+   if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+   dev->rx_pkt_burst_no_offload =
+   nix_eth_rx_burst_mseg[0][0][0][0];
rte_mb();
 }
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index f43f320..7887a81 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -98,6 +98,52 @@ nix_update_match_id(const uint16_t match_id, uint64_t 
ol_f

[dpdk-dev] [PATCH 19/44] net/cnxk: add Rx vector version for cn10k

2021-03-06 Thread Nithin Dabilpuram
From: Jerin Jacob 

Add Rx burst vector version for CN10K.

Signed-off-by: Jerin Jacob 
Signed-off-by: Nithin Dabilpuram 
---
 doc/guides/nics/cnxk.rst|   1 +
 drivers/net/cnxk/cn10k_rx.c | 240 +++-
 2 files changed, 240 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 789ec29..4187e9d 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -24,6 +24,7 @@ Features of the CNXK Ethdev PMD are:
 - Receiver Side Scaling (RSS)
 - Link state information
 - Scatter-Gather IO support
+- Vector Poll mode driver
 
 Prerequisites
 -
diff --git a/drivers/net/cnxk/cn10k_rx.c b/drivers/net/cnxk/cn10k_rx.c
index b98e7a1..2bc952d 100644
--- a/drivers/net/cnxk/cn10k_rx.c
+++ b/drivers/net/cnxk/cn10k_rx.c
@@ -2,6 +2,8 @@
  * Copyright(C) 2021 Marvell.
  */
 
+#include 
+
 #include "cn10k_ethdev.h"
 #include "cn10k_rx.h"
 
@@ -83,6 +85,223 @@ nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, 
uint16_t pkts,
return nb_pkts;
 }
 
+#if defined(RTE_ARCH_ARM64)
+
+static __rte_always_inline uint16_t
+nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
+const uint16_t flags)
+{
+   struct cn10k_eth_rxq *rxq = rx_queue;
+   uint16_t packets = 0;
+   uint64x2_t cq0_w8, cq1_w8, cq2_w8, cq3_w8, mbuf01, mbuf23;
+   const uint64_t mbuf_initializer = rxq->mbuf_initializer;
+   const uint64x2_t data_off = vdupq_n_u64(rxq->data_off);
+   uint64_t ol_flags0, ol_flags1, ol_flags2, ol_flags3;
+   uint64x2_t rearm0 = vdupq_n_u64(mbuf_initializer);
+   uint64x2_t rearm1 = vdupq_n_u64(mbuf_initializer);
+   uint64x2_t rearm2 = vdupq_n_u64(mbuf_initializer);
+   uint64x2_t rearm3 = vdupq_n_u64(mbuf_initializer);
+   struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+   const uint16_t *lookup_mem = rxq->lookup_mem;
+   const uint32_t qmask = rxq->qmask;
+   const uint64_t wdata = rxq->wdata;
+   const uintptr_t desc = rxq->desc;
+   uint8x16_t f0, f1, f2, f3;
+   uint32_t head = rxq->head;
+   uint16_t pkts_left;
+
+   pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
+   pkts_left = pkts & (NIX_DESCS_PER_LOOP - 1);
+
+   /* Packets has to be floor-aligned to NIX_DESCS_PER_LOOP */
+   pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
+
+   while (packets < pkts) {
+   /* Exit loop if head is about to wrap and become unaligned */
+   if (((head + NIX_DESCS_PER_LOOP - 1) & qmask) <
+   NIX_DESCS_PER_LOOP) {
+   pkts_left += (pkts - packets);
+   break;
+   }
+
+   const uintptr_t cq0 = desc + CQE_SZ(head);
+
+   /* Prefetch N desc ahead */
+   rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(8)));
+   rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(9)));
+   rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(10)));
+   rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(11)));
+
+   /* Get NIX_RX_SG_S for size and buffer pointer */
+   cq0_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(0) + 64));
+   cq1_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(1) + 64));
+   cq2_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(2) + 64));
+   cq3_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(3) + 64));
+
+   /* Extract mbuf from NIX_RX_SG_S */
+   mbuf01 = vzip2q_u64(cq0_w8, cq1_w8);
+   mbuf23 = vzip2q_u64(cq2_w8, cq3_w8);
+   mbuf01 = vqsubq_u64(mbuf01, data_off);
+   mbuf23 = vqsubq_u64(mbuf23, data_off);
+
+   /* Move mbufs to scalar registers for future use */
+   mbuf0 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 0);
+   mbuf1 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 1);
+   mbuf2 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 0);
+   mbuf3 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 1);
+
+   /* Mask to get packet len from NIX_RX_SG_S */
+   const uint8x16_t shuf_msk = {
+   0xFF, 0xFF, /* pkt_type set as unknown */
+   0xFF, 0xFF, /* pkt_type set as unknown */
+   0,1,/* octet 1~0, low 16 bits pkt_len */
+   0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
+   0,1,/* octet 1~0, 16 bits data_len */
+   0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF};
+
+   /* Form the rx_descriptor_fields1 with pkt_len and data_len */
+   f0 = vqtbl1q_u8(cq0_w8, shuf_msk);
+   f1 = vqtbl1q_u8(cq1_w8, shuf_msk);
+   f2 = vqtbl1q_u8(cq2_w8, shuf_msk);
+   f3 = vqtbl1q_u8(cq3_w8, shuf_msk);
+
+   /* Load CQE word0 and word 1 */
+   uint64_t cq0_w0 = ((uint64_t *)(cq0 

[dpdk-dev] [PATCH 20/44] net/cnxk: add Tx support for cn10k

2021-03-06 Thread Nithin Dabilpuram
From: Jerin Jacob 

Add Tx burst scalar version for CN10K.

Signed-off-by: Jerin Jacob 
Signed-off-by: Nithin Dabilpuram 
Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Harman Kalra 
---
 doc/guides/nics/cnxk.rst  |   1 +
 doc/guides/nics/features/cnxk.ini |   7 +
 doc/guides/nics/features/cnxk_vec.ini |   6 +
 doc/guides/nics/features/cnxk_vf.ini  |   7 +
 drivers/net/cnxk/cn10k_ethdev.h   |   1 +
 drivers/net/cnxk/cn10k_tx.c   | 174 +
 drivers/net/cnxk/cn10k_tx.h   | 358 ++
 drivers/net/cnxk/meson.build  |   3 +-
 8 files changed, 556 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/cnxk/cn10k_tx.c

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 4187e9d..555730d 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -22,6 +22,7 @@ Features of the CNXK Ethdev PMD are:
 - Lock-free Tx queue
 - Multiple queues for TX and RX
 - Receiver Side Scaling (RSS)
+- Inner and Outer Checksum offload
 - Link state information
 - Scatter-Gather IO support
 - Vector Poll mode driver
diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index 23564b7..02be26b 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -12,11 +12,18 @@ Link status  = Y
 Link status event= Y
 Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
+Fast mbuf free   = Y
+Free Tx mbuf on demand = Y
 Queue start/stop = Y
+TSO  = Y
 RSS hash = Y
 Inner RSS= Y
 Jumbo frame  = Y
 Scattered Rx = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Inner L3 checksum= Y
+Inner L4 checksum= Y
 Packet type parsing  = Y
 Linux= Y
 ARMv8= Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index 421048d..8c63853 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -12,10 +12,16 @@ Link status  = Y
 Link status event= Y
 Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
+Fast mbuf free   = Y
+Free Tx mbuf on demand = Y
 Queue start/stop = Y
 RSS hash = Y
 Inner RSS= Y
 Jumbo frame  = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Inner L3 checksum= Y
+Inner L4 checksum= Y
 Packet type parsing  = Y
 Linux= Y
 ARMv8= Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini 
b/doc/guides/nics/features/cnxk_vf.ini
index e901fa2..a1bd49b 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -11,11 +11,18 @@ Link status  = Y
 Link status event= Y
 Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
+Fast mbuf free   = Y
+Free Tx mbuf on demand = Y
 Queue start/stop = Y
+TSO  = Y
 RSS hash = Y
 Inner RSS= Y
 Jumbo frame  = Y
 Scattered Rx = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Inner L3 checksum= Y
+Inner L4 checksum= Y
 Packet type parsing  = Y
 Linux= Y
 ARMv8= Y
diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h
index e4332d3..58c51ab 100644
--- a/drivers/net/cnxk/cn10k_ethdev.h
+++ b/drivers/net/cnxk/cn10k_ethdev.h
@@ -34,5 +34,6 @@ struct cn10k_eth_rxq {
 
 /* Rx and Tx routines */
 void cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev);
+void cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev);
 
 #endif /* __CN10K_ETHDEV_H__ */
diff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c
new file mode 100644
index 000..0fad4c0
--- /dev/null
+++ b/drivers/net/cnxk/cn10k_tx.c
@@ -0,0 +1,174 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cn10k_ethdev.h"
+#include "cn10k_tx.h"
+
+#define NIX_XMIT_FC_OR_RETURN(txq, pkts)                                      \
+   do {   \
+   /* Cached value is low, Update the fc_cache_pkts */\
+   if (unlikely((txq)->fc_cache_pkts < (pkts))) { \
+   /* Multiply with sqe_per_sqb to express in pkts */ \
+   (txq)->fc_cache_pkts = \
+   ((txq)->nb_sqb_bufs_adj - *(txq)->fc_mem)  \
+   << (txq)->sqes_per_sqb_log2;   \
+   /* Check it again for the room */  \
+   if (unlikely((txq)->fc_cache_pkts < (pkts)))   \
+   return 0;  \
+   }  \
+   } while (0)
+
+static __rte_always_inline uint6

[dpdk-dev] [PATCH 21/44] net/cnxk: add Tx multi-segment version for cn10k

2021-03-06 Thread Nithin Dabilpuram
Add Tx burst multi-segment version for CN10K.

Signed-off-by: Nithin Dabilpuram 
Signed-off-by: Pavan Nikhilesh 
---
 drivers/net/cnxk/cn10k_tx.c | 124 
 drivers/net/cnxk/cn10k_tx.h |  71 +
 2 files changed, 195 insertions(+)

diff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c
index 0fad4c0..d170f31 100644
--- a/drivers/net/cnxk/cn10k_tx.c
+++ b/drivers/net/cnxk/cn10k_tx.c
@@ -125,6 +125,98 @@ nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, 
uint16_t pkts,
return pkts;
 }
 
+static __rte_always_inline uint16_t
+nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
+  uint64_t *cmd, const uint16_t flags)
+{
+   struct cn10k_eth_txq *txq = tx_queue;
+   uintptr_t pa0, pa1, lmt_addr = txq->lmt_base;
+   const rte_iova_t io_addr = txq->io_addr;
+   uint16_t segdw, lmt_id, burst, left, i;
+   uint64_t data0, data1;
+   __uint128_t data128;
+   uint16_t shft;
+
+   NIX_XMIT_FC_OR_RETURN(txq, pkts);
+
+   cn10k_nix_tx_skeleton(txq, cmd, flags);
+
+   /* Reduce the cached count */
+   txq->fc_cache_pkts -= pkts;
+
+   /* Get LMT base address and LMT ID as lcore id */
+   ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id);
+   left = pkts;
+again:
+   burst = left > 32 ? 32 : left;
+   shft = 16;
+   data128 = 0;
+   for (i = 0; i < burst; i++) {
+   /* Perform header writes for TSO, barrier at
+* lmt steorl will suffice.
+*/
+   if (flags & NIX_TX_OFFLOAD_TSO_F)
+   cn10k_nix_xmit_prepare_tso(tx_pkts[i], flags);
+
+   cn10k_nix_xmit_prepare(tx_pkts[i], cmd, lmt_addr, flags);
+   /* Store sg list directly on lmt line */
+   segdw = cn10k_nix_prepare_mseg(tx_pkts[i], (uint64_t *)lmt_addr,
+  flags);
+   lmt_addr += (1ULL << ROC_LMT_LINE_SIZE_LOG2);
+   data128 |= (((__uint128_t)(segdw - 1)) << shft);
+   shft += 3;
+   }
+
+   data0 = (uint64_t)data128;
+   data1 = (uint64_t)(data128 >> 64);
+   /* Make data0 similar to data1 */
+   data0 >>= 16;
+   /* Trigger LMTST */
+   if (burst > 16) {
+   pa0 = io_addr | (data0 & 0x7) << 4;
+   data0 &= ~0x7ULL;
+   /* Move lmtst1..15 sz to bits 63:19 */
+   data0 <<= 16;
+   data0 |= (15ULL << 12);
+   data0 |= (uint64_t)lmt_id;
+
+   /* STEOR0 */
+   roc_lmt_submit_steorl(data0, pa0);
+
+   pa1 = io_addr | (data1 & 0x7) << 4;
+   data1 &= ~0x7ULL;
+   data1 <<= 16;
+   data1 |= ((uint64_t)(burst - 17)) << 12;
+   data1 |= (uint64_t)(lmt_id + 16);
+
+   /* STEOR1 */
+   roc_lmt_submit_steorl(data1, pa1);
+   } else if (burst) {
+   pa0 = io_addr | (data0 & 0x7) << 4;
+   data0 &= ~0x7ULL;
+   /* Move lmtst1..15 sz to bits 63:19 */
+   data0 <<= 16;
+   data0 |= ((burst - 1) << 12);
+   data0 |= (uint64_t)lmt_id;
+
+   /* STEOR0 */
+   roc_lmt_submit_steorl(data0, pa0);
+   }
+
+   left -= burst;
+   rte_io_wmb();
+   if (left) {
+   /* Start processing another burst */
+   tx_pkts += burst;
+   /* Reset lmt base addr */
+   lmt_addr -= (1ULL << ROC_LMT_LINE_SIZE_LOG2);
+   lmt_addr &= (~(BIT_ULL(ROC_LMT_BASE_PER_CORE_LOG2) - 1));
+   goto again;
+   }
+
+   return pkts;
+}
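
Each descriptor in the burst above contributes a 3-bit (segdw - 1) field,
packed into a 128-bit word starting at bit 16 and later split into the two
64-bit STEOR payloads. A standalone sketch of just that packing step, with
invented segment counts, is:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        __uint128_t data128 = 0;
        uint16_t shft = 16;
        uint16_t segdw[4] = {2, 4, 3, 2};   /* example per-packet sizes */
        int i;

        for (i = 0; i < 4; i++) {
                /* 3 bits per descriptor, first field at bit 16 */
                data128 |= ((__uint128_t)(segdw[i] - 1)) << shft;
                shft += 3;
        }

        uint64_t data0 = (uint64_t)data128 >> 16;  /* low 3 bits: first size */
        uint64_t data1 = (uint64_t)(data128 >> 64);

        printf("data0=0x%llx data1=0x%llx\n",
               (unsigned long long)data0, (unsigned long long)data1);
        return 0;
}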
+
 
 #define T(name, f4, f3, f2, f1, f0, sz, flags)\
static uint16_t __rte_noinline __rte_hot cn10k_nix_xmit_pkts_##name(   \
@@ -142,6 +234,25 @@ nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, 
uint16_t pkts,
 NIX_TX_FASTPATH_MODES
 #undef T
 
+#define T(name, f4, f3, f2, f1, f0, sz, flags)\
+   static uint16_t __rte_noinline __rte_hot   \
+   cn10k_nix_xmit_pkts_mseg_##name(void *tx_queue,\
+   struct rte_mbuf **tx_pkts, \
+   uint16_t pkts) \
+   {  \
+   uint64_t cmd[(sz)];\
+  \
+   /* For TSO inner checksum is a must */ \
+   if (((flags) & NIX_TX_OFFLOAD_TSO_F) &&\
+   !((flags) & NIX_TX_OFFLOAD_L3_L4_CSUM_F))  \
+   return 0;   

[dpdk-dev] [PATCH 22/44] net/cnxk: add Tx vector version for cn10k

2021-03-06 Thread Nithin Dabilpuram
Add Tx burst vector version for CN10K.

Signed-off-by: Nithin Dabilpuram 
Signed-off-by: Pavan Nikhilesh 
---
 drivers/net/cnxk/cn10k_tx.c | 988 +++-
 1 file changed, 987 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c
index d170f31..c487c83 100644
--- a/drivers/net/cnxk/cn10k_tx.c
+++ b/drivers/net/cnxk/cn10k_tx.c
@@ -2,6 +2,8 @@
  * Copyright(C) 2021 Marvell.
  */
 
+#include 
+
 #include "cn10k_ethdev.h"
 #include "cn10k_tx.h"
 
@@ -217,6 +219,958 @@ nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf 
**tx_pkts, uint16_t pkts,
return pkts;
 }
 
+#if defined(RTE_ARCH_ARM64)
+
+#define NIX_DESCS_PER_LOOP 4
+static __rte_always_inline uint16_t
+nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
+uint64_t *cmd, const uint16_t flags)
+{
+   uint64x2_t dataoff_iova0, dataoff_iova1, dataoff_iova2, dataoff_iova3;
+   uint64x2_t len_olflags0, len_olflags1, len_olflags2, len_olflags3;
+   uint64_t *mbuf0, *mbuf1, *mbuf2, *mbuf3, data, pa;
+   uint64x2_t senddesc01_w0, senddesc23_w0;
+   uint64x2_t senddesc01_w1, senddesc23_w1;
+   uint16_t left, scalar, burst, i, lmt_id;
+   uint64x2_t sgdesc01_w0, sgdesc23_w0;
+   uint64x2_t sgdesc01_w1, sgdesc23_w1;
+   struct cn10k_eth_txq *txq = tx_queue;
+   uintptr_t lmt_addr = txq->lmt_base;
+   rte_iova_t io_addr = txq->io_addr;
+   uint64x2_t ltypes01, ltypes23;
+   uint64x2_t xtmp128, ytmp128;
+   uint64x2_t xmask01, xmask23;
+   uint64x2_t cmd00, cmd01;
+   uint64x2_t cmd10, cmd11;
+   uint64x2_t cmd20, cmd21;
+   uint64x2_t cmd30, cmd31;
+
+   NIX_XMIT_FC_OR_RETURN(txq, pkts);
+
+   scalar = pkts & (NIX_DESCS_PER_LOOP - 1);
+   pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
+
+   /* Reduce the cached count */
+   txq->fc_cache_pkts -= pkts;
+
+   senddesc01_w0 = vld1q_dup_u64(&txq->send_hdr_w0);
+   senddesc23_w0 = senddesc01_w0;
+   senddesc01_w1 = vdupq_n_u64(0);
+   senddesc23_w1 = senddesc01_w1;
+   sgdesc01_w0 = vld1q_dup_u64(&txq->sg_w0);
+   sgdesc23_w0 = sgdesc01_w0;
+
+   /* Get LMT base address and LMT ID as lcore id */
+   ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id);
+   left = pkts;
+again:
+   burst = left > 32 ? 32 : left;
+   for (i = 0; i < burst; i += NIX_DESCS_PER_LOOP) {
+   /* Clear lower 32bit of SEND_HDR_W0 and SEND_SG_W0 */
+   senddesc01_w0 =
+   vbicq_u64(senddesc01_w0, vdupq_n_u64(0xFFFFFFFF));
+   sgdesc01_w0 = vbicq_u64(sgdesc01_w0, vdupq_n_u64(0xFFFFFFFF));
+
+   senddesc23_w0 = senddesc01_w0;
+   sgdesc23_w0 = sgdesc01_w0;
+
+   /* Move mbufs to iova */
+   mbuf0 = (uint64_t *)tx_pkts[0];
+   mbuf1 = (uint64_t *)tx_pkts[1];
+   mbuf2 = (uint64_t *)tx_pkts[2];
+   mbuf3 = (uint64_t *)tx_pkts[3];
+
+   mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+offsetof(struct rte_mbuf, buf_iova));
+   mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+offsetof(struct rte_mbuf, buf_iova));
+   mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+offsetof(struct rte_mbuf, buf_iova));
+   mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+offsetof(struct rte_mbuf, buf_iova));
+   /*
+* Get mbuf's, olflags, iova, pktlen, dataoff
+* dataoff_iovaX.D[0] = iova,
+* dataoff_iovaX.D[1](15:0) = mbuf->dataoff
+* len_olflagsX.D[0] = ol_flags,
+* len_olflagsX.D[1](63:32) = mbuf->pkt_len
+*/
+   dataoff_iova0 = vld1q_u64(mbuf0);
+   len_olflags0 = vld1q_u64(mbuf0 + 2);
+   dataoff_iova1 = vld1q_u64(mbuf1);
+   len_olflags1 = vld1q_u64(mbuf1 + 2);
+   dataoff_iova2 = vld1q_u64(mbuf2);
+   len_olflags2 = vld1q_u64(mbuf2 + 2);
+   dataoff_iova3 = vld1q_u64(mbuf3);
+   len_olflags3 = vld1q_u64(mbuf3 + 2);
+
+   if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+   struct rte_mbuf *mbuf;
+   /* Set don't free bit if reference count > 1 */
+   xmask01 = vdupq_n_u64(0);
+   xmask23 = xmask01;
+
+   mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
+  offsetof(struct rte_mbuf,
+   buf_iova));
+
+   if (cnxk_nix_prefree_seg(mbuf))
+   vsetq_lane_u64(0x8, xmask01, 0);
+   else
+   __mempool_check_cookies(mbuf->poo

[dpdk-dev] [PATCH 23/44] net/cnxk: add device start and stop operations

2021-03-06 Thread Nithin Dabilpuram
Add device start and stop operation callbacks for
CN9K and CN10K. Device stop is common to both platforms,
while device start has a platform-dependent portion where
the platform-specific offload flags are recomputed and
the right Rx/Tx burst functions are chosen.

Signed-off-by: Nithin Dabilpuram 
---
 doc/guides/nics/cnxk.rst|  84 ++
 drivers/net/cnxk/cn10k_ethdev.c | 124 +++
 drivers/net/cnxk/cn9k_ethdev.c  | 127 
 drivers/net/cnxk/cnxk_ethdev.c  |  90 
 drivers/net/cnxk/cnxk_ethdev.h  |   2 +
 drivers/net/cnxk/cnxk_link.c|  11 
 6 files changed, 438 insertions(+)

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 555730d..42aa7a5 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -39,6 +39,58 @@ Driver compilation and testing
 Refer to the document :ref:`compiling and testing a PMD for a NIC 
`
 for details.
 
+#. Running testpmd:
+
+   Follow instructions available in the document
+   :ref:`compiling and testing a PMD for a NIC `
+   to run testpmd.
+
+   Example output:
+
+   .. code-block:: console
+
+  .//app/dpdk-testpmd -c 0xc -a 0002:02:00.0 -- --portmask=0x1 
--nb-cores=1 --port-topology=loop --rxq=1 --txq=1
+  EAL: Detected 4 lcore(s)
+  EAL: Detected 1 NUMA nodes
+  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
+  EAL: Selected IOVA mode 'VA'
+  EAL: No available hugepages reported in hugepages-16777216kB
+  EAL: No available hugepages reported in hugepages-2048kB
+  EAL: Probing VFIO support...
+  EAL: VFIO support initialized
+  EAL:   using IOMMU type 1 (Type 1)
+  [ 2003.202721] vfio-pci 0002:02:00.0: vfio_cap_init: hiding cap 0x14@0x98
+  EAL: Probe PCI driver: net_cn10k (177d:a063) device: 0002:02:00.0 
(socket 0)
+  PMD: RoC Model: cn10k
+  EAL: No legacy callbacks, legacy socket not created
+  testpmd: create a new mbuf pool : n=155456, size=2176, 
socket=0
+  testpmd: preferred mempool ops selected: cn10k_mempool_ops
+  Configuring Port 0 (socket 0)
+  PMD: Port 0: Link Up - speed 25000 Mbps - full-duplex
+
+  Port 0: link state change event
+  Port 0: 96:D4:99:72:A5:BF
+  Checking link statuses...
+  Done
+  No commandline core given, start packet forwarding
+  io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support 
enabled, MP allocation mode: native
+  Logical Core 3 (socket 0) forwards packets on 1 streams:
+RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
+
+io packet forwarding packets/burst=32
+nb forwarding cores=1 - nb forwarding ports=1
+port 0: RX queue number: 1 Tx queue number: 1
+  Rx offloads=0x0 Tx offloads=0x1
+  RX queue: 0
+RX desc=4096 - RX free threshold=0
+RX threshold registers: pthresh=0 hthresh=0  wthresh=0
+RX Offloads=0x0
+  TX queue: 0
+TX desc=512 - TX free threshold=0
+TX threshold registers: pthresh=0 hthresh=0  wthresh=0
+TX offloads=0x0 - TX RS bit threshold=0
+  Press enter to exit
+
 Runtime Config Options
 ----------------------
 
@@ -132,3 +184,35 @@ Runtime Config Options
Above devarg parameters are configurable per device, user needs to pass the
parameters to all the PCIe devices if application requires to configure on
all the ethdev ports.
+
+Limitations
+-----------
+
+``mempool_cnxk`` external mempool handler dependency
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The OCTEON CN9K/CN10K SoC family NIC has an inbuilt HW-assisted external
+mempool manager. The ``net_cnxk`` PMD only works with the ``mempool_cnxk``
+mempool handler, as it is the most efficient way to allocate packets and
+recycle Tx buffers on the OCTEON CN9K/CN10K SoC platform.
+
+CRC stripping
+~~~~~~~~~~~~~
+
+The OCTEON CN9K/CN10K SoC family NICs strip the CRC for every packet being
+received by the host interface irrespective of the offload configuration.
+
+Debugging Options
+-----------------
+
+.. _table_cnxk_ethdev_debug_options:
+
+.. table:: cnxk ethdev debug options
+
+   +---+------------+-------------------------------------+
+   | # | Component  | EAL log command                     |
+   +===+============+=====================================+
+   | 1 | NIX        | --log-level='pmd\.net.cnxk,8'       |
+   +---+------------+-------------------------------------+
+   | 2 | NPC        | --log-level='pmd\.net.cnxk\.flow,8' |
+   +---+------------+-------------------------------------+
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 1a9fcbb..f9e0274 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.

[dpdk-dev] [PATCH 24/44] net/cnxk: add MAC address set ops

2021-03-06 Thread Nithin Dabilpuram
From: Sunil Kumar Kori 

The default MAC address set operation is implemented for
the cn9k and cn10k platforms.

Signed-off-by: Sunil Kumar Kori 
---
 drivers/net/cnxk/cnxk_ethdev.c |  1 +
 drivers/net/cnxk/cnxk_ethdev.h |  2 ++
 drivers/net/cnxk/cnxk_ethdev_ops.c | 29 +
 3 files changed, 32 insertions(+)

diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index ba05711..ed01087 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -999,6 +999,7 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
 
 /* CNXK platform independent eth dev ops */
 struct eth_dev_ops cnxk_eth_dev_ops = {
+   .mac_addr_set = cnxk_nix_mac_addr_set,
.dev_infos_get = cnxk_nix_info_get,
.link_update = cnxk_nix_link_update,
.tx_queue_release = cnxk_nix_tx_queue_release,
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 984f4fe..717a8d8 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -203,6 +203,8 @@ extern struct eth_dev_ops cnxk_eth_dev_ops;
 int cnxk_nix_probe(struct rte_pci_driver *pci_drv,
   struct rte_pci_device *pci_dev);
 int cnxk_nix_remove(struct rte_pci_device *pci_dev);
+int cnxk_nix_mac_addr_set(struct rte_eth_dev *eth_dev,
+ struct rte_ether_addr *addr);
 int cnxk_nix_info_get(struct rte_eth_dev *eth_dev,
  struct rte_eth_dev_info *dev_info);
 int cnxk_nix_configure(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c 
b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 4a45956..87cf4ee 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -69,3 +69,32 @@ cnxk_nix_info_get(struct rte_eth_dev *eth_dev, struct 
rte_eth_dev_info *devinfo)
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
return 0;
 }
+
+int
+cnxk_nix_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct roc_nix *nix = &dev->nix;
+   int rc;
+
+   /* Update mac address at NPC */
+   rc = roc_nix_npc_mac_addr_set(nix, addr->addr_bytes);
+   if (rc)
+   goto exit;
+
+   /* Update mac address at CGX for PFs only */
+   if (!roc_nix_is_vf_or_sdp(nix)) {
+   rc = roc_nix_mac_addr_set(nix, addr->addr_bytes);
+   if (rc) {
+   /* Rollback to previous mac address */
+   roc_nix_npc_mac_addr_set(nix, dev->mac_addr);
+   goto exit;
+   }
+   }
+
+   /* Update mac address to cnxk ethernet device */
+   rte_memcpy(dev->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+
+exit:
+   return rc;
+}
-- 
2.8.4
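
From the application side this dev op is reached through the generic ethdev
API. A hypothetical fragment (EAL and port setup omitted, port 0 and an
example address assumed) would be:

#include <rte_ethdev.h>
#include <rte_ether.h>

static int set_port_mac(uint16_t port_id)
{
        /* Example address; replace with the one you actually want. */
        struct rte_ether_addr addr = {
                .addr_bytes = {0x96, 0xd4, 0x99, 0x72, 0xa5, 0xbf}
        };

        /* Lands in the PMD's .mac_addr_set callback registered above */
        return rte_eth_dev_default_mac_addr_set(port_id, &addr);
}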



[dpdk-dev] [PATCH 25/44] net/cnxk: add MTU set device operation

2021-03-06 Thread Nithin Dabilpuram
From: Sunil Kumar Kori 

This patch implements the MTU set device operation for the cn9k and cn10k platforms.

Signed-off-by: Sunil Kumar Kori 
---
 doc/guides/nics/cnxk.rst  |  1 +
 doc/guides/nics/features/cnxk.ini |  1 +
 doc/guides/nics/features/cnxk_vec.ini |  1 +
 doc/guides/nics/features/cnxk_vf.ini  |  1 +
 drivers/net/cnxk/cnxk_ethdev.c| 51 +++
 drivers/net/cnxk/cnxk_ethdev.h|  5 ++-
 drivers/net/cnxk/cnxk_ethdev_ops.c| 77 ++-
 7 files changed, 135 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 42aa7a5..6cb90a7 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -24,6 +24,7 @@ Features of the CNXK Ethdev PMD are:
 - Receiver Side Scaling (RSS)
 - Inner and Outer Checksum offload
 - Link state information
+- MTU update
 - Scatter-Gather IO support
 - Vector Poll mode driver
 
diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index 02be26b..6fef725 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -15,6 +15,7 @@ Runtime Tx queue setup = Y
 Fast mbuf free   = Y
 Free Tx mbuf on demand = Y
 Queue start/stop = Y
+MTU update   = Y
 TSO  = Y
 RSS hash = Y
 Inner RSS= Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index 8c63853..79cb1e2 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -15,6 +15,7 @@ Runtime Tx queue setup = Y
 Fast mbuf free   = Y
 Free Tx mbuf on demand = Y
 Queue start/stop = Y
+MTU update   = Y
 RSS hash = Y
 Inner RSS= Y
 Jumbo frame  = Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini 
b/doc/guides/nics/features/cnxk_vf.ini
index a1bd49b..5cc9f3f 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -14,6 +14,7 @@ Runtime Tx queue setup = Y
 Fast mbuf free   = Y
 Free Tx mbuf on demand = Y
 Queue start/stop = Y
+MTU update   = Y
 TSO  = Y
 RSS hash = Y
 Inner RSS= Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index ed01087..9040ce6 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -37,6 +37,50 @@ nix_get_speed_capa(struct cnxk_eth_dev *dev)
return speed_capa;
 }
 
+static void
+nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
+{
+   struct rte_pktmbuf_pool_private *mbp_priv;
+   struct rte_eth_dev *eth_dev;
+   struct cnxk_eth_dev *dev;
+   uint32_t buffsz;
+
+   dev = rxq->dev;
+   eth_dev = dev->eth_dev;
+
+   /* Get rx buffer size */
+   mbp_priv = rte_mempool_get_priv(rxq->qconf.mp);
+   buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
+
+   if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+   dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+   dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+   }
+}
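
A worked example of the check above, reusing the numbers visible in the
testpmd log earlier in this series (2176-byte mbuf data room, default 128-byte
headroom) and an assumed jumbo max_rx_pkt_len:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t mbuf_data_room_size = 2176; /* from the mbuf pool (example) */
        uint32_t headroom = 128;             /* RTE_PKTMBUF_HEADROOM default */
        uint32_t max_rx_pkt_len = 9018;      /* jumbo frame (example) */

        uint32_t buffsz = mbuf_data_room_size - headroom;  /* 2048 bytes */

        /* 9018 > 2048, so one mbuf cannot hold the frame */
        if (max_rx_pkt_len > buffsz)
                printf("enable DEV_RX_OFFLOAD_SCATTER + DEV_TX_OFFLOAD_MULTI_SEGS\n");
        return 0;
}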
+
+static int
+nix_recalc_mtu(struct rte_eth_dev *eth_dev)
+{
+   struct rte_eth_dev_data *data = eth_dev->data;
+   struct cnxk_eth_rxq_sp *rxq;
+   uint16_t mtu;
+   int rc;
+
+   rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[0]) - 1;
+   /* Setup scatter mode if needed by jumbo */
+   nix_enable_mseg_on_jumbo(rxq);
+
+   /* Setup MTU based on max_rx_pkt_len */
+   mtu = data->dev_conf.rxmode.max_rx_pkt_len - CNXK_NIX_L2_OVERHEAD +
+   CNXK_NIX_MAX_VTAG_ACT_SIZE;
+
+   rc = cnxk_nix_mtu_set(eth_dev, mtu);
+   if (rc)
+   plt_err("Failed to set default MTU size, rc=%d", rc);
+
+   return rc;
+}
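
The application-facing counterpart is the standard rte_eth_dev_set_mtu() call,
which reaches cnxk_nix_mtu_set() once the op is registered below; a
hypothetical fragment (EAL and port setup omitted) is:

#include <rte_ethdev.h>

static int set_port_mtu(uint16_t port_id, uint16_t mtu)
{
        /* e.g. mtu = 9000 to exercise the jumbo + scatter path above */
        return rte_eth_dev_set_mtu(port_id, mtu);
}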
+
 uint64_t
 cnxk_nix_rxq_mbuf_setup(struct cnxk_eth_dev *dev)
 {
@@ -955,6 +999,12 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
int rc, i;
 
+   if (eth_dev->data->nb_rx_queues != 0) {
+   rc = nix_recalc_mtu(eth_dev);
+   if (rc)
+   return rc;
+   }
+
/* Start rx queues */
for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
rc = cnxk_nix_rx_queue_start(eth_dev, i);
@@ -999,6 +1049,7 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
 
 /* CNXK platform independent eth dev ops */
 struct eth_dev_ops cnxk_eth_dev_ops = {
+   .mtu_set = cnxk_nix_mtu_set,
.mac_addr_set = cnxk_nix_mac_addr_set,
.dev_infos_get = cnxk_nix_info_get,
.link_update = cnxk_nix_link_update,
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 717a8d8..3838573 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -28,7 +28,9 @@
 #define CNXK_NIX_MAX_VTAG_ACT_SIZE (4 * CNXK_NIX_MAX_VTAG_INS)
 
 /* ETH_HLEN+ETH_F

[dpdk-dev] [PATCH 26/44] net/cnxk: add promiscuous mode enable and disable

2021-03-06 Thread Nithin Dabilpuram
From: Sunil Kumar Kori 

Add device operations to enable and disable promisc mode
for cn9k and cn10k.

Signed-off-by: Sunil Kumar Kori 
---
 doc/guides/nics/cnxk.rst  |  1 +
 doc/guides/nics/features/cnxk.ini |  1 +
 doc/guides/nics/features/cnxk_vec.ini |  1 +
 drivers/net/cnxk/cnxk_ethdev.c|  2 ++
 drivers/net/cnxk/cnxk_ethdev.h|  2 ++
 drivers/net/cnxk/cnxk_ethdev_ops.c| 56 +++
 6 files changed, 63 insertions(+)

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 6cb90a7..364e511 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -17,6 +17,7 @@ Features
 Features of the CNXK Ethdev PMD are:
 
 - Packet type information
+- Promiscuous mode
 - Jumbo frames
 - SR-IOV VF
 - Lock-free Tx queue
diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index 6fef725..9b2e163 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -17,6 +17,7 @@ Free Tx mbuf on demand = Y
 Queue start/stop = Y
 MTU update   = Y
 TSO  = Y
+Promiscuous mode = Y
 RSS hash = Y
 Inner RSS= Y
 Jumbo frame  = Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index 79cb1e2..31471e0 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -16,6 +16,7 @@ Fast mbuf free   = Y
 Free Tx mbuf on demand = Y
 Queue start/stop = Y
 MTU update   = Y
+Promiscuous mode = Y
 RSS hash = Y
 Inner RSS= Y
 Jumbo frame  = Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 9040ce6..8d16dec 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1060,6 +1060,8 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
.rx_queue_start = cnxk_nix_rx_queue_start,
.rx_queue_stop = cnxk_nix_rx_queue_stop,
.dev_supported_ptypes_get = cnxk_nix_supported_ptypes_get,
+   .promiscuous_enable = cnxk_nix_promisc_enable,
+   .promiscuous_disable = cnxk_nix_promisc_disable,
 };
 
 static int
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 3838573..73aef34 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -208,6 +208,8 @@ int cnxk_nix_remove(struct rte_pci_device *pci_dev);
 int cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu);
 int cnxk_nix_mac_addr_set(struct rte_eth_dev *eth_dev,
  struct rte_ether_addr *addr);
+int cnxk_nix_promisc_enable(struct rte_eth_dev *eth_dev);
+int cnxk_nix_promisc_disable(struct rte_eth_dev *eth_dev);
 int cnxk_nix_info_get(struct rte_eth_dev *eth_dev,
  struct rte_eth_dev_info *dev_info);
 int cnxk_nix_configure(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c 
b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 21b55c4..6feb3a9 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -173,3 +173,59 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 exit:
return rc;
 }
+
+int
+cnxk_nix_promisc_enable(struct rte_eth_dev *eth_dev)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct roc_nix *nix = &dev->nix;
+   int rc = 0;
+
+   if (roc_nix_is_vf_or_sdp(nix))
+   return rc;
+
+   rc = roc_nix_npc_promisc_ena_dis(nix, true);
+   if (rc) {
+   plt_err("Failed to setup promisc mode in npc, rc=%d(%s)", rc,
+   roc_error_msg_get(rc));
+   return rc;
+   }
+
+   rc = roc_nix_mac_promisc_mode_enable(nix, true);
+   if (rc) {
+   plt_err("Failed to setup promisc mode in mac, rc=%d(%s)", rc,
+   roc_error_msg_get(rc));
+   roc_nix_npc_promisc_ena_dis(nix, false);
+   return rc;
+   }
+
+   return 0;
+}
+
+int
+cnxk_nix_promisc_disable(struct rte_eth_dev *eth_dev)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct roc_nix *nix = &dev->nix;
+   int rc = 0;
+
+   if (roc_nix_is_vf_or_sdp(nix))
+   return rc;
+
+   rc = roc_nix_npc_promisc_ena_dis(nix, false);
+   if (rc < 0) {
+   plt_err("Failed to setup promisc mode in npc, rc=%d(%s)", rc,
+   roc_error_msg_get(rc));
+   return rc;
+   }
+
+   rc = roc_nix_mac_promisc_mode_enable(nix, false);
+   if (rc) {
+   plt_err("Failed to setup promisc mode in mac, rc=%d(%s)", rc,
+   roc_error_msg_get(rc));
+   roc_nix_npc_promisc_ena_dis(nix, true);
+   return rc;
+   }
+
+   return 0;
+}
-- 
2.8.4



[dpdk-dev] [PATCH 27/44] net/cnxk: add DMAC filter support

2021-03-06 Thread Nithin Dabilpuram
From: Sunil Kumar Kori 

DMAC filter support is added for cn9k and cn10k platforms.
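
Illustrative usage sketch (not part of the patch; the MAC value and the
helper name are hypothetical), showing how an application reaches
cnxk_nix_mac_addr_add() through the generic ethdev API:

  #include <rte_ethdev.h>
  #include <rte_ether.h>

  static int
  add_dmac_filter(uint16_t port_id)
  {
          /* Locally administered example address. */
          struct rte_ether_addr mac = {
                  .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 }
          };

          /* The pool argument is unused by this driver (PLT_SET_USED). */
          return rte_eth_dev_mac_addr_add(port_id, &mac, 0);
  }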

Signed-off-by: Sunil Kumar Kori 
---
 doc/guides/nics/cnxk.rst  |  1 +
 doc/guides/nics/features/cnxk.ini |  1 +
 doc/guides/nics/features/cnxk_vec.ini |  1 +
 drivers/net/cnxk/cnxk_ethdev.c|  2 ++
 drivers/net/cnxk/cnxk_ethdev.h|  5 
 drivers/net/cnxk/cnxk_ethdev_ops.c| 44 ---
 6 files changed, 51 insertions(+), 3 deletions(-)

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 364e511..ce33f17 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -23,6 +23,7 @@ Features of the CNXK Ethdev PMD are:
 - Lock-free Tx queue
 - Multiple queues for TX and RX
 - Receiver Side Scaling (RSS)
+- MAC filtering
 - Inner and Outer Checksum offload
 - Link state information
 - MTU update
diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index 9b2e163..20d4d12 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -18,6 +18,7 @@ Queue start/stop = Y
 MTU update   = Y
 TSO  = Y
 Promiscuous mode = Y
+Unicast MAC filter   = Y
 RSS hash = Y
 Inner RSS= Y
 Jumbo frame  = Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index 31471e0..e1de8ab 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -17,6 +17,7 @@ Free Tx mbuf on demand = Y
 Queue start/stop = Y
 MTU update   = Y
 Promiscuous mode = Y
+Unicast MAC filter   = Y
 RSS hash = Y
 Inner RSS= Y
 Jumbo frame  = Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 8d16dec..171418a 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1050,6 +1050,8 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
 /* CNXK platform independent eth dev ops */
 struct eth_dev_ops cnxk_eth_dev_ops = {
.mtu_set = cnxk_nix_mtu_set,
+   .mac_addr_add = cnxk_nix_mac_addr_add,
+   .mac_addr_remove = cnxk_nix_mac_addr_del,
.mac_addr_set = cnxk_nix_mac_addr_set,
.dev_infos_get = cnxk_nix_info_get,
.link_update = cnxk_nix_link_update,
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 73aef34..38ac654 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -139,6 +139,7 @@ struct cnxk_eth_dev {
 
/* Max macfilter entries */
uint8_t max_mac_entries;
+   bool dmac_filter_enable;
 
uint16_t flags;
uint8_t ptype_disable;
@@ -206,6 +207,10 @@ int cnxk_nix_probe(struct rte_pci_driver *pci_drv,
   struct rte_pci_device *pci_dev);
 int cnxk_nix_remove(struct rte_pci_device *pci_dev);
 int cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu);
+int cnxk_nix_mac_addr_add(struct rte_eth_dev *eth_dev,
+ struct rte_ether_addr *addr, uint32_t index,
+ uint32_t pool);
+void cnxk_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index);
 int cnxk_nix_mac_addr_set(struct rte_eth_dev *eth_dev,
  struct rte_ether_addr *addr);
 int cnxk_nix_promisc_enable(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c 
b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 6feb3a9..fc60576 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -101,6 +101,43 @@ cnxk_nix_mac_addr_set(struct rte_eth_dev *eth_dev, struct 
rte_ether_addr *addr)
 }
 
 int
+cnxk_nix_mac_addr_add(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr,
+ uint32_t index, uint32_t pool)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct roc_nix *nix = &dev->nix;
+   int rc;
+
+   PLT_SET_USED(index);
+   PLT_SET_USED(pool);
+
+   rc = roc_nix_mac_addr_add(nix, addr->addr_bytes);
+   if (rc < 0) {
+   plt_err("Failed to add mac address, rc=%d", rc);
+   return rc;
+   }
+
+   /* Enable promiscuous mode at NIX level */
+   roc_nix_npc_promisc_ena_dis(nix, true);
+   dev->dmac_filter_enable = true;
+   eth_dev->data->promiscuous = false;
+
+   return 0;
+}
+
+void
+cnxk_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct roc_nix *nix = &dev->nix;
+   int rc;
+
+   rc = roc_nix_mac_addr_del(nix, index);
+   if (rc)
+   plt_err("Failed to delete mac address, rc=%d", rc);
+}
+
+int
 cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 {
uint32_t old_frame_size, frame_size = mtu + CNXK_NIX_L2_OVERHEAD;
@@ -212,8 +249,8 @@ cnxk_nix_promisc_disable(struct rte_eth_dev *eth_dev)
if (roc_nix_is_vf_or_sdp(nix))

[dpdk-dev] [PATCH 28/44] net/cnxk: add all multicast enable/disable ethops

2021-03-06 Thread Nithin Dabilpuram
From: Sunil Kumar Kori 

L2 multicast packets can be allowed or blocked. This patch implements
the corresponding ethdev operations.
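
Illustrative usage sketch (hypothetical helper; assumes a configured port):

  #include <rte_ethdev.h>

  /* Calls into cnxk_nix_allmulticast_enable()/_disable(). */
  static int
  set_allmulticast(uint16_t port_id, int enable)
  {
          return enable ? rte_eth_allmulticast_enable(port_id) :
                          rte_eth_allmulticast_disable(port_id);
  }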

Signed-off-by: Sunil Kumar Kori 
---
 doc/guides/nics/features/cnxk.ini |  1 +
 doc/guides/nics/features/cnxk_vec.ini |  1 +
 drivers/net/cnxk/cnxk_ethdev.c|  2 ++
 drivers/net/cnxk/cnxk_ethdev.h|  2 ++
 drivers/net/cnxk/cnxk_ethdev_ops.c| 17 +
 5 files changed, 23 insertions(+)

diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index 20d4d12..b41af2d 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -18,6 +18,7 @@ Queue start/stop = Y
 MTU update   = Y
 TSO  = Y
 Promiscuous mode = Y
+Allmulticast mode= Y
 Unicast MAC filter   = Y
 RSS hash = Y
 Inner RSS= Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index e1de8ab..7fe8018 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -17,6 +17,7 @@ Free Tx mbuf on demand = Y
 Queue start/stop = Y
 MTU update   = Y
 Promiscuous mode = Y
+Allmulticast mode= Y
 Unicast MAC filter   = Y
 RSS hash = Y
 Inner RSS= Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 171418a..77a8c09 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1064,6 +1064,8 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
.dev_supported_ptypes_get = cnxk_nix_supported_ptypes_get,
.promiscuous_enable = cnxk_nix_promisc_enable,
.promiscuous_disable = cnxk_nix_promisc_disable,
+   .allmulticast_enable = cnxk_nix_allmulticast_enable,
+   .allmulticast_disable = cnxk_nix_allmulticast_disable,
 };
 
 static int
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 38ac654..09031e9 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -215,6 +215,8 @@ int cnxk_nix_mac_addr_set(struct rte_eth_dev *eth_dev,
  struct rte_ether_addr *addr);
 int cnxk_nix_promisc_enable(struct rte_eth_dev *eth_dev);
 int cnxk_nix_promisc_disable(struct rte_eth_dev *eth_dev);
+int cnxk_nix_allmulticast_enable(struct rte_eth_dev *eth_dev);
+int cnxk_nix_allmulticast_disable(struct rte_eth_dev *eth_dev);
 int cnxk_nix_info_get(struct rte_eth_dev *eth_dev,
  struct rte_eth_dev_info *dev_info);
 int cnxk_nix_configure(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c 
b/drivers/net/cnxk/cnxk_ethdev_ops.c
index fc60576..61ecbab 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -267,3 +267,20 @@ cnxk_nix_promisc_disable(struct rte_eth_dev *eth_dev)
dev->dmac_filter_enable = false;
return 0;
 }
+
+int
+cnxk_nix_allmulticast_enable(struct rte_eth_dev *eth_dev)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+
+   return roc_nix_npc_mcast_config(&dev->nix, true, false);
+}
+
+int
+cnxk_nix_allmulticast_disable(struct rte_eth_dev *eth_dev)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+
+   return roc_nix_npc_mcast_config(&dev->nix, false,
+   eth_dev->data->promiscuous);
+}
-- 
2.8.4



[dpdk-dev] [PATCH 29/44] net/cnxk: add Rx/Tx burst mode get ops

2021-03-06 Thread Nithin Dabilpuram
From: Sunil Kumar Kori 

This patch implements the ethdev operations to get the Rx and Tx
burst mode information.
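
Illustrative sketch of querying the reported burst mode from an
application (hypothetical helper; assumes the queues are set up):

  #include <stdio.h>
  #include <rte_ethdev.h>

  static void
  print_burst_modes(uint16_t port_id, uint16_t queue_id)
  {
          struct rte_eth_burst_mode mode;

          if (rte_eth_rx_burst_mode_get(port_id, queue_id, &mode) == 0)
                  printf("Rx burst mode: %s\n", mode.info);
          if (rte_eth_tx_burst_mode_get(port_id, queue_id, &mode) == 0)
                  printf("Tx burst mode: %s\n", mode.info);
  }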

Signed-off-by: Sunil Kumar Kori 
---
 doc/guides/nics/features/cnxk.ini |   1 +
 doc/guides/nics/features/cnxk_vec.ini |   1 +
 doc/guides/nics/features/cnxk_vf.ini  |   1 +
 drivers/net/cnxk/cnxk_ethdev.c|   2 +
 drivers/net/cnxk/cnxk_ethdev.h|   4 ++
 drivers/net/cnxk/cnxk_ethdev_ops.c| 127 ++
 6 files changed, 136 insertions(+)

diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index b41af2d..298f167 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -12,6 +12,7 @@ Link status  = Y
 Link status event= Y
 Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
+Burst mode info  = Y
 Fast mbuf free   = Y
 Free Tx mbuf on demand = Y
 Queue start/stop = Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index 7fe8018..a673cc1 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -12,6 +12,7 @@ Link status  = Y
 Link status event= Y
 Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
+Burst mode info  = Y
 Fast mbuf free   = Y
 Free Tx mbuf on demand = Y
 Queue start/stop = Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini 
b/doc/guides/nics/features/cnxk_vf.ini
index 5cc9f3f..335d082 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -11,6 +11,7 @@ Link status  = Y
 Link status event= Y
 Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
+Burst mode info  = Y
 Fast mbuf free   = Y
 Free Tx mbuf on demand = Y
 Queue start/stop = Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 77a8c09..28fcf8c 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1066,6 +1066,8 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
.promiscuous_disable = cnxk_nix_promisc_disable,
.allmulticast_enable = cnxk_nix_allmulticast_enable,
.allmulticast_disable = cnxk_nix_allmulticast_disable,
+   .rx_burst_mode_get = cnxk_nix_rx_burst_mode_get,
+   .tx_burst_mode_get = cnxk_nix_tx_burst_mode_get,
 };
 
 static int
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 09031e9..481ede9 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -219,6 +219,10 @@ int cnxk_nix_allmulticast_enable(struct rte_eth_dev 
*eth_dev);
 int cnxk_nix_allmulticast_disable(struct rte_eth_dev *eth_dev);
 int cnxk_nix_info_get(struct rte_eth_dev *eth_dev,
  struct rte_eth_dev_info *dev_info);
+int cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+  struct rte_eth_burst_mode *mode);
+int cnxk_nix_tx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+  struct rte_eth_burst_mode *mode);
 int cnxk_nix_configure(struct rte_eth_dev *eth_dev);
 int cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
uint16_t nb_desc, uint16_t fp_tx_q_sz,
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c 
b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 61ecbab..7ae961a 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -72,6 +72,133 @@ cnxk_nix_info_get(struct rte_eth_dev *eth_dev, struct 
rte_eth_dev_info *devinfo)
 }
 
 int
+cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+  struct rte_eth_burst_mode *mode)
+{
+   ssize_t bytes = 0, str_size = RTE_ETH_BURST_MODE_INFO_SIZE, rc;
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   const struct burst_info {
+   uint64_t flags;
+   const char *output;
+   } rx_offload_map[] = {
+   {DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
+   {DEV_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+   {DEV_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+   {DEV_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+   {DEV_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
+   {DEV_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
+   {DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+   {DEV_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
+   {DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
+   {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
+   {DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
+   {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
+   {DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
+   {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+   {DEV_RX_OFFLOAD_SECURITY, " Security,"},
+   {DEV_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
+   {D

[dpdk-dev] [PATCH 30/44] net/cnxk: add flow ctrl set/get ops

2021-03-06 Thread Nithin Dabilpuram
From: Sunil Kumar Kori 

This patch implements the set and get operations for flow control.
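
Illustrative application-side sketch (hypothetical helper; RTE_FC_FULL
requests both Rx and Tx pause):

  #include <rte_ethdev.h>

  static int
  enable_full_flow_ctrl(uint16_t port_id)
  {
          struct rte_eth_fc_conf fc_conf = { 0 };
          int rc;

          /* Read back the current configuration first. */
          rc = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
          if (rc)
                  return rc;

          fc_conf.mode = RTE_FC_FULL;
          return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
  }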

Signed-off-by: Sunil Kumar Kori 
---
 doc/guides/nics/cnxk.rst  |  1 +
 doc/guides/nics/features/cnxk.ini |  1 +
 doc/guides/nics/features/cnxk_vec.ini |  1 +
 drivers/net/cnxk/cnxk_ethdev.c| 74 +++
 drivers/net/cnxk/cnxk_ethdev.h| 13 +
 drivers/net/cnxk/cnxk_ethdev_ops.c| 95 +++
 6 files changed, 185 insertions(+)

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index ce33f17..96b2c5d 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -26,6 +26,7 @@ Features of the CNXK Ethdev PMD are:
 - MAC filtering
 - Inner and Outer Checksum offload
 - Link state information
+- Link flow control
 - MTU update
 - Scatter-Gather IO support
 - Vector Poll mode driver
diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index 298f167..afd0f01 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -23,6 +23,7 @@ Allmulticast mode= Y
 Unicast MAC filter   = Y
 RSS hash = Y
 Inner RSS= Y
+Flow control = Y
 Jumbo frame  = Y
 Scattered Rx = Y
 L3 checksum offload  = Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index a673cc1..4bd11ce 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -22,6 +22,7 @@ Allmulticast mode= Y
 Unicast MAC filter   = Y
 RSS hash = Y
 Inner RSS= Y
+Flow control = Y
 Jumbo frame  = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 28fcf8c..0ffc45b 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -81,6 +81,55 @@ nix_recalc_mtu(struct rte_eth_dev *eth_dev)
return rc;
 }
 
+static int
+nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct cnxk_fc_cfg *fc = &dev->fc_cfg;
+   struct rte_eth_fc_conf fc_conf = {0};
+   int rc;
+
+   /* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+* by AF driver, update those info in PMD structure.
+*/
+   rc = cnxk_nix_flow_ctrl_get(eth_dev, &fc_conf);
+   if (rc)
+   goto exit;
+
+   fc->mode = fc_conf.mode;
+   fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
+   (fc_conf.mode == RTE_FC_RX_PAUSE);
+   fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
+   (fc_conf.mode == RTE_FC_TX_PAUSE);
+
+exit:
+   return rc;
+}
+
+static int
+nix_update_flow_ctrl_config(struct rte_eth_dev *eth_dev)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct cnxk_fc_cfg *fc = &dev->fc_cfg;
+   struct rte_eth_fc_conf fc_cfg = {0};
+
+   if (roc_nix_is_vf_or_sdp(&dev->nix))
+   return 0;
+
+   fc_cfg.mode = fc->mode;
+
+   /* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
+   if (roc_model_is_cn96_Ax() &&
+   (fc_cfg.mode == RTE_FC_FULL || fc_cfg.mode == RTE_FC_RX_PAUSE)) {
+   fc_cfg.mode =
+   (fc_cfg.mode == RTE_FC_FULL ||
+   fc_cfg.mode == RTE_FC_TX_PAUSE) ?
+   RTE_FC_TX_PAUSE : RTE_FC_NONE;
+   }
+
+   return cnxk_nix_flow_ctrl_set(eth_dev, &fc_cfg);
+}
+
 uint64_t
 cnxk_nix_rxq_mbuf_setup(struct cnxk_eth_dev *dev)
 {
@@ -640,6 +689,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
struct rte_eth_rxmode *rxmode = &conf->rxmode;
struct rte_eth_txmode *txmode = &conf->txmode;
char ea_fmt[RTE_ETHER_ADDR_FMT_SIZE];
+   struct roc_nix_fc_cfg fc_cfg = {0};
struct roc_nix *nix = &dev->nix;
struct rte_ether_addr *ea;
uint8_t nb_rxq, nb_txq;
@@ -820,6 +870,21 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
goto cq_fini;
}
 
+   /* Init flow control configuration */
+   fc_cfg.cq_cfg_valid = false;
+   fc_cfg.rxchan_cfg.enable = true;
+   rc = roc_nix_fc_config_set(nix, &fc_cfg);
+   if (rc) {
+   plt_err("Failed to initialize flow control rc=%d", rc);
+   goto cq_fini;
+   }
+
+   /* Update flow control configuration to PMD */
+   rc = nix_init_flow_ctrl_config(eth_dev);
+   if (rc) {
+   plt_err("Failed to initialize flow control rc=%d", rc);
+   goto cq_fini;
+   }
/*
 * Restore queue config when reconfigure followed by
 * reconfigure and no queue configure invoked from application case.
@@ -1019,6 +1084,13 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
return rc;
}
 
+   /* Update Flow contro

[dpdk-dev] [PATCH 31/44] net/cnxk: add link up/down operations

2021-03-06 Thread Nithin Dabilpuram
From: Sunil Kumar Kori 

This patch implements the link up/down ethdev operations for the
cn9k and cn10k platforms.
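
Illustrative usage sketch (hypothetical helper; note these ops return
-ENOTSUP for VF/SDP ports):

  #include <rte_ethdev.h>

  static int
  bounce_link(uint16_t port_id)
  {
          int rc;

          rc = rte_eth_dev_set_link_down(port_id);
          if (rc)
                  return rc;
          return rte_eth_dev_set_link_up(port_id);
  }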

Signed-off-by: Sunil Kumar Kori 
---
 drivers/net/cnxk/cnxk_ethdev.c |  4 +++-
 drivers/net/cnxk/cnxk_ethdev.h |  4 
 drivers/net/cnxk/cnxk_ethdev_ops.c | 47 ++
 3 files changed, 54 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 0ffc45b..cb7404f 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -928,7 +928,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
return rc;
 }
 
-static int
+int
 cnxk_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qid)
 {
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
@@ -1142,6 +1142,8 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
.tx_burst_mode_get = cnxk_nix_tx_burst_mode_get,
.flow_ctrl_get = cnxk_nix_flow_ctrl_get,
.flow_ctrl_set = cnxk_nix_flow_ctrl_set,
+   .dev_set_link_up = cnxk_nix_set_link_up,
+   .dev_set_link_down = cnxk_nix_set_link_down,
 };
 
 static int
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 77139d0..6500433 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -236,6 +236,9 @@ int cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
   struct rte_eth_fc_conf *fc_conf);
 int cnxk_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
   struct rte_eth_fc_conf *fc_conf);
+int cnxk_nix_set_link_up(struct rte_eth_dev *eth_dev);
+int cnxk_nix_set_link_down(struct rte_eth_dev *eth_dev);
+
 int cnxk_nix_configure(struct rte_eth_dev *eth_dev);
 int cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
uint16_t nb_desc, uint16_t fp_tx_q_sz,
@@ -244,6 +247,7 @@ int cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, 
uint16_t qid,
uint16_t nb_desc, uint16_t fp_rx_q_sz,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mp);
+int cnxk_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qid);
 int cnxk_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qid);
 int cnxk_nix_dev_start(struct rte_eth_dev *eth_dev);
 
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c 
b/drivers/net/cnxk/cnxk_ethdev_ops.c
index eac50a2..37ba211 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -506,3 +506,50 @@ cnxk_nix_allmulticast_disable(struct rte_eth_dev *eth_dev)
return roc_nix_npc_mcast_config(&dev->nix, false,
eth_dev->data->promiscuous);
 }
+
+int
+cnxk_nix_set_link_up(struct rte_eth_dev *eth_dev)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct roc_nix *nix = &dev->nix;
+   int rc, i;
+
+   if (roc_nix_is_vf_or_sdp(nix))
+   return -ENOTSUP;
+
+   rc = roc_nix_mac_link_state_set(nix, true);
+   if (rc)
+   goto exit;
+
+   /* Start tx queues  */
+   for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+   rc = cnxk_nix_tx_queue_start(eth_dev, i);
+   if (rc)
+   goto exit;
+   }
+
+exit:
+   return rc;
+}
+
+int
+cnxk_nix_set_link_down(struct rte_eth_dev *eth_dev)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct roc_nix *nix = &dev->nix;
+   int rc, i;
+
+   if (roc_nix_is_vf_or_sdp(nix))
+   return -ENOTSUP;
+
+   /* Stop tx queues  */
+   for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+   rc = cnxk_nix_tx_queue_stop(eth_dev, i);
+   if (rc)
+   goto exit;
+   }
+
+   rc = roc_nix_mac_link_state_set(nix, false);
+exit:
+   return rc;
+}
-- 
2.8.4



[dpdk-dev] [PATCH 32/44] net/cnxk: add EEPROM module info get operations

2021-03-06 Thread Nithin Dabilpuram
From: Sunil Kumar Kori 

This patch implements the EEPROM module info get ethdev operations
for the cn9k and cn10k platforms.
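
Illustrative usage sketch (hypothetical helper), mirroring the two-step
query an application performs:

  #include <errno.h>
  #include <stdlib.h>
  #include <rte_ethdev.h>

  static int
  dump_module_eeprom(uint16_t port_id)
  {
          struct rte_eth_dev_module_info modinfo;
          struct rte_dev_eeprom_info info = { 0 };
          int rc;

          rc = rte_eth_dev_get_module_info(port_id, &modinfo);
          if (rc)
                  return rc;

          info.length = modinfo.eeprom_len;
          info.data = calloc(1, info.length);
          if (info.data == NULL)
                  return -ENOMEM;

          rc = rte_eth_dev_get_module_eeprom(port_id, &info);
          /* ... parse the SFF data in info.data here ... */
          free(info.data);
          return rc;
  }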

Signed-off-by: Sunil Kumar Kori 
---
 doc/guides/nics/features/cnxk.ini |  1 +
 doc/guides/nics/features/cnxk_vec.ini |  1 +
 doc/guides/nics/features/cnxk_vf.ini  |  1 +
 drivers/net/cnxk/cnxk_ethdev.c|  2 ++
 drivers/net/cnxk/cnxk_ethdev.h|  4 
 drivers/net/cnxk/cnxk_ethdev_ops.c| 39 +++
 6 files changed, 48 insertions(+)

diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index afd0f01..b1e8641 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -31,6 +31,7 @@ L4 checksum offload  = Y
 Inner L3 checksum= Y
 Inner L4 checksum= Y
 Packet type parsing  = Y
+Module EEPROM dump   = Y
 Linux= Y
 ARMv8= Y
 Usage doc= Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index 4bd11ce..0f99634 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -29,6 +29,7 @@ L4 checksum offload  = Y
 Inner L3 checksum= Y
 Inner L4 checksum= Y
 Packet type parsing  = Y
+Module EEPROM dump   = Y
 Linux= Y
 ARMv8= Y
 Usage doc= Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini 
b/doc/guides/nics/features/cnxk_vf.ini
index 335d082..cecced9 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -26,6 +26,7 @@ L4 checksum offload  = Y
 Inner L3 checksum= Y
 Inner L4 checksum= Y
 Packet type parsing  = Y
+Module EEPROM dump   = Y
 Linux= Y
 ARMv8= Y
 Usage doc= Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index cb7404f..97d8e6d 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1144,6 +1144,8 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
.flow_ctrl_set = cnxk_nix_flow_ctrl_set,
.dev_set_link_up = cnxk_nix_set_link_up,
.dev_set_link_down = cnxk_nix_set_link_down,
+   .get_module_info = cnxk_nix_get_module_info,
+   .get_module_eeprom = cnxk_nix_get_module_eeprom,
 };
 
 static int
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 6500433..c4a562b 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -238,6 +238,10 @@ int cnxk_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
   struct rte_eth_fc_conf *fc_conf);
 int cnxk_nix_set_link_up(struct rte_eth_dev *eth_dev);
 int cnxk_nix_set_link_down(struct rte_eth_dev *eth_dev);
+int cnxk_nix_get_module_info(struct rte_eth_dev *eth_dev,
+struct rte_eth_dev_module_info *modinfo);
+int cnxk_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
+  struct rte_dev_eeprom_info *info);
 
 int cnxk_nix_configure(struct rte_eth_dev *eth_dev);
 int cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c 
b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 37ba211..a1a963a 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -553,3 +553,42 @@ cnxk_nix_set_link_down(struct rte_eth_dev *eth_dev)
 exit:
return rc;
 }
+
+int
+cnxk_nix_get_module_info(struct rte_eth_dev *eth_dev,
+struct rte_eth_dev_module_info *modinfo)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct roc_nix_eeprom_info eeprom_info = {0};
+   struct roc_nix *nix = &dev->nix;
+   int rc;
+
+   rc = roc_nix_eeprom_info_get(nix, &eeprom_info);
+   if (rc)
+   return rc;
+
+   modinfo->type = eeprom_info.sff_id;
+   modinfo->eeprom_len = ROC_NIX_EEPROM_SIZE;
+   return 0;
+}
+
+int
+cnxk_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
+  struct rte_dev_eeprom_info *info)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct roc_nix_eeprom_info eeprom_info = {0};
+   struct roc_nix *nix = &dev->nix;
+   int rc = -EINVAL;
+
+   if (!info->data || !info->length ||
+   (info->offset + info->length > ROC_NIX_EEPROM_SIZE))
+   return rc;
+
+   rc = roc_nix_eeprom_info_get(nix, &eeprom_info);
+   if (rc)
+   return rc;
+
+   rte_memcpy(info->data, eeprom_info.buf + info->offset, info->length);
+   return 0;
+}
-- 
2.8.4



[dpdk-dev] [PATCH 33/44] net/cnxk: add Rx queue interrupt enable/disable ops

2021-03-06 Thread Nithin Dabilpuram
From: Sunil Kumar Kori 

An application may choose to enable/disable interrupts on Rx queues
so that it can sleep or defer its processing when no packets arrive
on the queues for a longer period.
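
Illustrative sketch of the intended usage (hypothetical helper; the actual
wait would typically block on the queue's epoll/event fd):

  #include <rte_ethdev.h>

  static void
  rx_intr_wait(uint16_t port_id, uint16_t queue_id)
  {
          /* Arm the interrupt before blocking. */
          rte_eth_dev_rx_intr_enable(port_id, queue_id);
          /* ... wait for the Rx interrupt event here ... */
          /* Disarm it before resuming polling. */
          rte_eth_dev_rx_intr_disable(port_id, queue_id);
  }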

Signed-off-by: Sunil Kumar Kori 
---
 doc/guides/nics/cnxk.rst  |  1 +
 doc/guides/nics/features/cnxk.ini |  1 +
 doc/guides/nics/features/cnxk_vec.ini |  1 +
 doc/guides/nics/features/cnxk_vf.ini  |  1 +
 drivers/net/cnxk/cnxk_ethdev.c|  2 ++
 drivers/net/cnxk/cnxk_ethdev.h|  4 
 drivers/net/cnxk/cnxk_ethdev_ops.c| 19 +++
 7 files changed, 29 insertions(+)

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 96b2c5d..6a001d9 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -30,6 +30,7 @@ Features of the CNXK Ethdev PMD are:
 - MTU update
 - Scatter-Gather IO support
 - Vector Poll mode driver
+- Support Rx interrupt
 
 Prerequisites
 -
diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index b1e8641..e5669f5 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -5,6 +5,7 @@
 ;
 [Features]
 Speed capabilities   = Y
+Rx interrupt = Y
 Lock-free Tx queue   = Y
 SR-IOV   = Y
 Multiprocess aware   = Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index 0f99634..dff0c9b 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -5,6 +5,7 @@
 ;
 [Features]
 Speed capabilities   = Y
+Rx interrupt = Y
 Lock-free Tx queue   = Y
 SR-IOV   = Y
 Multiprocess aware   = Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini 
b/doc/guides/nics/features/cnxk_vf.ini
index cecced9..b950d2f 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -5,6 +5,7 @@
 ;
 [Features]
 Speed capabilities   = Y
+Rx interrupt = Y
 Lock-free Tx queue   = Y
 Multiprocess aware   = Y
 Link status  = Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 97d8e6d..bfcc456 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1146,6 +1146,8 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
.dev_set_link_down = cnxk_nix_set_link_down,
.get_module_info = cnxk_nix_get_module_info,
.get_module_eeprom = cnxk_nix_get_module_eeprom,
+   .rx_queue_intr_enable = cnxk_nix_rx_queue_intr_enable,
+   .rx_queue_intr_disable = cnxk_nix_rx_queue_intr_disable,
 };
 
 static int
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index c4a562b..76e1049 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -242,6 +242,10 @@ int cnxk_nix_get_module_info(struct rte_eth_dev *eth_dev,
 struct rte_eth_dev_module_info *modinfo);
 int cnxk_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
   struct rte_dev_eeprom_info *info);
+int cnxk_nix_rx_queue_intr_enable(struct rte_eth_dev *eth_dev,
+ uint16_t rx_queue_id);
+int cnxk_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
+  uint16_t rx_queue_id);
 
 int cnxk_nix_configure(struct rte_eth_dev *eth_dev);
 int cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c 
b/drivers/net/cnxk/cnxk_ethdev_ops.c
index a1a963a..34d4a42 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -592,3 +592,22 @@ cnxk_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
rte_memcpy(info->data, eeprom_info.buf + info->offset, info->length);
return 0;
 }
+
+int
+cnxk_nix_rx_queue_intr_enable(struct rte_eth_dev *eth_dev, uint16_t 
rx_queue_id)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+
+   roc_nix_rx_queue_intr_enable(&dev->nix, rx_queue_id);
+   return 0;
+}
+
+int
+cnxk_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
+  uint16_t rx_queue_id)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+
+   roc_nix_rx_queue_intr_disable(&dev->nix, rx_queue_id);
+   return 0;
+}
-- 
2.8.4



[dpdk-dev] [PATCH 34/44] net/cnxk: add validation API for mempool ops

2021-03-06 Thread Nithin Dabilpuram
From: Sunil Kumar Kori 

cn9k and cn10k support platform-specific mempool ops. This patch
implements the API to validate whether the given mempool ops are
supported by the driver.
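
Illustrative usage sketch (hypothetical helper): the cnxk op returns 0 for
the platform mempool ops and -ENOTSUP for anything else.

  #include <rte_ethdev.h>
  #include <rte_mbuf_pool_ops.h>

  static int
  check_pool_ops(uint16_t port_id)
  {
          return rte_eth_dev_pool_ops_supported(port_id,
                                                rte_mbuf_platform_mempool_ops());
  }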

Signed-off-by: Sunil Kumar Kori 
---
 drivers/net/cnxk/cnxk_ethdev.c |  1 +
 drivers/net/cnxk/cnxk_ethdev.h |  1 +
 drivers/net/cnxk/cnxk_ethdev_ops.c | 11 +++
 3 files changed, 13 insertions(+)

diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index bfcc456..8a76486 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1148,6 +1148,7 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
.get_module_eeprom = cnxk_nix_get_module_eeprom,
.rx_queue_intr_enable = cnxk_nix_rx_queue_intr_enable,
.rx_queue_intr_disable = cnxk_nix_rx_queue_intr_disable,
+   .pool_ops_supported = cnxk_nix_pool_ops_supported,
 };
 
 static int
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 76e1049..0b501f6 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -246,6 +246,7 @@ int cnxk_nix_rx_queue_intr_enable(struct rte_eth_dev 
*eth_dev,
  uint16_t rx_queue_id);
 int cnxk_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
   uint16_t rx_queue_id);
+int cnxk_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool);
 
 int cnxk_nix_configure(struct rte_eth_dev *eth_dev);
 int cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c 
b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 34d4a42..5b8bc53 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -611,3 +611,14 @@ cnxk_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
roc_nix_rx_queue_intr_disable(&dev->nix, rx_queue_id);
return 0;
 }
+
+int
+cnxk_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
+{
+   RTE_SET_USED(eth_dev);
+
+   if (!strcmp(pool, rte_mbuf_platform_mempool_ops()))
+   return 0;
+
+   return -ENOTSUP;
+}
-- 
2.8.4



[dpdk-dev] [PATCH 36/44] net/cnxk: add xstats apis

2021-03-06 Thread Nithin Dabilpuram
From: Satha Rao 

Initial implementation of xstats operations.
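
Illustrative application-side sketch (hypothetical helper) that sizes,
fetches and prints the extended stats:

  #include <inttypes.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <rte_ethdev.h>

  static void
  show_xstats(uint16_t port_id)
  {
          struct rte_eth_xstat *xstats;
          struct rte_eth_xstat_name *names;
          int i, n;

          n = rte_eth_xstats_get(port_id, NULL, 0);   /* query the count */
          if (n <= 0)
                  return;

          xstats = calloc(n, sizeof(*xstats));
          names = calloc(n, sizeof(*names));
          if (xstats != NULL && names != NULL &&
              rte_eth_xstats_get(port_id, xstats, n) == n &&
              rte_eth_xstats_get_names(port_id, names, n) == n) {
                  for (i = 0; i < n; i++)
                          printf("%s: %" PRIu64 "\n", names[i].name,
                                 xstats[i].value);
          }
          free(xstats);
          free(names);
  }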

Signed-off-by: Satha Rao 
---
 doc/guides/nics/features/cnxk.ini |   1 +
 doc/guides/nics/features/cnxk_vec.ini |   1 +
 doc/guides/nics/features/cnxk_vf.ini  |   1 +
 drivers/net/cnxk/cnxk_ethdev.c|   5 ++
 drivers/net/cnxk/cnxk_ethdev.h|  11 +++
 drivers/net/cnxk/cnxk_stats.c | 132 ++
 6 files changed, 151 insertions(+)

diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index 40952a9..192c15a 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -34,6 +34,7 @@ Inner L4 checksum= Y
 Packet type parsing  = Y
 Basic stats  = Y
 Stats per queue  = Y
+Extended stats   = Y
 Module EEPROM dump   = Y
 Linux= Y
 ARMv8= Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index 32035bb..e990480 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -32,6 +32,7 @@ Inner L4 checksum= Y
 Packet type parsing  = Y
 Basic stats  = Y
 Stats per queue  = Y
+Extended stats   = Y
 Module EEPROM dump   = Y
 Linux= Y
 ARMv8= Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini 
b/doc/guides/nics/features/cnxk_vf.ini
index 8060a68..3a4417c 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -29,6 +29,7 @@ Inner L4 checksum= Y
 Packet type parsing  = Y
 Basic stats  = Y
 Stats per queue  = Y
+Extended stats   = Y
 Module EEPROM dump   = Y
 Linux= Y
 ARMv8= Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index a798b14..a145aaa 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1152,6 +1152,11 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
.queue_stats_mapping_set = cnxk_nix_queue_stats_mapping,
.stats_get = cnxk_nix_stats_get,
.stats_reset = cnxk_nix_stats_reset,
+   .xstats_get = cnxk_nix_xstats_get,
+   .xstats_get_names = cnxk_nix_xstats_get_names,
+   .xstats_reset = cnxk_nix_xstats_reset,
+   .xstats_get_by_id = cnxk_nix_xstats_get_by_id,
+   .xstats_get_names_by_id = cnxk_nix_xstats_get_names_by_id,
 };
 
 static int
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 5075e7c..dd05c24 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -279,6 +279,17 @@ int cnxk_nix_queue_stats_mapping(struct rte_eth_dev *dev, 
uint16_t queue_id,
 uint8_t stat_idx, uint8_t is_rx);
 int cnxk_nix_stats_reset(struct rte_eth_dev *dev);
 int cnxk_nix_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats);
+int cnxk_nix_xstats_get(struct rte_eth_dev *eth_dev,
+   struct rte_eth_xstat *xstats, unsigned int n);
+int cnxk_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int limit);
+int cnxk_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
+   struct rte_eth_xstat_name *xstats_names,
+   const uint64_t *ids, unsigned int limit);
+int cnxk_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev, const uint64_t *ids,
+ uint64_t *values, unsigned int n);
+int cnxk_nix_xstats_reset(struct rte_eth_dev *eth_dev);
 
 /* Lookup configuration */
 const uint32_t *cnxk_nix_supported_ptypes_get(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/cnxk/cnxk_stats.c b/drivers/net/cnxk/cnxk_stats.c
index 24bff0b..ce9f9f4 100644
--- a/drivers/net/cnxk/cnxk_stats.c
+++ b/drivers/net/cnxk/cnxk_stats.c
@@ -83,3 +83,135 @@ cnxk_nix_queue_stats_mapping(struct rte_eth_dev *eth_dev, 
uint16_t queue_id,
 
return 0;
 }
+
+int
+cnxk_nix_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *xstats,
+   unsigned int n)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct roc_nix_xstat roc_xstats[n];
+   int size, i;
+
+   size = roc_nix_xstats_get(&dev->nix, roc_xstats, n);
+
+   /* If requested array do not have space then return with count */
+   if (size < 0 || size > (int)n)
+   return size;
+
+   for (i = 0; i < size; i++) {
+   xstats[i].id = roc_xstats[i].id;
+   xstats[i].value = roc_xstats[i].value;
+   }
+
+   return size;
+}
+
+int
+cnxk_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int limit)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct roc_nix_xstat_name roc_xstats_name[limit];
+   struct roc_nix *nix = &

[dpdk-dev] [PATCH 35/44] net/cnxk: add port/queue stats

2021-03-06 Thread Nithin Dabilpuram
From: Satha Rao 

This patch implements the regular port statistics and the queue
stats mapping set API used to retrieve per-queue statistics.
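
Illustrative usage sketch (hypothetical helper; the queue-to-counter
mapping calls are optional):

  #include <inttypes.h>
  #include <stdio.h>
  #include <rte_ethdev.h>

  static void
  show_basic_stats(uint16_t port_id)
  {
          struct rte_eth_stats stats;

          /* Map Rx/Tx queue 0 onto per-queue stat counter index 0. */
          rte_eth_dev_set_rx_queue_stats_mapping(port_id, 0, 0);
          rte_eth_dev_set_tx_queue_stats_mapping(port_id, 0, 0);

          if (rte_eth_stats_get(port_id, &stats) == 0)
                  printf("ipackets=%" PRIu64 " opackets=%" PRIu64 "\n",
                         stats.ipackets, stats.opackets);
  }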

Signed-off-by: Satha Rao 
---
 doc/guides/nics/cnxk.rst  |  1 +
 doc/guides/nics/features/cnxk.ini |  2 +
 doc/guides/nics/features/cnxk_vec.ini |  2 +
 doc/guides/nics/features/cnxk_vf.ini  |  2 +
 drivers/net/cnxk/cnxk_ethdev.c|  3 ++
 drivers/net/cnxk/cnxk_ethdev.h|  8 
 drivers/net/cnxk/cnxk_stats.c | 85 +++
 drivers/net/cnxk/meson.build  |  3 +-
 8 files changed, 105 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/cnxk/cnxk_stats.c

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 6a001d9..c2a6fbb 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -25,6 +25,7 @@ Features of the CNXK Ethdev PMD are:
 - Receiver Side Scaling (RSS)
 - MAC filtering
 - Inner and Outer Checksum offload
+- Port hardware statistics
 - Link state information
 - Link flow control
 - MTU update
diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index e5669f5..40952a9 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -32,6 +32,8 @@ L4 checksum offload  = Y
 Inner L3 checksum= Y
 Inner L4 checksum= Y
 Packet type parsing  = Y
+Basic stats  = Y
+Stats per queue  = Y
 Module EEPROM dump   = Y
 Linux= Y
 ARMv8= Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index dff0c9b..32035bb 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -30,6 +30,8 @@ L4 checksum offload  = Y
 Inner L3 checksum= Y
 Inner L4 checksum= Y
 Packet type parsing  = Y
+Basic stats  = Y
+Stats per queue  = Y
 Module EEPROM dump   = Y
 Linux= Y
 ARMv8= Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini 
b/doc/guides/nics/features/cnxk_vf.ini
index b950d2f..8060a68 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -27,6 +27,8 @@ L4 checksum offload  = Y
 Inner L3 checksum= Y
 Inner L4 checksum= Y
 Packet type parsing  = Y
+Basic stats  = Y
+Stats per queue  = Y
 Module EEPROM dump   = Y
 Linux= Y
 ARMv8= Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 8a76486..a798b14 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1149,6 +1149,9 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
.rx_queue_intr_enable = cnxk_nix_rx_queue_intr_enable,
.rx_queue_intr_disable = cnxk_nix_rx_queue_intr_disable,
.pool_ops_supported = cnxk_nix_pool_ops_supported,
+   .queue_stats_mapping_set = cnxk_nix_queue_stats_mapping,
+   .stats_get = cnxk_nix_stats_get,
+   .stats_reset = cnxk_nix_stats_reset,
 };
 
 static int
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 0b501f6..5075e7c 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -188,6 +188,10 @@ struct cnxk_eth_dev {
 
/* Default mac address */
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+
+   /* Per queue statistics counters */
+   uint32_t txq_stat_map[RTE_ETHDEV_QUEUE_STAT_CNTRS];
+   uint32_t rxq_stat_map[RTE_ETHDEV_QUEUE_STAT_CNTRS];
 };
 
 struct cnxk_eth_rxq_sp {
@@ -271,6 +275,10 @@ void cnxk_nix_toggle_flag_link_cfg(struct cnxk_eth_dev 
*dev, bool set);
 void cnxk_eth_dev_link_status_cb(struct roc_nix *nix,
 struct roc_nix_link_info *link);
 int cnxk_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete);
+int cnxk_nix_queue_stats_mapping(struct rte_eth_dev *dev, uint16_t queue_id,
+uint8_t stat_idx, uint8_t is_rx);
+int cnxk_nix_stats_reset(struct rte_eth_dev *dev);
+int cnxk_nix_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats);
 
 /* Lookup configuration */
 const uint32_t *cnxk_nix_supported_ptypes_get(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/cnxk/cnxk_stats.c b/drivers/net/cnxk/cnxk_stats.c
new file mode 100644
index 000..24bff0b
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_stats.c
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_ethdev.h"
+
+int
+cnxk_nix_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct roc_nix *nix = &dev->nix;
+   struct roc_nix_stats nix_stats;
+   int rc = 0, i;
+
+   rc = roc_nix_stats_get(nix, &nix_stats);
+   if (rc)
+   goto exit;
+
+   stats->opackets = nix_stats.tx_ucast;
+   stats->opackets += nix_stats.tx_mcast;
+   stats->opackets += nix_stats.tx_bcast;
+   stats->oerrors = nix_stats.

[dpdk-dev] [PATCH 37/44] net/cnxk: add rxq/txq info get operations

2021-03-06 Thread Nithin Dabilpuram
From: Satha Rao 

Initial APIs to get default Rx/Tx queue information.
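
Illustrative usage sketch (hypothetical helper; assumes the queues exist):

  #include <stdio.h>
  #include <rte_ethdev.h>

  static void
  show_queue_info(uint16_t port_id, uint16_t queue_id)
  {
          struct rte_eth_rxq_info rxq;
          struct rte_eth_txq_info txq;

          if (rte_eth_rx_queue_info_get(port_id, queue_id, &rxq) == 0)
                  printf("rxq: nb_desc=%u mempool=%s\n", rxq.nb_desc,
                         rxq.mp->name);
          if (rte_eth_tx_queue_info_get(port_id, queue_id, &txq) == 0)
                  printf("txq: nb_desc=%u\n", txq.nb_desc);
  }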

Signed-off-by: Satha Rao 
---
 drivers/net/cnxk/cnxk_ethdev.c |  2 ++
 drivers/net/cnxk/cnxk_ethdev.h |  4 
 drivers/net/cnxk/cnxk_ethdev_ops.c | 30 ++
 3 files changed, 36 insertions(+)

diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index a145aaa..24c51b4 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1157,6 +1157,8 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
.xstats_reset = cnxk_nix_xstats_reset,
.xstats_get_by_id = cnxk_nix_xstats_get_by_id,
.xstats_get_names_by_id = cnxk_nix_xstats_get_names_by_id,
+   .rxq_info_get = cnxk_nix_rxq_info_get,
+   .txq_info_get = cnxk_nix_txq_info_get,
 };
 
 static int
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index dd05c24..eeb6a53 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -290,6 +290,10 @@ int cnxk_nix_xstats_get_names_by_id(struct rte_eth_dev 
*eth_dev,
 int cnxk_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev, const uint64_t *ids,
  uint64_t *values, unsigned int n);
 int cnxk_nix_xstats_reset(struct rte_eth_dev *eth_dev);
+void cnxk_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t qid,
+  struct rte_eth_rxq_info *qinfo);
+void cnxk_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t qid,
+  struct rte_eth_txq_info *qinfo);
 
 /* Lookup configuration */
 const uint32_t *cnxk_nix_supported_ptypes_get(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c 
b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 5b8bc53..0bcba0c 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -622,3 +622,33 @@ cnxk_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, 
const char *pool)
 
return -ENOTSUP;
 }
+
+void
+cnxk_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t qid,
+ struct rte_eth_rxq_info *qinfo)
+{
+   struct cnxk_eth_rxq_sp *rxq_sp =
+   ((struct cnxk_eth_rxq_sp *)eth_dev->data->rx_queues[qid]) - 1;
+
+   memset(qinfo, 0, sizeof(*qinfo));
+
+   qinfo->mp = rxq_sp->qconf.mp;
+   qinfo->scattered_rx = eth_dev->data->scattered_rx;
+   qinfo->nb_desc = rxq_sp->qconf.nb_desc;
+
+   memcpy(&qinfo->conf, &rxq_sp->qconf.conf.rx, sizeof(qinfo->conf));
+}
+
+void
+cnxk_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t qid,
+ struct rte_eth_txq_info *qinfo)
+{
+   struct cnxk_eth_txq_sp *txq_sp =
+   ((struct cnxk_eth_txq_sp *)eth_dev->data->tx_queues[qid]) - 1;
+
+   memset(qinfo, 0, sizeof(*qinfo));
+
+   qinfo->nb_desc = txq_sp->qconf.nb_desc;
+
+   memcpy(&qinfo->conf, &txq_sp->qconf.conf.tx, sizeof(qinfo->conf));
+}
-- 
2.8.4



[dpdk-dev] [PATCH 38/44] net/cnxk: add device close and reset operations

2021-03-06 Thread Nithin Dabilpuram
From: Sunil Kumar Kori 

Patch implements device close and reset operations for cn9k
and cn10k platforms.
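
Illustrative usage sketch (hypothetical helper): dev_reset tears the port
down and re-runs cnxk_eth_dev_init(), after which the application must
reconfigure and restart the port.

  #include <rte_ethdev.h>

  static int
  recover_port(uint16_t port_id)
  {
          int rc;

          rc = rte_eth_dev_reset(port_id);
          if (rc)
                  return rc;
          /* ... rte_eth_dev_configure(), queue setup, rte_eth_dev_start() ... */
          return 0;
  }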

Signed-off-by: Sunil Kumar Kori 
---
 drivers/net/cnxk/cnxk_ethdev.c | 35 ---
 1 file changed, 28 insertions(+), 7 deletions(-)

diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 24c51b4..86dabad 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1119,6 +1119,9 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
return rc;
 }
 
+static int cnxk_nix_dev_reset(struct rte_eth_dev *eth_dev);
+static int cnxk_nix_dev_close(struct rte_eth_dev *eth_dev);
+
 /* CNXK platform independent eth dev ops */
 struct eth_dev_ops cnxk_eth_dev_ops = {
.mtu_set = cnxk_nix_mtu_set,
@@ -1130,6 +1133,8 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
.tx_queue_release = cnxk_nix_tx_queue_release,
.rx_queue_release = cnxk_nix_rx_queue_release,
.dev_stop = cnxk_nix_dev_stop,
+   .dev_close = cnxk_nix_dev_close,
+   .dev_reset = cnxk_nix_dev_reset,
.tx_queue_start = cnxk_nix_tx_queue_start,
.rx_queue_start = cnxk_nix_rx_queue_start,
.rx_queue_stop = cnxk_nix_rx_queue_stop,
@@ -1270,7 +1275,7 @@ cnxk_eth_dev_init(struct rte_eth_dev *eth_dev)
 }
 
 static int
-cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
+cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool reset)
 {
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
const struct eth_dev_ops *dev_ops = eth_dev->dev_ops;
@@ -1324,14 +1329,11 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool 
mbox_close)
rte_free(eth_dev->data->mac_addrs);
eth_dev->data->mac_addrs = NULL;
 
-   /* Check if mbox close is needed */
-   if (!mbox_close)
-   return 0;
-
rc = roc_nix_dev_fini(nix);
/* Can be freed later by PMD if NPA LF is in use */
if (rc == -EAGAIN) {
-   eth_dev->data->dev_private = NULL;
+   if (!reset)
+   eth_dev->data->dev_private = NULL;
return 0;
} else if (rc) {
plt_err("Failed in nix dev fini, rc=%d", rc);
@@ -1340,6 +1342,25 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool 
mbox_close)
return rc;
 }
 
+static int
+cnxk_nix_dev_close(struct rte_eth_dev *eth_dev)
+{
+   cnxk_eth_dev_uninit(eth_dev, false);
+   return 0;
+}
+
+static int
+cnxk_nix_dev_reset(struct rte_eth_dev *eth_dev)
+{
+   int rc;
+
+   rc = cnxk_eth_dev_uninit(eth_dev, true);
+   if (rc)
+   return rc;
+
+   return cnxk_eth_dev_init(eth_dev);
+}
+
 int
 cnxk_nix_remove(struct rte_pci_device *pci_dev)
 {
@@ -1350,7 +1371,7 @@ cnxk_nix_remove(struct rte_pci_device *pci_dev)
eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
if (eth_dev) {
/* Cleanup eth dev */
-   rc = cnxk_eth_dev_uninit(eth_dev, true);
+   rc = cnxk_eth_dev_uninit(eth_dev, false);
if (rc)
return rc;
 
-- 
2.8.4



[dpdk-dev] [PATCH 39/44] net/cnxk: add pending Tx mbuf cleanup operation

2021-03-06 Thread Nithin Dabilpuram
From: Sunil Kumar Kori 

Once mbufs are transmitted, they are freed by hardware, so no mbufs
accumulate as pending. Hence this operation is a NOP on the cnxk
platform.
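
Illustrative usage sketch (hypothetical helper): on cnxk the call returns
immediately since the hardware frees mbufs at transmit time.

  #include <rte_ethdev.h>

  static int
  flush_tx_completions(uint16_t port_id, uint16_t queue_id)
  {
          return rte_eth_tx_done_cleanup(port_id, queue_id, 0);
  }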

Signed-off-by: Sunil Kumar Kori 
---
 drivers/net/cnxk/cnxk_ethdev.c |  1 +
 drivers/net/cnxk/cnxk_ethdev.h |  1 +
 drivers/net/cnxk/cnxk_ethdev_ops.c | 10 ++
 3 files changed, 12 insertions(+)

diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 86dabad..5a2f90b 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1164,6 +1164,7 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
.xstats_get_names_by_id = cnxk_nix_xstats_get_names_by_id,
.rxq_info_get = cnxk_nix_rxq_info_get,
.txq_info_get = cnxk_nix_txq_info_get,
+   .tx_done_cleanup = cnxk_nix_tx_done_cleanup,
 };
 
 static int
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index eeb6a53..1ca52bc 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -251,6 +251,7 @@ int cnxk_nix_rx_queue_intr_enable(struct rte_eth_dev 
*eth_dev,
 int cnxk_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
   uint16_t rx_queue_id);
 int cnxk_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool);
+int cnxk_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
 
 int cnxk_nix_configure(struct rte_eth_dev *eth_dev);
 int cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c 
b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 0bcba0c..ff8afac 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -652,3 +652,13 @@ cnxk_nix_txq_info_get(struct rte_eth_dev *eth_dev, 
uint16_t qid,
 
memcpy(&qinfo->conf, &txq_sp->qconf.conf.tx, sizeof(qinfo->conf));
 }
+
+/* It is a NOP for cnxk as HW frees the buffer on xmit */
+int
+cnxk_nix_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+   RTE_SET_USED(txq);
+   RTE_SET_USED(free_cnt);
+
+   return 0;
+}
-- 
2.8.4



[dpdk-dev] [PATCH 40/44] net/cnxk: add support to configure npc

2021-03-06 Thread Nithin Dabilpuram
From: Kiran Kumar K 

Adding support to configure NPC on device initialization. This involves
reading the MKEX and initializing the necessary data.

Signed-off-by: Kiran Kumar K 
---
 drivers/net/cnxk/cnxk_ethdev.c | 25 ++---
 drivers/net/cnxk/cnxk_ethdev.h |  3 +++
 drivers/net/cnxk/cnxk_ethdev_devargs.c |  3 +++
 3 files changed, 28 insertions(+), 3 deletions(-)

diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 5a2f90b..afe97f1 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -8,7 +8,8 @@ nix_get_rx_offload_capa(struct cnxk_eth_dev *dev)
 {
uint64_t capa = CNXK_NIX_RX_OFFLOAD_CAPA;
 
-   if (roc_nix_is_vf_or_sdp(&dev->nix))
+   if (roc_nix_is_vf_or_sdp(&dev->nix) ||
+   dev->npc.switch_header_type == ROC_PRIV_FLAGS_HIGIG)
capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
 
return capa;
@@ -120,6 +121,7 @@ nix_update_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 
/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
if (roc_model_is_cn96_Ax() &&
+   dev->npc.switch_header_type != ROC_PRIV_FLAGS_HIGIG &&
(fc_cfg.mode == RTE_FC_FULL || fc_cfg.mode == RTE_FC_RX_PAUSE)) {
fc_cfg.mode =
(fc_cfg.mode == RTE_FC_FULL ||
@@ -419,8 +421,10 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t 
ethdev_rss,
 
dev->ethdev_rss_hf = ethdev_rss;
 
-   if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+   if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+   dev->npc.switch_header_type == ROC_PRIV_FLAGS_LEN_90B) {
flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
+   }
 
if (ethdev_rss & ETH_RSS_C_VLAN)
flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
@@ -803,11 +807,18 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
roc_nix_err_intr_ena_dis(nix, true);
roc_nix_ras_intr_ena_dis(nix, true);
 
-   if (nix->rx_ptp_ena) {
+   if (nix->rx_ptp_ena &&
+   dev->npc.switch_header_type == ROC_PRIV_FLAGS_HIGIG) {
plt_err("Both PTP and switch header enabled");
goto free_nix_lf;
}
 
+   rc = roc_nix_switch_hdr_set(nix, dev->npc.switch_header_type);
+   if (rc) {
+   plt_err("Failed to enable switch type nix_lf rc=%d", rc);
+   goto free_nix_lf;
+   }
+
rc = roc_nix_lso_fmt_setup(nix);
if (rc) {
plt_err("failed to setup nix lso format fields, rc=%d", rc);
@@ -1259,6 +1270,11 @@ cnxk_eth_dev_init(struct rte_eth_dev *eth_dev)
dev->speed_capa = nix_get_speed_capa(dev);
 
/* Initialize roc npc */
+   dev->npc.roc_nix = nix;
+   rc = roc_npc_init(&dev->npc);
+   if (rc)
+   goto free_mac_addrs;
+
plt_nix_dbg("Port=%d pf=%d vf=%d ver=%s hwcap=0x%" PRIx64
" rxoffload_capa=0x%" PRIx64 " txoffload_capa=0x%" PRIx64,
eth_dev->data->port_id, roc_nix_get_pf(nix),
@@ -1292,6 +1308,9 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool 
reset)
 
roc_nix_npc_rx_ena_dis(nix, false);
 
+   /* Disable and free rte_flow entries */
+   roc_npc_fini(&dev->npc);
+
/* Disable link status events */
roc_nix_mac_link_event_start_stop(nix, false);
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 1ca52bc..e3b0bc1 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -133,6 +133,9 @@ struct cnxk_eth_dev {
/* ROC NIX */
struct roc_nix nix;
 
+   /* ROC NPC */
+   struct roc_npc npc;
+
/* ROC RQs, SQs and CQs */
struct roc_nix_rq *rqs;
struct roc_nix_sq *sqs;
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c 
b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index 4af2803..7fd06eb 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -150,6 +150,9 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, 
struct cnxk_eth_dev *dev)
dev->nix.rss_tag_as_xor = !!rss_tag_as_xor;
dev->nix.max_sqb_count = sqb_count;
dev->nix.reta_sz = reta_sz;
+   dev->npc.flow_prealloc_size = flow_prealloc_size;
+   dev->npc.flow_max_priority = flow_max_priority;
+   dev->npc.switch_header_type = switch_header_type;
return 0;
 
 exit:
-- 
2.8.4



[dpdk-dev] [PATCH 41/44] net/cnxk: add initial version of rte flow support

2021-03-06 Thread Nithin Dabilpuram
From: Kiran Kumar K 

Adding initial version of rte_flow support for the cnxk family of
devices. Supported rte_flow ops are flow_validate, flow_create,
flow_destroy, flow_flush, flow_query and flow_isolate.
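
Illustrative sketch of exercising these ops through the generic rte_flow
API (the rule itself is hypothetical):

  #include <stdint.h>
  #include <rte_flow.h>

  /* Drop all ingress IPv4/UDP traffic on the given port. */
  static struct rte_flow *
  create_drop_rule(uint16_t port_id, struct rte_flow_error *err)
  {
          struct rte_flow_attr attr = { .ingress = 1 };
          struct rte_flow_item pattern[] = {
                  { .type = RTE_FLOW_ITEM_TYPE_ETH },
                  { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                  { .type = RTE_FLOW_ITEM_TYPE_UDP },
                  { .type = RTE_FLOW_ITEM_TYPE_END },
          };
          struct rte_flow_action actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_DROP },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };

          if (rte_flow_validate(port_id, &attr, pattern, actions, err) != 0)
                  return NULL;
          return rte_flow_create(port_id, &attr, pattern, actions, err);
  }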

Signed-off-by: Kiran Kumar K 
---
 doc/guides/nics/cnxk.rst  | 118 ++
 doc/guides/nics/features/cnxk.ini |   1 +
 doc/guides/nics/features/cnxk_vec.ini |   1 +
 doc/guides/nics/features/cnxk_vf.ini  |   1 +
 drivers/net/cnxk/cnxk_rte_flow.c  | 280 ++
 drivers/net/cnxk/cnxk_rte_flow.h  |  69 +
 drivers/net/cnxk/meson.build  |   1 +
 7 files changed, 471 insertions(+)
 create mode 100644 drivers/net/cnxk/cnxk_rte_flow.c
 create mode 100644 drivers/net/cnxk/cnxk_rte_flow.h

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index c2a6fbb..87401f0 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -24,6 +24,7 @@ Features of the CNXK Ethdev PMD are:
 - Multiple queues for TX and RX
 - Receiver Side Scaling (RSS)
 - MAC filtering
+- Generic flow API
 - Inner and Outer Checksum offload
 - Port hardware statistics
 - Link state information
@@ -222,3 +223,120 @@ Debugging Options
+---++---+
| 2 | NPC| --log-level='pmd\.net.cnxk\.flow,8'   |
+---++---+
+
+RTE Flow Support
+
+
+The OCTEON CN9K/CN10K SoC family NIC has support for the following patterns and
+actions.
+
+Patterns:
+
+.. _table_cnxk_supported_flow_item_types:
+
+.. table:: Item types
+
+   +++
+   | #  | Pattern Type   |
+   +++
+   | 1  | RTE_FLOW_ITEM_TYPE_ETH |
+   +++
+   | 2  | RTE_FLOW_ITEM_TYPE_VLAN|
+   +++
+   | 3  | RTE_FLOW_ITEM_TYPE_E_TAG   |
+   +++
+   | 4  | RTE_FLOW_ITEM_TYPE_IPV4|
+   +++
+   | 5  | RTE_FLOW_ITEM_TYPE_IPV6|
+   +++
+   | 6  | RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4|
+   +++
+   | 7  | RTE_FLOW_ITEM_TYPE_MPLS|
+   +++
+   | 8  | RTE_FLOW_ITEM_TYPE_ICMP|
+   +++
+   | 9  | RTE_FLOW_ITEM_TYPE_UDP |
+   +++
+   | 10 | RTE_FLOW_ITEM_TYPE_TCP |
+   +++
+   | 11 | RTE_FLOW_ITEM_TYPE_SCTP|
+   +++
+   | 12 | RTE_FLOW_ITEM_TYPE_ESP |
+   +++
+   | 13 | RTE_FLOW_ITEM_TYPE_GRE |
+   +++
+   | 14 | RTE_FLOW_ITEM_TYPE_NVGRE   |
+   +++
+   | 15 | RTE_FLOW_ITEM_TYPE_VXLAN   |
+   +++
+   | 16 | RTE_FLOW_ITEM_TYPE_GTPC|
+   +++
+   | 17 | RTE_FLOW_ITEM_TYPE_GTPU|
+   +++
+   | 18 | RTE_FLOW_ITEM_TYPE_GENEVE  |
+   +++
+   | 19 | RTE_FLOW_ITEM_TYPE_VXLAN_GPE   |
+   +++
+   | 20 | RTE_FLOW_ITEM_TYPE_IPV6_EXT|
+   +++
+   | 21 | RTE_FLOW_ITEM_TYPE_VOID|
+   +++
+   | 22 | RTE_FLOW_ITEM_TYPE_ANY |
+   +++
+   | 23 | RTE_FLOW_ITEM_TYPE_GRE_KEY |
+   +++
+   | 24 | RTE_FLOW_ITEM_TYPE_HIGIG2  |
+   +++
+
+.. note::
+
+   ``RTE_FLOW_ITEM_TYPE_GRE_KEY`` works only when checksum and routing
+   bits in the GRE header are equal to 0.
+
+Actions:
+
+.. _table_cnxk_supported_ingress_action_types:
+
+.. table:: Ingress action types
+
+   ++-+
+   | #  | Action Type |
+   ++=+
+   | 1  | RTE_FLOW_ACTION_TYPE_VOID   |
+   ++-+
+   | 2  | RTE_FLOW_ACTION_TYPE_MARK   |
+   ++-+
+   | 3  | RTE_FLOW_ACTION_TYPE_FLAG   |
+   ++-+
+   | 4  | RTE_FLOW_ACTION_TYPE_COUNT  |
+   ++-+
+   | 5  | RTE_FLOW_ACTION_TYPE_DROP   |
+   ++-+
+   | 6  | RTE_FLOW_ACTION_TYPE_QUEUE  |
+   ++-+
+   | 7  | RTE_FLOW_ACTION_TYPE_RSS|
+   ++---

[dpdk-dev] [PATCH 42/44] net/cnxk: add filter ctrl operation

2021-03-06 Thread Nithin Dabilpuram
From: Satheesh Paul 

This patch adds the filter_ctrl operation to expose rte_flow_ops.
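
Illustrative sketch of how the generic rte_flow layer fetches the driver's
flow ops via this callback (applications normally just call the
rte_flow_*() APIs; header and API names are per the ethdev of this era):

  #include <rte_ethdev.h>
  #include <rte_eth_ctrl.h>

  static const void *
  get_flow_ops(uint16_t port_id)
  {
          const void *ops = NULL;

          if (rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_GENERIC,
                                      RTE_ETH_FILTER_GET, &ops) < 0)
                  return NULL;
          return ops;     /* points at cnxk_flow_ops */
  }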

Signed-off-by: Satheesh Paul 
---
 drivers/common/cnxk/roc_npc.c  |  2 ++
 drivers/net/cnxk/cnxk_ethdev.c |  3 +++
 drivers/net/cnxk/cnxk_ethdev.h |  6 +-
 drivers/net/cnxk/cnxk_ethdev_ops.c | 21 +
 4 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/drivers/common/cnxk/roc_npc.c b/drivers/common/cnxk/roc_npc.c
index 0efe080..b862e23 100644
--- a/drivers/common/cnxk/roc_npc.c
+++ b/drivers/common/cnxk/roc_npc.c
@@ -645,6 +645,8 @@ roc_npc_flow_create(struct roc_npc *roc_npc, const struct 
roc_npc_attr *attr,
struct npc_flow_list *list;
int rc;
 
+   npc->channel = roc_npc->channel;
+
flow = plt_zmalloc(sizeof(*flow), 0);
if (flow == NULL) {
*errcode = NPC_ERR_NO_MEM;
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index afe97f1..347428e 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -773,6 +773,8 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
goto fail_configure;
}
 
+   dev->npc.channel = roc_nix_get_base_chan(nix);
+
nb_rxq = data->nb_rx_queues;
nb_txq = data->nb_tx_queues;
rc = -ENOMEM;
@@ -1176,6 +1178,7 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
.rxq_info_get = cnxk_nix_rxq_info_get,
.txq_info_get = cnxk_nix_txq_info_get,
.tx_done_cleanup = cnxk_nix_tx_done_cleanup,
+   .filter_ctrl = cnxk_nix_filter_ctrl,
 };
 
 static int
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index e3b0bc1..7cf7cf7 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -218,6 +218,8 @@ cnxk_eth_pmd_priv(struct rte_eth_dev *eth_dev)
 /* Common ethdev ops */
 extern struct eth_dev_ops cnxk_eth_dev_ops;
 
+extern const struct rte_flow_ops cnxk_flow_ops;
+
 /* Ops */
 int cnxk_nix_probe(struct rte_pci_driver *pci_drv,
   struct rte_pci_device *pci_dev);
@@ -255,7 +257,9 @@ int cnxk_nix_rx_queue_intr_disable(struct rte_eth_dev 
*eth_dev,
   uint16_t rx_queue_id);
 int cnxk_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool);
 int cnxk_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
-
+int cnxk_nix_filter_ctrl(struct rte_eth_dev *eth_dev,
+enum rte_filter_type filter_type,
+enum rte_filter_op filter_op, void *arg);
 int cnxk_nix_configure(struct rte_eth_dev *eth_dev);
 int cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
uint16_t nb_desc, uint16_t fp_tx_q_sz,
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c 
b/drivers/net/cnxk/cnxk_ethdev_ops.c
index ff8afac..00f1fe7 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -294,6 +294,27 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 }
 
 int
+cnxk_nix_filter_ctrl(struct rte_eth_dev *eth_dev,
+enum rte_filter_type filter_type,
+enum rte_filter_op filter_op, void *arg)
+{
+   RTE_SET_USED(eth_dev);
+
+   if (filter_type != RTE_ETH_FILTER_GENERIC) {
+   plt_err("Unsupported filter type %d", filter_type);
+   return -ENOTSUP;
+   }
+
+   if (filter_op == RTE_ETH_FILTER_GET) {
+   *(const void **)arg = &cnxk_flow_ops;
+   return 0;
+   }
+
+   plt_err("Invalid filter_op %d", filter_op);
+   return -EINVAL;
+}
+
+int
 cnxk_nix_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
 {
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
-- 
2.8.4



[dpdk-dev] [PATCH 43/44] net/cnxk: add ethdev firmware version get

2021-03-06 Thread Nithin Dabilpuram
From: Satha Rao 

Add callback to get ethdev firmware version.
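
As a usage sketch (not part of this patch; port_id and the buffer size are
placeholders), an application queries the string with
rte_eth_dev_fw_version_get(); a return value greater than zero means the
supplied buffer was too small and reports the required size, matching the
convention implemented below:

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_fw_version(uint16_t port_id)
{
        char fw[64];
        int rc = rte_eth_dev_fw_version_get(port_id, fw, sizeof(fw));

        if (rc == 0)
                printf("port %d firmware: %s\n", port_id, fw);
        else if (rc > 0)
                printf("port %d: need a buffer of %d bytes\n", port_id, rc);
}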

Signed-off-by: Satha Rao 
---
 doc/guides/nics/features/cnxk.ini |  1 +
 doc/guides/nics/features/cnxk_vec.ini |  1 +
 doc/guides/nics/features/cnxk_vf.ini  |  1 +
 drivers/net/cnxk/cnxk_ethdev.c|  1 +
 drivers/net/cnxk/cnxk_ethdev.h|  2 ++
 drivers/net/cnxk/cnxk_ethdev_ops.c| 19 +++
 6 files changed, 25 insertions(+)

diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index 7b6d832..2c83bfb 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -36,6 +36,7 @@ Packet type parsing  = Y
 Basic stats  = Y
 Stats per queue  = Y
 Extended stats   = Y
+FW version   = Y
 Module EEPROM dump   = Y
 Linux= Y
 ARMv8= Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index ef37088..c8ad253 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -34,6 +34,7 @@ Packet type parsing  = Y
 Basic stats  = Y
 Stats per queue  = Y
 Extended stats   = Y
+FW version   = Y
 Module EEPROM dump   = Y
 Linux= Y
 ARMv8= Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini 
b/doc/guides/nics/features/cnxk_vf.ini
index 69419d1..4dbdfcb 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -31,6 +31,7 @@ Packet type parsing  = Y
 Basic stats  = Y
 Stats per queue  = Y
 Extended stats   = Y
+FW version   = Y
 Module EEPROM dump   = Y
 Linux= Y
 ARMv8= Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 347428e..f006718 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1175,6 +1175,7 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
.xstats_reset = cnxk_nix_xstats_reset,
.xstats_get_by_id = cnxk_nix_xstats_get_by_id,
.xstats_get_names_by_id = cnxk_nix_xstats_get_names_by_id,
+   .fw_version_get = cnxk_nix_fw_version_get,
.rxq_info_get = cnxk_nix_rxq_info_get,
.txq_info_get = cnxk_nix_txq_info_get,
.tx_done_cleanup = cnxk_nix_tx_done_cleanup,
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 7cf7cf7..4b25593 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -298,6 +298,8 @@ int cnxk_nix_xstats_get_names_by_id(struct rte_eth_dev 
*eth_dev,
 int cnxk_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev, const uint64_t *ids,
  uint64_t *values, unsigned int n);
 int cnxk_nix_xstats_reset(struct rte_eth_dev *eth_dev);
+int cnxk_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
+   size_t fw_size);
 void cnxk_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t qid,
   struct rte_eth_rxq_info *qinfo);
 void cnxk_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t qid,
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c 
b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 00f1fe7..bf89ede 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -644,6 +644,25 @@ cnxk_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, 
const char *pool)
return -ENOTSUP;
 }
 
+int
+cnxk_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
+   size_t fw_size)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   const char *str = roc_npc_profile_name_get(&dev->npc);
+   uint32_t size = strlen(str) + 1;
+
+   if (fw_size > size)
+   fw_size = size;
+
+   strlcpy(fw_version, str, fw_size);
+
+   if (fw_size < size)
+   return size;
+
+   return 0;
+}
+
 void
 cnxk_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t qid,
  struct rte_eth_rxq_info *qinfo)
-- 
2.8.4



[dpdk-dev] [PATCH 44/44] net/cnxk: add get register operation

2021-03-06 Thread Nithin Dabilpuram
From: Satha Rao 

This patch implements the API to dump platform registers for
debug purposes.
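
A typical caller (illustrative sketch only; dump_regs() is not part of this
patch) first passes a NULL data pointer to learn the register count and
width, then allocates a buffer and dumps the registers, mirroring the
two-step handling in cnxk_nix_dev_get_reg() below:

#include <errno.h>
#include <stdlib.h>
#include <rte_ethdev.h>
#include <rte_dev_info.h>

static int
dump_regs(uint16_t port_id)
{
        struct rte_dev_reg_info info = { .data = NULL };
        int rc;

        /* First call: data == NULL, the PMD fills length and width only. */
        rc = rte_eth_dev_get_reg_info(port_id, &info);
        if (rc != 0)
                return rc;

        info.data = calloc(info.length, info.width);
        if (info.data == NULL)
                return -ENOMEM;

        /* Second call: the PMD dumps 'length' registers of 'width' bytes. */
        rc = rte_eth_dev_get_reg_info(port_id, &info);
        free(info.data);
        return rc;
}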

Signed-off-by: Satha Rao 
---
 doc/guides/nics/cnxk.rst  |  1 +
 doc/guides/nics/features/cnxk.ini |  1 +
 doc/guides/nics/features/cnxk_vec.ini |  1 +
 doc/guides/nics/features/cnxk_vf.ini  |  1 +
 drivers/net/cnxk/cnxk_ethdev.c|  1 +
 drivers/net/cnxk/cnxk_ethdev.h|  4 
 drivers/net/cnxk/cnxk_ethdev_ops.c| 25 +
 7 files changed, 34 insertions(+)

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 87401f0..98bcb51 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -32,6 +32,7 @@ Features of the CNXK Ethdev PMD are:
 - MTU update
 - Scatter-Gather IO support
 - Vector Poll mode driver
+- Debug utilities - Context dump and error interrupt support
 - Support Rx interrupt
 
 Prerequisites
diff --git a/doc/guides/nics/features/cnxk.ini 
b/doc/guides/nics/features/cnxk.ini
index 2c83bfb..d1c6f9d 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -38,6 +38,7 @@ Stats per queue  = Y
 Extended stats   = Y
 FW version   = Y
 Module EEPROM dump   = Y
+Registers dump   = Y
 Linux= Y
 ARMv8= Y
 Usage doc= Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini 
b/doc/guides/nics/features/cnxk_vec.ini
index c8ad253..5f2478d 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -36,6 +36,7 @@ Stats per queue  = Y
 Extended stats   = Y
 FW version   = Y
 Module EEPROM dump   = Y
+Registers dump   = Y
 Linux= Y
 ARMv8= Y
 Usage doc= Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini 
b/doc/guides/nics/features/cnxk_vf.ini
index 4dbdfcb..3cbc369 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -33,6 +33,7 @@ Stats per queue  = Y
 Extended stats   = Y
 FW version   = Y
 Module EEPROM dump   = Y
+Registers dump   = Y
 Linux= Y
 ARMv8= Y
 Usage doc= Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index f006718..99fb091 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1180,6 +1180,7 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
.txq_info_get = cnxk_nix_txq_info_get,
.tx_done_cleanup = cnxk_nix_tx_done_cleanup,
.filter_ctrl = cnxk_nix_filter_ctrl,
+   .get_reg = cnxk_nix_dev_get_reg,
 };
 
 static int
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 4b25593..74573f9 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -313,6 +313,10 @@ void *cnxk_nix_fastpath_lookup_mem_get(void);
 int cnxk_ethdev_parse_devargs(struct rte_devargs *devargs,
  struct cnxk_eth_dev *dev);
 
+/* Debug */
+int cnxk_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
+struct rte_dev_reg_info *regs);
+
 /* Inlines */
 static __rte_always_inline uint64_t
 cnxk_pktmbuf_detach(struct rte_mbuf *m)
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c 
b/drivers/net/cnxk/cnxk_ethdev_ops.c
index bf89ede..41c6d37 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -702,3 +702,28 @@ cnxk_nix_tx_done_cleanup(void *txq, uint32_t free_cnt)
 
return 0;
 }
+
+int
+cnxk_nix_dev_get_reg(struct rte_eth_dev *eth_dev, struct rte_dev_reg_info 
*regs)
+{
+   struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+   struct roc_nix *nix = &dev->nix;
+   uint64_t *data = regs->data;
+   int rc = -ENOTSUP;
+
+   if (data == NULL) {
+   rc = roc_nix_lf_get_reg_count(nix);
+   if (rc > 0) {
+   regs->length = rc;
+   regs->width = 8;
+   rc = 0;
+   }
+   return rc;
+   }
+
+   if (!regs->length ||
+   regs->length == (uint32_t)roc_nix_lf_get_reg_count(nix))
+   return roc_nix_lf_reg_dump(nix, data);
+
+   return rc;
+}
-- 
2.8.4



[dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

This patchset adds support for Marvell CN106XX SoC based on 'common/cnxk'
driver. In future, CN9K a.k.a octeontx2 will also be supported by same
driver when code is ready and 'event/octeontx2' will be deprecated.

Depends-on: series-15508 ("Add Marvell CNXK common driver")
Depends-on: series-15511 ("Add Marvell CNXK mempool driver")
Depends-on: series-15515 ("Marvell CNXK Ethdev Driver")

Pavan Nikhilesh (19):
  event/cnxk: add build infra and device setup
  event/cnxk: add platform specific device probe
  event/cnxk: add common configuration validation
  event/cnxk: allocate event inflight buffers
  event/cnxk: add devargs to configure getwork mode
  event/cnxk: add SSO HW device operations
  event/cnxk: add SSO GWS fastpath enqueue functions
  event/cnxk: add SSO GWS dequeue fastpath functions
  event/cnxk: add SSO selftest and dump
  event/cnxk: add devargs to disable NPA
  event/cnxk: allow adapters to resize inflights
  event/cnxk: add TIM bucket operations
  event/cnxk: add timer arm routine
  event/cnxk: add timer arm timeout burst
  event/cnxk: add timer cancel function
  event/cnxk: add Rx adapter support
  event/cnxk: add Rx adapter fastpath ops
  event/cnxk: add Tx adapter support
  event/cnxk: add Tx adapter fastpath ops

Shijith Thotton (17):
  event/cnxk: add device capabilities function
  event/cnxk: add platform specific device config
  event/cnxk: add event queue config functions
  event/cnxk: add devargs for inflight buffer count
  event/cnxk: add devargs to control SSO HWGRP QoS
  event/cnxk: add port config functions
  event/cnxk: add event port link and unlink
  event/cnxk: add device start function
  event/cnxk: add device stop and close functions
  event/cnxk: support event timer
  event/cnxk: add timer adapter capabilities
  event/cnxk: create and free timer adapter
  event/cnxk: add timer adapter info function
  event/cnxk: add devargs for chunk size and rings
  event/cnxk: add timer stats get and reset
  event/cnxk: add timer adapter start and stop
  event/cnxk: add devargs to control timer adapters

 MAINTAINERS  |6 +
 app/test/test_eventdev.c |   14 +
 doc/guides/eventdevs/cnxk.rst|  168 +++
 doc/guides/eventdevs/index.rst   |1 +
 drivers/event/cnxk/cn10k_eventdev.c  |  813 +++
 drivers/event/cnxk/cn10k_worker.c|  209 +++
 drivers/event/cnxk/cn10k_worker.h|  309 +
 drivers/event/cnxk/cn9k_eventdev.c   | 1083 +++
 drivers/event/cnxk/cn9k_worker.c |  438 ++
 drivers/event/cnxk/cn9k_worker.h |  490 +++
 drivers/event/cnxk/cnxk_eventdev.c   |  647 +
 drivers/event/cnxk/cnxk_eventdev.h   |  275 
 drivers/event/cnxk/cnxk_eventdev_adptr.c |  330 +
 drivers/event/cnxk/cnxk_sso_selftest.c   | 1570 ++
 drivers/event/cnxk/cnxk_tim_evdev.c  |  538 
 drivers/event/cnxk/cnxk_tim_evdev.h  |  275 
 drivers/event/cnxk/cnxk_tim_worker.c |  191 +++
 drivers/event/cnxk/cnxk_tim_worker.h |  601 +
 drivers/event/cnxk/cnxk_worker.h |  101 ++
 drivers/event/cnxk/meson.build   |   29 +
 drivers/event/cnxk/version.map   |3 +
 drivers/event/meson.build|2 +-
 22 files changed, 8092 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/eventdevs/cnxk.rst
 create mode 100644 drivers/event/cnxk/cn10k_eventdev.c
 create mode 100644 drivers/event/cnxk/cn10k_worker.c
 create mode 100644 drivers/event/cnxk/cn10k_worker.h
 create mode 100644 drivers/event/cnxk/cn9k_eventdev.c
 create mode 100644 drivers/event/cnxk/cn9k_worker.c
 create mode 100644 drivers/event/cnxk/cn9k_worker.h
 create mode 100644 drivers/event/cnxk/cnxk_eventdev.c
 create mode 100644 drivers/event/cnxk/cnxk_eventdev.h
 create mode 100644 drivers/event/cnxk/cnxk_eventdev_adptr.c
 create mode 100644 drivers/event/cnxk/cnxk_sso_selftest.c
 create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.c
 create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.h
 create mode 100644 drivers/event/cnxk/cnxk_tim_worker.c
 create mode 100644 drivers/event/cnxk/cnxk_tim_worker.h
 create mode 100644 drivers/event/cnxk/cnxk_worker.h
 create mode 100644 drivers/event/cnxk/meson.build
 create mode 100644 drivers/event/cnxk/version.map

--
2.17.1



[dpdk-dev] [PATCH 01/36] event/cnxk: add build infra and device setup

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Add the meson build infrastructure along with the event device
SSO initialization and teardown functions.
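
The init path below reserves a named memzone and stores the device pointer
in it. A small sketch of how that pointer can be recovered later (assuming
the CNXK_SSO_MZ_NAME definition from cnxk_eventdev.h; the helper name is
only illustrative):

#include <stdint.h>
#include <rte_memzone.h>

#include "cnxk_eventdev.h"

/* Recover the eventdev private structure published by cnxk_sso_init(). */
static struct cnxk_sso_evdev *
cnxk_sso_lookup_priv(void)
{
        const struct rte_memzone *mz = rte_memzone_lookup(CNXK_SSO_MZ_NAME);

        if (mz == NULL)
                return NULL;
        return (struct cnxk_sso_evdev *)(uintptr_t)(*(uint64_t *)mz->addr);
}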

Signed-off-by: Shijith Thotton 
Signed-off-by: Pavan Nikhilesh 
---
 MAINTAINERS|  6 +++
 doc/guides/eventdevs/cnxk.rst  | 55 
 doc/guides/eventdevs/index.rst |  1 +
 drivers/event/cnxk/cnxk_eventdev.c | 68 ++
 drivers/event/cnxk/cnxk_eventdev.h | 39 +
 drivers/event/cnxk/meson.build | 13 ++
 drivers/event/cnxk/version.map |  3 ++
 drivers/event/meson.build  |  2 +-
 8 files changed, 186 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/eventdevs/cnxk.rst
 create mode 100644 drivers/event/cnxk/cnxk_eventdev.c
 create mode 100644 drivers/event/cnxk/cnxk_eventdev.h
 create mode 100644 drivers/event/cnxk/meson.build
 create mode 100644 drivers/event/cnxk/version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index e341bc81d..89c23c49c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1211,6 +1211,12 @@ M: Jerin Jacob 
 F: drivers/event/octeontx2/
 F: doc/guides/eventdevs/octeontx2.rst
 
+Marvell OCTEON CNXK
+M: Pavan Nikhilesh 
+M: Shijith Thotton 
+F: drivers/event/cnxk/
+F: doc/guides/eventdevs/cnxk.rst
+
 NXP DPAA eventdev
 M: Hemant Agrawal 
 M: Nipun Gupta 
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
new file mode 100644
index 0..e94225bd3
--- /dev/null
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -0,0 +1,55 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+Copyright(c) 2021 Marvell International Ltd.
+
+OCTEON CNXK SSO Eventdev Driver
+===============================
+
+The SSO PMD (**librte_event_cnxk**) provides poll mode
+eventdev driver support for the inbuilt event device found in the
+**Marvell OCTEON CNXK** SoC family.
+
+More information about OCTEON CNXK SoC can be found at `Marvell Official 
Website
+`_.
+
+Supported OCTEON CNXK SoCs
+--------------------------
+
+- CN9XX
+- CN10XX
+
+Features
+--------
+
+Features of the OCTEON CNXK SSO PMD are:
+
+- 256 Event queues
+- 26 (dual) and 52 (single) Event ports on CN10XX
+- 52 Event ports on CN9XX
+- HW event scheduler
+- Supports 1M flows per event queue
+- Flow based event pipelining
+- Flow pinning support in flow based event pipelining
+- Queue based event pipelining
+- Supports ATOMIC, ORDERED, PARALLEL schedule types per flow
+- Event scheduling QoS based on event queue priority
+- Open system with configurable amount of outstanding events limited only by
+  DRAM
+- HW accelerated dequeue timeout support to enable power management
+
+Prerequisites and Compilation procedure
+---------------------------------------
+
+   See :doc:`../platform/cnxk` for setup information.
+
+Debugging Options
+-----------------
+
+.. _table_octeon_cnxk_event_debug_options:
+
+.. table:: OCTEON CNXK event device debug options
+
+   +---+------------+-----------------------------------+
+   | # | Component  | EAL log command                   |
+   +===+============+===================================+
+   | 1 | SSO        | --log-level='pmd\.event\.cnxk,8'  |
+   +---+------------+-----------------------------------+
diff --git a/doc/guides/eventdevs/index.rst b/doc/guides/eventdevs/index.rst
index f5b69b39d..00203e0f0 100644
--- a/doc/guides/eventdevs/index.rst
+++ b/doc/guides/eventdevs/index.rst
@@ -11,6 +11,7 @@ application through the eventdev API.
 :maxdepth: 2
 :numbered:
 
+cnxk
 dlb
 dlb2
 dpaa
diff --git a/drivers/event/cnxk/cnxk_eventdev.c 
b/drivers/event/cnxk/cnxk_eventdev.c
new file mode 100644
index 0..b7f9c81bd
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_eventdev.h"
+
+int
+cnxk_sso_init(struct rte_eventdev *event_dev)
+{
+   const struct rte_memzone *mz = NULL;
+   struct rte_pci_device *pci_dev;
+   struct cnxk_sso_evdev *dev;
+   int rc;
+
+   mz = rte_memzone_reserve(CNXK_SSO_MZ_NAME, sizeof(uint64_t),
+SOCKET_ID_ANY, 0);
+   if (mz == NULL) {
+   plt_err("Failed to create eventdev memzone");
+   return -ENOMEM;
+   }
+
+   dev = cnxk_sso_pmd_priv(event_dev);
+   pci_dev = container_of(event_dev->dev, struct rte_pci_device, device);
+   dev->sso.pci_dev = pci_dev;
+
+   *(uint64_t *)mz->addr = (uint64_t)dev;
+
+   /* Initialize the base cnxk_dev object */
+   rc = roc_sso_dev_init(&dev->sso);
+   if (rc < 0) {
+   plt_err("Failed to initialize RoC SSO rc=%d", rc);
+   goto error;
+   }
+
+   dev->is_timeout_deq = 0;
+   dev->min_dequeue_timeout_ns = USEC2NSEC(1);
+   dev->m

[dpdk-dev] [PATCH 02/36] event/cnxk: add device capabilities function

2021-03-06 Thread pbhagavatula
From: Shijith Thotton 

Add the info_get function to return details on the queue, flow, and
prioritization capabilities, etc. that this device supports.
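
From the application side, these details are consumed through
rte_event_dev_info_get(), e.g. to size the queue and port configuration.
A minimal sketch (dev_id is a placeholder):

#include <stdio.h>
#include <rte_eventdev.h>

static void
show_sso_caps(uint8_t dev_id)
{
        struct rte_event_dev_info info;

        rte_event_dev_info_get(dev_id, &info);
        printf("queues=%d ports=%d all-types=%s\n",
               info.max_event_queues, info.max_event_ports,
               (info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES) ?
               "yes" : "no");
}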

Signed-off-by: Shijith Thotton 
Signed-off-by: Pavan Nikhilesh 
---
 drivers/event/cnxk/cnxk_eventdev.c | 24 
 drivers/event/cnxk/cnxk_eventdev.h |  4 
 2 files changed, 28 insertions(+)

diff --git a/drivers/event/cnxk/cnxk_eventdev.c 
b/drivers/event/cnxk/cnxk_eventdev.c
index b7f9c81bd..ae553fd23 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -4,6 +4,30 @@
 
 #include "cnxk_eventdev.h"
 
+void
+cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
+ struct rte_event_dev_info *dev_info)
+{
+
+   dev_info->min_dequeue_timeout_ns = dev->min_dequeue_timeout_ns;
+   dev_info->max_dequeue_timeout_ns = dev->max_dequeue_timeout_ns;
+   dev_info->max_event_queues = dev->max_event_queues;
+   dev_info->max_event_queue_flows = (1ULL << 20);
+   dev_info->max_event_queue_priority_levels = 8;
+   dev_info->max_event_priority_levels = 1;
+   dev_info->max_event_ports = dev->max_event_ports;
+   dev_info->max_event_port_dequeue_depth = 1;
+   dev_info->max_event_port_enqueue_depth = 1;
+   dev_info->max_num_events = dev->max_num_events;
+   dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
+ RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
+ RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
+ RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
+ RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
+}
+
 int
 cnxk_sso_init(struct rte_eventdev *event_dev)
 {
diff --git a/drivers/event/cnxk/cnxk_eventdev.h 
b/drivers/event/cnxk/cnxk_eventdev.h
index 148b327a1..583492948 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -17,6 +17,8 @@
 
 struct cnxk_sso_evdev {
struct roc_sso sso;
+   uint8_t max_event_queues;
+   uint8_t max_event_ports;
uint8_t is_timeout_deq;
uint8_t nb_event_queues;
uint8_t nb_event_ports;
@@ -35,5 +37,7 @@ cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
 int cnxk_sso_init(struct rte_eventdev *event_dev);
 int cnxk_sso_fini(struct rte_eventdev *event_dev);
 int cnxk_sso_remove(struct rte_pci_device *pci_dev);
+void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
+  struct rte_event_dev_info *dev_info);
 
 #endif /* __CNXK_EVENTDEV_H__ */
-- 
2.17.1



[dpdk-dev] [PATCH 03/36] event/cnxk: add platform specific device probe

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Add platform specific event device probe and remove functions, and
also add the event device info get function.

Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Shijith Thotton 
---
 drivers/event/cnxk/cn10k_eventdev.c | 101 +++
 drivers/event/cnxk/cn9k_eventdev.c  | 102 
 drivers/event/cnxk/cnxk_eventdev.h  |   2 +
 drivers/event/cnxk/meson.build  |   4 +-
 4 files changed, 208 insertions(+), 1 deletion(-)
 create mode 100644 drivers/event/cnxk/cn10k_eventdev.c
 create mode 100644 drivers/event/cnxk/cn9k_eventdev.c

diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
new file mode 100644
index 0..34238d3b5
--- /dev/null
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_eventdev.h"
+
+static void
+cn10k_sso_set_rsrc(void *arg)
+{
+   struct cnxk_sso_evdev *dev = arg;
+
+   dev->max_event_ports = dev->sso.max_hws;
+   dev->max_event_queues =
+   dev->sso.max_hwgrp > RTE_EVENT_MAX_QUEUES_PER_DEV ?
+ RTE_EVENT_MAX_QUEUES_PER_DEV :
+ dev->sso.max_hwgrp;
+}
+
+static void
+cn10k_sso_info_get(struct rte_eventdev *event_dev,
+  struct rte_event_dev_info *dev_info)
+{
+   struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+   dev_info->driver_name = RTE_STR(EVENTDEV_NAME_CN10K_PMD);
+   cnxk_sso_info_get(dev, dev_info);
+}
+
+static struct rte_eventdev_ops cn10k_sso_dev_ops = {
+   .dev_infos_get = cn10k_sso_info_get,
+};
+
+static int
+cn10k_sso_init(struct rte_eventdev *event_dev)
+{
+   struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+   int rc;
+
+   if (RTE_CACHE_LINE_SIZE != 64) {
+   plt_err("Driver not compiled for CN9K");
+   return -EFAULT;
+   }
+
+   rc = plt_init();
+   if (rc < 0) {
+   plt_err("Failed to initialize platform model");
+   return rc;
+   }
+
+   event_dev->dev_ops = &cn10k_sso_dev_ops;
+   /* For secondary processes, the primary has done all the work */
+   if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+   return 0;
+
+   rc = cnxk_sso_init(event_dev);
+   if (rc < 0)
+   return rc;
+
+   cn10k_sso_set_rsrc(cnxk_sso_pmd_priv(event_dev));
+   if (!dev->max_event_ports || !dev->max_event_queues) {
+   plt_err("Not enough eventdev resource queues=%d ports=%d",
+   dev->max_event_queues, dev->max_event_ports);
+   cnxk_sso_fini(event_dev);
+   return -ENODEV;
+   }
+
+   plt_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
+   event_dev->data->name, dev->max_event_queues,
+   dev->max_event_ports);
+
+   return 0;
+}
+
+static int
+cn10k_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+   return rte_event_pmd_pci_probe(pci_drv, pci_dev,
+  sizeof(struct cnxk_sso_evdev),
+  cn10k_sso_init);
+}
+
+static const struct rte_pci_id cn10k_pci_sso_map[] = {
+   CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+   CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+   CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+   CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+   {
+   .vendor_id = 0,
+   },
+};
+
+static struct rte_pci_driver cn10k_pci_sso = {
+   .id_table = cn10k_pci_sso_map,
+   .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
+   .probe = cn10k_sso_probe,
+   .remove = cnxk_sso_remove,
+};
+
+RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
+RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
+RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c 
b/drivers/event/cnxk/cn9k_eventdev.c
new file mode 100644
index 0..238540828
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_eventdev.h"
+
+#define CN9K_DUAL_WS_NB_WS 2
+#define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + id)
+
+static void
+cn9k_sso_set_rsrc(void *arg)
+{
+   struct cnxk_sso_evdev *dev = arg;
+
+   if (dev->dual_ws)
+   dev->max_event_ports = dev->sso.max_hws / CN9K_DUAL_WS_NB_WS;
+   else
+   dev->max_event_ports = dev->sso.max_hws;
+   dev->max_event_queues =
+   dev->sso.max_hwgrp > RTE_EVENT_MAX_QUEUES_PER_DEV ?
+ RTE_EVENT_MAX_QUEUES_PER_DEV :
+ 

[dpdk-dev] [PATCH 04/36] event/cnxk: add common configuration validation

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Add configuration validation along with port and queue default
configuration functions.
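
The checks below translate into constraints on the application's
rte_event_dev_configure() call. A configuration that passes validation
looks roughly like the following sketch (dev_id and the queue/port counts
are placeholders; per-port enqueue/dequeue depths must stay at 1 for this
PMD):

#include <rte_eventdev.h>

static int
configure_sso(uint8_t dev_id, uint8_t nb_queues, uint8_t nb_ports)
{
        struct rte_event_dev_info info;
        struct rte_event_dev_config conf = {0};

        rte_event_dev_info_get(dev_id, &info);
        conf.dequeue_timeout_ns = info.min_dequeue_timeout_ns;
        conf.nb_events_limit = info.max_num_events;
        conf.nb_event_queues = nb_queues;
        conf.nb_event_ports = nb_ports;
        conf.nb_event_queue_flows = info.max_event_queue_flows;
        /* Deeper port depths are rejected by cnxk_sso_dev_validate(). */
        conf.nb_event_port_dequeue_depth = 1;
        conf.nb_event_port_enqueue_depth = 1;

        return rte_event_dev_configure(dev_id, &conf);
}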

Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Shijith Thotton 
---
 drivers/event/cnxk/cnxk_eventdev.c | 70 ++
 drivers/event/cnxk/cnxk_eventdev.h |  6 +++
 2 files changed, 76 insertions(+)

diff --git a/drivers/event/cnxk/cnxk_eventdev.c 
b/drivers/event/cnxk/cnxk_eventdev.c
index ae553fd23..f15986f3e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -28,6 +28,76 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
  RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 }
 
+int
+cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
+{
+   struct rte_event_dev_config *conf = &event_dev->data->dev_conf;
+   struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+   uint32_t deq_tmo_ns;
+
+   deq_tmo_ns = conf->dequeue_timeout_ns;
+
+   if (deq_tmo_ns == 0)
+   deq_tmo_ns = dev->min_dequeue_timeout_ns;
+   if (deq_tmo_ns < dev->min_dequeue_timeout_ns ||
+   deq_tmo_ns > dev->max_dequeue_timeout_ns) {
+   plt_err("Unsupported dequeue timeout requested");
+   return -EINVAL;
+   }
+
+   if (conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)
+   dev->is_timeout_deq = 1;
+
+   dev->deq_tmo_ns = deq_tmo_ns;
+
+   if (!conf->nb_event_queues || !conf->nb_event_ports ||
+   conf->nb_event_ports > dev->max_event_ports ||
+   conf->nb_event_queues > dev->max_event_queues) {
+   plt_err("Unsupported event queues/ports requested");
+   return -EINVAL;
+   }
+
+   if (conf->nb_event_port_dequeue_depth > 1) {
+   plt_err("Unsupported event port deq depth requested");
+   return -EINVAL;
+   }
+
+   if (conf->nb_event_port_enqueue_depth > 1) {
+   plt_err("Unsupported event port enq depth requested");
+   return -EINVAL;
+   }
+
+   dev->nb_event_queues = conf->nb_event_queues;
+   dev->nb_event_ports = conf->nb_event_ports;
+
+   return 0;
+}
+
+void
+cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
+   struct rte_event_queue_conf *queue_conf)
+{
+   RTE_SET_USED(event_dev);
+   RTE_SET_USED(queue_id);
+
+   queue_conf->nb_atomic_flows = (1ULL << 20);
+   queue_conf->nb_atomic_order_sequences = (1ULL << 20);
+   queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+   queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+}
+
+void
+cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
+  struct rte_event_port_conf *port_conf)
+{
+   struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+   RTE_SET_USED(port_id);
+   port_conf->new_event_threshold = dev->max_num_events;
+   port_conf->dequeue_depth = 1;
+   port_conf->enqueue_depth = 1;
+}
+
 int
 cnxk_sso_init(struct rte_eventdev *event_dev)
 {
diff --git a/drivers/event/cnxk/cnxk_eventdev.h 
b/drivers/event/cnxk/cnxk_eventdev.h
index b98c783ae..08eba2270 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -22,6 +22,7 @@ struct cnxk_sso_evdev {
uint8_t is_timeout_deq;
uint8_t nb_event_queues;
uint8_t nb_event_ports;
+   uint32_t deq_tmo_ns;
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
int32_t max_num_events;
@@ -41,5 +42,10 @@ int cnxk_sso_fini(struct rte_eventdev *event_dev);
 int cnxk_sso_remove(struct rte_pci_device *pci_dev);
 void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
   struct rte_event_dev_info *dev_info);
+int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
+void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
+struct rte_event_queue_conf *queue_conf);
+void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
+   struct rte_event_port_conf *port_conf);
 
 #endif /* __CNXK_EVENTDEV_H__ */
-- 
2.17.1



[dpdk-dev] [PATCH 05/36] event/cnxk: add platform specific device config

2021-03-06 Thread pbhagavatula
From: Shijith Thotton 

Add platform specific event device configuration that attaches the
requested number of SSO HWS (event ports) and HWGRP (event queues)
LFs to the RVU PF/VF.

Signed-off-by: Shijith Thotton 
Signed-off-by: Pavan Nikhilesh 
---
 drivers/event/cnxk/cn10k_eventdev.c | 35 +++
 drivers/event/cnxk/cn9k_eventdev.c  | 37 +
 2 files changed, 72 insertions(+)

diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index 34238d3b5..352df88fc 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -16,6 +16,14 @@ cn10k_sso_set_rsrc(void *arg)
  dev->sso.max_hwgrp;
 }
 
+static int
+cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
+{
+   struct cnxk_sso_evdev *dev = arg;
+
+   return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
+}
+
 static void
 cn10k_sso_info_get(struct rte_eventdev *event_dev,
   struct rte_event_dev_info *dev_info)
@@ -26,8 +34,35 @@ cn10k_sso_info_get(struct rte_eventdev *event_dev,
cnxk_sso_info_get(dev, dev_info);
 }
 
+static int
+cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
+{
+   struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+   int rc;
+
+   rc = cnxk_sso_dev_validate(event_dev);
+   if (rc < 0) {
+   plt_err("Invalid event device configuration");
+   return -EINVAL;
+   }
+
+   roc_sso_rsrc_fini(&dev->sso);
+
+   rc = cn10k_sso_rsrc_init(dev, dev->nb_event_ports,
+dev->nb_event_queues);
+   if (rc < 0) {
+   plt_err("Failed to initialize SSO resources");
+   return -ENODEV;
+   }
+
+   return rc;
+}
+
 static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
+   .dev_configure = cn10k_sso_dev_configure,
+   .queue_def_conf = cnxk_sso_queue_def_conf,
+   .port_def_conf = cnxk_sso_port_def_conf,
 };
 
 static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c 
b/drivers/event/cnxk/cn9k_eventdev.c
index 238540828..126388a23 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -22,6 +22,17 @@ cn9k_sso_set_rsrc(void *arg)
  dev->sso.max_hwgrp;
 }
 
+static int
+cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
+{
+   struct cnxk_sso_evdev *dev = arg;
+
+   if (dev->dual_ws)
+   hws = hws * CN9K_DUAL_WS_NB_WS;
+
+   return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
+}
+
 static void
 cn9k_sso_info_get(struct rte_eventdev *event_dev,
  struct rte_event_dev_info *dev_info)
@@ -32,8 +43,34 @@ cn9k_sso_info_get(struct rte_eventdev *event_dev,
cnxk_sso_info_get(dev, dev_info);
 }
 
+static int
+cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
+{
+   struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+   int rc;
+
+   rc = cnxk_sso_dev_validate(event_dev);
+   if (rc < 0) {
+   plt_err("Invalid event device configuration");
+   return -EINVAL;
+   }
+
+   roc_sso_rsrc_fini(&dev->sso);
+
+   rc = cn9k_sso_rsrc_init(dev, dev->nb_event_ports, dev->nb_event_queues);
+   if (rc < 0) {
+   plt_err("Failed to initialize SSO resources");
+   return -ENODEV;
+   }
+
+   return rc;
+}
+
 static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
+   .dev_configure = cn9k_sso_dev_configure,
+   .queue_def_conf = cnxk_sso_queue_def_conf,
+   .port_def_conf = cnxk_sso_port_def_conf,
 };
 
 static int
-- 
2.17.1



[dpdk-dev] [PATCH 06/36] event/cnxk: add event queue config functions

2021-03-06 Thread pbhagavatula
From: Shijith Thotton 

Add setup and release functions for event queues, i.e.,
SSO HWGRPs.
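
Note that the eventdev priority range <0-255> is normalized to the eight
SSO HWGRP priority levels by dividing by 32. An application-side sketch
(dev_id and queue_id are placeholders) setting up a high-priority
all-types queue:

#include <rte_eventdev.h>

static int
setup_hi_prio_queue(uint8_t dev_id, uint8_t queue_id)
{
        struct rte_event_queue_conf conf;

        rte_event_queue_default_conf_get(dev_id, queue_id, &conf);
        conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
        conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST; /* 0 / 32 -> level 0 */

        return rte_event_queue_setup(dev_id, queue_id, &conf);
}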

Signed-off-by: Shijith Thotton 
Signed-off-by: Pavan Nikhilesh 
---
 drivers/event/cnxk/cn10k_eventdev.c |  2 ++
 drivers/event/cnxk/cn9k_eventdev.c  |  2 ++
 drivers/event/cnxk/cnxk_eventdev.c  | 19 +++
 drivers/event/cnxk/cnxk_eventdev.h  |  3 +++
 4 files changed, 26 insertions(+)

diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index 352df88fc..92687c23e 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -62,6 +62,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
.queue_def_conf = cnxk_sso_queue_def_conf,
+   .queue_setup = cnxk_sso_queue_setup,
+   .queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
 };
 
diff --git a/drivers/event/cnxk/cn9k_eventdev.c 
b/drivers/event/cnxk/cn9k_eventdev.c
index 126388a23..1bd2b3343 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -70,6 +70,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
.queue_def_conf = cnxk_sso_queue_def_conf,
+   .queue_setup = cnxk_sso_queue_setup,
+   .queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
 };
 
diff --git a/drivers/event/cnxk/cnxk_eventdev.c 
b/drivers/event/cnxk/cnxk_eventdev.c
index f15986f3e..59cc570fe 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -86,6 +86,25 @@ cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, 
uint8_t queue_id,
queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
 }
 
+int
+cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
+const struct rte_event_queue_conf *queue_conf)
+{
+   struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+   plt_sso_dbg("Queue=%d prio=%d", queue_id, queue_conf->priority);
+   /* Normalize <0-255> to <0-7> */
+   return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, 0xFF, 0xFF,
+ queue_conf->priority / 32);
+}
+
+void
+cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
+{
+   RTE_SET_USED(event_dev);
+   RTE_SET_USED(queue_id);
+}
+
 void
 cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
   struct rte_event_port_conf *port_conf)
diff --git a/drivers/event/cnxk/cnxk_eventdev.h 
b/drivers/event/cnxk/cnxk_eventdev.h
index 08eba2270..974c618bc 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -45,6 +45,9 @@ void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
 int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
 void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
 struct rte_event_queue_conf *queue_conf);
+int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
+const struct rte_event_queue_conf *queue_conf);
+void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id);
 void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf);
 
-- 
2.17.1



[dpdk-dev] [PATCH 07/36] event/cnxk: allocate event inflight buffers

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Allocate buffers in DRAM that hold inflight events.

Signed-off-by: Shijith Thotton 
Signed-off-by: Pavan Nikhilesh 
---
 drivers/event/cnxk/cn10k_eventdev.c |   7 ++
 drivers/event/cnxk/cn9k_eventdev.c  |   7 ++
 drivers/event/cnxk/cnxk_eventdev.c  | 105 
 drivers/event/cnxk/cnxk_eventdev.h  |  14 +++-
 4 files changed, 132 insertions(+), 1 deletion(-)

diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index 92687c23e..7e3fa20c5 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -55,6 +55,13 @@ cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
return -ENODEV;
}
 
+   rc = cnxk_sso_xaq_allocate(dev);
+   if (rc < 0)
+   goto cnxk_rsrc_fini;
+
+   return 0;
+cnxk_rsrc_fini:
+   roc_sso_rsrc_fini(&dev->sso);
return rc;
 }
 
diff --git a/drivers/event/cnxk/cn9k_eventdev.c 
b/drivers/event/cnxk/cn9k_eventdev.c
index 1bd2b3343..71245b660 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -63,6 +63,13 @@ cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
return -ENODEV;
}
 
+   rc = cnxk_sso_xaq_allocate(dev);
+   if (rc < 0)
+   goto cnxk_rsrc_fini;
+
+   return 0;
+cnxk_rsrc_fini:
+   roc_sso_rsrc_fini(&dev->sso);
return rc;
 }
 
diff --git a/drivers/event/cnxk/cnxk_eventdev.c 
b/drivers/event/cnxk/cnxk_eventdev.c
index 59cc570fe..927f99117 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -28,12 +28,107 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
  RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 }
 
+int
+cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
+{
+   char pool_name[RTE_MEMZONE_NAMESIZE];
+   uint32_t xaq_cnt, npa_aura_id;
+   const struct rte_memzone *mz;
+   struct npa_aura_s *aura;
+   static int reconfig_cnt;
+   int rc;
+
+   if (dev->xaq_pool) {
+   rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+   if (rc < 0) {
+   plt_err("Failed to release XAQ %d", rc);
+   return rc;
+   }
+   rte_mempool_free(dev->xaq_pool);
+   dev->xaq_pool = NULL;
+   }
+
+   /*
+* Allocate memory for Add work backpressure.
+*/
+   mz = rte_memzone_lookup(CNXK_SSO_FC_NAME);
+   if (mz == NULL)
+   mz = rte_memzone_reserve_aligned(CNXK_SSO_FC_NAME,
+sizeof(struct npa_aura_s) +
+RTE_CACHE_LINE_SIZE,
+0, 0, RTE_CACHE_LINE_SIZE);
+   if (mz == NULL) {
+   plt_err("Failed to allocate mem for fcmem");
+   return -ENOMEM;
+   }
+
+   dev->fc_iova = mz->iova;
+   dev->fc_mem = mz->addr;
+
+   aura = (struct npa_aura_s *)((uintptr_t)dev->fc_mem +
+RTE_CACHE_LINE_SIZE);
+   memset(aura, 0, sizeof(struct npa_aura_s));
+
+   aura->fc_ena = 1;
+   aura->fc_addr = dev->fc_iova;
+   aura->fc_hyst_bits = 0; /* Store count on all updates */
+
+   /* Taken from HRM 14.3.3(4) */
+   xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
+   xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
+  (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
+
+   plt_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
+   /* Setup XAQ based on number of nb queues. */
+   snprintf(pool_name, 30, "cnxk_xaq_buf_pool_%d", reconfig_cnt);
+   dev->xaq_pool = (void *)rte_mempool_create_empty(
+   pool_name, xaq_cnt, dev->sso.xaq_buf_size, 0, 0,
+   rte_socket_id(), 0);
+
+   if (dev->xaq_pool == NULL) {
+   plt_err("Unable to create empty mempool.");
+   rte_memzone_free(mz);
+   return -ENOMEM;
+   }
+
+   rc = rte_mempool_set_ops_byname(dev->xaq_pool,
+   rte_mbuf_platform_mempool_ops(), aura);
+   if (rc != 0) {
+   plt_err("Unable to set xaqpool ops.");
+   goto alloc_fail;
+   }
+
+   rc = rte_mempool_populate_default(dev->xaq_pool);
+   if (rc < 0) {
+   plt_err("Unable to set populate xaqpool.");
+   goto alloc_fail;
+   }
+   reconfig_cnt++;
+   /* When SW does addwork (enqueue) check if there is space in XAQ by
+* comparing fc_addr above against the xaq_lmt calculated below.
+* There should be a minimum headroom (CNXK_SSO_XAQ_SLACK / 2) for SSO
+* to request XAQ to cache them even before enqueue is called.
+*/
+   dev->xaq_lmt =
+   xaq_cnt - (CNXK_SSO_XAQ_SLACK / 2 * dev->nb_event_que

[dpdk-dev] [PATCH 08/36] event/cnxk: add devargs for inflight buffer count

2021-03-06 Thread pbhagavatula
From: Shijith Thotton 

The number of events for an *open system* event device is specified
as -1 as per the eventdev specification.
Since SSO inflight events are only limited by DRAM size, the
xae_cnt devargs parameter is introduced to provide an upper limit for
in-flight events.

Example:
--dev "0002:0e:00.0,xae_cnt=8192"

Signed-off-by: Shijith Thotton 
Signed-off-by: Pavan Nikhilesh 
---
 doc/guides/eventdevs/cnxk.rst   | 14 ++
 drivers/event/cnxk/cn10k_eventdev.c |  1 +
 drivers/event/cnxk/cn9k_eventdev.c  |  1 +
 drivers/event/cnxk/cnxk_eventdev.c  | 24 ++--
 drivers/event/cnxk/cnxk_eventdev.h  | 15 +++
 5 files changed, 53 insertions(+), 2 deletions(-)

diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index e94225bd3..569fce4cb 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -41,6 +41,20 @@ Prerequisites and Compilation procedure
 
See :doc:`../platform/cnxk` for setup information.
 
+
+Runtime Config Options
+----------------------
+
+- ``Maximum number of in-flight events`` (default ``8192``)
+
+  In **Marvell OCTEON CNXK** the maximum number of in-flight events is only
+  limited by DRAM size; the ``xae_cnt`` devargs parameter is introduced to
+  provide an upper limit for in-flight events.
+
+  For example::
+
+-a 0002:0e:00.0,xae_cnt=16384
+
 Debugging Options
 -----------------
 
diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index 7e3fa20c5..1b278360f 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -143,3 +143,4 @@ static struct rte_pci_driver cn10k_pci_sso = {
 RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
 RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
 RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c 
b/drivers/event/cnxk/cn9k_eventdev.c
index 71245b660..8dfcf35b4 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -146,3 +146,4 @@ static struct rte_pci_driver cn9k_pci_sso = {
 RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
 RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
 RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=");
diff --git a/drivers/event/cnxk/cnxk_eventdev.c 
b/drivers/event/cnxk/cnxk_eventdev.c
index 927f99117..28a03aeab 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -75,8 +75,11 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
 
/* Taken from HRM 14.3.3(4) */
xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
-   xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
-  (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
+   if (dev->xae_cnt)
+   xaq_cnt += dev->xae_cnt / dev->sso.xae_waes;
+   else
+   xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
+  (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
 
plt_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
/* Setup XAQ based on number of nb queues. */
@@ -222,6 +225,22 @@ cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, 
uint8_t port_id,
port_conf->enqueue_depth = 1;
 }
 
+static void
+cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
+{
+   struct rte_kvargs *kvlist;
+
+   if (devargs == NULL)
+   return;
+   kvlist = rte_kvargs_parse(devargs->args, NULL);
+   if (kvlist == NULL)
+   return;
+
+   rte_kvargs_process(kvlist, CNXK_SSO_XAE_CNT, &parse_kvargs_value,
+  &dev->xae_cnt);
+   rte_kvargs_free(kvlist);
+}
+
 int
 cnxk_sso_init(struct rte_eventdev *event_dev)
 {
@@ -242,6 +261,7 @@ cnxk_sso_init(struct rte_eventdev *event_dev)
dev->sso.pci_dev = pci_dev;
 
*(uint64_t *)mz->addr = (uint64_t)dev;
+   cnxk_sso_parse_devargs(dev, pci_dev->device.devargs);
 
/* Initialize the base cnxk_dev object */
rc = roc_sso_dev_init(&dev->sso);
diff --git a/drivers/event/cnxk/cnxk_eventdev.h 
b/drivers/event/cnxk/cnxk_eventdev.h
index 8478120c0..72b0ff3f8 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -5,6 +5,8 @@
 #ifndef __CNXK_EVENTDEV_H__
 #define __CNXK_EVENTDEV_H__
 
+#include 
+#include 
 #include 
 #include 
 
@@ -12,6 +14,8 @@
 
 #include "roc_api.h"
 
+#define CNXK_SSO_XAE_CNT "xae_cnt"
+
 #define USEC2NSEC(__us) ((__us)*1E3)
 
 #define CNXK_SSO_FC_NAME   "cnxk_evdev_xaq_fc"
@@ -35,10 +39,21 @@ struct cnxk_sso_evdev {
uint64_t nb_xaq_cfg;
rte_iova_t fc_iova;
struct rte_mempool *xaq_pool;
+   /* Dev args */
+   uint32_t xae_cnt;
/* CN9K */
uint8_t dual_ws;
 } __rte_cache_aligned;
 
+static inline

[dpdk-dev] [PATCH 09/36] event/cnxk: add devargs to control SSO HWGRP QoS

2021-03-06 Thread pbhagavatula
From: Shijith Thotton 

SSO HWGRPs, i.e., event queues, use DRAM & SRAM buffers to hold
in-flight events. By default, the buffers are assigned to the SSO
HWGRPs to satisfy minimum HW requirements. SSO is free to assign the
remaining buffers to HWGRPs based on a preconfigured threshold.
We can control the QoS of an SSO HWGRP by modifying the above-mentioned
thresholds. HWGRPs that have higher importance can be assigned higher
thresholds than the rest.

Example:
--dev "0002:0e:00.0,qos=[1-50-50-50]" // [Qx-XAQ-TAQ-IAQ]

Qx  -> Event queue a.k.a. SSO GGRP.
XAQ -> DRAM In-flights.
TAQ & IAQ -> SRAM In-flights.

The values need to be expressed as percentages; 0 represents the
default.

Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Shijith Thotton 
---
 doc/guides/eventdevs/cnxk.rst   | 16 ++
 drivers/event/cnxk/cn10k_eventdev.c |  3 +-
 drivers/event/cnxk/cn9k_eventdev.c  |  3 +-
 drivers/event/cnxk/cnxk_eventdev.c  | 78 +
 drivers/event/cnxk/cnxk_eventdev.h  | 12 -
 5 files changed, 109 insertions(+), 3 deletions(-)

diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 569fce4cb..cf2156333 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -55,6 +55,22 @@ Runtime Config Options
 
 -a 0002:0e:00.0,xae_cnt=16384
 
+- ``Event Group QoS support``
+
+  SSO GGRPs, i.e., event queues, use DRAM & SRAM buffers to hold in-flight
+  events. By default, the buffers are assigned to the SSO GGRPs to
+  satisfy minimum HW requirements. SSO is free to assign the remaining
+  buffers to GGRPs based on a preconfigured threshold.
+  We can control the QoS of an SSO GGRP by modifying the above-mentioned
+  thresholds. GGRPs that have higher importance can be assigned higher
+  thresholds than the rest. The dictionary format is as follows:
+  [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ], expressed as percentages; 0 selects the
+  default.
+
+  For example::
+
+-a 0002:0e:00.0,qos=[1-50-50-50]
+
 Debugging Options
 -----------------
 
diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index 1b278360f..47eb8898b 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -143,4 +143,5 @@ static struct rte_pci_driver cn10k_pci_sso = {
 RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
 RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
 RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "="
+ CNXK_SSO_GGRP_QOS "=");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c 
b/drivers/event/cnxk/cn9k_eventdev.c
index 8dfcf35b4..43c045d43 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -146,4 +146,5 @@ static struct rte_pci_driver cn9k_pci_sso = {
 RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
 RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
 RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "="
+ CNXK_SSO_GGRP_QOS "=");
diff --git a/drivers/event/cnxk/cnxk_eventdev.c 
b/drivers/event/cnxk/cnxk_eventdev.c
index 28a03aeab..4cb5359a8 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -225,6 +225,82 @@ cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, 
uint8_t port_id,
port_conf->enqueue_depth = 1;
 }
 
+static void
+parse_queue_param(char *value, void *opaque)
+{
+   struct cnxk_sso_qos queue_qos = {0};
+   uint8_t *val = (uint8_t *)&queue_qos;
+   struct cnxk_sso_evdev *dev = opaque;
+   char *tok = strtok(value, "-");
+   struct cnxk_sso_qos *old_ptr;
+
+   if (!strlen(value))
+   return;
+
+   while (tok != NULL) {
+   *val = atoi(tok);
+   tok = strtok(NULL, "-");
+   val++;
+   }
+
+   if (val != (&queue_qos.iaq_prcnt + 1)) {
+   plt_err("Invalid QoS parameter expected [Qx-XAQ-TAQ-IAQ]");
+   return;
+   }
+
+   dev->qos_queue_cnt++;
+   old_ptr = dev->qos_parse_data;
+   dev->qos_parse_data = rte_realloc(
+   dev->qos_parse_data,
+   sizeof(struct cnxk_sso_qos) * dev->qos_queue_cnt, 0);
+   if (dev->qos_parse_data == NULL) {
+   dev->qos_parse_data = old_ptr;
+   dev->qos_queue_cnt--;
+   return;
+   }
+   dev->qos_parse_data[dev->qos_queue_cnt - 1] = queue_qos;
+}
+
+static void
+parse_qos_list(const char *value, void *opaque)
+{
+   char *s = strdup(value);
+   char *start = NULL;
+   char *end = NULL;
+   char *f = s;
+
+   while (*s) {
+   if (*s == '[')
+   start = s;
+   else if (*s == ']')
+ 

[dpdk-dev] [PATCH 10/36] event/cnxk: add port config functions

2021-03-06 Thread pbhagavatula
From: Shijith Thotton 

Add SSO HWS (event port) setup and release functions.
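
On the application side (illustrative only; dev_id and port_id are
placeholders), a port is created from the PMD's default configuration and
then set up, which exercises the init/setup helpers added below:

#include <rte_eventdev.h>

static int
setup_port(uint8_t dev_id, uint8_t port_id)
{
        struct rte_event_port_conf conf;
        int rc;

        /* Defaults come from cnxk_sso_port_def_conf(). */
        rc = rte_event_port_default_conf_get(dev_id, port_id, &conf);
        if (rc != 0)
                return rc;

        return rte_event_port_setup(dev_id, port_id, &conf);
}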

Signed-off-by: Shijith Thotton 
Signed-off-by: Pavan Nikhilesh 
---
 drivers/event/cnxk/cn10k_eventdev.c | 121 +++
 drivers/event/cnxk/cn9k_eventdev.c  | 147 
 drivers/event/cnxk/cnxk_eventdev.c  |  65 
 drivers/event/cnxk/cnxk_eventdev.h  |  91 +
 4 files changed, 424 insertions(+)

diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index 47eb8898b..c60df7f7b 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -4,6 +4,91 @@
 
 #include "cnxk_eventdev.h"
 
+static void
+cn10k_init_hws_ops(struct cn10k_sso_hws *ws, uintptr_t base)
+{
+   ws->tag_wqe_op = base + SSOW_LF_GWS_WQE0;
+   ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK0;
+   ws->updt_wqe_op = base + SSOW_LF_GWS_OP_UPD_WQP_GRP1;
+   ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM;
+   ws->swtag_untag_op = base + SSOW_LF_GWS_OP_SWTAG_UNTAG;
+   ws->swtag_flush_op = base + SSOW_LF_GWS_OP_SWTAG_FLUSH;
+   ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
+}
+
+static uint32_t
+cn10k_sso_gw_mode_wdata(struct cnxk_sso_evdev *dev)
+{
+   uint32_t wdata = BIT(16) | 1;
+
+   switch (dev->gw_mode) {
+   case CN10K_GW_MODE_NONE:
+   default:
+   break;
+   case CN10K_GW_MODE_PREF:
+   wdata |= BIT(19);
+   break;
+   case CN10K_GW_MODE_PREF_WFE:
+   wdata |= BIT(20) | BIT(19);
+   break;
+   }
+
+   return wdata;
+}
+
+static void *
+cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
+{
+   struct cnxk_sso_evdev *dev = arg;
+   struct cn10k_sso_hws *ws;
+
+   /* Allocate event port memory */
+   ws = rte_zmalloc("cn10k_ws",
+sizeof(struct cn10k_sso_hws) + RTE_CACHE_LINE_SIZE,
+RTE_CACHE_LINE_SIZE);
+   if (ws == NULL) {
+   plt_err("Failed to alloc memory for port=%d", port_id);
+   return NULL;
+   }
+
+   /* First cache line is reserved for cookie */
+   ws = (struct cn10k_sso_hws *)((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
+   ws->base = roc_sso_hws_base_get(&dev->sso, port_id);
+   cn10k_init_hws_ops(ws, ws->base);
+   ws->hws_id = port_id;
+   ws->swtag_req = 0;
+   ws->gw_wdata = cn10k_sso_gw_mode_wdata(dev);
+   ws->lmt_base = dev->sso.lmt_base;
+
+   return ws;
+}
+
+static void
+cn10k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
+{
+   struct cnxk_sso_evdev *dev = arg;
+   struct cn10k_sso_hws *ws = hws;
+   uint64_t val;
+
+   rte_memcpy(ws->grps_base, grps_base,
+  sizeof(uintptr_t) * CNXK_SSO_MAX_HWGRP);
+   ws->fc_mem = dev->fc_mem;
+   ws->xaq_lmt = dev->xaq_lmt;
+
+   /* Set get_work timeout for HWS */
+   val = NSEC2USEC(dev->deq_tmo_ns) - 1;
+   plt_write64(val, ws->base + SSOW_LF_GWS_NW_TIM);
+}
+
+static void
+cn10k_sso_hws_release(void *arg, void *hws)
+{
+   struct cn10k_sso_hws *ws = hws;
+
+   RTE_SET_USED(arg);
+   memset(ws, 0, sizeof(*ws));
+}
+
 static void
 cn10k_sso_set_rsrc(void *arg)
 {
@@ -59,12 +144,46 @@ cn10k_sso_dev_configure(const struct rte_eventdev 
*event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
 
+   rc = cnxk_setup_event_ports(event_dev, cn10k_sso_init_hws_mem,
+   cn10k_sso_hws_setup);
+   if (rc < 0)
+   goto cnxk_rsrc_fini;
+
return 0;
 cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
+   dev->nb_event_ports = 0;
return rc;
 }
 
+static int
+cn10k_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+const struct rte_event_port_conf *port_conf)
+{
+
+   RTE_SET_USED(port_conf);
+   return cnxk_sso_port_setup(event_dev, port_id, cn10k_sso_hws_setup);
+}
+
+static void
+cn10k_sso_port_release(void *port)
+{
+   struct cnxk_sso_hws_cookie *gws_cookie = cnxk_sso_hws_get_cookie(port);
+   struct cnxk_sso_evdev *dev;
+
+   if (port == NULL)
+   return;
+
+   dev = cnxk_sso_pmd_priv(gws_cookie->event_dev);
+   if (!gws_cookie->configured)
+   goto free;
+
+   cn10k_sso_hws_release(dev, port);
+   memset(gws_cookie, 0, sizeof(*gws_cookie));
+free:
+   rte_free(gws_cookie);
+}
+
 static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -72,6 +191,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.queue_setup = cnxk_sso_queue_setup,
.queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
+   .port_setup = cn10k_sso_port_setup,
+   .port_release = cn10k_sso_port_release,
 };
 
 static int
diff --git a/drivers/eve

[dpdk-dev] [PATCH 11/36] event/cnxk: add event port link and unlink

2021-03-06 Thread pbhagavatula
From: Shijith Thotton 

Add platform specific event port and queue link/unlink APIs.
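
As a usage sketch (dev_id, port_id and the queue list are placeholders),
linking a port to every configured queue, or to an explicit subset, maps
onto the HWS link/unlink helpers below:

#include <rte_common.h>
#include <rte_eventdev.h>

static int
link_port(uint8_t dev_id, uint8_t port_id)
{
        uint8_t queues[] = {0, 1};
        int nb;

        /* A NULL queue list links the port to all configured queues. */
        nb = rte_event_port_link(dev_id, port_id, NULL, NULL, 0);
        if (nb < 0)
                return nb;

        /* Or unlink everything and link an explicit subset instead. */
        nb = rte_event_port_unlink(dev_id, port_id, NULL, 0);
        if (nb < 0)
                return nb;
        return rte_event_port_link(dev_id, port_id, queues, NULL,
                                   RTE_DIM(queues));
}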

Signed-off-by: Shijith Thotton 
Signed-off-by: Pavan Nikhilesh 
---
 drivers/event/cnxk/cn10k_eventdev.c |  64 +-
 drivers/event/cnxk/cn9k_eventdev.c  | 101 
 drivers/event/cnxk/cnxk_eventdev.c  |  36 ++
 drivers/event/cnxk/cnxk_eventdev.h  |  12 +++-
 4 files changed, 210 insertions(+), 3 deletions(-)

diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index c60df7f7b..3cf07734b 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -63,6 +63,24 @@ cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
return ws;
 }
 
+static int
+cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+   struct cnxk_sso_evdev *dev = arg;
+   struct cn10k_sso_hws *ws = port;
+
+   return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+}
+
+static int
+cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+   struct cnxk_sso_evdev *dev = arg;
+   struct cn10k_sso_hws *ws = port;
+
+   return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+}
+
 static void
 cn10k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
 {
@@ -83,9 +101,12 @@ cn10k_sso_hws_setup(void *arg, void *hws, uintptr_t 
*grps_base)
 static void
 cn10k_sso_hws_release(void *arg, void *hws)
 {
+   struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = hws;
+   int i;
 
-   RTE_SET_USED(arg);
+   for (i = 0; i < dev->nb_event_queues; i++)
+   roc_sso_hws_unlink(&dev->sso, ws->hws_id, (uint16_t *)&i, 1);
memset(ws, 0, sizeof(*ws));
 }
 
@@ -149,6 +170,12 @@ cn10k_sso_dev_configure(const struct rte_eventdev 
*event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
 
+   /* Restore any prior port-queue mapping. */
+   cnxk_sso_restore_links(event_dev, cn10k_sso_hws_link);
+
+   dev->configured = 1;
+   rte_mb();
+
return 0;
 cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
@@ -184,6 +211,38 @@ cn10k_sso_port_release(void *port)
rte_free(gws_cookie);
 }
 
+static int
+cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
+   const uint8_t queues[], const uint8_t priorities[],
+   uint16_t nb_links)
+{
+   struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+   uint16_t hwgrp_ids[nb_links];
+   uint16_t link;
+
+   RTE_SET_USED(priorities);
+   for (link = 0; link < nb_links; link++)
+   hwgrp_ids[link] = queues[link];
+   nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+
+   return (int)nb_links;
+}
+
+static int
+cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks)
+{
+   struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+   uint16_t hwgrp_ids[nb_unlinks];
+   uint16_t unlink;
+
+   for (unlink = 0; unlink < nb_unlinks; unlink++)
+   hwgrp_ids[unlink] = queues[unlink];
+   nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+
+   return (int)nb_unlinks;
+}
+
 static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -193,6 +252,9 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_def_conf = cnxk_sso_port_def_conf,
.port_setup = cn10k_sso_port_setup,
.port_release = cn10k_sso_port_release,
+   .port_link = cn10k_sso_port_link,
+   .port_unlink = cn10k_sso_port_unlink,
+   .timeout_ticks = cnxk_sso_timeout_ticks,
 };
 
 static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c 
b/drivers/event/cnxk/cn9k_eventdev.c
index 116f5bdab..5be2776cc 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -18,6 +18,54 @@ cn9k_init_hws_ops(struct cn9k_sso_hws_state *ws, uintptr_t 
base)
ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
 }
 
+static int
+cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+   struct cnxk_sso_evdev *dev = arg;
+   struct cn9k_sso_hws_dual *dws;
+   struct cn9k_sso_hws *ws;
+   int rc;
+
+   if (dev->dual_ws) {
+   dws = port;
+   rc = roc_sso_hws_link(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
+ nb_link);
+   rc |= roc_sso_hws_link(&dev->sso,
+  CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+  map, nb_link);
+   } else {
+   ws = port;
+   rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+   }
+
+   return rc;
+}
+
+static int
+cn9k_sso_

[dpdk-dev] [PATCH 12/36] event/cnxk: add devargs to configure getwork mode

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Add devargs to configure the platform specific getwork mode.

CN9K getwork mode defaults to dual workslot mode.
Add an option to force single workslot mode.
Example:
--dev "0002:0e:00.0,single_ws=1"

CN10K supports multiple getwork prefetch modes; by default the
prefetch mode is set to none.
Add an option to select the getwork prefetch mode.
Example:
--dev "0002:1e:00.0,gw_mode=1"

Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Shijith Thotton 
---
 doc/guides/eventdevs/cnxk.rst   | 18 ++
 drivers/event/cnxk/cn10k_eventdev.c |  3 ++-
 drivers/event/cnxk/cn9k_eventdev.c  |  3 ++-
 drivers/event/cnxk/cnxk_eventdev.c  |  6 ++
 drivers/event/cnxk/cnxk_eventdev.h  |  6 --
 5 files changed, 32 insertions(+), 4 deletions(-)
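
For reference, a handler with the callback signature that rte_kvargs_process() expects in the
parse hunk below could look like the sketch here; the helper name and the uint8_t destination
are assumptions, not code from this patch.

    #include <stdlib.h>
    #include <stdint.h>

    #include <rte_common.h>
    #include <rte_kvargs.h>

    /* Hypothetical kvargs handler: copy the numeric devargs value
     * (e.g. "gw_mode=1" or "single_ws=1") into the opaque destination.
     */
    static int
    example_parse_value(const char *key, const char *value, void *opaque)
    {
            RTE_SET_USED(key);

            *(uint8_t *)opaque = (uint8_t)atoi(value);
            return 0;
    }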

diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index cf2156333..b2684d431 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -55,6 +55,24 @@ Runtime Config Options
 
 -a 0002:0e:00.0,xae_cnt=16384
 
+- ``CN9K Getwork mode``
+
+  CN9K ``single_ws`` devargs parameter is introduced to select single workslot
+  mode in SSO and disable the default dual workslot mode.
+
+  For example::
+
+-a 0002:0e:00.0,single_ws=1
+
+- ``CN10K Getwork mode``
+
+  CN10K supports multiple getwork prefetch modes; by default the prefetch
+  mode is set to none.
+
+  For example::
+
+-a 0002:0e:00.0,gw_mode=1
+
 - ``Event Group QoS support``
 
   SSO GGRPs i.e. queue uses DRAM & SRAM buffers to hold in-flight
diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index 3cf07734b..310acc011 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -327,4 +327,5 @@ RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
 RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
 RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
 RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "="
- CNXK_SSO_GGRP_QOS "=");
+ CNXK_SSO_GGRP_QOS "="
+ CN10K_SSO_GW_MODE "=");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c 
b/drivers/event/cnxk/cn9k_eventdev.c
index 5be2776cc..44c7a0c3a 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -395,4 +395,5 @@ RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
 RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
 RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
 RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "="
- CNXK_SSO_GGRP_QOS "=");
+ CNXK_SSO_GGRP_QOS "="
+ CN9K_SSO_SINGLE_WS "=1");
diff --git a/drivers/event/cnxk/cnxk_eventdev.c 
b/drivers/event/cnxk/cnxk_eventdev.c
index 5f4075a31..0e2cc3681 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -406,6 +406,7 @@ static void
 cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
 {
struct rte_kvargs *kvlist;
+   uint8_t single_ws = 0;
 
if (devargs == NULL)
return;
@@ -417,6 +418,11 @@ cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct 
rte_devargs *devargs)
   &dev->xae_cnt);
rte_kvargs_process(kvlist, CNXK_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
   dev);
+   rte_kvargs_process(kvlist, CN9K_SSO_SINGLE_WS, &parse_kvargs_value,
+  &single_ws);
+   rte_kvargs_process(kvlist, CN10K_SSO_GW_MODE, &parse_kvargs_value,
+  &dev->gw_mode);
+   dev->dual_ws = !single_ws;
rte_kvargs_free(kvlist);
 }
 
diff --git a/drivers/event/cnxk/cnxk_eventdev.h 
b/drivers/event/cnxk/cnxk_eventdev.h
index bf2c961aa..85f6058f2 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -14,8 +14,10 @@
 
 #include "roc_api.h"
 
-#define CNXK_SSO_XAE_CNT  "xae_cnt"
-#define CNXK_SSO_GGRP_QOS "qos"
+#define CNXK_SSO_XAE_CNT   "xae_cnt"
+#define CNXK_SSO_GGRP_QOS  "qos"
+#define CN9K_SSO_SINGLE_WS "single_ws"
+#define CN10K_SSO_GW_MODE  "gw_mode"
 
 #define NSEC2USEC(__ns)((__ns) / 1E3)
 #define USEC2NSEC(__us)((__us)*1E3)
-- 
2.17.1



[dpdk-dev] [PATCH 13/36] event/cnxk: add SSO HW device operations

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Add SSO HW device operations used for enqueue/dequeue.

Signed-off-by: Pavan Nikhilesh 
---
 drivers/event/cnxk/cn10k_worker.c  |   7 +
 drivers/event/cnxk/cn10k_worker.h  | 151 +
 drivers/event/cnxk/cn9k_worker.c   |   7 +
 drivers/event/cnxk/cn9k_worker.h   | 249 +
 drivers/event/cnxk/cnxk_eventdev.h |  10 ++
 drivers/event/cnxk/cnxk_worker.h   | 101 
 drivers/event/cnxk/meson.build |   4 +-
 7 files changed, 528 insertions(+), 1 deletion(-)
 create mode 100644 drivers/event/cnxk/cn10k_worker.c
 create mode 100644 drivers/event/cnxk/cn10k_worker.h
 create mode 100644 drivers/event/cnxk/cn9k_worker.c
 create mode 100644 drivers/event/cnxk/cn9k_worker.h
 create mode 100644 drivers/event/cnxk/cnxk_worker.h

diff --git a/drivers/event/cnxk/cn10k_worker.c 
b/drivers/event/cnxk/cn10k_worker.c
new file mode 100644
index 0..4a7d0b535
--- /dev/null
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cn10k_worker.h"
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
diff --git a/drivers/event/cnxk/cn10k_worker.h 
b/drivers/event/cnxk/cn10k_worker.h
new file mode 100644
index 0..0a7cb9c57
--- /dev/null
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __CN10K_WORKER_H__
+#define __CN10K_WORKER_H__
+
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
+
+/* SSO Operations */
+
+static __rte_always_inline uint8_t
+cn10k_sso_hws_new_event(struct cn10k_sso_hws *ws, const struct rte_event *ev)
+{
+   const uint32_t tag = (uint32_t)ev->event;
+   const uint8_t new_tt = ev->sched_type;
+   const uint64_t event_ptr = ev->u64;
+   const uint16_t grp = ev->queue_id;
+
+   rte_atomic_thread_fence(__ATOMIC_ACQ_REL);
+   if (ws->xaq_lmt <= *ws->fc_mem)
+   return 0;
+
+   cnxk_sso_hws_add_work(event_ptr, tag, new_tt, ws->grps_base[grp]);
+   return 1;
+}
+
+static __rte_always_inline void
+cn10k_sso_hws_fwd_swtag(struct cn10k_sso_hws *ws, const struct rte_event *ev)
+{
+   const uint32_t tag = (uint32_t)ev->event;
+   const uint8_t new_tt = ev->sched_type;
+   const uint8_t cur_tt = CNXK_TT_FROM_TAG(plt_read64(ws->tag_wqe_op));
+
+   /* CNXK model
+* cur_tt/new_tt SSO_TT_ORDERED SSO_TT_ATOMIC SSO_TT_UNTAGGED
+*
+* SSO_TT_ORDEREDnorm   norm untag
+* SSO_TT_ATOMIC norm   norm   untag
+* SSO_TT_UNTAGGED   norm   norm NOOP
+*/
+
+   if (new_tt == SSO_TT_UNTAGGED) {
+   if (cur_tt != SSO_TT_UNTAGGED)
+   cnxk_sso_hws_swtag_untag(ws->swtag_untag_op);
+   } else {
+   cnxk_sso_hws_swtag_norm(tag, new_tt, ws->swtag_norm_op);
+   }
+   ws->swtag_req = 1;
+}
+
+static __rte_always_inline void
+cn10k_sso_hws_fwd_group(struct cn10k_sso_hws *ws, const struct rte_event *ev,
+   const uint16_t grp)
+{
+   const uint32_t tag = (uint32_t)ev->event;
+   const uint8_t new_tt = ev->sched_type;
+
+   plt_write64(ev->u64, ws->updt_wqe_op);
+   cnxk_sso_hws_swtag_desched(tag, new_tt, grp, ws->swtag_desched_op);
+}
+
+static __rte_always_inline void
+cn10k_sso_hws_forward_event(struct cn10k_sso_hws *ws,
+   const struct rte_event *ev)
+{
+   const uint8_t grp = ev->queue_id;
+
+   /* Group hasn't changed, Use SWTAG to forward the event */
+   if (CNXK_GRP_FROM_TAG(plt_read64(ws->tag_wqe_op)) == grp)
+   cn10k_sso_hws_fwd_swtag(ws, ev);
+   else
+   /*
+* Group has been changed for group based work pipelining,
+* Use deschedule/add_work operation to transfer the event to
+* new group/core
+*/
+   cn10k_sso_hws_fwd_group(ws, ev, grp);
+}
+
+static __rte_always_inline uint16_t
+cn10k_sso_hws_get_work(struct cn10k_sso_hws *ws, struct rte_event *ev)
+{
+   union {
+   __uint128_t get_work;
+   uint64_t u64[2];
+   } gw;
+
+   gw.get_work = ws->gw_wdata;
+#if defined(RTE_ARCH_ARM64) && !defined(__clang__)
+   asm volatile(
+   PLT_CPU_FEATURE_PREAMBLE
+   "caspl %[wdata], %H[wdata], %[wdata], %H[wdata], [%[gw_loc]]\n"
+   : [wdata] "+r"(gw.get_work)
+   : [gw_loc] "r"(ws->getwrk_op)
+   : "memory");
+#else
+   plt_write64(gw.u64[0], ws->getwrk_op);
+   do {
+   roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
+   } while (gw.u64[0] & BIT_ULL(63));
+#endif
+   gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+   (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ 

[dpdk-dev] [PATCH 14/36] event/cnxk: add SSO GWS fastpath enqueue functions

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Add SSO GWS fastpath event device enqueue functions.

Signed-off-by: Pavan Nikhilesh 
---
 drivers/event/cnxk/cn10k_eventdev.c |  16 +++-
 drivers/event/cnxk/cn10k_worker.c   |  54 ++
 drivers/event/cnxk/cn10k_worker.h   |  12 +++
 drivers/event/cnxk/cn9k_eventdev.c  |  25 ++-
 drivers/event/cnxk/cn9k_worker.c| 112 
 drivers/event/cnxk/cn9k_worker.h|  24 ++
 6 files changed, 241 insertions(+), 2 deletions(-)
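
An application-side sketch (assumption, not part of the patch) of how the new enqueue fastpath
entry points are exercised; the device/port/queue IDs are placeholders.

    #include <stdint.h>
    #include <string.h>

    #include <rte_eventdev.h>

    /* Inject one NEW event through the public API; this resolves to the
     * cn9k/cn10k_sso_hws_enq* functions once cn*_sso_fp_fns_set() has run.
     */
    static int
    example_inject(uint8_t dev_id, uint8_t port_id, void *obj)
    {
            struct rte_event ev;

            memset(&ev, 0, sizeof(ev));
            ev.op = RTE_EVENT_OP_NEW;
            ev.queue_id = 0;
            ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
            ev.event_type = RTE_EVENT_TYPE_CPU;
            ev.u64 = (uint64_t)(uintptr_t)obj;

            return rte_event_enqueue_burst(dev_id, port_id, &ev, 1) == 1 ? 0 : -1;
    }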

diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index 310acc011..16848798c 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -2,7 +2,9 @@
  * Copyright(C) 2021 Marvell International Ltd.
  */
 
+#include "cn10k_worker.h"
 #include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
 
 static void
 cn10k_init_hws_ops(struct cn10k_sso_hws *ws, uintptr_t base)
@@ -130,6 +132,16 @@ cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
 }
 
+static void
+cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
+{
+   PLT_SET_USED(event_dev);
+   event_dev->enqueue = cn10k_sso_hws_enq;
+   event_dev->enqueue_burst = cn10k_sso_hws_enq_burst;
+   event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;
+   event_dev->enqueue_forward_burst = cn10k_sso_hws_enq_fwd_burst;
+}
+
 static void
 cn10k_sso_info_get(struct rte_eventdev *event_dev,
   struct rte_event_dev_info *dev_info)
@@ -276,8 +288,10 @@ cn10k_sso_init(struct rte_eventdev *event_dev)
 
event_dev->dev_ops = &cn10k_sso_dev_ops;
/* For secondary processes, the primary has done all the work */
-   if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+   if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+   cn10k_sso_fp_fns_set(event_dev);
return 0;
+   }
 
rc = cnxk_sso_init(event_dev);
if (rc < 0)
diff --git a/drivers/event/cnxk/cn10k_worker.c 
b/drivers/event/cnxk/cn10k_worker.c
index 4a7d0b535..cef24f4e2 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -5,3 +5,57 @@
 #include "cn10k_worker.h"
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq(void *port, const struct rte_event *ev)
+{
+   struct cn10k_sso_hws *ws = port;
+
+   switch (ev->op) {
+   case RTE_EVENT_OP_NEW:
+   return cn10k_sso_hws_new_event(ws, ev);
+   case RTE_EVENT_OP_FORWARD:
+   cn10k_sso_hws_forward_event(ws, ev);
+   break;
+   case RTE_EVENT_OP_RELEASE:
+   cnxk_sso_hws_swtag_flush(ws->tag_wqe_op, ws->swtag_flush_op);
+   break;
+   default:
+   return 0;
+   }
+
+   return 1;
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq_burst(void *port, const struct rte_event ev[],
+   uint16_t nb_events)
+{
+   RTE_SET_USED(nb_events);
+   return cn10k_sso_hws_enq(port, ev);
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq_new_burst(void *port, const struct rte_event ev[],
+   uint16_t nb_events)
+{
+   struct cn10k_sso_hws *ws = port;
+   uint16_t i, rc = 1;
+
+   for (i = 0; i < nb_events && rc; i++)
+   rc = cn10k_sso_hws_new_event(ws, &ev[i]);
+
+   return nb_events;
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
+   uint16_t nb_events)
+{
+   struct cn10k_sso_hws *ws = port;
+
+   RTE_SET_USED(nb_events);
+   cn10k_sso_hws_forward_event(ws, ev);
+
+   return 1;
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h 
b/drivers/event/cnxk/cn10k_worker.h
index 0a7cb9c57..d75e92846 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -148,4 +148,16 @@ cn10k_sso_hws_get_work_empty(struct cn10k_sso_hws *ws, 
struct rte_event *ev)
return !!gw.u64[1];
 }
 
+/* CN10K Fastpath functions. */
+uint16_t __rte_hot cn10k_sso_hws_enq(void *port, const struct rte_event *ev);
+uint16_t __rte_hot cn10k_sso_hws_enq_burst(void *port,
+  const struct rte_event ev[],
+  uint16_t nb_events);
+uint16_t __rte_hot cn10k_sso_hws_enq_new_burst(void *port,
+  const struct rte_event ev[],
+  uint16_t nb_events);
+uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
+  const struct rte_event ev[],
+  uint16_t nb_events);
+
 #endif
diff --git a/drivers/event/cnxk/cn9k_eventdev.c 
b/drivers/event/cnxk/cn9k_eventdev.c
index 44c7a0c3a..7e4c1b415 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -2,7 +2,9 @@
  * Copyright(C) 20

[dpdk-dev] [PATCH 15/36] event/cnxk: add SSO GWS dequeue fastpath functions

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Add SSO GWS event dequeue fastpath functions.

Signed-off-by: Pavan Nikhilesh 
---
 drivers/event/cnxk/cn10k_eventdev.c |  10 ++-
 drivers/event/cnxk/cn10k_worker.c   |  54 +
 drivers/event/cnxk/cn10k_worker.h   |  12 +++
 drivers/event/cnxk/cn9k_eventdev.c  |  15 
 drivers/event/cnxk/cn9k_worker.c| 117 
 drivers/event/cnxk/cn9k_worker.h|  24 ++
 6 files changed, 231 insertions(+), 1 deletion(-)
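
A minimal worker-loop sketch (not part of the patch; IDs are placeholders) showing how the new
dequeue functions are reached; with a dequeue timeout configured the PMD installs the tmo
variants added here instead.

    #include <stdbool.h>
    #include <stdint.h>

    #include <rte_eventdev.h>

    /* Poll the event port, process the event and release the schedule context. */
    static void
    example_worker(uint8_t dev_id, uint8_t port_id, volatile bool *done)
    {
            struct rte_event ev;

            while (!*done) {
                    if (!rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0))
                            continue;       /* nothing scheduled yet */

                    /* ... process the event ... */

                    ev.op = RTE_EVENT_OP_RELEASE;   /* drop the schedule context */
                    rte_event_enqueue_burst(dev_id, port_id, &ev, 1);
            }
    }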

diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index 16848798c..a9948e1b2 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -135,11 +135,19 @@ cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
 static void
 cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 {
-   PLT_SET_USED(event_dev);
+   struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
event_dev->enqueue = cn10k_sso_hws_enq;
event_dev->enqueue_burst = cn10k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn10k_sso_hws_enq_fwd_burst;
+
+   event_dev->dequeue = cn10k_sso_hws_deq;
+   event_dev->dequeue_burst = cn10k_sso_hws_deq_burst;
+   if (dev->is_timeout_deq) {
+   event_dev->dequeue = cn10k_sso_hws_tmo_deq;
+   event_dev->dequeue_burst = cn10k_sso_hws_tmo_deq_burst;
+   }
 }
 
 static void
diff --git a/drivers/event/cnxk/cn10k_worker.c 
b/drivers/event/cnxk/cn10k_worker.c
index cef24f4e2..57b0714bb 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -59,3 +59,57 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct 
rte_event ev[],
 
return 1;
 }
+
+uint16_t __rte_hot
+cn10k_sso_hws_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+   struct cn10k_sso_hws *ws = port;
+
+   RTE_SET_USED(timeout_ticks);
+
+   if (ws->swtag_req) {
+   ws->swtag_req = 0;
+   cnxk_sso_hws_swtag_wait(ws->tag_wqe_op);
+   return 1;
+   }
+
+   return cn10k_sso_hws_get_work(ws, ev);
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_deq_burst(void *port, struct rte_event ev[], uint16_t nb_events,
+   uint64_t timeout_ticks)
+{
+   RTE_SET_USED(nb_events);
+
+   return cn10k_sso_hws_deq(port, ev, timeout_ticks);
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_tmo_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+   struct cn10k_sso_hws *ws = port;
+   uint16_t ret = 1;
+   uint64_t iter;
+
+   if (ws->swtag_req) {
+   ws->swtag_req = 0;
+   cnxk_sso_hws_swtag_wait(ws->tag_wqe_op);
+   return ret;
+   }
+
+   ret = cn10k_sso_hws_get_work(ws, ev);
+   for (iter = 1; iter < timeout_ticks && (ret == 0); iter++)
+   ret = cn10k_sso_hws_get_work(ws, ev);
+
+   return ret;
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
+   uint16_t nb_events, uint64_t timeout_ticks)
+{
+   RTE_SET_USED(nb_events);
+
+   return cn10k_sso_hws_tmo_deq(port, ev, timeout_ticks);
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h 
b/drivers/event/cnxk/cn10k_worker.h
index d75e92846..ed4e3bd63 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -160,4 +160,16 @@ uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
   const struct rte_event ev[],
   uint16_t nb_events);
 
+uint16_t __rte_hot cn10k_sso_hws_deq(void *port, struct rte_event *ev,
+uint64_t timeout_ticks);
+uint16_t __rte_hot cn10k_sso_hws_deq_burst(void *port, struct rte_event ev[],
+  uint16_t nb_events,
+  uint64_t timeout_ticks);
+uint16_t __rte_hot cn10k_sso_hws_tmo_deq(void *port, struct rte_event *ev,
+uint64_t timeout_ticks);
+uint16_t __rte_hot cn10k_sso_hws_tmo_deq_burst(void *port,
+  struct rte_event ev[],
+  uint16_t nb_events,
+  uint64_t timeout_ticks);
+
 #endif
diff --git a/drivers/event/cnxk/cn9k_eventdev.c 
b/drivers/event/cnxk/cn9k_eventdev.c
index 7e4c1b415..8100140fc 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -162,12 +162,27 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
 
+   event_dev->dequeue = cn9k_sso_hws_deq;
+   event_dev->dequeue_burst = cn9k_sso_hws_deq_burst;
+   if (dev->deq_tmo_ns) {

[dpdk-dev] [PATCH 16/36] event/cnxk: add device start function

2021-03-06 Thread pbhagavatula
From: Shijith Thotton 

Add the eventdev start function along with a few cleanup APIs to maintain
sanity.

Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Shijith Thotton 
---
 drivers/event/cnxk/cn10k_eventdev.c | 127 
 drivers/event/cnxk/cn9k_eventdev.c  | 113 +
 drivers/event/cnxk/cnxk_eventdev.c  |  64 ++
 drivers/event/cnxk/cnxk_eventdev.h  |   7 ++
 4 files changed, 311 insertions(+)
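
For context, a typical bring-up order once dev_start is in place; a single queue and port with
default configs, all values below are placeholders and not part of the patch.

    #include <stdint.h>
    #include <string.h>

    #include <rte_eventdev.h>

    static int
    example_bring_up(uint8_t dev_id)
    {
            struct rte_event_dev_config conf;
            struct rte_event_dev_info info;

            if (rte_event_dev_info_get(dev_id, &info))
                    return -1;

            memset(&conf, 0, sizeof(conf));
            conf.nb_event_queues = 1;
            conf.nb_event_ports = 1;
            conf.nb_events_limit = info.max_num_events;
            conf.nb_event_queue_flows = info.max_event_queue_flows;
            conf.nb_event_port_dequeue_depth = info.max_event_port_dequeue_depth;
            conf.nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth;
            conf.dequeue_timeout_ns = info.min_dequeue_timeout_ns;

            if (rte_event_dev_configure(dev_id, &conf) ||
                rte_event_queue_setup(dev_id, 0, NULL) ||
                rte_event_port_setup(dev_id, 0, NULL) ||
                rte_event_port_link(dev_id, 0, NULL, NULL, 0) < 0)
                    return -1;

            return rte_event_dev_start(dev_id);     /* -> cn9k/cn10k_sso_start() */
    }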

diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index a9948e1b2..0de44ed43 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -112,6 +112,117 @@ cn10k_sso_hws_release(void *arg, void *hws)
memset(ws, 0, sizeof(*ws));
 }
 
+static void
+cn10k_sso_hws_flush_events(void *hws, uint8_t queue_id, uintptr_t base,
+  cnxk_handle_event_t fn, void *arg)
+{
+   struct cn10k_sso_hws *ws = hws;
+   uint64_t cq_ds_cnt = 1;
+   uint64_t aq_cnt = 1;
+   uint64_t ds_cnt = 1;
+   struct rte_event ev;
+   uint64_t val, req;
+
+   plt_write64(0, base + SSO_LF_GGRP_QCTL);
+
+   req = queue_id; /* GGRP ID */
+   req |= BIT_ULL(18); /* Grouped */
+   req |= BIT_ULL(16); /* WAIT */
+
+   aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+   ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+   cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+   cq_ds_cnt &= 0x3FFF3FFF;
+
+   while (aq_cnt || cq_ds_cnt || ds_cnt) {
+   plt_write64(req, ws->getwrk_op);
+   cn10k_sso_hws_get_work_empty(ws, &ev);
+   if (fn != NULL && ev.u64 != 0)
+   fn(arg, ev);
+   if (ev.sched_type != SSO_TT_EMPTY)
+   cnxk_sso_hws_swtag_flush(ws->tag_wqe_op,
+ws->swtag_flush_op);
+   do {
+   val = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+   } while (val & BIT_ULL(56));
+   aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+   ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+   cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+   /* Extract cq and ds count */
+   cq_ds_cnt &= 0x3FFF3FFF;
+   }
+
+   plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
+   rte_mb();
+}
+
+static void
+cn10k_sso_hws_reset(void *arg, void *hws)
+{
+   struct cnxk_sso_evdev *dev = arg;
+   struct cn10k_sso_hws *ws = hws;
+   uintptr_t base = ws->base;
+   uint64_t pend_state;
+   union {
+   __uint128_t wdata;
+   uint64_t u64[2];
+   } gw;
+   uint8_t pend_tt;
+
+   /* Wait till getwork/swtp/waitw/desched completes. */
+   do {
+   pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+   } while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
+  BIT_ULL(56) | BIT_ULL(54)));
+   pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
+   if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+   if (pend_tt == SSO_TT_ATOMIC || pend_tt == SSO_TT_ORDERED)
+   cnxk_sso_hws_swtag_untag(base +
+SSOW_LF_GWS_OP_SWTAG_UNTAG);
+   plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+   }
+
+   /* Wait for desched to complete. */
+   do {
+   pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+   } while (pend_state & BIT_ULL(58));
+
+   switch (dev->gw_mode) {
+   case CN10K_GW_MODE_PREF:
+   while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) & BIT_ULL(63))
+   ;
+   break;
+   case CN10K_GW_MODE_PREF_WFE:
+   while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) &
+  SSOW_LF_GWS_TAG_PEND_GET_WORK_BIT)
+   continue;
+   plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
+   break;
+   case CN10K_GW_MODE_NONE:
+   default:
+   break;
+   }
+
+   if (CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_PRF_WQE0)) !=
+   SSO_TT_EMPTY) {
+   plt_write64(BIT_ULL(16) | 1, ws->getwrk_op);
+   do {
+   roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
+   } while (gw.u64[0] & BIT_ULL(63));
+   pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
+   if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+   if (pend_tt == SSO_TT_ATOMIC ||
+   pend_tt == SSO_TT_ORDERED)
+   cnxk_sso_hws_swtag_untag(
+   base + SSOW_LF_GWS_OP_SWTAG_UNTAG);
+   plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+   }

[dpdk-dev] [PATCH 17/36] event/cnxk: add device stop and close functions

2021-03-06 Thread pbhagavatula
From: Shijith Thotton 

Add event device stop and close callback functions.

Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Shijith Thotton 
---
 drivers/event/cnxk/cn10k_eventdev.c | 15 +
 drivers/event/cnxk/cn9k_eventdev.c  | 14 +
 drivers/event/cnxk/cnxk_eventdev.c  | 48 +
 drivers/event/cnxk/cnxk_eventdev.h  |  6 
 4 files changed, 83 insertions(+)
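
A small teardown sketch (not part of the patch) showing where the new callbacks are invoked;
the caller is assumed to have quiesced the worker cores first.

    #include <stdint.h>
    #include <stdio.h>

    #include <rte_eventdev.h>

    static void
    example_tear_down(uint8_t dev_id)
    {
            rte_event_dev_stop(dev_id);          /* flush/reset HWS and GGRP state */
            if (rte_event_dev_close(dev_id))     /* release XAQ pool, LFs, cookies */
                    printf("event device %u close failed\n", dev_id);
    }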

diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index 0de44ed43..6a0b9bcd9 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -388,6 +388,19 @@ cn10k_sso_start(struct rte_eventdev *event_dev)
return rc;
 }
 
+static void
+cn10k_sso_stop(struct rte_eventdev *event_dev)
+{
+   cnxk_sso_stop(event_dev, cn10k_sso_hws_reset,
+ cn10k_sso_hws_flush_events);
+}
+
+static int
+cn10k_sso_close(struct rte_eventdev *event_dev)
+{
+   return cnxk_sso_close(event_dev, cn10k_sso_hws_unlink);
+}
+
 static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -402,6 +415,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.timeout_ticks = cnxk_sso_timeout_ticks,
 
.dev_start = cn10k_sso_start,
+   .dev_stop = cn10k_sso_stop,
+   .dev_close = cn10k_sso_close,
 };
 
 static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c 
b/drivers/event/cnxk/cn9k_eventdev.c
index 39f29b687..195ed49d8 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -463,6 +463,18 @@ cn9k_sso_start(struct rte_eventdev *event_dev)
return rc;
 }
 
+static void
+cn9k_sso_stop(struct rte_eventdev *event_dev)
+{
+   cnxk_sso_stop(event_dev, cn9k_sso_hws_reset, cn9k_sso_hws_flush_events);
+}
+
+static int
+cn9k_sso_close(struct rte_eventdev *event_dev)
+{
+   return cnxk_sso_close(event_dev, cn9k_sso_hws_unlink);
+}
+
 static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -477,6 +489,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.timeout_ticks = cnxk_sso_timeout_ticks,
 
.dev_start = cn9k_sso_start,
+   .dev_stop = cn9k_sso_stop,
+   .dev_close = cn9k_sso_close,
 };
 
 static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c 
b/drivers/event/cnxk/cnxk_eventdev.c
index 0059b0eca..01685633d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -390,6 +390,54 @@ cnxk_sso_start(struct rte_eventdev *event_dev, 
cnxk_sso_hws_reset_t reset_fn,
return 0;
 }
 
+void
+cnxk_sso_stop(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn)
+{
+   plt_sso_dbg();
+   cnxk_sso_cleanup(event_dev, reset_fn, flush_fn, false);
+   rte_mb();
+}
+
+int
+cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
+{
+   struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+   uint16_t all_queues[CNXK_SSO_MAX_HWGRP];
+   uint16_t i;
+   void *ws;
+
+   if (!dev->configured)
+   return 0;
+
+   for (i = 0; i < dev->nb_event_queues; i++)
+   all_queues[i] = i;
+
+   for (i = 0; i < dev->nb_event_ports; i++) {
+   ws = event_dev->data->ports[i];
+   unlink_fn(dev, ws, all_queues, dev->nb_event_queues);
+   rte_free(cnxk_sso_hws_get_cookie(ws));
+   event_dev->data->ports[i] = NULL;
+   }
+
+   roc_sso_rsrc_fini(&dev->sso);
+   rte_mempool_free(dev->xaq_pool);
+   rte_memzone_free(rte_memzone_lookup(CNXK_SSO_FC_NAME));
+
+   dev->fc_iova = 0;
+   dev->fc_mem = NULL;
+   dev->xaq_pool = NULL;
+   dev->configured = false;
+   dev->is_timeout_deq = 0;
+   dev->nb_event_ports = 0;
+   dev->max_num_events = -1;
+   dev->nb_event_queues = 0;
+   dev->min_dequeue_timeout_ns = USEC2NSEC(1);
+   dev->max_dequeue_timeout_ns = USEC2NSEC(0x3FF);
+
+   return 0;
+}
+
 static void
 parse_queue_param(char *value, void *opaque)
 {
diff --git a/drivers/event/cnxk/cnxk_eventdev.h 
b/drivers/event/cnxk/cnxk_eventdev.h
index 6ead171c0..1030d5840 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -48,6 +48,8 @@ typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, 
uintptr_t *grp_base);
 typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
 typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
   uint16_t nb_link);
+typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map,
+uint16_t nb_link);
 typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
 typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
 typedef void (*cnxk_sso_hws_flush_t)(void *ws, uin

[dpdk-dev] [PATCH 18/36] event/cnxk: add SSO selftest and dump

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Add a selftest to verify the sanity of SSO and also add a function to
dump the internal state of SSO.

Signed-off-by: Pavan Nikhilesh 
---
 app/test/test_eventdev.c   |   14 +
 drivers/event/cnxk/cn10k_eventdev.c|8 +
 drivers/event/cnxk/cn9k_eventdev.c |   10 +-
 drivers/event/cnxk/cnxk_eventdev.c |8 +
 drivers/event/cnxk/cnxk_eventdev.h |5 +
 drivers/event/cnxk/cnxk_sso_selftest.c | 1570 
 drivers/event/cnxk/meson.build |3 +-
 7 files changed, 1616 insertions(+), 2 deletions(-)
 create mode 100644 drivers/event/cnxk/cnxk_sso_selftest.c
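
The selftest can be run through the "eventdev_selftest_cn9k"/"eventdev_selftest_cn10k"
commands registered in app/test below; the sketch here (not part of the patch) drives the same
callbacks directly from an application.

    #include <stdint.h>
    #include <stdio.h>

    #include <rte_eventdev.h>

    static void
    example_debug_sso(uint8_t dev_id)
    {
            rte_event_dev_dump(dev_id, stdout);     /* -> cnxk_sso_dump()            */
            if (rte_event_dev_selftest(dev_id))     /* -> cn9k/cn10k_sso_selftest()  */
                    printf("SSO selftest failed\n");
    }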

diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 27ca5a649..107003f0b 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1042,6 +1042,18 @@ test_eventdev_selftest_dlb2(void)
return test_eventdev_selftest_impl("dlb2_event", "");
 }
 
+static int
+test_eventdev_selftest_cn9k(void)
+{
+   return test_eventdev_selftest_impl("event_cn9k", "");
+}
+
+static int
+test_eventdev_selftest_cn10k(void)
+{
+   return test_eventdev_selftest_impl("event_cn10k", "");
+}
+
 REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common);
 REGISTER_TEST_COMMAND(eventdev_selftest_sw, test_eventdev_selftest_sw);
 REGISTER_TEST_COMMAND(eventdev_selftest_octeontx,
@@ -1051,3 +1063,5 @@ REGISTER_TEST_COMMAND(eventdev_selftest_octeontx2,
 REGISTER_TEST_COMMAND(eventdev_selftest_dpaa2, test_eventdev_selftest_dpaa2);
 REGISTER_TEST_COMMAND(eventdev_selftest_dlb, test_eventdev_selftest_dlb);
 REGISTER_TEST_COMMAND(eventdev_selftest_dlb2, test_eventdev_selftest_dlb2);
+REGISTER_TEST_COMMAND(eventdev_selftest_cn9k, test_eventdev_selftest_cn9k);
+REGISTER_TEST_COMMAND(eventdev_selftest_cn10k, test_eventdev_selftest_cn10k);
diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index 6a0b9bcd9..74070e005 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -401,6 +401,12 @@ cn10k_sso_close(struct rte_eventdev *event_dev)
return cnxk_sso_close(event_dev, cn10k_sso_hws_unlink);
 }
 
+static int
+cn10k_sso_selftest(void)
+{
+   return cnxk_sso_selftest(RTE_STR(event_cn10k));
+}
+
 static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -414,9 +420,11 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
 
+   .dump = cnxk_sso_dump,
.dev_start = cn10k_sso_start,
.dev_stop = cn10k_sso_stop,
.dev_close = cn10k_sso_close,
+   .dev_selftest = cn10k_sso_selftest,
 };
 
 static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c 
b/drivers/event/cnxk/cn9k_eventdev.c
index 195ed49d8..4fb0f1ccc 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -222,7 +222,7 @@ cn9k_sso_hws_reset(void *arg, void *hws)
}
 }
 
-static void
+void
 cn9k_sso_set_rsrc(void *arg)
 {
struct cnxk_sso_evdev *dev = arg;
@@ -475,6 +475,12 @@ cn9k_sso_close(struct rte_eventdev *event_dev)
return cnxk_sso_close(event_dev, cn9k_sso_hws_unlink);
 }
 
+static int
+cn9k_sso_selftest(void)
+{
+   return cnxk_sso_selftest(RTE_STR(event_cn9k));
+}
+
 static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -488,9 +494,11 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_unlink = cn9k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
 
+   .dump = cnxk_sso_dump,
.dev_start = cn9k_sso_start,
.dev_stop = cn9k_sso_stop,
.dev_close = cn9k_sso_close,
+   .dev_selftest = cn9k_sso_selftest,
 };
 
 static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c 
b/drivers/event/cnxk/cnxk_eventdev.c
index 01685633d..dbd35ca5d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -326,6 +326,14 @@ cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, 
uint64_t ns,
return 0;
 }
 
+void
+cnxk_sso_dump(struct rte_eventdev *event_dev, FILE *f)
+{
+   struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+   roc_sso_dump(&dev->sso, dev->sso.nb_hws, dev->sso.nb_hwgrp, f);
+}
+
 static void
 cnxk_handle_event(void *arg, struct rte_event event)
 {
diff --git a/drivers/event/cnxk/cnxk_eventdev.h 
b/drivers/event/cnxk/cnxk_eventdev.h
index 1030d5840..ee7dce5f5 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -211,5 +211,10 @@ void cnxk_sso_stop(struct rte_eventdev *event_dev,
   cnxk_sso_hws_reset_t reset_fn,
   cnxk_sso_hws_flush_t flush_fn);
 int cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t 
unlink_fn);
+int cnxk_sso_selftest(const char *dev_na

[dpdk-dev] [PATCH 19/36] event/cnxk: support event timer

2021-03-06 Thread pbhagavatula
From: Shijith Thotton 

Add event timer adapter (aka TIM) initialization on SSO probe.

Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Shijith Thotton 
---
 doc/guides/eventdevs/cnxk.rst   |  6 
 drivers/event/cnxk/cnxk_eventdev.c  |  3 ++
 drivers/event/cnxk/cnxk_eventdev.h  |  2 ++
 drivers/event/cnxk/cnxk_tim_evdev.c | 47 +
 drivers/event/cnxk/cnxk_tim_evdev.h | 44 +++
 drivers/event/cnxk/meson.build  |  3 +-
 6 files changed, 104 insertions(+), 1 deletion(-)
 create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.c
 create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.h

diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index b2684d431..662df2971 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -35,6 +35,10 @@ Features of the OCTEON CNXK SSO PMD are:
 - Open system with configurable amount of outstanding events limited only by
   DRAM
 - HW accelerated dequeue timeout support to enable power management
+- HW managed event timers support through TIM, with high precision and
+  time granularity of 2.5us on CN9K and 1us on CN10K.
+- Up to 256 TIM rings aka event timer adapters.
+- Up to 8 rings traversed in parallel.
 
 Prerequisites and Compilation procedure
 ---
@@ -101,3 +105,5 @@ Debugging Options
+===++===+
| 1 | SSO| --log-level='pmd\.event\.cnxk,8'  |
+---++---+
+   | 2 | TIM| --log-level='pmd\.event\.cnxk\.timer,8'   |
+   +---++---+
diff --git a/drivers/event/cnxk/cnxk_eventdev.c 
b/drivers/event/cnxk/cnxk_eventdev.c
index dbd35ca5d..c404bb586 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -582,6 +582,8 @@ cnxk_sso_init(struct rte_eventdev *event_dev)
dev->nb_event_queues = 0;
dev->nb_event_ports = 0;
 
+   cnxk_tim_init(&dev->sso);
+
return 0;
 
 error:
@@ -598,6 +600,7 @@ cnxk_sso_fini(struct rte_eventdev *event_dev)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
 
+   cnxk_tim_fini();
roc_sso_rsrc_fini(&dev->sso);
roc_sso_dev_fini(&dev->sso);
 
diff --git a/drivers/event/cnxk/cnxk_eventdev.h 
b/drivers/event/cnxk/cnxk_eventdev.h
index ee7dce5f5..e4051a64b 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -14,6 +14,8 @@
 
 #include "roc_api.h"
 
+#include "cnxk_tim_evdev.h"
+
 #define CNXK_SSO_XAE_CNT   "xae_cnt"
 #define CNXK_SSO_GGRP_QOS  "qos"
 #define CN9K_SSO_SINGLE_WS "single_ws"
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c 
b/drivers/event/cnxk/cnxk_tim_evdev.c
new file mode 100644
index 0..76b17910f
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_eventdev.h"
+#include "cnxk_tim_evdev.h"
+
+void
+cnxk_tim_init(struct roc_sso *sso)
+{
+   const struct rte_memzone *mz;
+   struct cnxk_tim_evdev *dev;
+   int rc;
+
+   if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+   return;
+
+   mz = rte_memzone_reserve(RTE_STR(CNXK_TIM_EVDEV_NAME),
+sizeof(struct cnxk_tim_evdev), 0, 0);
+   if (mz == NULL) {
+   plt_tim_dbg("Unable to allocate memory for TIM Event device");
+   return;
+   }
+   dev = mz->addr;
+
+   dev->tim.roc_sso = sso;
+   rc = roc_tim_init(&dev->tim);
+   if (rc < 0) {
+   plt_err("Failed to initialize roc tim resources");
+   rte_memzone_free(mz);
+   return;
+   }
+   dev->nb_rings = rc;
+   dev->chunk_sz = CNXK_TIM_RING_DEF_CHUNK_SZ;
+}
+
+void
+cnxk_tim_fini(void)
+{
+   struct cnxk_tim_evdev *dev = tim_priv_get();
+
+   if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+   return;
+
+   roc_tim_fini(&dev->tim);
+   rte_memzone_free(rte_memzone_lookup(RTE_STR(CNXK_TIM_EVDEV_NAME)));
+}
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h 
b/drivers/event/cnxk/cnxk_tim_evdev.h
new file mode 100644
index 0..6cf0adb21
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __CNXK_TIM_EVDEV_H__
+#define __CNXK_TIM_EVDEV_H__
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+
+#include "roc_api.h"
+
+#define CNXK_TIM_EVDEV_NAME   cnxk_tim_eventdev
+#define CNXK_TIM_RING_DEF_CHUNK_SZ (4096)
+
+struct cnxk_tim_evdev {
+   struct roc_tim tim;
+   struct rte_eventdev *event_dev;
+   uint16_t nb_rings;
+   uint32_

[dpdk-dev] [PATCH 20/36] event/cnxk: add timer adapter capabilities

2021-03-06 Thread pbhagavatula
From: Shijith Thotton 

Add function to retrieve event timer adapter capabilities.

Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Shijith Thotton 
---
 drivers/event/cnxk/cn10k_eventdev.c |  2 ++
 drivers/event/cnxk/cn9k_eventdev.c  |  2 ++
 drivers/event/cnxk/cnxk_tim_evdev.c | 22 +-
 drivers/event/cnxk/cnxk_tim_evdev.h |  6 +-
 4 files changed, 30 insertions(+), 2 deletions(-)
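
A short application-side sketch (not part of the patch) of querying the capability advertised
by cnxk_tim_caps_get(); the internal-port capability means no service core is needed to push
expired timers into the event device.

    #include <stdbool.h>
    #include <stdint.h>

    #include <rte_eventdev.h>
    #include <rte_event_timer_adapter.h>

    static bool
    example_timer_has_internal_port(uint8_t dev_id)
    {
            uint32_t caps = 0;

            if (rte_event_timer_adapter_caps_get(dev_id, &caps))
                    return false;

            return !!(caps & RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT);
    }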

diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index 74070e005..30ca0d901 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -420,6 +420,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
 
+   .timer_adapter_caps_get = cnxk_tim_caps_get,
+
.dump = cnxk_sso_dump,
.dev_start = cn10k_sso_start,
.dev_stop = cn10k_sso_stop,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c 
b/drivers/event/cnxk/cn9k_eventdev.c
index 4fb0f1ccc..773152e55 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -494,6 +494,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_unlink = cn9k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
 
+   .timer_adapter_caps_get = cnxk_tim_caps_get,
+
.dump = cnxk_sso_dump,
.dev_start = cn9k_sso_start,
.dev_stop = cn9k_sso_stop,
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c 
b/drivers/event/cnxk/cnxk_tim_evdev.c
index 76b17910f..6000b507a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -5,6 +5,26 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_tim_evdev.h"
 
+int
+cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
+ uint32_t *caps,
+ const struct rte_event_timer_adapter_ops **ops)
+{
+   struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+
+   RTE_SET_USED(flags);
+   RTE_SET_USED(ops);
+
+   if (dev == NULL)
+   return -ENODEV;
+
+   /* Store evdev pointer for later use. */
+   dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
+   *caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
+
+   return 0;
+}
+
 void
 cnxk_tim_init(struct roc_sso *sso)
 {
@@ -37,7 +57,7 @@ cnxk_tim_init(struct roc_sso *sso)
 void
 cnxk_tim_fini(void)
 {
-   struct cnxk_tim_evdev *dev = tim_priv_get();
+   struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
 
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return;
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h 
b/drivers/event/cnxk/cnxk_tim_evdev.h
index 6cf0adb21..8dcecb281 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -27,7 +27,7 @@ struct cnxk_tim_evdev {
 };
 
 static inline struct cnxk_tim_evdev *
-tim_priv_get(void)
+cnxk_tim_priv_get(void)
 {
const struct rte_memzone *mz;
 
@@ -38,6 +38,10 @@ tim_priv_get(void)
return mz->addr;
 }
 
+int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
+ uint32_t *caps,
+ const struct rte_event_timer_adapter_ops **ops);
+
 void cnxk_tim_init(struct roc_sso *sso);
 void cnxk_tim_fini(void);
 
-- 
2.17.1



[dpdk-dev] [PATCH 21/36] event/cnxk: create and free timer adapter

2021-03-06 Thread pbhagavatula
From: Shijith Thotton 

When the application creates a timer adapter, the driver does the following:
- Allocate a TIM LF based on the number of LFs provisioned.
- Verify the supplied config parameters.
- Allocate the memory required for:
  * buckets, based on the min and max timeout supplied;
  * the chunk pool, based on the number of timers.

On free:
- Free the allocated bucket and chunk memory.
- Free the allocated TIM LF.

Signed-off-by: Shijith Thotton 
Signed-off-by: Pavan Nikhilesh 
---
 drivers/event/cnxk/cnxk_tim_evdev.c | 174 
 drivers/event/cnxk/cnxk_tim_evdev.h | 128 +++-
 2 files changed, 300 insertions(+), 2 deletions(-)
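
For context, an application-side sketch (not part of the patch) of the creation path that lands
in the cnxk_tim_ring_create() op below; the tick, timeout and timer-count values are
placeholders.

    #include <stdint.h>

    #include <rte_event_timer_adapter.h>
    #include <rte_lcore.h>

    static struct rte_event_timer_adapter *
    example_make_adapter(uint8_t event_dev_id, uint16_t adapter_id)
    {
            const struct rte_event_timer_adapter_conf conf = {
                    .event_dev_id = event_dev_id,
                    .timer_adapter_id = adapter_id,
                    .socket_id = rte_socket_id(),
                    .clk_src = RTE_EVENT_TIMER_ADAPTER_CPU_CLK,
                    .timer_tick_ns = 100 * 1000,            /* 100 us resolution */
                    .max_tmo_ns = 1000 * 1000 * 1000,       /* 1 s max timeout   */
                    .nb_timers = 64 * 1024,
                    .flags = RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES,
            };

            /* Released later with rte_event_timer_adapter_free(). */
            return rte_event_timer_adapter_create(&conf);
    }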

diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c 
b/drivers/event/cnxk/cnxk_tim_evdev.c
index 6000b507a..986ad8493 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -5,6 +5,177 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_tim_evdev.h"
 
+static struct rte_event_timer_adapter_ops cnxk_tim_ops;
+
+static int
+cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
+ struct rte_event_timer_adapter_conf *rcfg)
+{
+   unsigned int cache_sz = (tim_ring->nb_chunks / 1.5);
+   unsigned int mp_flags = 0;
+   char pool_name[25];
+   int rc;
+
+   cache_sz /= rte_lcore_count();
+   /* Create chunk pool. */
+   if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
+   mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+   plt_tim_dbg("Using single producer mode");
+   tim_ring->prod_type_sp = true;
+   }
+
+   snprintf(pool_name, sizeof(pool_name), "cnxk_tim_chunk_pool%d",
+tim_ring->ring_id);
+
+   if (cache_sz > RTE_MEMPOOL_CACHE_MAX_SIZE)
+   cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;
+   cache_sz = cache_sz != 0 ? cache_sz : 2;
+   tim_ring->nb_chunks += (cache_sz * rte_lcore_count());
+   tim_ring->chunk_pool = rte_mempool_create_empty(
+   pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz, cache_sz, 0,
+   rte_socket_id(), mp_flags);
+
+   if (tim_ring->chunk_pool == NULL) {
+   plt_err("Unable to create chunkpool.");
+   return -ENOMEM;
+   }
+
+   rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
+   rte_mbuf_platform_mempool_ops(), NULL);
+   if (rc < 0) {
+   plt_err("Unable to set chunkpool ops");
+   goto free;
+   }
+
+   rc = rte_mempool_populate_default(tim_ring->chunk_pool);
+   if (rc < 0) {
+   plt_err("Unable to set populate chunkpool.");
+   goto free;
+   }
+   tim_ring->aura =
+   roc_npa_aura_handle_to_aura(tim_ring->chunk_pool->pool_id);
+   tim_ring->ena_dfb = 0;
+
+   return 0;
+
+free:
+   rte_mempool_free(tim_ring->chunk_pool);
+   return rc;
+}
+
+static int
+cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
+{
+   struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
+   struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+   struct cnxk_tim_ring *tim_ring;
+   int rc;
+
+   if (dev == NULL)
+   return -ENODEV;
+
+   if (adptr->data->id >= dev->nb_rings)
+   return -ENODEV;
+
+   tim_ring = rte_zmalloc("cnxk_tim_prv", sizeof(struct cnxk_tim_ring), 0);
+   if (tim_ring == NULL)
+   return -ENOMEM;
+
+   rc = roc_tim_lf_alloc(&dev->tim, adptr->data->id, NULL);
+   if (rc < 0) {
+   plt_err("Failed to create timer ring");
+   goto tim_ring_free;
+   }
+
+   if (NSEC2TICK(RTE_ALIGN_MUL_CEIL(
+ rcfg->timer_tick_ns,
+ cnxk_tim_min_resolution_ns(cnxk_tim_cntfrq())),
+ cnxk_tim_cntfrq()) <
+   cnxk_tim_min_tmo_ticks(cnxk_tim_cntfrq())) {
+   if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES)
+   rcfg->timer_tick_ns = TICK2NSEC(
+   cnxk_tim_min_tmo_ticks(cnxk_tim_cntfrq()),
+   cnxk_tim_cntfrq());
+   else {
+   rc = -ERANGE;
+   goto tim_hw_free;
+   }
+   }
+   tim_ring->ring_id = adptr->data->id;
+   tim_ring->clk_src = (int)rcfg->clk_src;
+   tim_ring->tck_nsec = RTE_ALIGN_MUL_CEIL(
+   rcfg->timer_tick_ns,
+   cnxk_tim_min_resolution_ns(cnxk_tim_cntfrq()));
+   tim_ring->max_tout = rcfg->max_tmo_ns;
+   tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
+   tim_ring->nb_timers = rcfg->nb_timers;
+   tim_ring->chunk_sz = dev->chunk_sz;
+
+   tim_ring->nb_chunks = tim_ring->nb_timers;
+   tim_ring->nb_chunk_slots = CNXK_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
+   /* Create buckets. */
+   

[dpdk-dev] [PATCH 22/36] event/cnxk: add devargs to disable NPA

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

If the chunks are allocated from the NPA, TIM can automatically free
them while traversing the list of chunks.
Add a devargs option to disable NPA and use a software mempool to manage
chunks.

Example:
--dev "0002:0e:00.0,tim_disable_npa=1"

Signed-off-by: Shijith Thotton 
Signed-off-by: Pavan Nikhilesh 
---
 doc/guides/eventdevs/cnxk.rst   | 10 
 drivers/event/cnxk/cn10k_eventdev.c |  3 +-
 drivers/event/cnxk/cn9k_eventdev.c  |  3 +-
 drivers/event/cnxk/cnxk_eventdev.h  |  9 +++
 drivers/event/cnxk/cnxk_tim_evdev.c | 86 +
 drivers/event/cnxk/cnxk_tim_evdev.h |  5 ++
 6 files changed, 92 insertions(+), 24 deletions(-)

diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 662df2971..9e14f99f2 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -93,6 +93,16 @@ Runtime Config Options
 
 -a 0002:0e:00.0,qos=[1-50-50-50]
 
+- ``TIM disable NPA``
+
+  By default chunks are allocated from NPA so that TIM can automatically free
+  them while traversing the list of chunks. The ``tim_disable_npa`` devargs
+  parameter disables NPA and uses a software mempool to manage chunks.
+
+  For example::
+
+-a 0002:0e:00.0,tim_disable_npa=1
+
 Debugging Options
 -
 
diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index 30ca0d901..807e666d3 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -502,4 +502,5 @@ RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
 RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
 RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "="
  CNXK_SSO_GGRP_QOS "="
- CN10K_SSO_GW_MODE "=");
+ CN10K_SSO_GW_MODE "="
+ CNXK_TIM_DISABLE_NPA "=1");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c 
b/drivers/event/cnxk/cn9k_eventdev.c
index 773152e55..3e27fce4a 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -571,4 +571,5 @@ RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
 RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
 RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "="
  CNXK_SSO_GGRP_QOS "="
- CN9K_SSO_SINGLE_WS "=1");
+ CN9K_SSO_SINGLE_WS "=1"
+ CNXK_TIM_DISABLE_NPA "=1");
diff --git a/drivers/event/cnxk/cnxk_eventdev.h 
b/drivers/event/cnxk/cnxk_eventdev.h
index e4051a64b..487c7f822 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -159,6 +159,15 @@ struct cnxk_sso_hws_cookie {
bool configured;
 } __rte_cache_aligned;
 
+static inline int
+parse_kvargs_flag(const char *key, const char *value, void *opaque)
+{
+   RTE_SET_USED(key);
+
+   *(uint8_t *)opaque = !!atoi(value);
+   return 0;
+}
+
 static inline int
 parse_kvargs_value(const char *key, const char *value, void *opaque)
 {
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c 
b/drivers/event/cnxk/cnxk_tim_evdev.c
index 986ad8493..44bcad94d 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -31,30 +31,43 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;
cache_sz = cache_sz != 0 ? cache_sz : 2;
tim_ring->nb_chunks += (cache_sz * rte_lcore_count());
-   tim_ring->chunk_pool = rte_mempool_create_empty(
-   pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz, cache_sz, 0,
-   rte_socket_id(), mp_flags);
-
-   if (tim_ring->chunk_pool == NULL) {
-   plt_err("Unable to create chunkpool.");
-   return -ENOMEM;
-   }
+   if (!tim_ring->disable_npa) {
+   tim_ring->chunk_pool = rte_mempool_create_empty(
+   pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz,
+   cache_sz, 0, rte_socket_id(), mp_flags);
+
+   if (tim_ring->chunk_pool == NULL) {
+   plt_err("Unable to create chunkpool.");
+   return -ENOMEM;
+   }
 
-   rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
-   rte_mbuf_platform_mempool_ops(), NULL);
-   if (rc < 0) {
-   plt_err("Unable to set chunkpool ops");
-   goto free;
-   }
+   rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
+   rte_mbuf_platform_mempool_ops(),
+   NULL);
+   if (rc < 0) {
+   plt_err("Unable to set chunkpool ops");
+   goto free;
+   }
 
-   rc = rte_mempool_populate_default(tim_ring->chunk_pool);
-   i

[dpdk-dev] [PATCH 23/36] event/cnxk: allow adapters to resize inflights

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Add internal SSO functions to allow event adapters to resize SSO buffers
that are used to hold in-flight events in DRAM.

Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Shijith Thotton 
---
 drivers/event/cnxk/cnxk_eventdev.c   | 33 
 drivers/event/cnxk/cnxk_eventdev.h   |  7 +++
 drivers/event/cnxk/cnxk_eventdev_adptr.c | 67 
 drivers/event/cnxk/cnxk_tim_evdev.c  |  5 ++
 drivers/event/cnxk/meson.build   |  1 +
 5 files changed, 113 insertions(+)
 create mode 100644 drivers/event/cnxk/cnxk_eventdev_adptr.c

diff --git a/drivers/event/cnxk/cnxk_eventdev.c 
b/drivers/event/cnxk/cnxk_eventdev.c
index c404bb586..29e38478d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -77,6 +77,9 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
if (dev->xae_cnt)
xaq_cnt += dev->xae_cnt / dev->sso.xae_waes;
+   else if (dev->adptr_xae_cnt)
+   xaq_cnt += (dev->adptr_xae_cnt / dev->sso.xae_waes) +
+  (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
else
xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
   (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
@@ -125,6 +128,36 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
return rc;
 }
 
+int
+cnxk_sso_xae_reconfigure(struct rte_eventdev *event_dev)
+{
+   struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+   int rc = 0;
+
+   if (event_dev->data->dev_started)
+   event_dev->dev_ops->dev_stop(event_dev);
+
+   rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+   if (rc < 0) {
+   plt_err("Failed to release XAQ %d", rc);
+   return rc;
+   }
+
+   rte_mempool_free(dev->xaq_pool);
+   dev->xaq_pool = NULL;
+   rc = cnxk_sso_xaq_allocate(dev);
+   if (rc < 0) {
+   plt_err("Failed to alloc XAQ %d", rc);
+   return rc;
+   }
+
+   rte_mb();
+   if (event_dev->data->dev_started)
+   event_dev->dev_ops->dev_start(event_dev);
+
+   return 0;
+}
+
 int
 cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
   cnxk_sso_init_hws_mem_t init_hws_fn,
diff --git a/drivers/event/cnxk/cnxk_eventdev.h 
b/drivers/event/cnxk/cnxk_eventdev.h
index 487c7f822..32abf9632 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -81,6 +81,10 @@ struct cnxk_sso_evdev {
uint64_t nb_xaq_cfg;
rte_iova_t fc_iova;
struct rte_mempool *xaq_pool;
+   uint64_t adptr_xae_cnt;
+   uint16_t tim_adptr_ring_cnt;
+   uint16_t *timer_adptr_rings;
+   uint64_t *timer_adptr_sz;
/* Dev args */
uint32_t xae_cnt;
uint8_t qos_queue_cnt;
@@ -190,7 +194,10 @@ cnxk_sso_hws_get_cookie(void *ws)
 }
 
 /* Configuration functions */
+int cnxk_sso_xae_reconfigure(struct rte_eventdev *event_dev);
 int cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev);
+void cnxk_sso_updt_xae_cnt(struct cnxk_sso_evdev *dev, void *data,
+  uint32_t event_type);
 
 /* Common ops API. */
 int cnxk_sso_init(struct rte_eventdev *event_dev);
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c 
b/drivers/event/cnxk/cnxk_eventdev_adptr.c
new file mode 100644
index 0..6d9615453
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_eventdev.h"
+
+void
+cnxk_sso_updt_xae_cnt(struct cnxk_sso_evdev *dev, void *data,
+ uint32_t event_type)
+{
+   int i;
+
+   switch (event_type) {
+   case RTE_EVENT_TYPE_TIMER: {
+   struct cnxk_tim_ring *timr = data;
+   uint16_t *old_ring_ptr;
+   uint64_t *old_sz_ptr;
+
+   for (i = 0; i < dev->tim_adptr_ring_cnt; i++) {
+   if (timr->ring_id != dev->timer_adptr_rings[i])
+   continue;
+   if (timr->nb_timers == dev->timer_adptr_sz[i])
+   return;
+   dev->adptr_xae_cnt -= dev->timer_adptr_sz[i];
+   dev->adptr_xae_cnt += timr->nb_timers;
+   dev->timer_adptr_sz[i] = timr->nb_timers;
+
+   return;
+   }
+
+   dev->tim_adptr_ring_cnt++;
+   old_ring_ptr = dev->timer_adptr_rings;
+   old_sz_ptr = dev->timer_adptr_sz;
+
+   dev->timer_adptr_rings = rte_realloc(
+   dev->timer_adptr_rings,
+   sizeof(uint16_t) * dev->tim_adptr_ring_cnt, 0);
+   if (dev->timer_adptr_rings == NULL) {
+   dev->adptr_xae_cnt += tim

[dpdk-dev] [PATCH 24/36] event/cnxk: add timer adapter info function

2021-03-06 Thread pbhagavatula
From: Shijith Thotton 

Add TIM event timer adapter info get function.

Signed-off-by: Shijith Thotton 
Signed-off-by: Pavan Nikhilesh 
---
 drivers/event/cnxk/cnxk_tim_evdev.c | 13 +
 1 file changed, 13 insertions(+)
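
A tiny usage sketch (not part of the patch) reading back the values filled in by
cnxk_tim_ring_info_get():

    #include <stdint.h>

    #include <rte_event_timer_adapter.h>

    static uint64_t
    example_adapter_resolution(const struct rte_event_timer_adapter *adptr)
    {
            struct rte_event_timer_adapter_info info;

            if (rte_event_timer_adapter_get_info(adptr, &info))
                    return 0;

            return info.min_resolution_ns;  /* tim_ring->tck_nsec */
    }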

diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c 
b/drivers/event/cnxk/cnxk_tim_evdev.c
index 4add1d659..6bbfadb25 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -76,6 +76,18 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
return rc;
 }
 
+static void
+cnxk_tim_ring_info_get(const struct rte_event_timer_adapter *adptr,
+  struct rte_event_timer_adapter_info *adptr_info)
+{
+   struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+
+   adptr_info->max_tmo_ns = tim_ring->max_tout;
+   adptr_info->min_resolution_ns = tim_ring->tck_nsec;
+   rte_memcpy(&adptr_info->conf, &adptr->data->conf,
+  sizeof(struct rte_event_timer_adapter_conf));
+}
+
 static int
 cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
 {
@@ -218,6 +230,7 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, 
uint64_t flags,
 
cnxk_tim_ops.init = cnxk_tim_ring_create;
cnxk_tim_ops.uninit = cnxk_tim_ring_free;
+   cnxk_tim_ops.get_info = cnxk_tim_ring_info_get;
 
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
-- 
2.17.1



[dpdk-dev] [PATCH 25/36] event/cnxk: add devargs for chunk size and rings

2021-03-06 Thread pbhagavatula
From: Shijith Thotton 

Add devargs to control the default chunk size and the maximum number of
timer rings to attach to a given RVU PF.

Example:
--dev "0002:1e:00.0,tim_chnk_slots=1024"
--dev "0002:1e:00.0,tim_rings_lmt=4"

Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Shijith Thotton 
---
 doc/guides/eventdevs/cnxk.rst   | 23 +++
 drivers/event/cnxk/cn10k_eventdev.c |  4 +++-
 drivers/event/cnxk/cn9k_eventdev.c  |  4 +++-
 drivers/event/cnxk/cnxk_tim_evdev.c | 14 +-
 drivers/event/cnxk/cnxk_tim_evdev.h |  4 
 5 files changed, 46 insertions(+), 3 deletions(-)
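
A conceptual sketch only (all names hypothetical, not the driver's real layout) of the chunk
structure described in the documentation hunk below: an array of timer entries whose last slot
links to the next chunk.

    #include <stdint.h>

    #define EXAMPLE_TIM_CHNK_SLOTS 255      /* tim_chnk_slots, default 255 */

    struct example_tim_entry {
            uint64_t w0;    /* timer metadata (illustrative)   */
            uint64_t wqe;   /* event to enqueue (illustrative) */
    };

    struct example_tim_chunk {
            struct example_tim_entry ent[EXAMPLE_TIM_CHNK_SLOTS];
            struct example_tim_chunk *next;  /* last element points to next chunk */
    };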

diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 9e14f99f2..05dcf06f4 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -103,6 +103,29 @@ Runtime Config Options
 
 -a 0002:0e:00.0,tim_disable_npa=1
 
+- ``TIM modify chunk slots``
+
+  The ``tim_chnk_slots`` devargs can be used to modify the number of chunk
+  slots. Chunks are used to store event timers; a chunk can be visualised as
+  an array where the last element points to the next chunk and the rest are
+  used to store events. TIM traverses the list of chunks and enqueues the
+  event timers to SSO. The default value is 255 and the max value is 4095.
+
+  For example::
+
+-a 0002:0e:00.0,tim_chnk_slots=1023
+
+- ``TIM limit max rings reserved``
+
+  The ``tim_rings_lmt`` devargs can be used to limit the max number of TIM
+  rings, i.e. event timer adapters, reserved on probe. Since TIM rings are HW
+  resources, we can avoid starving other applications by not grabbing all the
+  rings.
+
+  For example::
+
+-a 0002:0e:00.0,tim_rings_lmt=5
+
 Debugging Options
 -
 
diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index 807e666d3..a5a614196 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -503,4 +503,6 @@ RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
 RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "="
  CNXK_SSO_GGRP_QOS "="
  CN10K_SSO_GW_MODE "="
- CNXK_TIM_DISABLE_NPA "=1");
+ CNXK_TIM_DISABLE_NPA "=1"
+ CNXK_TIM_CHNK_SLOTS "="
+ CNXK_TIM_RINGS_LMT "=");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c 
b/drivers/event/cnxk/cn9k_eventdev.c
index 3e27fce4a..cfea3723a 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -572,4 +572,6 @@ RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
 RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "="
  CNXK_SSO_GGRP_QOS "="
  CN9K_SSO_SINGLE_WS "=1"
- CNXK_TIM_DISABLE_NPA "=1");
+ CNXK_TIM_DISABLE_NPA "=1"
+ CNXK_TIM_CHNK_SLOTS "="
+ CNXK_TIM_RINGS_LMT "=");
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c 
b/drivers/event/cnxk/cnxk_tim_evdev.c
index 6bbfadb25..07ec57fd2 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -253,6 +253,10 @@ cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct 
cnxk_tim_evdev *dev)
 
rte_kvargs_process(kvlist, CNXK_TIM_DISABLE_NPA, &parse_kvargs_flag,
   &dev->disable_npa);
+   rte_kvargs_process(kvlist, CNXK_TIM_CHNK_SLOTS, &parse_kvargs_value,
+  &dev->chunk_slots);
+   rte_kvargs_process(kvlist, CNXK_TIM_RINGS_LMT, &parse_kvargs_value,
+  &dev->min_ring_cnt);
 
rte_kvargs_free(kvlist);
 }
@@ -278,6 +282,7 @@ cnxk_tim_init(struct roc_sso *sso)
cnxk_tim_parse_devargs(sso->pci_dev->device.devargs, dev);
 
dev->tim.roc_sso = sso;
+   dev->tim.nb_lfs = dev->min_ring_cnt;
rc = roc_tim_init(&dev->tim);
if (rc < 0) {
plt_err("Failed to initialize roc tim resources");
@@ -285,7 +290,14 @@ cnxk_tim_init(struct roc_sso *sso)
return;
}
dev->nb_rings = rc;
-   dev->chunk_sz = CNXK_TIM_RING_DEF_CHUNK_SZ;
+
+   if (dev->chunk_slots && dev->chunk_slots <= CNXK_TIM_MAX_CHUNK_SLOTS &&
+   dev->chunk_slots >= CNXK_TIM_MIN_CHUNK_SLOTS) {
+   dev->chunk_sz =
+   (dev->chunk_slots + 1) * CNXK_TIM_CHUNK_ALIGNMENT;
+   } else {
+   dev->chunk_sz = CNXK_TIM_RING_DEF_CHUNK_SZ;
+   }
 }
 
 void
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h 
b/drivers/event/cnxk/cnxk_tim_evdev.h
index 8c21ab1fe..6208c150a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -34,6 +34,8 @@
 #define CN9K_TIM_MIN_TMO_TKS (256)
 
 #define CNXK_TIM_DISABLE_NPA "tim_disable_npa"
+#define CNXK_TIM_CHNK_SLOTS  "tim_chnk_slots"
+

[dpdk-dev] [PATCH 26/36] event/cnxk: add TIM bucket operations

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Add TIM bucket operations used for event timer arm and cancel.
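
The bucket control word (w1) packs several fields into 64 bits. A hedged
sketch of how it decodes, based on the shift/mask macros introduced below
(chunk remainder in bits [63:48], lock count in [47:40], bucket state bits
BSK/HBT/SBT in [34:32], entry count in [31:0]):

#include <stdint.h>

static inline void
tim_bkt_decode(uint64_t w1, uint16_t *rem, uint8_t *lock, uint32_t *nent)
{
	*rem  = (w1 >> 48) & 0xffff;   /* chunk slots left in current chunk */
	*lock = (w1 >> 40) & 0xff;     /* producers currently inside bucket */
	*nent = w1 & 0xffffffff;       /* timers armed in this bucket */
}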

Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Shijith Thotton 
---
 drivers/event/cnxk/cnxk_tim_evdev.h  |  30 +++
 drivers/event/cnxk/cnxk_tim_worker.c |   6 ++
 drivers/event/cnxk/cnxk_tim_worker.h | 123 +++
 drivers/event/cnxk/meson.build   |   1 +
 4 files changed, 160 insertions(+)
 create mode 100644 drivers/event/cnxk/cnxk_tim_worker.c
 create mode 100644 drivers/event/cnxk/cnxk_tim_worker.h

diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h 
b/drivers/event/cnxk/cnxk_tim_evdev.h
index 6208c150a..c844d9b61 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -37,6 +37,36 @@
 #define CNXK_TIM_CHNK_SLOTS  "tim_chnk_slots"
 #define CNXK_TIM_RINGS_LMT   "tim_rings_lmt"
 
+#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
+#define TIM_BUCKET_W1_M_CHUNK_REMAINDER
\
+   ((1ULL << (64 - TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1)
+#define TIM_BUCKET_W1_S_LOCK (40)
+#define TIM_BUCKET_W1_M_LOCK   
\
+   ((1ULL << (TIM_BUCKET_W1_S_CHUNK_REMAINDER - TIM_BUCKET_W1_S_LOCK)) - 1)
+#define TIM_BUCKET_W1_S_RSVD (35)
+#define TIM_BUCKET_W1_S_BSK  (34)
+#define TIM_BUCKET_W1_M_BSK
\
+   ((1ULL << (TIM_BUCKET_W1_S_RSVD - TIM_BUCKET_W1_S_BSK)) - 1)
+#define TIM_BUCKET_W1_S_HBT (33)
+#define TIM_BUCKET_W1_M_HBT
\
+   ((1ULL << (TIM_BUCKET_W1_S_BSK - TIM_BUCKET_W1_S_HBT)) - 1)
+#define TIM_BUCKET_W1_S_SBT (32)
+#define TIM_BUCKET_W1_M_SBT
\
+   ((1ULL << (TIM_BUCKET_W1_S_HBT - TIM_BUCKET_W1_S_SBT)) - 1)
+#define TIM_BUCKET_W1_S_NUM_ENTRIES (0)
+#define TIM_BUCKET_W1_M_NUM_ENTRIES
\
+   ((1ULL << (TIM_BUCKET_W1_S_SBT - TIM_BUCKET_W1_S_NUM_ENTRIES)) - 1)
+
+#define TIM_BUCKET_SEMA (TIM_BUCKET_CHUNK_REMAIN)
+
+#define TIM_BUCKET_CHUNK_REMAIN
\
+   (TIM_BUCKET_W1_M_CHUNK_REMAINDER << TIM_BUCKET_W1_S_CHUNK_REMAINDER)
+
+#define TIM_BUCKET_LOCK (TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK)
+
+#define TIM_BUCKET_SEMA_WLOCK  
\
+   (TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK))
+
 struct cnxk_tim_evdev {
struct roc_tim tim;
struct rte_eventdev *event_dev;
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c 
b/drivers/event/cnxk/cnxk_tim_worker.c
new file mode 100644
index 0..564687d9b
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -0,0 +1,6 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_tim_evdev.h"
+#include "cnxk_tim_worker.h"
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h 
b/drivers/event/cnxk/cnxk_tim_worker.h
new file mode 100644
index 0..bd205e5c1
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __CNXK_TIM_WORKER_H__
+#define __CNXK_TIM_WORKER_H__
+
+#include "cnxk_tim_evdev.h"
+
+static inline uint8_t
+cnxk_tim_bkt_fetch_lock(uint64_t w1)
+{
+   return (w1 >> TIM_BUCKET_W1_S_LOCK) & TIM_BUCKET_W1_M_LOCK;
+}
+
+static inline int16_t
+cnxk_tim_bkt_fetch_rem(uint64_t w1)
+{
+   return (w1 >> TIM_BUCKET_W1_S_CHUNK_REMAINDER) &
+  TIM_BUCKET_W1_M_CHUNK_REMAINDER;
+}
+
+static inline int16_t
+cnxk_tim_bkt_get_rem(struct cnxk_tim_bkt *bktp)
+{
+   return __atomic_load_n(&bktp->chunk_remainder, __ATOMIC_ACQUIRE);
+}
+
+static inline void
+cnxk_tim_bkt_set_rem(struct cnxk_tim_bkt *bktp, uint16_t v)
+{
+   __atomic_store_n(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
+}
+
+static inline void
+cnxk_tim_bkt_sub_rem(struct cnxk_tim_bkt *bktp, uint16_t v)
+{
+   __atomic_fetch_sub(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
+}
+
+static inline uint8_t
+cnxk_tim_bkt_get_hbt(uint64_t w1)
+{
+   return (w1 >> TIM_BUCKET_W1_S_HBT) & TIM_BUCKET_W1_M_HBT;
+}
+
+static inline uint8_t
+cnxk_tim_bkt_get_bsk(uint64_t w1)
+{
+   return (w1 >> TIM_BUCKET_W1_S_BSK) & TIM_BUCKET_W1_M_BSK;
+}
+
+static inline uint64_t
+cnxk_tim_bkt_clr_bsk(struct cnxk_tim_bkt *bktp)
+{
+   /* Clear everything except lock. */
+   const uint64_t v = TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK;
+
+   return __atomic_fetch_and(&bktp->w1, v, __ATOMIC_ACQ_REL);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_fetch_sema_lock(struct cnxk_tim_bkt *bktp)
+{
+   return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA_WLOCK,
+ __ATOMIC_ACQUIRE);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_fetch_sema(struct cnxk_tim_bkt *bktp)
+{
+   return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_

[dpdk-dev] [PATCH 27/36] event/cnxk: add timer arm routine

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Add event timer arm routine.

Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Shijith Thotton 
---
 drivers/event/cnxk/cnxk_tim_evdev.c  |  18 ++
 drivers/event/cnxk/cnxk_tim_evdev.h  |  23 ++
 drivers/event/cnxk/cnxk_tim_worker.c |  95 +
 drivers/event/cnxk/cnxk_tim_worker.h | 300 +++
 4 files changed, 436 insertions(+)

diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c 
b/drivers/event/cnxk/cnxk_tim_evdev.c
index 07ec57fd2..a3be66f9a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -76,6 +76,21 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
return rc;
 }
 
+static void
+cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
+{
+   uint8_t prod_flag = !tim_ring->prod_type_sp;
+
+   /* [DFB/FB] [SP][MP]*/
+   const rte_event_timer_arm_burst_t arm_burst[2][2] = {
+#define FP(_name, _f2, _f1, flags) [_f2][_f1] = cnxk_tim_arm_burst_##_name,
+   TIM_ARM_FASTPATH_MODES
+#undef FP
+   };
+
+   cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
+}
+
 static void
 cnxk_tim_ring_info_get(const struct rte_event_timer_adapter *adptr,
   struct rte_event_timer_adapter_info *adptr_info)
@@ -173,6 +188,9 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
plt_write64((uint64_t)tim_ring->bkt, tim_ring->base + TIM_LF_RING_BASE);
plt_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
 
+   /* Set fastpath ops. */
+   cnxk_tim_set_fp_ops(tim_ring);
+
/* Update SSO xae count. */
cnxk_sso_updt_xae_cnt(cnxk_sso_pmd_priv(dev->event_dev), tim_ring,
  RTE_EVENT_TYPE_TIMER);
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h 
b/drivers/event/cnxk/cnxk_tim_evdev.h
index c844d9b61..7cbcdb701 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -14,6 +14,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "roc_api.h"
 
@@ -37,6 +38,11 @@
 #define CNXK_TIM_CHNK_SLOTS  "tim_chnk_slots"
 #define CNXK_TIM_RINGS_LMT   "tim_rings_lmt"
 
+#define CNXK_TIM_SP 0x1
+#define CNXK_TIM_MP 0x2
+#define CNXK_TIM_ENA_FB 0x10
+#define CNXK_TIM_ENA_DFB 0x20
+
 #define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
 #define TIM_BUCKET_W1_M_CHUNK_REMAINDER
\
((1ULL << (64 - TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1)
@@ -107,10 +113,14 @@ struct cnxk_tim_ring {
uintptr_t base;
uint16_t nb_chunk_slots;
uint32_t nb_bkts;
+   uint64_t last_updt_cyc;
+   uint64_t ring_start_cyc;
uint64_t tck_int;
uint64_t tot_int;
struct cnxk_tim_bkt *bkt;
struct rte_mempool *chunk_pool;
+   struct rte_reciprocal_u64 fast_div;
+   struct rte_reciprocal_u64 fast_bkt;
uint64_t arm_cnt;
uint8_t prod_type_sp;
uint8_t disable_npa;
@@ -201,6 +211,19 @@ cnxk_tim_cntfrq(void)
 }
 #endif
 
+#define TIM_ARM_FASTPATH_MODES 
\
+   FP(sp, 0, 0, CNXK_TIM_ENA_DFB | CNXK_TIM_SP)   \
+   FP(mp, 0, 1, CNXK_TIM_ENA_DFB | CNXK_TIM_MP)   \
+   FP(fb_sp, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
+   FP(fb_mp, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP)
+
+#define FP(_name, _f2, _f1, flags) 
\
+   uint16_t cnxk_tim_arm_burst_##_name(   \
+   const struct rte_event_timer_adapter *adptr,   \
+   struct rte_event_timer **tim, const uint16_t nb_timers);
+TIM_ARM_FASTPATH_MODES
+#undef FP
+
 int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
  uint32_t *caps,
  const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c 
b/drivers/event/cnxk/cnxk_tim_worker.c
index 564687d9b..eec39b9c2 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -4,3 +4,98 @@
 
 #include "cnxk_tim_evdev.h"
 #include "cnxk_tim_worker.h"
+
+static inline int
+cnxk_tim_arm_checks(const struct cnxk_tim_ring *const tim_ring,
+   struct rte_event_timer *const tim)
+{
+   if (unlikely(tim->state)) {
+   tim->state = RTE_EVENT_TIMER_ERROR;
+   rte_errno = EALREADY;
+   goto fail;
+   }
+
+   if (unlikely(!tim->timeout_ticks ||
+tim->timeout_ticks > tim_ring->nb_bkts)) {
+   tim->state = tim->timeout_ticks ?
+  RTE_EVENT_TIMER_ERROR_TOOLATE :
+  RTE_EVENT_TIMER_ERROR_TOOEARLY;
+   rte_errno = EINVAL;
+   goto fail;
+   }
+
+   return 0;
+
+fail:
+   return -EINVAL;
+}
+
+static inline void

[dpdk-dev] [PATCH 28/36] event/cnxk: add timer arm timeout burst

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Add event timer arm timeout burst function.
All the timers requested to be armed have the same timeout.
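
A usage sketch from the application side, assuming the adapter and event
timers have already been set up (error handling kept minimal):

#include <stdio.h>
#include <rte_errno.h>
#include <rte_event_timer_adapter.h>

static uint16_t
arm_same_timeout(struct rte_event_timer_adapter *adptr,
		 struct rte_event_timer **tims, uint16_t n, uint64_t ticks)
{
	/* All n timers expire after the same number of adapter ticks. */
	uint16_t armed = rte_event_timer_arm_tmo_tick_burst(adptr, tims,
							    ticks, n);

	if (armed < n)	/* rte_errno indicates why the remainder failed */
		printf("armed %u of %u timers, rte_errno=%d\n",
		       armed, n, rte_errno);
	return armed;
}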

Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Shijith Thotton 
---
 drivers/event/cnxk/cnxk_tim_evdev.c  |   7 ++
 drivers/event/cnxk/cnxk_tim_evdev.h  |  12 +++
 drivers/event/cnxk/cnxk_tim_worker.c |  53 ++
 drivers/event/cnxk/cnxk_tim_worker.h | 141 +++
 4 files changed, 213 insertions(+)

diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c 
b/drivers/event/cnxk/cnxk_tim_evdev.c
index a3be66f9a..e6f31b19f 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -88,7 +88,14 @@ cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
 #undef FP
};
 
+   const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2] = {
+#define FP(_name, _f1, flags) [_f1] = cnxk_tim_arm_tmo_tick_burst_##_name,
+   TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
+   };
+
cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
+   cnxk_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->ena_dfb];
 }
 
 static void
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h 
b/drivers/event/cnxk/cnxk_tim_evdev.h
index 7cbcdb701..04ba3dc8c 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -217,6 +217,10 @@ cnxk_tim_cntfrq(void)
FP(fb_sp, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
FP(fb_mp, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP)
 
+#define TIM_ARM_TMO_FASTPATH_MODES 
\
+   FP(dfb, 0, CNXK_TIM_ENA_DFB)   \
+   FP(fb, 1, CNXK_TIM_ENA_FB)
+
 #define FP(_name, _f2, _f1, flags) 
\
uint16_t cnxk_tim_arm_burst_##_name(   \
const struct rte_event_timer_adapter *adptr,   \
@@ -224,6 +228,14 @@ cnxk_tim_cntfrq(void)
 TIM_ARM_FASTPATH_MODES
 #undef FP
 
+#define FP(_name, _f1, flags)  
\
+   uint16_t cnxk_tim_arm_tmo_tick_burst_##_name(  \
+   const struct rte_event_timer_adapter *adptr,   \
+   struct rte_event_timer **tim, const uint64_t timeout_tick, \
+   const uint16_t nb_timers);
+TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
+
 int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
  uint32_t *caps,
  const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c 
b/drivers/event/cnxk/cnxk_tim_worker.c
index eec39b9c2..2f1676ec1 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -99,3 +99,56 @@ cnxk_tim_timer_arm_burst(const struct 
rte_event_timer_adapter *adptr,
}
 TIM_ARM_FASTPATH_MODES
 #undef FP
+
+static __rte_always_inline uint16_t
+cnxk_tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
+   struct rte_event_timer **tim,
+   const uint64_t timeout_tick,
+   const uint16_t nb_timers, const uint8_t flags)
+{
+   struct cnxk_tim_ent entry[CNXK_TIM_MAX_BURST] __rte_cache_aligned;
+   struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+   uint16_t set_timers = 0;
+   uint16_t arr_idx = 0;
+   uint16_t idx;
+   int ret;
+
+   if (unlikely(!timeout_tick || timeout_tick > tim_ring->nb_bkts)) {
+   const enum rte_event_timer_state state =
+   timeout_tick ? RTE_EVENT_TIMER_ERROR_TOOLATE :
+RTE_EVENT_TIMER_ERROR_TOOEARLY;
+   for (idx = 0; idx < nb_timers; idx++)
+   tim[idx]->state = state;
+
+   rte_errno = EINVAL;
+   return 0;
+   }
+
+   cnxk_tim_sync_start_cyc(tim_ring);
+   while (arr_idx < nb_timers) {
+   for (idx = 0; idx < CNXK_TIM_MAX_BURST && (arr_idx < nb_timers);
+idx++, arr_idx++) {
+   cnxk_tim_format_event(tim[arr_idx], &entry[idx]);
+   }
+   ret = cnxk_tim_add_entry_brst(tim_ring, timeout_tick,
+ &tim[set_timers], entry, idx,
+ flags);
+   set_timers += ret;
+   if (ret != idx)
+   break;
+   }
+
+   return set_timers;
+}
+
+#define FP(_name, _f1, _flags) 
\
+   uint16_t __rte_noinline cnxk_tim_arm_tmo_tick_burst_##_name(   \
+   const struct rte_event_timer_adapter *adptr,   \
+   struct rte_event_timer **tim, const uint64_t timeout_tick, \
+   const uint16_t nb_ti

[dpdk-dev] [PATCH 29/36] event/cnxk: add timer cancel function

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Add function to cancel event timer that has been armed.
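
A usage sketch from the application side, assuming the timers were armed
earlier; names and error handling are illustrative only:

#include <stdio.h>
#include <rte_errno.h>
#include <rte_event_timer_adapter.h>

static void
cancel_timers(struct rte_event_timer_adapter *adptr,
	      struct rte_event_timer **tims, uint16_t n)
{
	uint16_t done = rte_event_timer_cancel_burst(adptr, tims, n);

	if (done < n)	/* e.g. a timer already expired or was cancelled */
		printf("cancelled %u of %u timers, rte_errno=%d\n",
		       done, n, rte_errno);
}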

Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Shijith Thotton 
---
 drivers/event/cnxk/cnxk_tim_evdev.c  |  1 +
 drivers/event/cnxk/cnxk_tim_evdev.h  |  5 
 drivers/event/cnxk/cnxk_tim_worker.c | 30 ++
 drivers/event/cnxk/cnxk_tim_worker.h | 37 
 4 files changed, 73 insertions(+)

diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c 
b/drivers/event/cnxk/cnxk_tim_evdev.c
index e6f31b19f..edc8706f8 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -96,6 +96,7 @@ cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
 
cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
cnxk_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->ena_dfb];
+   cnxk_tim_ops.cancel_burst = cnxk_tim_timer_cancel_burst;
 }
 
 static void
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h 
b/drivers/event/cnxk/cnxk_tim_evdev.h
index 04ba3dc8c..9cc6e7512 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -236,6 +236,11 @@ TIM_ARM_FASTPATH_MODES
 TIM_ARM_TMO_FASTPATH_MODES
 #undef FP
 
+uint16_t
+cnxk_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr,
+   struct rte_event_timer **tim,
+   const uint16_t nb_timers);
+
 int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
  uint32_t *caps,
  const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c 
b/drivers/event/cnxk/cnxk_tim_worker.c
index 2f1676ec1..ce6918465 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -152,3 +152,33 @@ cnxk_tim_timer_arm_tmo_brst(const struct 
rte_event_timer_adapter *adptr,
}
 TIM_ARM_TMO_FASTPATH_MODES
 #undef FP
+
+uint16_t
+cnxk_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr,
+   struct rte_event_timer **tim,
+   const uint16_t nb_timers)
+{
+   uint16_t index;
+   int ret;
+
+   RTE_SET_USED(adptr);
+   rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+   for (index = 0; index < nb_timers; index++) {
+   if (tim[index]->state == RTE_EVENT_TIMER_CANCELED) {
+   rte_errno = EALREADY;
+   break;
+   }
+
+   if (tim[index]->state != RTE_EVENT_TIMER_ARMED) {
+   rte_errno = EINVAL;
+   break;
+   }
+   ret = cnxk_tim_rm_entry(tim[index]);
+   if (ret) {
+   rte_errno = -ret;
+   break;
+   }
+   }
+
+   return index;
+}
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h 
b/drivers/event/cnxk/cnxk_tim_worker.h
index 7a4cfd1a6..02f58eb3d 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.h
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -561,4 +561,41 @@ cnxk_tim_add_entry_brst(struct cnxk_tim_ring *const 
tim_ring,
return nb_timers;
 }
 
+static int
+cnxk_tim_rm_entry(struct rte_event_timer *tim)
+{
+   struct cnxk_tim_ent *entry;
+   struct cnxk_tim_bkt *bkt;
+   uint64_t lock_sema;
+
+   if (tim->impl_opaque[1] == 0 || tim->impl_opaque[0] == 0)
+   return -ENOENT;
+
+   entry = (struct cnxk_tim_ent *)(uintptr_t)tim->impl_opaque[0];
+   if (entry->wqe != tim->ev.u64) {
+   tim->impl_opaque[0] = 0;
+   tim->impl_opaque[1] = 0;
+   return -ENOENT;
+   }
+
+   bkt = (struct cnxk_tim_bkt *)(uintptr_t)tim->impl_opaque[1];
+   lock_sema = cnxk_tim_bkt_inc_lock(bkt);
+   if (cnxk_tim_bkt_get_hbt(lock_sema) ||
+   !cnxk_tim_bkt_get_nent(lock_sema)) {
+   tim->impl_opaque[0] = 0;
+   tim->impl_opaque[1] = 0;
+   cnxk_tim_bkt_dec_lock(bkt);
+   return -ENOENT;
+   }
+
+   entry->w0 = 0;
+   entry->wqe = 0;
+   tim->state = RTE_EVENT_TIMER_CANCELED;
+   tim->impl_opaque[0] = 0;
+   tim->impl_opaque[1] = 0;
+   cnxk_tim_bkt_dec_lock(bkt);
+
+   return 0;
+}
+
 #endif /* __CNXK_TIM_WORKER_H__ */
-- 
2.17.1



[dpdk-dev] [PATCH 30/36] event/cnxk: add timer stats get and reset

2021-03-06 Thread pbhagavatula
From: Shijith Thotton 

Add event timer adapter statistics get and reset functions.
Stats are disabled by default and can be enabled through devargs.

Example:
--dev "0002:1e:00.0,tim_stats_ena=1"

Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Shijith Thotton 
---
 doc/guides/eventdevs/cnxk.rst|  9 +
 drivers/event/cnxk/cn10k_eventdev.c  |  3 +-
 drivers/event/cnxk/cn9k_eventdev.c   |  3 +-
 drivers/event/cnxk/cnxk_tim_evdev.c  | 50 
 drivers/event/cnxk/cnxk_tim_evdev.h  | 38 ++---
 drivers/event/cnxk/cnxk_tim_worker.c | 11 --
 6 files changed, 91 insertions(+), 23 deletions(-)

diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 05dcf06f4..cfa743da1 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -115,6 +115,15 @@ Runtime Config Options
 
 -a 0002:0e:00.0,tim_chnk_slots=1023
 
+- ``TIM enable arm/cancel statistics``
+
+  The ``tim_stats_ena`` devargs can be used to enable arm and cancel stats of
+  the event timer adapter.
+
+  For example::
+
+-a 0002:0e:00.0,tim_stats_ena=1
+
 - ``TIM limit max rings reserved``
 
   The ``tim_rings_lmt`` devargs can be used to limit the max number of TIM
diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index a5a614196..2b2025cdb 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -505,4 +505,5 @@ RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT 
"="
  CN10K_SSO_GW_MODE "="
  CNXK_TIM_DISABLE_NPA "=1"
  CNXK_TIM_CHNK_SLOTS "="
- CNXK_TIM_RINGS_LMT "=");
+ CNXK_TIM_RINGS_LMT "="
+ CNXK_TIM_STATS_ENA "=1");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c 
b/drivers/event/cnxk/cn9k_eventdev.c
index cfea3723a..e39b4ded2 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -574,4 +574,5 @@ RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT 
"="
  CN9K_SSO_SINGLE_WS "=1"
  CNXK_TIM_DISABLE_NPA "=1"
  CNXK_TIM_CHNK_SLOTS "="
- CNXK_TIM_RINGS_LMT "=");
+ CNXK_TIM_RINGS_LMT "="
+ CNXK_TIM_STATS_ENA "=1");
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c 
b/drivers/event/cnxk/cnxk_tim_evdev.c
index edc8706f8..1b2518a64 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -81,21 +81,25 @@ cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
 {
uint8_t prod_flag = !tim_ring->prod_type_sp;
 
-   /* [DFB/FB] [SP][MP]*/
-   const rte_event_timer_arm_burst_t arm_burst[2][2] = {
-#define FP(_name, _f2, _f1, flags) [_f2][_f1] = cnxk_tim_arm_burst_##_name,
+   /* [STATS] [DFB/FB] [SP][MP]*/
+   const rte_event_timer_arm_burst_t arm_burst[2][2][2] = {
+#define FP(_name, _f3, _f2, _f1, flags)
\
+   [_f3][_f2][_f1] = cnxk_tim_arm_burst_##_name,
TIM_ARM_FASTPATH_MODES
 #undef FP
};
 
-   const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2] = {
-#define FP(_name, _f1, flags) [_f1] = cnxk_tim_arm_tmo_tick_burst_##_name,
+   const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2][2] = {
+#define FP(_name, _f2, _f1, flags) 
\
+   [_f2][_f1] = cnxk_tim_arm_tmo_tick_burst_##_name,
TIM_ARM_TMO_FASTPATH_MODES
 #undef FP
};
 
-   cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
-   cnxk_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->ena_dfb];
+   cnxk_tim_ops.arm_burst =
+   arm_burst[tim_ring->enable_stats][tim_ring->ena_dfb][prod_flag];
+   cnxk_tim_ops.arm_tmo_tick_burst =
+   arm_tmo_burst[tim_ring->enable_stats][tim_ring->ena_dfb];
cnxk_tim_ops.cancel_burst = cnxk_tim_timer_cancel_burst;
 }
 
@@ -159,6 +163,7 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->nb_timers = rcfg->nb_timers;
tim_ring->chunk_sz = dev->chunk_sz;
tim_ring->disable_npa = dev->disable_npa;
+   tim_ring->enable_stats = dev->enable_stats;
 
if (tim_ring->disable_npa) {
tim_ring->nb_chunks =
@@ -241,6 +246,30 @@ cnxk_tim_ring_free(struct rte_event_timer_adapter *adptr)
return 0;
 }
 
+static int
+cnxk_tim_stats_get(const struct rte_event_timer_adapter *adapter,
+  struct rte_event_timer_adapter_stats *stats)
+{
+   struct cnxk_tim_ring *tim_ring = adapter->data->adapter_priv;
+   uint64_t bkt_cyc = cnxk_tim_cntvct() - tim_ring->ring_start_cyc;
+
+   stats->evtim_exp_count =
+   __atomic_load_n(&tim_ring->arm_cnt, __ATO

[dpdk-dev] [PATCH 31/36] event/cnxk: add timer adapter start and stop

2021-03-06 Thread pbhagavatula
From: Shijith Thotton 

Add event timer adapter start and stop functions.

Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Shijith Thotton 
---
 drivers/event/cnxk/cnxk_tim_evdev.c | 71 -
 1 file changed, 70 insertions(+), 1 deletion(-)

diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c 
b/drivers/event/cnxk/cnxk_tim_evdev.c
index 1b2518a64..7b28969c9 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -246,6 +246,73 @@ cnxk_tim_ring_free(struct rte_event_timer_adapter *adptr)
return 0;
 }
 
+static void
+cnxk_tim_calibrate_start_tsc(struct cnxk_tim_ring *tim_ring)
+{
+#define CNXK_TIM_CALIB_ITER 1E6
+   uint32_t real_bkt, bucket;
+   int icount, ecount = 0;
+   uint64_t bkt_cyc;
+
+   for (icount = 0; icount < CNXK_TIM_CALIB_ITER; icount++) {
+   real_bkt = plt_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
+   bkt_cyc = cnxk_tim_cntvct();
+   bucket = (bkt_cyc - tim_ring->ring_start_cyc) /
+tim_ring->tck_int;
+   bucket = bucket % (tim_ring->nb_bkts);
+   tim_ring->ring_start_cyc =
+   bkt_cyc - (real_bkt * tim_ring->tck_int);
+   if (bucket != real_bkt)
+   ecount++;
+   }
+   tim_ring->last_updt_cyc = bkt_cyc;
+   plt_tim_dbg("Bucket mispredict %3.2f distance %d\n",
+   100 - (((double)(icount - ecount) / (double)icount) * 100),
+   bucket - real_bkt);
+}
+
+static int
+cnxk_tim_ring_start(const struct rte_event_timer_adapter *adptr)
+{
+   struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+   struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+   int rc;
+
+   if (dev == NULL)
+   return -ENODEV;
+
+   rc = roc_tim_lf_enable(&dev->tim, tim_ring->ring_id,
+  &tim_ring->ring_start_cyc, NULL);
+   if (rc < 0)
+   return rc;
+
+   tim_ring->tck_int = NSEC2TICK(tim_ring->tck_nsec, cnxk_tim_cntfrq());
+   tim_ring->tot_int = tim_ring->tck_int * tim_ring->nb_bkts;
+   tim_ring->fast_div = rte_reciprocal_value_u64(tim_ring->tck_int);
+   tim_ring->fast_bkt = rte_reciprocal_value_u64(tim_ring->nb_bkts);
+
+   cnxk_tim_calibrate_start_tsc(tim_ring);
+
+   return rc;
+}
+
+static int
+cnxk_tim_ring_stop(const struct rte_event_timer_adapter *adptr)
+{
+   struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+   struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+   int rc;
+
+   if (dev == NULL)
+   return -ENODEV;
+
+   rc = roc_tim_lf_disable(&dev->tim, tim_ring->ring_id);
+   if (rc < 0)
+   plt_err("Failed to disable timer ring");
+
+   return rc;
+}
+
 static int
 cnxk_tim_stats_get(const struct rte_event_timer_adapter *adapter,
   struct rte_event_timer_adapter_stats *stats)
@@ -278,13 +345,14 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, 
uint64_t flags,
struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
 
RTE_SET_USED(flags);
-   RTE_SET_USED(ops);
 
if (dev == NULL)
return -ENODEV;
 
cnxk_tim_ops.init = cnxk_tim_ring_create;
cnxk_tim_ops.uninit = cnxk_tim_ring_free;
+   cnxk_tim_ops.start = cnxk_tim_ring_start;
+   cnxk_tim_ops.stop = cnxk_tim_ring_stop;
cnxk_tim_ops.get_info = cnxk_tim_ring_info_get;
 
if (dev->enable_stats) {
@@ -295,6 +363,7 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, 
uint64_t flags,
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
+   *ops = &cnxk_tim_ops;
 
return 0;
 }
-- 
2.17.1



[dpdk-dev] [PATCH 34/36] event/cnxk: add Rx adapter fastpath ops

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Add support for event eth Rx adapter fastpath operations.

Signed-off-by: Pavan Nikhilesh 
---
 drivers/event/cnxk/cn10k_eventdev.c | 115 -
 drivers/event/cnxk/cn10k_worker.c   | 164 +
 drivers/event/cnxk/cn10k_worker.h   |  91 +--
 drivers/event/cnxk/cn9k_eventdev.c  | 254 ++-
 drivers/event/cnxk/cn9k_worker.c| 364 +++-
 drivers/event/cnxk/cn9k_worker.h| 158 +---
 drivers/event/cnxk/meson.build  |   8 +
 7 files changed, 932 insertions(+), 222 deletions(-)

diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index 72175e16f..70c6fedae 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -247,17 +247,120 @@ static void
 cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 {
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+   const event_dequeue_t sso_hws_deq[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) 
\
+   [f3][f2][f1][f0] = cn10k_sso_hws_deq_##name,
+   NIX_RX_FASTPATH_MODES
+#undef R
+   };
+
+   const event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) 
\
+   [f3][f2][f1][f0] = cn10k_sso_hws_deq_burst_##name,
+   NIX_RX_FASTPATH_MODES
+#undef R
+   };
+
+   const event_dequeue_t sso_hws_tmo_deq[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) 
\
+   [f3][f2][f1][f0] = cn10k_sso_hws_tmo_deq_##name,
+   NIX_RX_FASTPATH_MODES
+#undef R
+   };
+
+   const event_dequeue_burst_t sso_hws_tmo_deq_burst[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) 
\
+   [f3][f2][f1][f0] = cn10k_sso_hws_tmo_deq_burst_##name,
+   NIX_RX_FASTPATH_MODES
+#undef R
+   };
+
+   const event_dequeue_t sso_hws_deq_seg[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) 
\
+   [f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_##name,
+   NIX_RX_FASTPATH_MODES
+#undef R
+   };
+
+   const event_dequeue_burst_t sso_hws_deq_seg_burst[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) 
\
+   [f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_burst_##name,
+   NIX_RX_FASTPATH_MODES
+#undef R
+   };
+
+   const event_dequeue_t sso_hws_tmo_deq_seg[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) 
\
+   [f3][f2][f1][f0] = cn10k_sso_hws_tmo_deq_seg_##name,
+   NIX_RX_FASTPATH_MODES
+#undef R
+   };
+
+   const event_dequeue_burst_t sso_hws_tmo_deq_seg_burst[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) 
\
+   [f3][f2][f1][f0] = cn10k_sso_hws_tmo_deq_seg_burst_##name,
+   NIX_RX_FASTPATH_MODES
+#undef R
+   };
 
event_dev->enqueue = cn10k_sso_hws_enq;
event_dev->enqueue_burst = cn10k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn10k_sso_hws_enq_fwd_burst;
-
-   event_dev->dequeue = cn10k_sso_hws_deq;
-   event_dev->dequeue_burst = cn10k_sso_hws_deq_burst;
-   if (dev->is_timeout_deq) {
-   event_dev->dequeue = cn10k_sso_hws_tmo_deq;
-   event_dev->dequeue_burst = cn10k_sso_hws_tmo_deq_burst;
+   if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
+   event_dev->dequeue = sso_hws_deq_seg
+   [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+   [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+   [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+   [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+   event_dev->dequeue_burst = sso_hws_deq_seg_burst
+   [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+   [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+   [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+   [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+   if (dev->is_timeout_deq) {
+   event_dev->dequeue = sso_hws_tmo_deq_seg
+   [!!(dev->rx_offloads &
+   NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+   [!!(dev->rx_offloads &
+   NIX_RX_OFFLOAD_CHECKSUM_F)]
+   [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+   [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+   event_dev->dequeue_burst = sso_hws_tmo_deq_seg_burst
+ 

[dpdk-dev] [PATCH 32/36] event/cnxk: add devargs to control timer adapters

2021-03-06 Thread pbhagavatula
From: Shijith Thotton 

Add devargs to control each event timer adapter's (i.e. TIM ring's) internal
parameters uniquely. The expected dict format is
[ring-chnk_slots-disable_npa-stats_ena]; 0 selects the default value.

Example:
--dev "0002:1e:00.0,tim_ring_ctl=[2-1023-1-0]"

Signed-off-by: Pavan Nikhilesh 
Signed-off-by: Shijith Thotton 
---
 doc/guides/eventdevs/cnxk.rst   | 11 
 drivers/event/cnxk/cnxk_tim_evdev.c | 96 -
 drivers/event/cnxk/cnxk_tim_evdev.h | 10 +++
 3 files changed, 116 insertions(+), 1 deletion(-)

diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index cfa743da1..c42784a3b 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -135,6 +135,17 @@ Runtime Config Options
 
 -a 0002:0e:00.0,tim_rings_lmt=5
 
+- ``TIM ring control internal parameters``
+
+  When using multiple TIM rings, the ``tim_ring_ctl`` devargs can be used to
+  control each TIM ring's internal parameters uniquely. The expected dict
+  format is [ring-chnk_slots-disable_npa-stats_ena]; 0 selects the default
+  value.
+
+  For Example::
+
+-a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+
 Debugging Options
 -
 
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c 
b/drivers/event/cnxk/cnxk_tim_evdev.c
index 7b28969c9..fdc78270d 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -121,7 +121,7 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
struct cnxk_tim_ring *tim_ring;
-   int rc;
+   int i, rc;
 
if (dev == NULL)
return -ENODEV;
@@ -165,6 +165,20 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->disable_npa = dev->disable_npa;
tim_ring->enable_stats = dev->enable_stats;
 
+   for (i = 0; i < dev->ring_ctl_cnt; i++) {
+   struct cnxk_tim_ctl *ring_ctl = &dev->ring_ctl_data[i];
+
+   if (ring_ctl->ring == tim_ring->ring_id) {
+   tim_ring->chunk_sz =
+   ring_ctl->chunk_slots ?
+   ((uint32_t)(ring_ctl->chunk_slots + 1) *
+CNXK_TIM_CHUNK_ALIGNMENT) :
+ tim_ring->chunk_sz;
+   tim_ring->enable_stats = ring_ctl->enable_stats;
+   tim_ring->disable_npa = ring_ctl->disable_npa;
+   }
+   }
+
if (tim_ring->disable_npa) {
tim_ring->nb_chunks =
tim_ring->nb_timers /
@@ -368,6 +382,84 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, 
uint64_t flags,
return 0;
 }
 
+static void
+cnxk_tim_parse_ring_param(char *value, void *opaque)
+{
+   struct cnxk_tim_evdev *dev = opaque;
+   struct cnxk_tim_ctl ring_ctl = {0};
+   char *tok = strtok(value, "-");
+   struct cnxk_tim_ctl *old_ptr;
+   uint16_t *val;
+
+   val = (uint16_t *)&ring_ctl;
+
+   if (!strlen(value))
+   return;
+
+   while (tok != NULL) {
+   *val = atoi(tok);
+   tok = strtok(NULL, "-");
+   val++;
+   }
+
+   if (val != (&ring_ctl.enable_stats + 1)) {
+   plt_err("Invalid ring param expected 
[ring-chunk_sz-disable_npa-enable_stats]");
+   return;
+   }
+
+   dev->ring_ctl_cnt++;
+   old_ptr = dev->ring_ctl_data;
+   dev->ring_ctl_data =
+   rte_realloc(dev->ring_ctl_data,
+   sizeof(struct cnxk_tim_ctl) * dev->ring_ctl_cnt, 0);
+   if (dev->ring_ctl_data == NULL) {
+   dev->ring_ctl_data = old_ptr;
+   dev->ring_ctl_cnt--;
+   return;
+   }
+
+   dev->ring_ctl_data[dev->ring_ctl_cnt - 1] = ring_ctl;
+}
+
+static void
+cnxk_tim_parse_ring_ctl_list(const char *value, void *opaque)
+{
+   char *s = strdup(value);
+   char *start = NULL;
+   char *end = NULL;
+   char *f = s;
+
+   while (*s) {
+   if (*s == '[')
+   start = s;
+   else if (*s == ']')
+   end = s;
+
+   if (start && start < end) {
+   *end = 0;
+   cnxk_tim_parse_ring_param(start + 1, opaque);
+   start = end;
+   s = end;
+   }
+   s++;
+   }
+
+   free(f);
+}
+
+static int
+cnxk_tim_parse_kvargs_dict(const char *key, const char *value, void *opaque)
+{
+   RTE_SET_USED(key);
+
+   /* Dict format [ring-chunk_sz-disable_npa-enable_stats] use '-' as ','
+* isn't allowed. 0 represents default.
+*/
+   cnxk_tim_parse_ring_ctl_list(value, opaque);
+
+   return 0;
+}
+
 

[dpdk-dev] [PATCH 35/36] event/cnxk: add Tx adapter support

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Add support for event eth Tx adapter.

Signed-off-by: Pavan Nikhilesh 
---
 doc/guides/eventdevs/cnxk.rst|   4 +-
 drivers/event/cnxk/cn10k_eventdev.c  |  90 +
 drivers/event/cnxk/cn9k_eventdev.c   | 117 +++
 drivers/event/cnxk/cnxk_eventdev.h   |  22 -
 drivers/event/cnxk/cnxk_eventdev_adptr.c | 106 
 5 files changed, 335 insertions(+), 4 deletions(-)

diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index abab7f742..0f916ff5c 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -42,7 +42,9 @@ Features of the OCTEON CNXK SSO PMD are:
 - HW managed packets enqueued from ethdev to eventdev exposed through event eth
   RX adapter.
 - N:1 ethernet device Rx queue to Event queue mapping.
-- Full Rx offload support defined through ethdev queue configuration.
+- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+  capability while maintaining receive packet order.
+- Full Rx/Tx offload support defined through ethdev queue configuration.
 
 Prerequisites and Compilation procedure
 ---
diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index 70c6fedae..3662fd720 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -243,6 +243,39 @@ cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
 }
 
+static int
+cn10k_sso_updt_tx_adptr_data(const struct rte_eventdev *event_dev)
+{
+   struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+   int i;
+
+   if (dev->tx_adptr_data == NULL)
+   return 0;
+
+   for (i = 0; i < dev->nb_event_ports; i++) {
+   struct cn10k_sso_hws *ws = event_dev->data->ports[i];
+   void *ws_cookie;
+
+   ws_cookie = cnxk_sso_hws_get_cookie(ws);
+   ws_cookie = rte_realloc_socket(
+   ws_cookie,
+   sizeof(struct cnxk_sso_hws_cookie) +
+   sizeof(struct cn10k_sso_hws) +
+   (sizeof(uint64_t) * (dev->max_port_id + 1) *
+RTE_MAX_QUEUES_PER_PORT),
+   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+   if (ws_cookie == NULL)
+   return -ENOMEM;
+   ws = RTE_PTR_ADD(ws_cookie, sizeof(struct cnxk_sso_hws_cookie));
+   memcpy(&ws->tx_adptr_data, dev->tx_adptr_data,
+  sizeof(uint64_t) * (dev->max_port_id + 1) *
+  RTE_MAX_QUEUES_PER_PORT);
+   event_dev->data->ports[i] = ws;
+   }
+
+   return 0;
+}
+
 static void
 cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 {
@@ -482,6 +515,10 @@ cn10k_sso_start(struct rte_eventdev *event_dev)
 {
int rc;
 
+   rc = cn10k_sso_updt_tx_adptr_data(event_dev);
+   if (rc < 0)
+   return rc;
+
rc = cnxk_sso_start(event_dev, cn10k_sso_hws_reset,
cn10k_sso_hws_flush_events);
if (rc < 0)
@@ -580,6 +617,55 @@ cn10k_sso_rx_adapter_queue_del(const struct rte_eventdev 
*event_dev,
return cnxk_sso_rx_adapter_queue_del(event_dev, eth_dev, rx_queue_id);
 }
 
+static int
+cn10k_sso_tx_adapter_caps_get(const struct rte_eventdev *dev,
+ const struct rte_eth_dev *eth_dev, uint32_t *caps)
+{
+   int ret;
+
+   RTE_SET_USED(dev);
+   ret = strncmp(eth_dev->device->driver->name, "net_cn10k", 8);
+   if (ret)
+   *caps = 0;
+   else
+   *caps = RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT;
+
+   return 0;
+}
+
+static int
+cn10k_sso_tx_adapter_queue_add(uint8_t id, const struct rte_eventdev 
*event_dev,
+  const struct rte_eth_dev *eth_dev,
+  int32_t tx_queue_id)
+{
+   int rc;
+
+   RTE_SET_USED(id);
+   rc = cnxk_sso_tx_adapter_queue_add(event_dev, eth_dev, tx_queue_id);
+   if (rc < 0)
+   return rc;
+   rc = cn10k_sso_updt_tx_adptr_data(event_dev);
+   if (rc < 0)
+   return rc;
+   cn10k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
+
+   return 0;
+}
+
+static int
+cn10k_sso_tx_adapter_queue_del(uint8_t id, const struct rte_eventdev 
*event_dev,
+  const struct rte_eth_dev *eth_dev,
+  int32_t tx_queue_id)
+{
+   int rc;
+
+   RTE_SET_USED(id);
+   rc = cnxk_sso_tx_adapter_queue_del(event_dev, eth_dev, tx_queue_id);
+   if (rc < 0)
+   return rc;
+   return cn10k_sso_updt_tx_adptr_data(event_dev);
+}
+
 static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure

[dpdk-dev] [PATCH 33/36] event/cnxk: add Rx adapter support

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Add support for event eth Rx adapter.

Signed-off-by: Pavan Nikhilesh 
---
 doc/guides/eventdevs/cnxk.rst|   4 +
 drivers/event/cnxk/cn10k_eventdev.c  |  76 +++
 drivers/event/cnxk/cn10k_worker.h|   4 +
 drivers/event/cnxk/cn9k_eventdev.c   |  82 
 drivers/event/cnxk/cn9k_worker.h |   4 +
 drivers/event/cnxk/cnxk_eventdev.h   |  21 +++
 drivers/event/cnxk/cnxk_eventdev_adptr.c | 157 +++
 7 files changed, 348 insertions(+)

diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index c42784a3b..abab7f742 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -39,6 +39,10 @@ Features of the OCTEON CNXK SSO PMD are:
   time granularity of 2.5us on CN9K and 1us on CN10K.
 - Up to 256 TIM rings aka event timer adapters.
 - Up to 8 rings traversed in parallel.
+- HW managed packets enqueued from ethdev to eventdev exposed through event eth
+  RX adapter.
+- N:1 ethernet device Rx queue to Event queue mapping.
+- Full Rx offload support defined through ethdev queue configuration.
 
 Prerequisites and Compilation procedure
 ---
diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index 2b2025cdb..72175e16f 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -407,6 +407,76 @@ cn10k_sso_selftest(void)
return cnxk_sso_selftest(RTE_STR(event_cn10k));
 }
 
+static int
+cn10k_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev, uint32_t *caps)
+{
+   int rc;
+
+   RTE_SET_USED(event_dev);
+   rc = strncmp(eth_dev->device->driver->name, "net_cn10k", 9);
+   if (rc)
+   *caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP;
+   else
+   *caps = RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT |
+   RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ |
+   RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID;
+
+   return 0;
+}
+
+static void
+cn10k_sso_set_lookup_mem(const struct rte_eventdev *event_dev, void 
*lookup_mem)
+{
+   struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+   int i;
+
+   for (i = 0; i < dev->nb_event_ports; i++) {
+   struct cn10k_sso_hws *ws = event_dev->data->ports[i];
+   ws->lookup_mem = lookup_mem;
+   }
+}
+
+static int
+cn10k_sso_rx_adapter_queue_add(
+   const struct rte_eventdev *event_dev, const struct rte_eth_dev *eth_dev,
+   int32_t rx_queue_id,
+   const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
+{
+   void *lookup_mem;
+   int rc;
+
+   rc = strncmp(eth_dev->device->driver->name, "net_cn10k", 8);
+   if (rc)
+   return -EINVAL;
+
+   rc = cnxk_sso_rx_adapter_queue_add(event_dev, eth_dev, rx_queue_id,
+  queue_conf);
+   if (rc)
+   return -EINVAL;
+
+   lookup_mem = ((struct cn10k_eth_rxq *)eth_dev->data->rx_queues[0])
+->lookup_mem;
+   cn10k_sso_set_lookup_mem(event_dev, lookup_mem);
+   cn10k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
+
+   return 0;
+}
+
+static int
+cn10k_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
+  const struct rte_eth_dev *eth_dev,
+  int32_t rx_queue_id)
+{
+   int rc;
+
+   rc = strncmp(eth_dev->device->driver->name, "net_cn10k", 8);
+   if (rc)
+   return -EINVAL;
+
+   return cnxk_sso_rx_adapter_queue_del(event_dev, eth_dev, rx_queue_id);
+}
+
 static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -420,6 +490,12 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
 
+   .eth_rx_adapter_caps_get = cn10k_sso_rx_adapter_caps_get,
+   .eth_rx_adapter_queue_add = cn10k_sso_rx_adapter_queue_add,
+   .eth_rx_adapter_queue_del = cn10k_sso_rx_adapter_queue_del,
+   .eth_rx_adapter_start = cnxk_sso_rx_adapter_start,
+   .eth_rx_adapter_stop = cnxk_sso_rx_adapter_stop,
+
.timer_adapter_caps_get = cnxk_tim_caps_get,
 
.dump = cnxk_sso_dump,
diff --git a/drivers/event/cnxk/cn10k_worker.h 
b/drivers/event/cnxk/cn10k_worker.h
index ed4e3bd63..d418e80aa 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -5,9 +5,13 @@
 #ifndef __CN10K_WORKER_H__
 #define __CN10K_WORKER_H__
 
+#include "cnxk_ethdev.h"
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
+#include "cn10k_ethdev.h"
+#include "cn10k_rx.h"
+
 /* SSO Operations */
 
 static __rte_always_inline uint8_t
diff --git a/drivers/event/cn

[dpdk-dev] [PATCH 36/36] event/cnxk: add Tx adapter fastpath ops

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Add support for event eth Tx adapter fastpath operations.

Signed-off-by: Pavan Nikhilesh 
---
 drivers/event/cnxk/cn10k_eventdev.c | 35 
 drivers/event/cnxk/cn10k_worker.c   | 32 +++
 drivers/event/cnxk/cn10k_worker.h   | 67 ++
 drivers/event/cnxk/cn9k_eventdev.c  | 76 +
 drivers/event/cnxk/cn9k_worker.c| 60 
 drivers/event/cnxk/cn9k_worker.h| 87 +
 6 files changed, 357 insertions(+)

diff --git a/drivers/event/cnxk/cn10k_eventdev.c 
b/drivers/event/cnxk/cn10k_eventdev.c
index 3662fd720..817dcc7cc 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -336,6 +336,22 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 #undef R
};
 
+   /* Tx modes */
+   const event_tx_adapter_enqueue sso_hws_tx_adptr_enq[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags) 
\
+   [f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_##name,
+   NIX_TX_FASTPATH_MODES
+#undef T
+   };
+
+   const event_tx_adapter_enqueue sso_hws_tx_adptr_enq_seg[2][2][2][2][2] =
+   {
+#define T(name, f4, f3, f2, f1, f0, sz, flags) 
\
+   [f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_seg_##name,
+   NIX_TX_FASTPATH_MODES
+#undef T
+   };
+
event_dev->enqueue = cn10k_sso_hws_enq;
event_dev->enqueue_burst = cn10k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;
@@ -395,6 +411,25 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
}
}
+
+   if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
+   /* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */
+   event_dev->txa_enqueue = sso_hws_tx_adptr_enq_seg
+   [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
+   [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+   [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+   [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+   [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+   } else {
+   event_dev->txa_enqueue = sso_hws_tx_adptr_enq
+   [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
+   [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+   [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+   [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+   [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+   }
+
+   event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
 }
 
 static void
diff --git a/drivers/event/cnxk/cn10k_worker.c 
b/drivers/event/cnxk/cn10k_worker.c
index 46f72cf20..ab149c5e3 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -175,3 +175,35 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct 
rte_event ev[],
 
 NIX_RX_FASTPATH_MODES
 #undef R
+
+#define T(name, f4, f3, f2, f1, f0, sz, flags) 
\
+   uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_##name(  \
+   void *port, struct rte_event ev[], uint16_t nb_events) \
+   {  \
+   struct cn10k_sso_hws *ws = port;   \
+   uint64_t cmd[sz];  \
+   
\
+   RTE_SET_USED(nb_events);   \
+   return cn10k_sso_hws_event_tx( \
+   ws, &ev[0], cmd,   \
+   (const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) & \
+   ws->tx_adptr_data, \
+   flags);\
+   }  \
+   
\
+   uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_seg_##name(  \
+   void *port, struct rte_event ev[], uint16_t nb_events) \
+   {  \
+   uint64_t cmd[(sz) + CNXK_NIX_TX_MSEG_SG_DWORDS - 2];   \
+   struct cn10k_sso_hws *ws = port;   \
+

[dpdk-dev] [PATCH] net/bnxt: optimizations for Tx completion handling

2021-03-06 Thread Lance Richardson
Avoid copying mbuf pointers to a separate array for bulk
mbuf free when handling transmit completions for vector
mode transmit.
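
The idea in a hedged, simplified form (names are illustrative, not the
driver's): free mbuf pointers directly from the software ring in at most two
contiguous runs, handling the wrap-around, instead of staging them in a
separate array first:

#include <string.h>
#include <rte_common.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static inline void
free_tx_mbufs(struct rte_mbuf **ring, uint32_t ring_size, uint32_t ring_mask,
	      uint32_t raw_cons, uint32_t nr_pkts, struct rte_mempool *pool)
{
	uint32_t cons = raw_cons & ring_mask;
	uint32_t num = RTE_MIN(nr_pkts, ring_size - cons);

	/* First contiguous run, up to the end of the ring. */
	rte_mempool_put_bulk(pool, (void **)&ring[cons], num);
	memset(&ring[cons], 0, num * sizeof(struct rte_mbuf *));

	/* Remainder after the ring wraps back to index 0. */
	if (nr_pkts - num) {
		cons = (raw_cons + num) & ring_mask;
		rte_mempool_put_bulk(pool, (void **)&ring[cons],
				     nr_pkts - num);
		memset(&ring[cons], 0,
		       (nr_pkts - num) * sizeof(struct rte_mbuf *));
	}
}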

Signed-off-by: Lance Richardson 
Reviewed-by: Ajit Kumar Khaparde 
---
 drivers/net/bnxt/bnxt_ethdev.c  |  4 +-
 drivers/net/bnxt/bnxt_ring.c|  2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_common.h | 89 +++--
 drivers/net/bnxt/bnxt_rxtx_vec_neon.c   |  5 +-
 drivers/net/bnxt/bnxt_rxtx_vec_sse.c|  7 +-
 drivers/net/bnxt/bnxt_txq.c |  8 +--
 drivers/net/bnxt/bnxt_txr.c | 68 ++-
 drivers/net/bnxt/bnxt_txr.h |  7 +-
 8 files changed, 106 insertions(+), 84 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 88da345034..d4028e2bb2 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3186,7 +3186,7 @@ bnxt_tx_descriptor_status_op(void *tx_queue, uint16_t 
offset)
struct bnxt_tx_queue *txq = (struct bnxt_tx_queue *)tx_queue;
struct bnxt_tx_ring_info *txr;
struct bnxt_cp_ring_info *cpr;
-   struct bnxt_sw_tx_bd *tx_buf;
+   struct rte_mbuf **tx_buf;
struct tx_pkt_cmpl *txcmp;
uint32_t cons, cp_cons;
int rc;
@@ -3216,7 +3216,7 @@ bnxt_tx_descriptor_status_op(void *tx_queue, uint16_t 
offset)
return RTE_ETH_TX_DESC_UNAVAIL;
}
tx_buf = &txr->tx_buf_ring[cons];
-   if (tx_buf->mbuf == NULL)
+   if (*tx_buf == NULL)
return RTE_ETH_TX_DESC_DONE;
 
return RTE_ETH_TX_DESC_FULL;
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 997dcdc28b..e4055fa49b 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -230,7 +230,7 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
tx_ring->vmem =
(void **)((char *)mz->addr + tx_vmem_start);
tx_ring_info->tx_buf_ring =
-   (struct bnxt_sw_tx_bd *)tx_ring->vmem;
+   (struct rte_mbuf **)tx_ring->vmem;
}
}
 
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_common.h 
b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
index 91ff6736b1..9b9489a695 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_common.h
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
@@ -100,57 +100,78 @@ bnxt_rxq_rearm(struct bnxt_rx_queue *rxq, struct 
bnxt_rx_ring_info *rxr)
  * is enabled.
  */
 static inline void
-bnxt_tx_cmp_vec_fast(struct bnxt_tx_queue *txq, int nr_pkts)
+bnxt_tx_cmp_vec_fast(struct bnxt_tx_queue *txq, uint32_t nr_pkts)
 {
struct bnxt_tx_ring_info *txr = txq->tx_ring;
-   struct rte_mbuf **free = txq->free;
uint16_t cons, raw_cons = txr->tx_raw_cons;
-   unsigned int blk = 0;
-   uint32_t ring_mask = txr->tx_ring_struct->ring_mask;
-
-   while (nr_pkts--) {
-   struct bnxt_sw_tx_bd *tx_buf;
-
-   cons = raw_cons++ & ring_mask;
-   tx_buf = &txr->tx_buf_ring[cons];
-   free[blk++] = tx_buf->mbuf;
-   tx_buf->mbuf = NULL;
+   uint32_t ring_mask, ring_size, num;
+   struct rte_mempool *pool;
+
+   ring_mask = txr->tx_ring_struct->ring_mask;
+   ring_size = txr->tx_ring_struct->ring_size;
+
+   cons = raw_cons & ring_mask;
+   num = RTE_MIN(nr_pkts, ring_size - cons);
+   pool = txr->tx_buf_ring[cons]->pool;
+
+   rte_mempool_put_bulk(pool, (void **)&txr->tx_buf_ring[cons], num);
+   memset(&txr->tx_buf_ring[cons], 0, num * sizeof(struct rte_mbuf *));
+   raw_cons += num;
+   num = nr_pkts - num;
+   if (num) {
+   cons = raw_cons & ring_mask;
+   rte_mempool_put_bulk(pool, (void **)&txr->tx_buf_ring[cons],
+num);
+   memset(&txr->tx_buf_ring[cons], 0,
+  num * sizeof(struct rte_mbuf *));
+   raw_cons += num;
}
-   if (blk)
-   rte_mempool_put_bulk(free[0]->pool, (void **)free, blk);
 
txr->tx_raw_cons = raw_cons;
 }
 
 static inline void
-bnxt_tx_cmp_vec(struct bnxt_tx_queue *txq, int nr_pkts)
+bnxt_tx_cmp_vec(struct bnxt_tx_queue *txq, uint32_t nr_pkts)
 {
struct bnxt_tx_ring_info *txr = txq->tx_ring;
-   struct rte_mbuf **free = txq->free;
uint16_t cons, raw_cons = txr->tx_raw_cons;
-   unsigned int blk = 0;
-   uint32_t ring_mask = txr->tx_ring_struct->ring_mask;
+   uint32_t ring_mask, ring_size, num, blk;
+   struct rte_mempool *pool;
 
-   while (nr_pkts--) {
-   struct bnxt_sw_tx_bd *tx_buf;
-   struct rte_mbuf *mbuf;
+   ring_mask = txr->tx_ring_struct->ring_mask;
+   ring_size = txr->tx_ring_struct->ring_size;
 
-   cons = raw_cons++ & ring_mask;
-   tx_buf = &txr->tx_buf_ring[cons];
-   mbuf = rte_pktmbuf_prefree_seg(tx_buf

Re: [dpdk-dev] [PATCH v4 2/4] eal: add asprintf() internal wrapper

2021-03-06 Thread Lance Richardson
On Fri, Mar 5, 2021 at 7:05 PM Dmitry Kozlyuk  wrote:
>
> POSIX asprintf() is unavailable on Windows.

AFAIK asprintf() is not a POSIX API; it is a GNU extension that has
also been implemented in some BSDs.

> Add eal_asprintf() wrapper for EAL internal use.
> On Windows it's a function, on Unix it's a macro for asprintf().
>
> Signed-off-by: Dmitry Kozlyuk 
> Acked-by: Khoa To 
> ---


Re: [dpdk-dev] [PATCH v3] rte_metrics: unconditionally exports rte_metrics_tel_xxx functions

2021-03-06 Thread Dmitry Kozlyuk
2021-02-24 10:46 (UTC-0800), Jie Zhou:
[...]
> V2 changes:
> Address comments from Dmitry Kozlyuk  and
> Bruce Richardson :
> - Set dpdk.conf RTE_HAS_JANSSON in metrics meson.build
> - Reduce #ifdef RTE_HAS_JANSSON blocks
> 
> V3 changes:
> Address comment from Bruce Richardson :
> - Remove redundant includes line on librte_telemetry

Nit: the change log should be below "---" so that it's not included in the commit.

Acked-by: Dmitry Kozlyuk 


[dpdk-dev] [PATCH] test/event: fix timeout accuracy

2021-03-06 Thread pbhagavatula
From: Pavan Nikhilesh 

Round timeout ticks up when converting from nanoseconds; this prevents
loss of accuracy and deviation from the requested timeout value.
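
A small worked example with made-up tick values showing the truncation this
fixes: requesting 3 ticks with a 1000 ns bucket tick against a 2500 ns
adapter tick used to truncate to 1 tick, while rounding up yields 2:

#include <math.h>
#include <stdint.h>

int main(void)
{
	uint64_t tks = 3, bkt_tck_ns = 1000, info_bkt_tck_ns = 2500;
	uint64_t trunc_ticks = (tks * bkt_tck_ns) / info_bkt_tck_ns;	/* 1 */
	uint64_t ceil_ticks =
		ceil((double)(tks * bkt_tck_ns) / info_bkt_tck_ns);	/* 2 */

	return !(trunc_ticks == 1 && ceil_ticks == 2);
}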

Fixes: d1f3385d0076 ("test: add event timer adapter auto-test")
Cc: sta...@dpdk.org

Signed-off-by: Pavan Nikhilesh 
---
 app/test/test_event_timer_adapter.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/app/test/test_event_timer_adapter.c 
b/app/test/test_event_timer_adapter.c
index ad3f4dcc2..b536ddef4 100644
--- a/app/test/test_event_timer_adapter.c
+++ b/app/test/test_event_timer_adapter.c
@@ -3,6 +3,8 @@
  * Copyright(c) 2017-2018 Intel Corporation.
  */
 
+#include 
+
 #include 
 #include 
 #include 
@@ -46,7 +48,7 @@ static uint64_t global_info_bkt_tck_ns;
 static volatile uint8_t arm_done;
 
 #define CALC_TICKS(tks)\
-   ((tks * global_bkt_tck_ns) / global_info_bkt_tck_ns)
+   ceil((double)(tks * global_bkt_tck_ns) / global_info_bkt_tck_ns)
 
 
 static bool using_services;
-- 
2.17.1