Re: [dpdk-dev] [PATCH v4 01/10] vhost: remove unused internal API

2018-12-16 Thread Maxime Coquelin




On 12/14/18 10:16 PM, Xiao Wang wrote:

vhost_detach_vdpa_device() is internally defined but not used; remove
it in this patch.

Signed-off-by: Xiao Wang 
---
  lib/librte_vhost/vhost.c | 13 -
  lib/librte_vhost/vhost.h |  1 -
  2 files changed, 14 deletions(-)



Reviewed-by: Maxime Coquelin 

Thanks,
Maxime


Re: [dpdk-dev] [PATCH v4 02/10] vhost: provide helper for host notifier ctrl

2018-12-16 Thread Maxime Coquelin




On 12/14/18 10:16 PM, Xiao Wang wrote:

The VDPA driver can decide if it needs to enable/disable the host notifier
mapping, so exposing an API allows flexibility. A later patch will
build on this.

Signed-off-by: Xiao Wang 
---
  drivers/net/ifc/ifcvf_vdpa.c   |  3 +++
  lib/librte_vhost/rte_vdpa.h| 18 ++
  lib/librte_vhost/rte_vhost_version.map |  1 +
  lib/librte_vhost/vhost_user.c  |  7 +--
  4 files changed, 23 insertions(+), 6 deletions(-)



Reviewed-by: Maxime Coquelin 

Thanks,
Maxime
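
(For illustration only: a minimal sketch of how a vDPA driver might use the
new helper, assuming it stays exported as rte_vhost_host_notifier_ctrl(vid,
enable) as proposed in this series; the surrounding driver code is
hypothetical.)

#include <stdio.h>
#include <stdbool.h>
#include <rte_vdpa.h>

/* Hypothetical .dev_conf vDPA op: once the datapath is configured, ask
 * vhost to map the device notify area into the guest, and fall back to
 * software notification relay if that fails.
 */
static int
example_dev_conf(int vid)
{
	if (rte_vhost_host_notifier_ctrl(vid, true) != 0)
		printf("host notifier setup failed, using SW notification\n");

	return 0;
}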


Re: [dpdk-dev] [PATCH v4 03/10] vhost: provide helpers for virtio ring relay

2018-12-16 Thread Maxime Coquelin




On 12/14/18 10:16 PM, Xiao Wang wrote:

This patch provides two helpers for vdpa device driver to perform a
relay between the guest virtio ring and a mediate virtio ring.


s/mediate/mediated/ ?
I'm not 100% sure, but if it is mediated, please change everywhere else
in the patch.



The available ring relay will synchronize the available entries, and
helps to do desc validity checking.


s/helps/help/



The used ring relay will synchronize the used entries from mediate ring
to guest ring, and helps to do dirty page logging for live migration.


s/helps/help/



The next patch will leverage these two helpers.

Signed-off-by: Xiao Wang 
---
  lib/librte_vhost/rte_vdpa.h|  39 +++
  lib/librte_vhost/rte_vhost_version.map |   2 +
  lib/librte_vhost/vdpa.c| 194 +
  lib/librte_vhost/vhost.h   |  40 +++
  lib/librte_vhost/virtio_net.c  |  39 ---
  5 files changed, 275 insertions(+), 39 deletions(-)




Apart from that:
Reviewed-by: Maxime Coquelin 

Thanks,
Maxime
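
(For illustration only: a rough sketch of one relay iteration a driver's SW
live-migration path might perform with the two helpers, assuming they keep
the rte_vdpa_relay_vring_avail()/rte_vdpa_relay_vring_used() names and the
(vid, qid, mediated ring) arguments proposed here; notify_hw() and the error
handling are hypothetical.)

#include <stdint.h>
#include <rte_vdpa.h>

/* One relay pass for queue "qid": pull new available entries from the
 * guest ring into the mediated ring "m_vring", kick the device, then
 * push completed used entries (and the dirty-page log they imply) back
 * to the guest ring.
 */
static void
example_relay_once(int vid, uint16_t qid, void *m_vring,
		void (*notify_hw)(uint16_t))
{
	if (rte_vdpa_relay_vring_avail(vid, qid, m_vring) < 0)
		return;		/* invalid descriptor or relay failure */

	notify_hw(qid);		/* let the device see the new avail entries */

	rte_vdpa_relay_vring_used(vid, qid, m_vring);
}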


Re: [dpdk-dev] [PATCH v4 04/10] net/ifc: dump debug message for error

2018-12-16 Thread Maxime Coquelin




On 12/14/18 10:16 PM, Xiao Wang wrote:

Driver probe may fail for different causes; a debug message is helpful for
debugging the issue.

Signed-off-by: Xiao Wang 
---
  drivers/net/ifc/ifcvf_vdpa.c | 19 +--
  1 file changed, 13 insertions(+), 6 deletions(-)



Reviewed-by: Maxime Coquelin 

Thanks,
Maxime



Re: [dpdk-dev] [PATCH v4 05/10] net/ifc: store only registered device instance

2018-12-16 Thread Maxime Coquelin




On 12/14/18 10:16 PM, Xiao Wang wrote:

If the driver fails to register the ifc VF device into the vhost lib, then this
device should not be stored.

Fixes: a3f8150eac6d ("net/ifcvf: add ifcvf vDPA driver")
cc: sta...@dpdk.org

Signed-off-by: Xiao Wang 
---
  drivers/net/ifc/ifcvf_vdpa.c | 8 
  1 file changed, 4 insertions(+), 4 deletions(-)



Reviewed-by: Maxime Coquelin 

Thanks,
Maxime



Re: [dpdk-dev] [PATCH v4 06/10] net/ifc: detect if VDPA mode is specified

2018-12-16 Thread Maxime Coquelin




On 12/14/18 10:16 PM, Xiao Wang wrote:

If user wants the VF to be used in VDPA (vhost data path acceleration)
mode, then the user can add a "vdpa=1" parameter for the device.

So if driver doesn't not find this option, it should quit and let the


s/doesn't not/does not/


bus continue the probe.

Signed-off-by: Xiao Wang 
---
  drivers/net/ifc/Makefile |  1 +
  drivers/net/ifc/ifcvf_vdpa.c | 47 
  2 files changed, 48 insertions(+)



Should this option be documented somewhere?

Apart from that:
Reviewed-by: Maxime Coquelin 

Thanks,
Maxime
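
(For illustration only: the usual rte_kvargs pattern such a "vdpa=1" option
relies on, e.g. passed on the EAL whitelist as "-w 06:00.3,vdpa=1"; this is a
sketch, not the submitted ifcvf code.)

#include <stdlib.h>
#include <rte_kvargs.h>

static const char * const valid_args[] = { "vdpa", NULL };

/* kvargs handler: parse the value of a key as an integer. */
static int
parse_int_arg(const char *key, const char *value, void *extra_args)
{
	(void)key;
	*(int *)extra_args = atoi(value);
	return 0;
}

/* Return 1 only if the user passed vdpa=1 in the device's devargs string. */
static int
vdpa_mode_requested(const char *devargs_str)
{
	struct rte_kvargs *kvlist;
	int mode = 0;

	kvlist = rte_kvargs_parse(devargs_str, valid_args);
	if (kvlist == NULL)
		return 0;

	if (rte_kvargs_count(kvlist, "vdpa") == 1)
		rte_kvargs_process(kvlist, "vdpa", parse_int_arg, &mode);

	rte_kvargs_free(kvlist);
	return mode == 1;
}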


Re: [dpdk-dev] [PATCH v4 07/10] net/ifc: add devarg for LM mode

2018-12-16 Thread Maxime Coquelin




On 12/14/18 10:16 PM, Xiao Wang wrote:

This patch series enables a new method for live migration, i.e. software
assisted live migration. This patch provides a device argument for the user
to choose the method.

When "swlm=1", driver/device will do live migration with a relay thread
dealing with dirty page logging. Without this parameter, device will do
dirty page logging and there's no relay thread consuming CPU resource.

Signed-off-by: Xiao Wang 
---
  drivers/net/ifc/ifcvf_vdpa.c | 13 +
  1 file changed, 13 insertions(+)

diff --git a/drivers/net/ifc/ifcvf_vdpa.c b/drivers/net/ifc/ifcvf_vdpa.c
index c0e50354a..395c5112f 100644
--- a/drivers/net/ifc/ifcvf_vdpa.c
+++ b/drivers/net/ifc/ifcvf_vdpa.c
@@ -8,6 +8,7 @@
  #include 
  #include 
  #include 
+#include 
  
  #include 

  #include 
@@ -31,9 +32,11 @@
  #endif
  
  #define IFCVF_VDPA_MODE		"vdpa"

+#define IFCVF_SW_FALLBACK_LM   "swlm"



The patch looks good, except that I don't like the "swlm" name.
Maybe we could have something less obscure, even if a little bit longer?

What about "sw-live-migration"?


Re: [dpdk-dev] [PATCH v4 08/10] net/ifc: use lib API for used ring logging

2018-12-16 Thread Maxime Coquelin




On 12/14/18 10:16 PM, Xiao Wang wrote:

Vhost lib has already provided a helper for used ring logging, driver
could use it to reduce code.

Signed-off-by: Xiao Wang 
---
  drivers/net/ifc/ifcvf_vdpa.c | 27 ---
  1 file changed, 8 insertions(+), 19 deletions(-)



Reviewed-by: Maxime Coquelin 

Thanks,
Maxime


Re: [dpdk-dev] [PATCH v4 09/10] net/ifc: support SW assisted VDPA live migration

2018-12-16 Thread Maxime Coquelin




On 12/14/18 10:16 PM, Xiao Wang wrote:

In SW assisted live migration mode, driver will stop the device and
setup a mediate virtio ring to relay the communication between the
virtio driver and the VDPA device.

This data path intervention will allow SW to help on guest dirty page
logging for live migration.

This SW fallback is event driven relay thread, so when the network
throughput is low, this SW fallback will take little CPU resource, but
when the throughput goes up, the relay thread's CPU usage will goes up
accordinly.


s/accordinly/accordingly/



User needs to take all the factors including CPU usage, guest perf
degradation, etc. into consideration when selecting the live migration
support mode.

Signed-off-by: Xiao Wang 
---
  drivers/net/ifc/base/ifcvf.h |   1 +
  drivers/net/ifc/ifcvf_vdpa.c | 346 ++-
  2 files changed, 344 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ifc/base/ifcvf.h b/drivers/net/ifc/base/ifcvf.h
index c15c69107..e8a30d2c6 100644
--- a/drivers/net/ifc/base/ifcvf.h
+++ b/drivers/net/ifc/base/ifcvf.h
@@ -50,6 +50,7 @@
  #define IFCVF_LM_ENABLE_VF		0x1
  #define IFCVF_LM_ENABLE_PF		0x3
  #define IFCVF_LOG_BASE		0x1000
+#define IFCVF_MEDIATE_VRING		0x2000


MEDIATED?

  
  #define IFCVF_32_BIT_MASK		0x
  
diff --git a/drivers/net/ifc/ifcvf_vdpa.c b/drivers/net/ifc/ifcvf_vdpa.c

index f181c5a6e..61757d0b4 100644
--- a/drivers/net/ifc/ifcvf_vdpa.c
+++ b/drivers/net/ifc/ifcvf_vdpa.c
@@ -63,6 +63,9 @@ struct ifcvf_internal {
rte_atomic32_t running;
rte_spinlock_t lock;
bool sw_lm;
+   bool sw_fallback_running;
+   /* mediated vring for sw fallback */
+   struct vring m_vring[IFCVF_MAX_QUEUES * 2];
  };
  
  struct internal_list {

@@ -308,6 +311,9 @@ vdpa_ifcvf_stop(struct ifcvf_internal *internal)
rte_vhost_set_vring_base(vid, i, hw->vring[i].last_avail_idx,
hw->vring[i].last_used_idx);
  
+	if (internal->sw_lm)

+   return;
+
rte_vhost_get_negotiated_features(vid, &features);
if (RTE_VHOST_NEED_LOG(features)) {
ifcvf_disable_logging(hw);
@@ -539,6 +545,318 @@ update_datapath(struct ifcvf_internal *internal)
return ret;
  }
  
+static int

+m_ifcvf_start(struct ifcvf_internal *internal)
+{
+   struct ifcvf_hw *hw = &internal->hw;
+   uint32_t i, nr_vring;
+   int vid, ret;
+   struct rte_vhost_vring vq;
+   void *vring_buf;
+   uint64_t m_vring_iova = IFCVF_MEDIATE_VRING;
+   uint64_t size;
+   uint64_t gpa;
+
+   vid = internal->vid;
+   nr_vring = rte_vhost_get_vring_num(vid);
+   rte_vhost_get_negotiated_features(vid, &hw->req_features);
+
+   for (i = 0; i < nr_vring; i++) {
+   rte_vhost_get_vhost_vring(vid, i, &vq);
+
+   size = RTE_ALIGN_CEIL(vring_size(vq.size, PAGE_SIZE),
+   PAGE_SIZE);
+   vring_buf = rte_zmalloc("ifcvf", size, PAGE_SIZE);
+   vring_init(&internal->m_vring[i], vq.size, vring_buf,
+   PAGE_SIZE);
+
+   ret = rte_vfio_container_dma_map(internal->vfio_container_fd,
+   (uint64_t)(uintptr_t)vring_buf, m_vring_iova, size);
+   if (ret < 0) {
+   DRV_LOG(ERR, "mediate vring DMA map failed.");
+   goto error;
+   }
+
+   gpa = hva_to_gpa(vid, (uint64_t)(uintptr_t)vq.desc);
+   if (gpa == 0) {
+   DRV_LOG(ERR, "Fail to get GPA for descriptor ring.");
+   return -1;
+   }
+   hw->vring[i].desc = gpa;
+
+   hw->vring[i].avail = m_vring_iova +
+   (char *)internal->m_vring[i].avail -
+   (char *)internal->m_vring[i].desc;
+
+   hw->vring[i].used = m_vring_iova +
+   (char *)internal->m_vring[i].used -
+   (char *)internal->m_vring[i].desc;
+
+   hw->vring[i].size = vq.size;
+
+   rte_vhost_get_vring_base(vid, i, &hw->vring[i].last_avail_idx,
+   &hw->vring[i].last_used_idx);
+
+   m_vring_iova += size;
+   }
+   hw->nr_vring = nr_vring;
+
+   return ifcvf_start_hw(&internal->hw);
+
+error:
+   for (i = 0; i < nr_vring; i++)
+   if (internal->m_vring[i].desc)
+   rte_free(internal->m_vring[i].desc);
+
+   return -1;
+}
+
+static int
+m_ifcvf_stop(struct ifcvf_internal *internal)
+{
+   int vid;
+   uint32_t i;
+   struct rte_vhost_vring vq;
+   struct ifcvf_hw *hw = &internal->hw;
+   uint64_t m_vring_iova = IFCVF_MEDIATE_VRING;
+   uint64_t size, len;
+
+   vid = internal->vid;
+   ifcvf_stop_hw(hw);
+
+   for (i = 0; i < h

Re: [dpdk-dev] [PATCH v4 10/10] doc: update ifc NIC document

2018-12-16 Thread Maxime Coquelin




On 12/14/18 10:16 PM, Xiao Wang wrote:

Add the SW assisted VDPA live migration feature into NIC doc.

Signed-off-by: Xiao Wang 
---
  doc/guides/nics/ifc.rst| 8 
  doc/guides/rel_notes/release_19_02.rst | 6 ++
  2 files changed, 14 insertions(+)

diff --git a/doc/guides/nics/ifc.rst b/doc/guides/nics/ifc.rst
index 48f9adf1d..eb55d329a 100644
--- a/doc/guides/nics/ifc.rst
+++ b/doc/guides/nics/ifc.rst
@@ -39,6 +39,13 @@ the driver probe a new container is created for this device, 
with this
  container vDPA driver can program DMA remapping table with the VM's memory
  region information.
  
+The device argument "swlm=1" will configure the driver into SW assisted live

+migration mode. In this mode, the driver will set up a SW relay thread when LM
+happens, this thread will help device to log dirty pages. Thus this mode does
+not require HW to implement a dirty page logging function block, but will
+consume some percentage of CPU resource depending on the network throughput.
+If no "swlm=1" specified, driver will rely on device's logging capability.
+


Ok, so that's documented here.
What about documenting vdpa option too?


  Key IFCVF vDPA driver ops
  ~
  
@@ -70,6 +77,7 @@ Features

  Features of the IFCVF driver are:
  
  - Compatibility with virtio 0.95 and 1.0.

+- SW assisted vDPA live migration.
  
  
  Prerequisites

diff --git a/doc/guides/rel_notes/release_19_02.rst 
b/doc/guides/rel_notes/release_19_02.rst
index e86ef9511..ced6af8f0 100644
--- a/doc/guides/rel_notes/release_19_02.rst
+++ b/doc/guides/rel_notes/release_19_02.rst
@@ -60,6 +60,12 @@ New Features
* Added the handler to get firmware version string.
* Added support for multicast filtering.
  
+* **Added support for SW-assisted VDPA live migration.**

+
+  This SW-assisted VDPA live migration facility helps VDPA devices without
+  logging capability to perform live migration, a mediate SW relay can help
+  devices to track dirty pages caused by DMA. IFC driver has enabled this
+  SW-assisted live migration mode.
  
  Removed Items

  -



Re: [dpdk-dev] CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES: no difference in memory pool allocations, when enabling/disabling this configuration

2018-12-16 Thread Asaf Sinai
Hi Anatoly,

Thank you very much for the useful explanations!

Thanks,
Asaf

-Original Message-
From: Burakov, Anatoly  
Sent: Monday, December 10, 2018 12:10 PM
To: Asaf Sinai ; Ilya Maximets ; 
Hemant Agrawal ; dev@dpdk.org; Thomas Monjalon 

Cc: Ilia Ferdman ; Sasha Hodos 
Subject: Re: [dpdk-dev] CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES: no difference in 
memory pool allocations, when enabling/disabling this configuration

On 09-Dec-18 8:14 AM, Asaf Sinai wrote:
> Hi all,
> 
> Thanks for the detailed explanations!
> 
> So, what we understood from that, is the following (please correct, if it is 
> wrong):
> Before 18.05 version:
> - Dividing huge pages between NUMAs was based, by default, on Linux good will.
> - Enforcing Linux to divide huge pages between NUMAs, required enabling 
> configuration option "CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES".
> - The enforcement was done via "libnuma" library.
> 
>  From 18.05 version:
> - The mentioned configuration option is ignored, so that by default, all huge 
> pages are allocated on NUMA 0.
> - if "libnuma" library exists in system, then huge pages will be divided 
> between NUMAs, without any special configuration.
> - The above is relevant to architectures that support NUMA, e.g. X86 (which 
> we use).
> 
> Thanks,
> Asaf

Hi Asaf,

Before 18.05, the above description is correct.

Since 18.05, it's not _quite_ like that. There are two memory modes in
18.05 - default and legacy. Legacy mode pretty much behaves like
pre-18.05 code.

Default memory mode without the CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES for all 
intents and purposes should be considered unsupported for post-18.05 code, and 
libnuma should be considered to be a hard dependency for non-legacy, NUMA-aware 
code. Without this option, EAL will disallow allocations on sockets other than 
0, but on a NUMA-enabled system, you won't necessarily get memory from socket 0 
- it will *say* it is on socket 0, but it may not *actually* be the case, 
because without libnuma we do not check where it was allocated.

The reason for the above behavior is simple: legacy mem mode preallocates all 
memory in advance. This gives us an opportunity to figure out page socket 
affinity at initialization, and not worry about it afterwards. 
Non-legacy mode doesn't have the luxury of preallocating all memory in advance, 
instead we allocate memory on the fly - which means that whenever an allocation 
is requested, we need memory not just anywhere (like in legacy init case), but 
located on a specific socket - we cannot "sort it out later" like we do with 
legacy mem. Without libnuma, we cannot get this functionality.
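
(As a concrete illustration, not from the thread: the kind of socket-bound
allocation the dynamic memory mode has to satisfy on demand; guaranteeing the
returned memory really lives on the requested socket is exactly what needs
libnuma in non-legacy mode.)

#include <rte_lcore.h>
#include <rte_mempool.h>

/* Ask for a buffer pool on the calling lcore's NUMA node; with the
 * post-18.05 dynamic allocator this socket binding is only reliable
 * when EAL was built with libnuma (RTE_EAL_NUMA_AWARE_HUGEPAGES).
 */
static struct rte_mempool *
create_local_pool(const char *name)
{
	return rte_mempool_create(name,
				  8192,		/* number of elements */
				  2048,		/* element size */
				  256,		/* per-lcore cache */
				  0,		/* private data size */
				  NULL, NULL,	/* pool constructor */
				  NULL, NULL,	/* object constructor */
				  rte_socket_id(),
				  0);		/* flags */
}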

> 
> -Original Message-
> From: Ilya Maximets 
> Sent: Tuesday, November 27, 2018 06:50 PM
> To: Burakov, Anatoly ; Hemant Agrawal 
> ; Asaf Sinai ; 
> dev@dpdk.org; Thomas Monjalon 
> Cc: Ilia Ferdman ; Sasha Hodos 
> Subject: Re: [dpdk-dev] CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES: no 
> difference in memory pool allocations, when enabling/disabling this 
> configuration
> 
> On 27.11.2018 13:33, Burakov, Anatoly wrote:
>> On 27-Nov-18 10:26 AM, Hemant Agrawal wrote:
>>>
>>> On 11/26/2018 8:55 PM, Asaf Sinai wrote:
 +CC Ilia & Sasha.

 -Original Message-
 From: Burakov, Anatoly 
 Sent: Monday, November 26, 2018 04:57 PM
 To: Ilya Maximets ; Asaf Sinai 
 ; dev@dpdk.org; Thomas Monjalon 
 
 Subject: Re: [dpdk-dev] CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES: no 
 difference in memory pool allocations, when enabling/disabling this 
 configuration

 On 26-Nov-18 2:32 PM, Ilya Maximets wrote:
> On 26.11.2018 17:21, Burakov, Anatoly wrote:
>> On 26-Nov-18 2:10 PM, Ilya Maximets wrote:
>>> On 26.11.2018 16:42, Burakov, Anatoly wrote:
 On 26-Nov-18 1:20 PM, Ilya Maximets wrote:
> On 26.11.2018 16:16, Ilya Maximets wrote:
>> On 26.11.2018 15:50, Burakov, Anatoly wrote:
>>> On 26-Nov-18 11:43 AM, Burakov, Anatoly wrote:
 On 26-Nov-18 11:33 AM, Asaf Sinai wrote:
> Hi Anatoly,
>
> We did not check it with "testpmd", only with our application.
>        From the beginning, we did not enable this configuration 
> (look at attached files), and everything works fine.
> Of course we rebuild DPDK, when we change configuration.
> Please note that we use DPDK 17.11.3, maybe this is why it works 
> fine?
 Just tested with DPDK 17.11, and yes, it does work the way you are 
 describing. This is not intended behavior. I will look into it.

>>> +CC author of commit introducing 
>>> CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES.
>>>
>>> Looking at the code, i think this config option needs to be 
>>> reworked and we should clarify what we mean by this option. It 
>>> appears that i've misunderstood what this option actually intended 
>>> to do, and i also t

[dpdk-dev] [PATCH] maintainers: update Cavium maintainers email id

2018-12-16 Thread Jerin Jacob Kollanukkaran
Following Marvell's acquisition of Cavium, we need to update all the
Cavium maintainers' entries to point to our new e-mail addresses.
Update maintainers as they are no longer working for Cavium.

Thanks to Harish Patil for his support and development of our various
dpdk drivers.

Signed-off-by: Jerin Jacob 
---
 MAINTAINERS | 43 +--
 1 file changed, 21 insertions(+), 22 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 71ba31208..c19f6590c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -51,7 +51,7 @@ M: Akhil Goyal 
 T: git://dpdk.org/next/dpdk-next-crypto
 
 Next-eventdev Tree
-M: Jerin Jacob 
+M: Jerin Jacob 
 T: git://dpdk.org/next/dpdk-next-eventdev
 
 Next-qos Tree
@@ -220,7 +220,7 @@ F: lib/librte_eal/common/arch/arm/
 F: lib/librte_eal/common/include/arch/arm/
 
 ARM v8
-M: Jerin Jacob 
+M: Jerin Jacob 
 M: Gavin Hu 
 F: lib/librte_eal/common/include/arch/arm/*_64.h
 F: lib/librte_net/net_crc_neon.h
@@ -357,7 +357,7 @@ F: doc/guides/prog_guide/rte_security.rst
 Compression API - EXPERIMENTAL
 M: Fiona Trahe 
 M: Pablo de Lara 
-M: Ashish Gupta 
+M: Ashish Gupta 
 T: git://dpdk.org/next/dpdk-next-crypto
 F: lib/librte_compressdev/
 F: drivers/compress/
@@ -366,7 +366,7 @@ F: doc/guides/prog_guide/compressdev.rst
 F: doc/guides/compressdevs/features/default.ini
 
 Eventdev API
-M: Jerin Jacob 
+M: Jerin Jacob 
 T: git://dpdk.org/next/dpdk-next-eventdev
 F: lib/librte_eventdev/
 F: drivers/event/skeleton/
@@ -510,21 +510,21 @@ F: doc/guides/nics/bnxt.rst
 F: doc/guides/nics/features/bnxt.ini
 
 Cavium ThunderX nicvf
-M: Jerin Jacob 
-M: Maciej Czekaj 
+M: Jerin Jacob 
+M: Maciej Czekaj 
 F: drivers/net/thunderx/
 F: doc/guides/nics/thunderx.rst
 F: doc/guides/nics/features/thunderx.ini
 
 Cavium LiquidIO
-M: Shijith Thotton 
-M: Srisivasubramanian Srinivasan 
+M: Shijith Thotton 
+M: Srisivasubramanian Srinivasan 
 F: drivers/net/liquidio/
 F: doc/guides/nics/liquidio.rst
 F: doc/guides/nics/features/liquidio.ini
 
 Cavium OCTEON TX
-M: Jerin Jacob 
+M: Jerin Jacob 
 F: drivers/common/octeontx/
 F: drivers/mempool/octeontx/
 F: drivers/net/octeontx/
@@ -676,16 +676,15 @@ F: doc/guides/nics/enetc.rst
 F: doc/guides/nics/features/enetc.ini
 
 QLogic bnx2x
-M: Harish Patil 
-M: Rasesh Mody 
+M: Rasesh Mody 
+M: Shahed Shaikh 
 F: drivers/net/bnx2x/
 F: doc/guides/nics/bnx2x.rst
 F: doc/guides/nics/features/bnx2x*.ini
 
 QLogic qede PMD
-M: Rasesh Mody 
-M: Harish Patil 
-M: Shahed Shaikh 
+M: Rasesh Mody 
+M: Shahed Shaikh 
 F: drivers/net/qede/
 F: doc/guides/nics/qede.rst
 F: doc/guides/nics/features/qede*.ini
@@ -800,13 +799,13 @@ F: doc/guides/cryptodevs/ccp.rst
 F: doc/guides/cryptodevs/features/ccp.ini
 
 ARMv8 Crypto
-M: Jerin Jacob 
+M: Jerin Jacob 
 F: drivers/crypto/armv8/
 F: doc/guides/cryptodevs/armv8.rst
 F: doc/guides/cryptodevs/features/armv8.ini
 
 Cavium OCTEON TX crypto
-M: Anoob Joseph 
+M: Anoob Joseph 
 F: drivers/common/cpt/
 F: drivers/crypto/octeontx/
 F: doc/guides/cryptodevs/octeontx.rst
@@ -910,7 +909,7 @@ M: Pablo de Lara 
 T: git://dpdk.org/next/dpdk-next-crypto
 
 Cavium OCTEON TX zipvf
-M: Ashish Gupta 
+M: Ashish Gupta 
 F: drivers/compress/octeontx/
 F: doc/guides/compressdevs/octeontx.rst
 F: doc/guides/compressdevs/features/octeontx.ini
@@ -927,7 +926,7 @@ F: doc/guides/compressdevs/isal.rst
 F: doc/guides/compressdevs/features/isal.ini
 
 ZLIB
-M: Sunila Sahu 
+M: Sunila Sahu 
 F: drivers/compress/zlib/
 F: doc/guides/compressdevs/zlib.rst
 F: doc/guides/compressdevs/features/zlib.ini
@@ -935,16 +934,16 @@ F: doc/guides/compressdevs/features/zlib.ini
 
 Eventdev Drivers
 
-M: Jerin Jacob 
+M: Jerin Jacob 
 T: git://dpdk.org/next/dpdk-next-eventdev
 
 Cavium OCTEON TX ssovf
-M: Jerin Jacob 
+M: Jerin Jacob 
 F: drivers/event/octeontx/
 F: doc/guides/eventdevs/octeontx.rst
 
 Cavium OCTEON TX timvf
-M: Pavan Nikhilesh 
+M: Pavan Nikhilesh 
 F: drivers/event/octeontx/timvf_*
 
 NXP DPAA eventdev
@@ -1248,7 +1247,7 @@ F: app/test-crypto-perf/
 F: doc/guides/tools/cryptoperf.rst
 
 Eventdev test application
-M: Jerin Jacob 
+M: Jerin Jacob 
 F: app/test-eventdev/
 F: doc/guides/tools/testeventdev.rst
 F: doc/guides/tools/img/eventdev_*
-- 
2.20.0



[dpdk-dev] [PATCH 3/4] eal:add tailq walk routine

2018-12-16 Thread Keith Wiles
Signed-off-by: Keith Wiles 
---
 lib/librte_eal/common/eal_common_tailqs.c | 19 +++
 lib/librte_eal/common/include/rte_tailq.h | 13 +
 lib/librte_eal/rte_eal_version.map|  1 +
 3 files changed, 33 insertions(+)

diff --git a/lib/librte_eal/common/eal_common_tailqs.c 
b/lib/librte_eal/common/eal_common_tailqs.c
index babd3b30a..791dcc37b 100644
--- a/lib/librte_eal/common/eal_common_tailqs.c
+++ b/lib/librte_eal/common/eal_common_tailqs.c
@@ -69,6 +69,25 @@ rte_dump_tailq(FILE *f)
rte_rwlock_read_unlock(&mcfg->qlock);
 }
 
+void
+rte_tailq_walk(void (*iter)(const struct rte_tailq_head *, void *), void *arg)
+{
+   struct rte_mem_config *mcfg;
+   unsigned int i = 0;
+
+   if (!iter)
+   return;
+   mcfg = rte_eal_get_configuration()->mem_config;
+
+   rte_rwlock_read_lock(&mcfg->qlock);
+   for (i = 0; i < RTE_MAX_TAILQ; i++) {
+   const struct rte_tailq_head *tailq = &mcfg->tailq_head[i];
+
+   iter(tailq, arg);
+   }
+   rte_rwlock_read_unlock(&mcfg->qlock);
+}
+
 static struct rte_tailq_head *
 rte_eal_tailq_create(const char *name)
 {
diff --git a/lib/librte_eal/common/include/rte_tailq.h 
b/lib/librte_eal/common/include/rte_tailq.h
index 9b01abb2c..b9b1c6e75 100644
--- a/lib/librte_eal/common/include/rte_tailq.h
+++ b/lib/librte_eal/common/include/rte_tailq.h
@@ -18,6 +18,7 @@ extern "C" {
 #include 
 #include 
 #include 
+#include 
 
 /** dummy structure type used by the rte_tailq APIs */
 struct rte_tailq_entry {
@@ -85,6 +86,18 @@ struct rte_tailq_elem {
  */
 void rte_dump_tailq(FILE *f);
 
+/**
+ * Walk the tailq list and call the Iterator function given.
+ *
+ * @param iter
+ *   Iterator function
+ * @param arg
+ *   Pointer to a user-supplied argument passed to the iterator function
+ */
+void __rte_experimental
+   rte_tailq_walk(void (*iter)(const struct rte_tailq_head *, void *),
+   void *arg);
+
 /**
  * Lookup for a tail queue.
  *
diff --git a/lib/librte_eal/rte_eal_version.map 
b/lib/librte_eal/rte_eal_version.map
index 3fe78260d..9a7abc778 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -360,4 +360,5 @@ EXPERIMENTAL {
rte_service_may_be_active;
rte_socket_count;
rte_socket_id_by_idx;
+   rte_tailq_walk;
 };
-- 
2.17.1
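
(For illustration only: a possible caller of the proposed rte_tailq_walk(),
printing the registered tailq names; a sketch based on the prototype in this
patch, not part of the submission.)

#include <stdio.h>
#include <rte_tailq.h>

/* Iterator callback: print the name of each registered tailq head. */
static void
print_tailq_name(const struct rte_tailq_head *head, void *arg)
{
	FILE *f = arg;

	if (head->name[0] != '\0')
		fprintf(f, "tailq: %s\n", head->name);
}

static void
list_tailqs(void)
{
	rte_tailq_walk(print_tailq_name, stdout);
}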



[dpdk-dev] [PATCH 4/4] ring:add ring walk routine

2018-12-16 Thread Keith Wiles
Signed-off-by: Keith Wiles 
---
 lib/librte_ring/rte_ring.c   | 20 
 lib/librte_ring/rte_ring.h   | 14 ++
 lib/librte_ring/rte_ring_version.map |  7 +++
 3 files changed, 41 insertions(+)

diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
index d215acecc..fb5819e4b 100644
--- a/lib/librte_ring/rte_ring.c
+++ b/lib/librte_ring/rte_ring.c
@@ -280,3 +280,23 @@ rte_ring_lookup(const char *name)
 
return r;
 }
+
+void
+rte_ring_walk(void (*func)(struct rte_ring *r, void *arg), void *arg)
+{
+   const struct rte_tailq_entry *te;
+   struct rte_ring_list *ring_list;
+
+   if (!func)
+   return;
+
+   ring_list = RTE_TAILQ_CAST(rte_ring_tailq.head, rte_ring_list);
+
+   rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+   TAILQ_FOREACH(te, ring_list, next) {
+   func((struct rte_ring *) te->data, arg);
+   }
+
+   rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+}
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index af5444a9f..b9391a655 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -769,6 +769,20 @@ rte_ring_get_capacity(const struct rte_ring *r)
  */
 void rte_ring_list_dump(FILE *f);
 
+/**
+ * Walk the list of ring entries and call the function provided
+ *
+ * @param func
+ *   The function to call for each ring entry using the following prototype
+ * void (*func)(struct rte_ring *r, void *arg)
+ * @param arg
+ *   argument for the call to function
+ * @return
+ *   None.
+ */
+void __rte_experimental
+rte_ring_walk(void (*func)(struct rte_ring *r, void *arg), void *arg);
+
 /**
  * Search a ring from its name
  *
diff --git a/lib/librte_ring/rte_ring_version.map 
b/lib/librte_ring/rte_ring_version.map
index d935efd0d..17c05c1b2 100644
--- a/lib/librte_ring/rte_ring_version.map
+++ b/lib/librte_ring/rte_ring_version.map
@@ -17,3 +17,10 @@ DPDK_2.2 {
rte_ring_free;
 
 } DPDK_2.0;
+
+EXPERIMENTAL {
+   global:
+
+   rte_ring_walk;
+};
+
-- 
2.17.1
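
(For illustration only: a possible caller of the proposed rte_ring_walk(),
dumping the fill level of every ring; a sketch based on the prototype in this
patch, not part of the submission.)

#include <stdio.h>
#include <rte_ring.h>

/* Per-ring callback: report how full each ring currently is. */
static void
ring_report(struct rte_ring *r, void *arg)
{
	FILE *f = arg;

	fprintf(f, "%s: %u used / %u free\n", r->name,
		rte_ring_count(r), rte_ring_free_count(r));
}

static void
report_all_rings(void)
{
	rte_ring_walk(ring_report, stdout);
}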



[dpdk-dev] [PATCH 2/4] eal: turn off getopt_long error messages

2018-12-16 Thread Keith Wiles
When using the DPDK register option API, the opterr flag was still set
to one while parsing for the log level, causing an error message from
getopt_long(). Set opterr to zero to disable error messages.

Signed-off-by: Keith Wiles 
---
 lib/librte_eal/bsdapp/eal/eal.c   | 1 +
 lib/librte_eal/linuxapp/eal/eal.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/lib/librte_eal/bsdapp/eal/eal.c b/lib/librte_eal/bsdapp/eal/eal.c
index b8152a75c..85d6dddc9 100644
--- a/lib/librte_eal/bsdapp/eal/eal.c
+++ b/lib/librte_eal/bsdapp/eal/eal.c
@@ -374,6 +374,7 @@ eal_log_level_parse(int argc, char **argv)
argvopt = argv;
optind = 1;
optreset = 1;
+   opterr = 0;
 
while ((opt = getopt_long(argc, argvopt, eal_short_options,
  eal_long_options, &option_index)) != EOF) {
diff --git a/lib/librte_eal/linuxapp/eal/eal.c 
b/lib/librte_eal/linuxapp/eal/eal.c
index 361744d40..9a1289532 100644
--- a/lib/librte_eal/linuxapp/eal/eal.c
+++ b/lib/librte_eal/linuxapp/eal/eal.c
@@ -565,6 +565,7 @@ eal_log_level_parse(int argc, char **argv)
 
argvopt = argv;
optind = 1;
+   opterr = 0;
 
while ((opt = getopt_long(argc, argvopt, eal_short_options,
  eal_long_options, &option_index)) != EOF) {
-- 
2.17.1



[dpdk-dev] [PATCH] eal: turn off getopt_long error messages

2018-12-16 Thread Keith Wiles
When using the DPDK register option API, the opterr flag was still set
to one while parsing for the log level, causing an error message from
getopt_long(). Set opterr to zero to disable error messages.

Signed-off-by: Keith Wiles 
---
 lib/librte_eal/bsdapp/eal/eal.c   | 1 +
 lib/librte_eal/linuxapp/eal/eal.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/lib/librte_eal/bsdapp/eal/eal.c b/lib/librte_eal/bsdapp/eal/eal.c
index b8152a75c..85d6dddc9 100644
--- a/lib/librte_eal/bsdapp/eal/eal.c
+++ b/lib/librte_eal/bsdapp/eal/eal.c
@@ -374,6 +374,7 @@ eal_log_level_parse(int argc, char **argv)
argvopt = argv;
optind = 1;
optreset = 1;
+   opterr = 0;
 
while ((opt = getopt_long(argc, argvopt, eal_short_options,
  eal_long_options, &option_index)) != EOF) {
diff --git a/lib/librte_eal/linuxapp/eal/eal.c 
b/lib/librte_eal/linuxapp/eal/eal.c
index 361744d40..9a1289532 100644
--- a/lib/librte_eal/linuxapp/eal/eal.c
+++ b/lib/librte_eal/linuxapp/eal/eal.c
@@ -565,6 +565,7 @@ eal_log_level_parse(int argc, char **argv)
 
argvopt = argv;
optind = 1;
+   opterr = 0;
 
while ((opt = getopt_long(argc, argvopt, eal_short_options,
  eal_long_options, &option_index)) != EOF) {
-- 
2.17.1



Re: [dpdk-dev] [PATCH v3] app/eventdev: detect deadlock for timer event producer

2018-12-16 Thread Jerin Jacob Kollanukkaran
On Mon, 2018-12-03 at 11:48 -0600, Erik Gabriel Carrillo wrote:
> If timer events get dropped for some reason, the thread that launched
> producer and worker cores will never exit, because the deadlock check
> doesn't currently apply to the event timer adapter case. This commit
> fixes this.
> 
> Fixes: d008f20bce23 ("app/eventdev: add event timer adapter as a
> producer")
> 
> Signed-off-by: Erik Gabriel Carrillo 
> Acked-by: Jerin Jacob 


Applied to dpdk-next-eventdev/master. Thanks.


> ---
> v3:
>  - Forgot to add Jerin's ack line.
> v2:
>  - Add a fixline to commit message (Jerin)
> 
>  app/test-eventdev/test_perf_common.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/app/test-eventdev/test_perf_common.c b/app/test-
> eventdev/test_perf_common.c
> index 8618775..f99a6a6 100644
> --- a/app/test-eventdev/test_perf_common.c
> +++ b/app/test-eventdev/test_perf_common.c
> @@ -327,7 +327,8 @@ perf_launch_lcores(struct evt_test *test, struct
> evt_options *opt,
> }
> 
> if (new_cycles - dead_lock_cycles > dead_lock_sample
> &&
> -   opt->prod_type == EVT_PROD_TYPE_SYNT)
> {
> +   (opt->prod_type == EVT_PROD_TYPE_SYNT ||
> +opt->prod_type ==
> EVT_PROD_TYPE_EVENT_TIMER_ADPTR)) {
> remaining = t->outstand_pkts -
> processed_pkts(t);
> if (dead_lock_remaining == remaining) {
> rte_event_dev_dump(opt->dev_id,
> stdout);
> --
> 2.6.4
> 


[dpdk-dev] [PATCH v2 3/3] ring:add ring walk routine

2018-12-16 Thread Keith Wiles
Signed-off-by: Keith Wiles 
---
V2
   Fix checkpatch warnings.

 lib/librte_ring/rte_ring.c   | 20 
 lib/librte_ring/rte_ring.h   | 14 ++
 lib/librte_ring/rte_ring_version.map |  7 +++
 3 files changed, 41 insertions(+)

diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
index d215acecc..fb5819e4b 100644
--- a/lib/librte_ring/rte_ring.c
+++ b/lib/librte_ring/rte_ring.c
@@ -280,3 +280,23 @@ rte_ring_lookup(const char *name)
 
return r;
 }
+
+void
+rte_ring_walk(void (*func)(struct rte_ring *r, void *arg), void *arg)
+{
+   const struct rte_tailq_entry *te;
+   struct rte_ring_list *ring_list;
+
+   if (!func)
+   return;
+
+   ring_list = RTE_TAILQ_CAST(rte_ring_tailq.head, rte_ring_list);
+
+   rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+   TAILQ_FOREACH(te, ring_list, next) {
+   func((struct rte_ring *) te->data, arg);
+   }
+
+   rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+}
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index af5444a9f..b9391a655 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -769,6 +769,20 @@ rte_ring_get_capacity(const struct rte_ring *r)
  */
 void rte_ring_list_dump(FILE *f);
 
+/**
+ * Walk the list of ring entries and call the function provided
+ *
+ * @param func
+ *   The function to call for each ring entry using the following prototype
+ * void (*func)(struct rte_ring *r, void *arg)
+ * @param arg
+ *   argument for the call to function
+ * @return
+ *   None.
+ */
+void __rte_experimental
+rte_ring_walk(void (*func)(struct rte_ring *r, void *arg), void *arg);
+
 /**
  * Search a ring from its name
  *
diff --git a/lib/librte_ring/rte_ring_version.map 
b/lib/librte_ring/rte_ring_version.map
index d935efd0d..17c05c1b2 100644
--- a/lib/librte_ring/rte_ring_version.map
+++ b/lib/librte_ring/rte_ring_version.map
@@ -17,3 +17,10 @@ DPDK_2.2 {
rte_ring_free;
 
 } DPDK_2.0;
+
+EXPERIMENTAL {
+   global:
+
+   rte_ring_walk;
+};
+
-- 
2.17.1



[dpdk-dev] [PATCH v2 2/3] eal:add tailq walk routine

2018-12-16 Thread Keith Wiles
Signed-off-by: Keith Wiles 
---
V2
   Fix checkpatch warnings

 lib/librte_eal/common/eal_common_tailqs.c | 19 +++
 lib/librte_eal/common/include/rte_tailq.h | 13 +
 lib/librte_eal/rte_eal_version.map|  1 +
 3 files changed, 33 insertions(+)

diff --git a/lib/librte_eal/common/eal_common_tailqs.c 
b/lib/librte_eal/common/eal_common_tailqs.c
index babd3b30a..791dcc37b 100644
--- a/lib/librte_eal/common/eal_common_tailqs.c
+++ b/lib/librte_eal/common/eal_common_tailqs.c
@@ -69,6 +69,25 @@ rte_dump_tailq(FILE *f)
rte_rwlock_read_unlock(&mcfg->qlock);
 }
 
+void
+rte_tailq_walk(void (*iter)(const struct rte_tailq_head *, void *), void *arg)
+{
+   struct rte_mem_config *mcfg;
+   unsigned int i = 0;
+
+   if (!iter)
+   return;
+   mcfg = rte_eal_get_configuration()->mem_config;
+
+   rte_rwlock_read_lock(&mcfg->qlock);
+   for (i = 0; i < RTE_MAX_TAILQ; i++) {
+   const struct rte_tailq_head *tailq = &mcfg->tailq_head[i];
+
+   iter(tailq, arg);
+   }
+   rte_rwlock_read_unlock(&mcfg->qlock);
+}
+
 static struct rte_tailq_head *
 rte_eal_tailq_create(const char *name)
 {
diff --git a/lib/librte_eal/common/include/rte_tailq.h 
b/lib/librte_eal/common/include/rte_tailq.h
index 9b01abb2c..b9b1c6e75 100644
--- a/lib/librte_eal/common/include/rte_tailq.h
+++ b/lib/librte_eal/common/include/rte_tailq.h
@@ -18,6 +18,7 @@ extern "C" {
 #include 
 #include 
 #include 
+#include 
 
 /** dummy structure type used by the rte_tailq APIs */
 struct rte_tailq_entry {
@@ -85,6 +86,18 @@ struct rte_tailq_elem {
  */
 void rte_dump_tailq(FILE *f);
 
+/**
+ * Walk the tailq list and call the Iterator function given.
+ *
+ * @param iter
+ *   Iterator function
+ * @param arg
+ *   Pointer to a user-supplied argument passed to the iterator function
+ */
+void __rte_experimental
+   rte_tailq_walk(void (*iter)(const struct rte_tailq_head *, void *),
+   void *arg);
+
 /**
  * Lookup for a tail queue.
  *
diff --git a/lib/librte_eal/rte_eal_version.map 
b/lib/librte_eal/rte_eal_version.map
index 3fe78260d..9a7abc778 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -360,4 +360,5 @@ EXPERIMENTAL {
rte_service_may_be_active;
rte_socket_count;
rte_socket_id_by_idx;
+   rte_tailq_walk;
 };
-- 
2.17.1



[dpdk-dev] [PATCH v3 2/3] eal:add tailq walk routine

2018-12-16 Thread Keith Wiles
Add tailq walk routine for debugging and used in DFS.

Signed-off-by: Keith Wiles 
---
V3
   Fix checkpatch warnings by adding a commit message.
   Must be using a different checkpatch than on my Ubuntu 18.04 system 
V2
   Fix checkpatch warnings

 lib/librte_eal/common/eal_common_tailqs.c | 19 +++
 lib/librte_eal/common/include/rte_tailq.h | 13 +
 lib/librte_eal/rte_eal_version.map|  1 +
 3 files changed, 33 insertions(+)

diff --git a/lib/librte_eal/common/eal_common_tailqs.c 
b/lib/librte_eal/common/eal_common_tailqs.c
index babd3b30a..791dcc37b 100644
--- a/lib/librte_eal/common/eal_common_tailqs.c
+++ b/lib/librte_eal/common/eal_common_tailqs.c
@@ -69,6 +69,25 @@ rte_dump_tailq(FILE *f)
rte_rwlock_read_unlock(&mcfg->qlock);
 }
 
+void
+rte_tailq_walk(void (*iter)(const struct rte_tailq_head *, void *), void *arg)
+{
+   struct rte_mem_config *mcfg;
+   unsigned int i = 0;
+
+   if (!iter)
+   return;
+   mcfg = rte_eal_get_configuration()->mem_config;
+
+   rte_rwlock_read_lock(&mcfg->qlock);
+   for (i = 0; i < RTE_MAX_TAILQ; i++) {
+   const struct rte_tailq_head *tailq = &mcfg->tailq_head[i];
+
+   iter(tailq, arg);
+   }
+   rte_rwlock_read_unlock(&mcfg->qlock);
+}
+
 static struct rte_tailq_head *
 rte_eal_tailq_create(const char *name)
 {
diff --git a/lib/librte_eal/common/include/rte_tailq.h 
b/lib/librte_eal/common/include/rte_tailq.h
index 9b01abb2c..b9b1c6e75 100644
--- a/lib/librte_eal/common/include/rte_tailq.h
+++ b/lib/librte_eal/common/include/rte_tailq.h
@@ -18,6 +18,7 @@ extern "C" {
 #include 
 #include 
 #include 
+#include 
 
 /** dummy structure type used by the rte_tailq APIs */
 struct rte_tailq_entry {
@@ -85,6 +86,18 @@ struct rte_tailq_elem {
  */
 void rte_dump_tailq(FILE *f);
 
+/**
+ * Walk the tailq list and call the Iterator function given.
+ *
+ * @param iter
+ *   Iterator function
+ * @param arg
+ *   Pointer to a user-supplied argument passed to the iterator function
+ */
+void __rte_experimental
+   rte_tailq_walk(void (*iter)(const struct rte_tailq_head *, void *),
+   void *arg);
+
 /**
  * Lookup for a tail queue.
  *
diff --git a/lib/librte_eal/rte_eal_version.map 
b/lib/librte_eal/rte_eal_version.map
index 3fe78260d..9a7abc778 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -360,4 +360,5 @@ EXPERIMENTAL {
rte_service_may_be_active;
rte_socket_count;
rte_socket_id_by_idx;
+   rte_tailq_walk;
 };
-- 
2.17.1



[dpdk-dev] [PATCH v3 3/3] ring:add ring walk routine

2018-12-16 Thread Keith Wiles
Add a ring walk routine for debugging and DFS.

Signed-off-by: Keith Wiles 
---
V3
   Fix checkpatch warnings by adding a commit message.
   Must be using a different checkpatch than on my Ubuntu 18.04 system 
V2
   Fix checkpatch warnings.

 lib/librte_ring/rte_ring.c   | 20 
 lib/librte_ring/rte_ring.h   | 14 ++
 lib/librte_ring/rte_ring_version.map |  7 +++
 3 files changed, 41 insertions(+)

diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
index d215acecc..fb5819e4b 100644
--- a/lib/librte_ring/rte_ring.c
+++ b/lib/librte_ring/rte_ring.c
@@ -280,3 +280,23 @@ rte_ring_lookup(const char *name)
 
return r;
 }
+
+void
+rte_ring_walk(void (*func)(struct rte_ring *r, void *arg), void *arg)
+{
+   const struct rte_tailq_entry *te;
+   struct rte_ring_list *ring_list;
+
+   if (!func)
+   return;
+
+   ring_list = RTE_TAILQ_CAST(rte_ring_tailq.head, rte_ring_list);
+
+   rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+   TAILQ_FOREACH(te, ring_list, next) {
+   func((struct rte_ring *) te->data, arg);
+   }
+
+   rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+}
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index af5444a9f..b9391a655 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -769,6 +769,20 @@ rte_ring_get_capacity(const struct rte_ring *r)
  */
 void rte_ring_list_dump(FILE *f);
 
+/**
+ * Walk the list of ring entries and call the function provided
+ *
+ * @param func
+ *   The function to call for each ring entry using the following prototype
+ * void (*func)(struct rte_ring *r, void *arg)
+ * @param arg
+ *   argument for the call to function
+ * @return
+ *   None.
+ */
+void __rte_experimental
+rte_ring_walk(void (*func)(struct rte_ring *r, void *arg), void *arg);
+
 /**
  * Search a ring from its name
  *
diff --git a/lib/librte_ring/rte_ring_version.map 
b/lib/librte_ring/rte_ring_version.map
index d935efd0d..17c05c1b2 100644
--- a/lib/librte_ring/rte_ring_version.map
+++ b/lib/librte_ring/rte_ring_version.map
@@ -17,3 +17,10 @@ DPDK_2.2 {
rte_ring_free;
 
 } DPDK_2.0;
+
+EXPERIMENTAL {
+   global:
+
+   rte_ring_walk;
+};
+
-- 
2.17.1



Re: [dpdk-dev] [PATCH] eventdev: fix xstats documentation typo

2018-12-16 Thread Jerin Jacob Kollanukkaran
On Mon, 2018-12-03 at 14:05 -0600, Gage Eads wrote:
> The eventdev extended stats documentation referred to two non-existent
> functions, rte_eventdev_xstats_get and
> rte_eventdev_get_xstats_by_name.
> 
> Fixes: 3ed7fc039a ("eventdev: add extended stats")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Gage Eads 
> 

Applied to dpdk-next-eventdev/master. Thanks.




Re: [dpdk-dev] [PATCH v3 1/3] dfs:add FUSE based filesystem for DPDK

2018-12-16 Thread Luca Boccassi
On Sun, 2018-12-16 at 11:46 -0600, Keith Wiles wrote:
> --- /dev/null
> +++ b/lib/librte_dfs/meson.build
> @@ -0,0 +1,47 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Intel Corporation
> +
> +version = 1

You can leave the version out if it's 1, it's the default

> --- /dev/null
> +++ b/lib/librte_dfs/Makefile
> @@ -0,0 +1,51 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Intel Corporation
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +# library name
> +LIB = librte_dfs.a
> +
> +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
> +CFLAGS += -DALLOW_EXPERIMENTAL_API -D_FILE_OFFSET_BITS=64
> +CFLAGS += -D_GNU_SOURCE
> +CFLAGS += -I$(RTE_SDK)/drivers/bus/pci
> +LDLIBS += -lrte_eal -lrte_mempool -lrte_hash -lrte_ethdev
> -lrte_utils
> +LDLIBS += -lrte_ring -lrte_timer -lrte_rawdev -lrte_cryptodev
> +LDLIBS += -lpthread
> +LDLIBS += $(shell pkg-config --libs-only-l fuse3)
> +LDLIBS += $(shell pkg-config --libs-only-l jansson)

Why --libs-only-l ? If the libraries are not installed in the canonical
path (eg: build-root-without-chroot) it will break as it won't use the
-L

-- 
Kind regards,
Luca Boccassi


[dpdk-dev] [PATCH] eal:missing newline on RTE_LOG msg

2018-12-16 Thread Keith Wiles
Add a missing newline to a RTE_LOG message.

Signed-off-by: Keith Wiles 
---
 lib/librte_eal/common/rte_option.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/librte_eal/common/rte_option.c 
b/lib/librte_eal/common/rte_option.c
index 02d59a869..7605190c3 100644
--- a/lib/librte_eal/common/rte_option.c
+++ b/lib/librte_eal/common/rte_option.c
@@ -36,7 +36,7 @@ rte_option_register(struct rte_option *opt)
 {
TAILQ_FOREACH(option, &rte_option_list, next) {
if (strcmp(opt->opt_str, option->opt_str) == 0)
-   RTE_LOG(INFO, EAL, "Option %s has already been 
registered.",
+   RTE_LOG(INFO, EAL, "Option %s has already been 
registered.\n",
opt->opt_str);
return;
}
-- 
2.17.1



Re: [dpdk-dev] [EXT] [PATCH v2] eventdev: fix eth Tx adapter queue count checks

2018-12-16 Thread Jerin Jacob Kollanukkaran
On Thu, 2018-12-13 at 13:53 +0530, Nikhil Rao wrote:
> 
> rte_event_eth_tx_adapter_queue_add() - add a check
> that returns an error if the ethdev the zero Tx queues
> configured.
> 
> rte_event_eth_tx_adapter_queue_del() - remove the
> checks for ethdev queue count, instead check for
> queues added to the adapter which maybe different
> from the current ethdev queue count.
> 
> Fixes: a3bbf2e09756 ("eventdev: add eth Tx adapter implementation")
> Cc: sta...@dpdk.org
> Signed-off-by: Nikhil Rao 
> ---
>  lib/librte_eventdev/rte_event_eth_tx_adapter.c | 53
> +-
>  1 file changed, 36 insertions(+), 17 deletions(-)
> 
> v2:
> - enclosed macro parameter queue in ()
> 
> diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
> b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
> index ccf8a75..8431656 100644
> --- a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
> +++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
> @@ -59,6 +59,19 @@
> return -EINVAL; \
>  } while (0)
> 
> +#define TXA_CHECK_TXQ(dev, queue) \
> +do {\
> +   if ((dev)->data->nb_tx_queues == 0) { \
> +   RTE_EDEV_LOG_ERR("No tx queues configured"); \
> +   return -EINVAL; \
> +   } \
> +   if (queue != -1 && (uint16_t)queue >= (dev)->data-

missing enclosure for queue to avoid side effects, ie.
if ((queue) != -1 && (uint16_t)(queue)


> >nb_tx_queues) { \
> +   RTE_EDEV_LOG_ERR("Invalid tx queue_id %" PRIu16, \
> +   (uint16_t)queue); \

(uint16_t)(queue)

> +   return -EINVAL; \
> +   } \
> +} while (0)


Other than the above nits,

Acked-by: Jerin Jacob 

Please send the v3 asap so that I can include it in RC1.
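
(For readers wondering why the parentheses matter: a cast binds tighter than
'+', so without them only part of the macro argument is converted. A
standalone illustration with hypothetical macros, not the adapter code.)

#include <stdint.h>
#include <stdio.h>

/* cast applies only to 'x', not to the whole argument expression */
#define BAD_LOW16(x)  ((uint16_t)x)
/* cast applies to the full argument, which is what the macro intends */
#define GOOD_LOW16(x) ((uint16_t)(x))

int
main(void)
{
	int v = 0x2345;

	printf("bad:  0x%x\n", BAD_LOW16(v + 0x10000));  /* prints 0x12345 */
	printf("good: 0x%x\n", GOOD_LOW16(v + 0x10000)); /* prints 0x2345 */
	return 0;
}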



Re: [dpdk-dev] [PATCH v4 16/32] net/ice: support device initialization

2018-12-16 Thread Lu, Wenzhuo
Hi David,
From: David Marchand [mailto:david.march...@redhat.com]
Sent: Friday, December 14, 2018 8:05 PM
To: Lu, Wenzhuo 
Cc: dev@dpdk.org; Yang, Qiming ; Li, Xiaoyun 
; Wu, Jingjing 
Subject: Re: [dpdk-dev] [PATCH v4 16/32] net/ice: support device initialization



On Fri, Dec 14, 2018 at 9:34 AM Wenzhuo Lu wrote:

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
new file mode 100644
index 000..085e848
--- /dev/null
+++ b/doc/guides/nics/features/ice.ini
@@ -0,0 +1,11 @@
+;
+; Supported features of the 'ice' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+BSD nic_uio  = Y
+Linux UIO= Y
+Linux VFIO   = Y
+x86-32   = Y
+x86-64   = Y

[snip]

+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_ice, rte_ice_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(ice, pci_id_ice_map);
+
+RTE_INIT(ice_init_log)
+{
+   ice_logtype_init = rte_log_register("pmd.net.ice.init");
+   if (ice_logtype_init >= 0)
+   rte_log_set_level(ice_logtype_init, RTE_LOG_NOTICE);
+   ice_logtype_driver = rte_log_register("pmd.net.ice.driver");
+   if (ice_logtype_driver >= 0)
+   rte_log_set_level(ice_logtype_driver, RTE_LOG_NOTICE);
+}

If this pmd is uio/vfio based, then you must report it via 
RTE_PMD_REGISTER_KMOD_DEP().
Thanks for the reminder. Will add it.

--
David Marchand



Re: [dpdk-dev] [PATCH v4 1/2] net/i40e: support VF request more queues

2018-12-16 Thread Yan, Zhirun



> -Original Message-
> From: Zhang, Qi Z
> Sent: Friday, December 14, 2018 8:00 PM
> To: Yan, Zhirun ; dev@dpdk.org
> Cc: Wang, Haiyue 
> Subject: RE: [PATCH v4 1/2] net/i40e: support VF request more queues
> 
> 
> 
> > -Original Message-
> > From: Yan, Zhirun
> > Sent: Friday, December 14, 2018 10:37 PM
> > To: dev@dpdk.org; Zhang, Qi Z 
> > Cc: Yan, Zhirun ; Wang, Haiyue
> > 
> > Subject: [PATCH v4 1/2] net/i40e: support VF request more queues
> >
> > Before this patch, VF gets a default number of queues from the PF.
> > This patch enables VF to request a different number. When VF
> > configures more queues, it will send VIRTCHNL_OP_REQUEST_QUEUES to PF
> > to request more queues, if success, PF will reset the VF.
> >
> > User can run "port stop all", "port config port_id rxq/txq queue_num"
> > and "port start all" to reconfigure queue number.
> >
> > Signed-off-by: Zhirun Yan 
> > Signed-off-by: Haiyue Wang 
> > ---
> >  drivers/net/i40e/i40e_ethdev_vf.c | 62
> > ++-
> >  1 file changed, 60 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/i40e/i40e_ethdev_vf.c
> > b/drivers/net/i40e/i40e_ethdev_vf.c
> > index 05dc6596b..a568fb528 100644
> > --- a/drivers/net/i40e/i40e_ethdev_vf.c
> > +++ b/drivers/net/i40e/i40e_ethdev_vf.c
> > @@ -359,6 +359,25 @@ i40evf_execute_vf_cmd(struct rte_eth_dev *dev,
> > struct vf_cmd_info *args)
> > } while (i++ < MAX_TRY_TIMES);
> > _clear_cmd(vf);
> > break;
> > +   case VIRTCHNL_OP_REQUEST_QUEUES:
> > +/*ignore async reply, only wait for system message,*/
> > +/*vf_reset = true if get
> VIRTCHNL_EVENT_RESET_IMPENDING,*/
> > +/*if not, means request queues failed */
> > +   err = -1;
> > +   do {
> > +   ret = i40evf_read_pfmsg(dev, &info);
> > +   vf->cmd_retval = info.result;
> > +   if (ret == I40EVF_MSG_SYS && vf->vf_reset) {
> > +   err = 0;
> > +   break;
> > +   } else if (ret == I40EVF_MSG_ERR) {
> 
> Based on patch 2/2, in the case where some error happens, for example no more
> free queues being available, I40EVF_MSG_CMD will be returned.
> I think it should be "else if (ret == I40EVF_MSG_ERR || ret ==
> I40EVF_MSG_CMD)" here, so we don't need to wait until the end of the loop in that
> case.

Yes,  I will modify it in the next version. Thanks.

> > +   break;
> > +   }
> > +   rte_delay_ms(ASQ_DELAY_MS);
> > +   /* If don't read msg or read sys event, continue */
> > +   } while (i++ < MAX_TRY_TIMES);
> > +   _clear_cmd(vf);
> > +   break;
> >
> > default:
> > /* for other adminq in running time, waiting the cmd done flag
> */
> > @@
> > -1012,6 +1031,28 @@ i40evf_add_vlan(struct rte_eth_dev *dev, uint16_t
> vlanid)
> > return err;
> >  }
> >
> > +static int
> > +i40evf_request_queues(struct rte_eth_dev *dev, uint16_t num) {
> > +   struct i40e_vf *vf =
> > I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > +   struct virtchnl_vf_res_request vfres;
> > +   struct vf_cmd_info args;
> > +   int err;
> > +
> > +   vfres.num_queue_pairs = num;
> > +
> > +   args.ops = VIRTCHNL_OP_REQUEST_QUEUES;
> > +   args.in_args = (u8 *)&vfres;
> > +   args.in_args_size = sizeof(vfres);
> > +   args.out_buffer = vf->aq_resp;
> > +   args.out_size = I40E_AQ_BUF_SZ;
> > +   err = i40evf_execute_vf_cmd(dev, &args);
> > +   if (err)
> > +   PMD_DRV_LOG(ERR, "fail to execute command
> > OP_REQUEST_QUEUES");
> > +
> > +   return err;
> > +}
> > +
> >  static int
> >  i40evf_del_vlan(struct rte_eth_dev *dev, uint16_t vlanid)  { @@
> > -1516,8
> > +1557,11 @@ RTE_PMD_REGISTER_KMOD_DEP(net_i40e_vf, "* igb_uio |
> > vfio-pci");  static int  i40evf_dev_configure(struct rte_eth_dev *dev)
> > {
> > +   struct i40e_vf *vf =
> > I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > struct i40e_adapter *ad =
> > I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> > +   uint16_t num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
> > +   dev->data->nb_tx_queues);
> >
> > /* Initialize to TRUE. If any of Rx queues doesn't meet the bulk
> >  * allocation or vector Rx preconditions we will reset it.
> > @@ -1527,6 +1571,20 @@ i40evf_dev_configure(struct rte_eth_dev *dev)
> > ad->tx_simple_allowed = true;
> > ad->tx_vec_allowed = true;
> >
> > +   if (num_queue_pairs > vf->vsi_res->num_queue_pairs) {
> > +   int ret = 0;
> > +
> > +   PMD_DRV_LOG(INFO, "change queue pairs from %u to %u",
> > +   vf->vsi_res->num_queue_pairs, num_queue_pairs);
> > +   ret = i40evf_request_queues(dev, num_queue_pairs);
> > +   if (ret != 0)
> > +   return ret;
> > +
> > +   ret = i40evf_dev_reset(dev);
> > +   if (ret 

[dpdk-dev] [PATCH v5 0/2] Support request more queues

2018-12-16 Thread Zhirun Yan
V5
-  modify the loop conditions (ret == I40EVF_MSG_ERR || ret == I40EVF_MSG_CMD)
if there is no free queue available, just end the loop.

DPDK VF send VIRTCHNL_OP_REQUEST_QUEUES to kernel PF or DPDK VF for
requesting more queues, then PF will allocate more queues.

Zhirun Yan (2):
  net/i40e: support VF request more queues
  net/i40e: support PF respond VF request more queues

 drivers/net/i40e/i40e_ethdev_vf.c | 62 -
 drivers/net/i40e/i40e_pf.c| 65 +++
 2 files changed, 125 insertions(+), 2 deletions(-)

-- 
2.17.1



[dpdk-dev] [PATCH v5 2/2] net/i40e: support PF respond VF request more queues

2018-12-16 Thread Zhirun Yan
This patch respond the VIRTCHNL_OP_REQUEST_QUEUES msg from VF, and
process to allocated more queues for the requested VF. If successful,
PF will notify VF to reset. If unsuccessful, PF will send message to
inform VF.

Signed-off-by: Zhirun Yan 
Signed-off-by: Haiyue Wang 
---
 drivers/net/i40e/i40e_pf.c | 65 ++
 1 file changed, 65 insertions(+)

diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index dd3962d38..da0e5d6c5 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -1218,6 +1218,66 @@ i40e_notify_vf_link_status(struct rte_eth_dev *dev, 
struct i40e_pf_vf *vf)
I40E_SUCCESS, (uint8_t *)&event, sizeof(event));
 }
 
+/**
+ * i40e_vc_notify_vf_reset
+ * @vf: pointer to the VF structure
+ *
+ * indicate a pending reset to the given VF
+ **/
+static void
+i40e_vc_notify_vf_reset(struct i40e_pf_vf *vf)
+{
+   struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
+   struct virtchnl_pf_event pfe;
+   int abs_vf_id;
+   uint16_t vf_id = vf->vf_idx;
+
+   abs_vf_id = vf_id + hw->func_caps.vf_base_id;
+   pfe.event = VIRTCHNL_EVENT_RESET_IMPENDING;
+   pfe.severity = PF_EVENT_SEVERITY_CERTAIN_DOOM;
+   i40e_aq_send_msg_to_vf(hw, abs_vf_id, VIRTCHNL_OP_EVENT, 0, (u8 *)&pfe,
+  sizeof(struct virtchnl_pf_event), NULL);
+}
+
+static int
+i40e_pf_host_process_cmd_request_queues(struct i40e_pf_vf *vf, uint8_t *msg)
+{
+   struct virtchnl_vf_res_request *vfres =
+   (struct virtchnl_vf_res_request *)msg;
+   struct i40e_pf *pf;
+   uint32_t req_pairs = vfres->num_queue_pairs;
+   uint32_t cur_pairs = vf->vsi->nb_used_qps;
+
+   pf = vf->pf;
+
+   if (req_pairs <= 0) {
+   PMD_DRV_LOG(ERR, "VF %d tried to request %d queues. 
Ignoring.\n",
+   vf->vf_idx,
+   I40E_MAX_QP_NUM_PER_VF);
+   } else if (req_pairs > I40E_MAX_QP_NUM_PER_VF) {
+   PMD_DRV_LOG(ERR, "VF %d tried to request more than %d 
queues.\n",
+   vf->vf_idx,
+   I40E_MAX_QP_NUM_PER_VF);
+   vfres->num_queue_pairs = I40E_MAX_QP_NUM_PER_VF;
+   } else if (req_pairs > cur_pairs + pf->qp_pool.num_free) {
+   PMD_DRV_LOG(ERR, "VF %d requested %d more queues, but noly %d 
left\n",
+   vf->vf_idx,
+   req_pairs - cur_pairs,
+   pf->qp_pool.num_free);
+   vfres->num_queue_pairs = pf->qp_pool.num_free + cur_pairs;
+   } else {
+   i40e_vc_notify_vf_reset(vf);
+   vf->vsi->nb_qps = req_pairs;
+   pf->vf_nb_qps = req_pairs;
+   i40e_pf_host_process_cmd_reset_vf(vf);
+
+   return 0;
+   }
+
+   return i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_REQUEST_QUEUES, 0,
+   (u8 *)vfres, sizeof(*vfres));
+}
+
 void
 i40e_pf_host_handle_vf_msg(struct rte_eth_dev *dev,
   uint16_t abs_vf_id, uint32_t opcode,
@@ -1351,6 +1411,11 @@ i40e_pf_host_handle_vf_msg(struct rte_eth_dev *dev,
PMD_DRV_LOG(INFO, "OP_CONFIG_RSS_KEY received");
i40e_pf_host_process_cmd_set_rss_key(vf, msg, msglen, b_op);
break;
+   case VIRTCHNL_OP_REQUEST_QUEUES:
+   PMD_DRV_LOG(INFO, "OP_REQUEST_QUEUES received");
+   i40e_pf_host_process_cmd_request_queues(vf, msg);
+   break;
+
/* Don't add command supported below, which will
 * return an error code.
 */
-- 
2.17.1



[dpdk-dev] [PATCH v5 1/2] net/i40e: support VF request more queues

2018-12-16 Thread Zhirun Yan
Before this patch, VF gets a default number of queues from the PF.
This patch enables VF to request a different number. When VF configures
more queues, it will send VIRTCHNL_OP_REQUEST_QUEUES to PF to request
more queues, if success, PF will reset the VF.

User can run "port stop all", "port config port_id rxq/txq queue_num"
and "port start all" to reconfigure queue number.

Signed-off-by: Zhirun Yan 
Signed-off-by: Haiyue Wang 
---
 drivers/net/i40e/i40e_ethdev_vf.c | 62 ++-
 1 file changed, 60 insertions(+), 2 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev_vf.c 
b/drivers/net/i40e/i40e_ethdev_vf.c
index 05dc6596b..498e86649 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -359,6 +359,25 @@ i40evf_execute_vf_cmd(struct rte_eth_dev *dev, struct 
vf_cmd_info *args)
} while (i++ < MAX_TRY_TIMES);
_clear_cmd(vf);
break;
+   case VIRTCHNL_OP_REQUEST_QUEUES:
+/*ignore async reply, only wait for system message,*/
+/*vf_reset = true if get VIRTCHNL_EVENT_RESET_IMPENDING,*/
+/*if not, means request queues failed */
+   err = -1;
+   do {
+   ret = i40evf_read_pfmsg(dev, &info);
+   vf->cmd_retval = info.result;
+   if (ret == I40EVF_MSG_SYS && vf->vf_reset) {
+   err = 0;
+   break;
+   } else if (ret == I40EVF_MSG_ERR || ret == 
I40EVF_MSG_CMD) {
+   break;
+   }
+   rte_delay_ms(ASQ_DELAY_MS);
+   /* If don't read msg or read sys event, continue */
+   } while (i++ < MAX_TRY_TIMES);
+   _clear_cmd(vf);
+   break;
 
default:
/* for other adminq in running time, waiting the cmd done flag 
*/
@@ -1012,6 +1031,28 @@ i40evf_add_vlan(struct rte_eth_dev *dev, uint16_t vlanid)
return err;
 }
 
+static int
+i40evf_request_queues(struct rte_eth_dev *dev, uint16_t num)
+{
+   struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+   struct virtchnl_vf_res_request vfres;
+   struct vf_cmd_info args;
+   int err;
+
+   vfres.num_queue_pairs = num;
+
+   args.ops = VIRTCHNL_OP_REQUEST_QUEUES;
+   args.in_args = (u8 *)&vfres;
+   args.in_args_size = sizeof(vfres);
+   args.out_buffer = vf->aq_resp;
+   args.out_size = I40E_AQ_BUF_SZ;
+   err = i40evf_execute_vf_cmd(dev, &args);
+   if (err)
+   PMD_DRV_LOG(ERR, "fail to execute command OP_REQUEST_QUEUES");
+
+   return err;
+}
+
 static int
 i40evf_del_vlan(struct rte_eth_dev *dev, uint16_t vlanid)
 {
@@ -1516,8 +1557,11 @@ RTE_PMD_REGISTER_KMOD_DEP(net_i40e_vf, "* igb_uio | 
vfio-pci");
 static int
 i40evf_dev_configure(struct rte_eth_dev *dev)
 {
+   struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct i40e_adapter *ad =
I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+   uint16_t num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
+   dev->data->nb_tx_queues);
 
/* Initialize to TRUE. If any of Rx queues doesn't meet the bulk
 * allocation or vector Rx preconditions we will reset it.
@@ -1527,6 +1571,20 @@ i40evf_dev_configure(struct rte_eth_dev *dev)
ad->tx_simple_allowed = true;
ad->tx_vec_allowed = true;
 
+   if (num_queue_pairs > vf->vsi_res->num_queue_pairs) {
+   int ret = 0;
+
+   PMD_DRV_LOG(INFO, "change queue pairs from %u to %u",
+   vf->vsi_res->num_queue_pairs, num_queue_pairs);
+   ret = i40evf_request_queues(dev, num_queue_pairs);
+   if (ret != 0)
+   return ret;
+
+   ret = i40evf_dev_reset(dev);
+   if (ret != 0)
+   return ret;
+   }
+
return i40evf_init_vlan(dev);
 }
 
@@ -2145,8 +2203,8 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct 
rte_eth_dev_info *dev_info)
 {
struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
 
-   dev_info->max_rx_queues = vf->vsi_res->num_queue_pairs;
-   dev_info->max_tx_queues = vf->vsi_res->num_queue_pairs;
+   dev_info->max_rx_queues = I40E_MAX_QP_NUM_PER_VF;
+   dev_info->max_tx_queues = I40E_MAX_QP_NUM_PER_VF;
dev_info->min_rx_bufsize = I40E_BUF_SIZE_MIN;
dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX;
dev_info->hash_key_size = (I40E_VFQF_HKEY_MAX_INDEX + 1) * 
sizeof(uint32_t);
-- 
2.17.1
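
(For illustration only: roughly what the "port stop all" / "port config" /
"port start all" sequence mentioned in the commit message does from the
application side; queue re-setup and most error handling are trimmed, and the
function itself is hypothetical.)

#include <rte_ethdev.h>

/* Reconfigure a running i40e VF port with more queues; the PMD turns
 * the larger queue count into a VIRTCHNL_OP_REQUEST_QUEUES message and
 * the VF is reset by the PF before traffic can restart.
 */
static int
grow_vf_queues(uint16_t port_id, uint16_t nb_queues,
	       const struct rte_eth_conf *conf)
{
	int ret;

	rte_eth_dev_stop(port_id);

	ret = rte_eth_dev_configure(port_id, nb_queues, nb_queues, conf);
	if (ret < 0)
		return ret;

	/* ... re-run rte_eth_rx_queue_setup()/rte_eth_tx_queue_setup()
	 * for each of the nb_queues queues here ...
	 */

	return rte_eth_dev_start(port_id);
}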



Re: [dpdk-dev] [RFC v2] /net: memory interface (memif)

2018-12-16 Thread Honnappa Nagarahalli
> >
> >> On Dec 10, 2018, at 4:06 AM, Jakub Grajciar  wrote:
> >
> > I do not like being the coding style police, but that is most of the 
> > comments
> here and I will try to test this one later this week. Plus I am sure I missed 
> some
> style problems, if you have not read the coding style for DPDK please have a
> read.
> >
> > http://doc.dpdk.org/guides/contributing/coding_style.html
> >
> > One comment, why did you include all of the code to handle memif instead
> of including the libmemif.a from VPP. I worry if libmemif is changed then we
> have a breakage. I do not mind the PMD being standalone and I do like not
> having the dependence.
Just for my understanding, do you mean to say we could include the libmemif.a 
as a binary in DPDK?

IMO, I would like to view DPDK as the device abstraction and VPP as the 
protocol stack built on top. From this perspective, it is good to have 
standalone memif in DPDK.

> >
> > As I did not dive into the code much it does look reasonable and I hope to
> give it a try later this week.
> >>
> 
> A couple more items, do you plan on writing the documentation for the PMD
> and provide an example program?
+1, would be good to have a cover letter.
I would like to run this on Arm platforms, mostly in the beginning of Jan.

> 
> Regards,
> Keith



[dpdk-dev] [PATCH v3] eventdev: fix eth Tx adapter queue count checks

2018-12-16 Thread Nikhil Rao
rte_event_eth_tx_adapter_queue_add() - add a check
that returns an error if the ethdev has zero Tx queues
configured.

rte_event_eth_tx_adapter_queue_del() - remove the
checks for the ethdev queue count; instead, check the
queues added to the adapter, which may be different
from the current ethdev queue count.
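
For illustration, a small sketch (not part of the patch) of a caller that hits
both checks; the adapter id and ethdev id are assumed to have been created and
configured elsewhere.

#include <errno.h>
#include <rte_event_eth_tx_adapter.h>

/* Sketch only: queue id -1 means "all queues" for both calls. */
static int
exercise_txa_queue_checks(uint8_t id, uint16_t eth_dev_id)
{
	/* With zero Tx queues configured on the ethdev, queue_add()
	 * now returns -EINVAL instead of silently succeeding. */
	int ret = rte_event_eth_tx_adapter_queue_add(id, eth_dev_id, -1);

	if (ret == -EINVAL)
		return ret;

	/* queue_del() with -1 now walks only the queues previously added
	 * to the adapter, even if the ethdev queue count has changed. */
	return rte_event_eth_tx_adapter_queue_del(id, eth_dev_id, -1);
}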

Fixes: a3bbf2e09756 ("eventdev: add eth Tx adapter implementation")
Cc: sta...@dpdk.org
Signed-off-by: Nikhil Rao 
---

v2:
- none (missed adding changes, now in v3)
v3:
- enclosed macro parameter queue in ()


 lib/librte_eventdev/rte_event_eth_tx_adapter.c | 54 ++
 1 file changed, 37 insertions(+), 17 deletions(-)

diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c 
b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
index ccf8a75..67216a3 100644
--- a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
@@ -59,6 +59,20 @@
return -EINVAL; \
 } while (0)
 
+#define TXA_CHECK_TXQ(dev, queue) \
+do {\
+   if ((dev)->data->nb_tx_queues == 0) { \
+   RTE_EDEV_LOG_ERR("No tx queues configured"); \
+   return -EINVAL; \
+   } \
+   if ((queue) != -1 && \
+   (uint16_t)(queue) >= (dev)->data->nb_tx_queues) { \
+   RTE_EDEV_LOG_ERR("Invalid tx queue_id %" PRIu16, \
+   (uint16_t)(queue)); \
+   return -EINVAL; \
+   } \
+} while (0)
+
 /* Tx retry callback structure */
 struct txa_retry {
/* Ethernet port id */
@@ -795,20 +809,35 @@ static int txa_service_queue_del(uint8_t id,
struct rte_eth_dev_tx_buffer *tb;
uint16_t port_id;
 
+   txa = txa_service_id_to_data(id);
+   port_id = dev->data->port_id;
+
if (tx_queue_id == -1) {
-   uint16_t i;
-   int ret = -1;
+   uint16_t i, q, nb_queues;
+   int ret = 0;
 
-   for (i = 0; i < dev->data->nb_tx_queues; i++) {
-   ret = txa_service_queue_del(id, dev, i);
-   if (ret != 0)
-   break;
+   nb_queues = txa->nb_queues;
+   if (nb_queues == 0)
+   return 0;
+
+   i = 0;
+   q = 0;
+   tqi = txa->txa_ethdev[port_id].queues;
+
+   while (i < nb_queues) {
+
+   if (tqi[q].added) {
+   ret = txa_service_queue_del(id, dev, q);
+   if (ret != 0)
+   break;
+   }
+   i++;
+   q++;
}
return ret;
}
 
txa = txa_service_id_to_data(id);
-   port_id = dev->data->port_id;
 
tqi = txa_service_queue(txa, port_id, tx_queue_id);
if (tqi == NULL || !tqi->added)
@@ -999,11 +1028,7 @@ static int txa_service_queue_del(uint8_t id,
TXA_CHECK_OR_ERR_RET(id);
 
eth_dev = &rte_eth_devices[eth_dev_id];
-   if (queue != -1 && (uint16_t)queue >= eth_dev->data->nb_tx_queues) {
-   RTE_EDEV_LOG_ERR("Invalid tx queue_id %" PRIu16,
-   (uint16_t)queue);
-   return -EINVAL;
-   }
+   TXA_CHECK_TXQ(eth_dev, queue);
 
caps = 0;
if (txa_dev_caps_get(id))
@@ -1034,11 +1059,6 @@ static int txa_service_queue_del(uint8_t id,
TXA_CHECK_OR_ERR_RET(id);
 
eth_dev = &rte_eth_devices[eth_dev_id];
-   if (queue != -1 && (uint16_t)queue >= eth_dev->data->nb_tx_queues) {
-   RTE_EDEV_LOG_ERR("Invalid tx queue_id %" PRIu16,
-   (uint16_t)queue);
-   return -EINVAL;
-   }
 
caps = 0;
 
-- 
1.8.3.1



Re: [dpdk-dev] [RFC v2] /net: memory interface (memif)

2018-12-16 Thread Honnappa Nagarahalli
> > >> On Dec 10, 2018, at 4:06 AM, Jakub Grajciar  wrote:
> > >
> > > I do not like being the coding style police, but that is most of the
> > > comments
> > here and I will try to test this one later this week. Plus I am sure I
> > missed some style problems, if you have not read the coding style for
> > DPDK please have a read.
> > >
> > > http://doc.dpdk.org/guides/contributing/coding_style.html
> > >
> > > One comment, why did you include all of the code to handle memif
> > > instead
> > of including the libmemif.a from VPP. I worry if libmemif is changed
> > then we have a breakage. I do not mind the PMD being standalone and I
> > do like not having the dependence.
> Just for my understanding, do you mean to say we could include the
> libmemif.a as a binary in DPDK?
> 
> IMO, I would like to view DPDK as the device abstraction and VPP as the
> protocol stack built on top. From this perspective, it is good to have
> standalone memif in DPDK.
> 
> > >
> > > As I did not dive into the code much it does look reasonable and I
> > > hope to
> > give it a try later this week.
> > >>
> >
> > A couple more items, do you plan on writing the documentation for the
> > PMD and provide an example program?
> +1, would be good to have a cover letter.
Please ignore, I already see V3 having some documentation.

> I would like to run this on Arm platforms, mostly in the beginning of Jan.
> 
> >
> > Regards,
> > Keith



Re: [dpdk-dev] [PATCH v4 16/32] net/ice: support device initialization

2018-12-16 Thread Lu, Wenzhuo
Hi Ferruh,

> -Original Message-
> From: Yigit, Ferruh
> Sent: Friday, December 14, 2018 5:46 PM
> To: Lu, Wenzhuo ; dev@dpdk.org
> Cc: Yang, Qiming ; Li, Xiaoyun
> ; Wu, Jingjing 
> Subject: Re: [dpdk-dev] [PATCH v4 16/32] net/ice: support device
> initialization
> 
> On 12/14/2018 8:35 AM, Wenzhuo Lu wrote:
> > +ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
> > +CFLAGS_BASE_DRIVER = -wd593 -wd188
> 
> This is causing following warning for icc [1], new icc versions require the
> syntax "-diag-disable ###" instead of "-wd###", please check [2].
> 
> 
> [1]
> command line remark #10010: option '-wd593' is deprecated and will be
> removed in a future release. See '-help deprecated'
> 
> [2]
> Commit f16d0b36f816 ("drivers/net: fix icc deprecated parameter warning")
> 
> 
> $ icc --version
> icc (ICC) 19.0.1.144 20181018
Thanks for the comments. Will change it.


Re: [dpdk-dev] [PATCH v1 0/2] reimplement rwlock and add relevant perf test case

2018-12-16 Thread Honnappa Nagarahalli
Adding other platform maintainers as it affects all platforms.

> -Original Message-
> From: Gavin Hu (Arm Technology China) 
> Sent: Thursday, December 13, 2018 7:30 PM
> To: Stephen Hemminger ; Joyce Kong (Arm
> Technology China) 
> Cc: dev@dpdk.org; nd ; tho...@monjalon.net;
> jerin.ja...@caviumnetworks.com; hemant.agra...@nxp.com; Honnappa
> Nagarahalli 
> Subject: RE: [dpdk-dev] [PATCH v1 0/2] reimplement rwlock and add relevant
> perf test case
> 
> Hi Stephen,
> 
> Thanks for your comment and sharing the link!
> We are looking into it and it may take more time for performance profiling.
> 
> Best Regards,
> Gavin
> 
> > -Original Message-
> > From: Stephen Hemminger 
> > Sent: Thursday, December 13, 2018 1:27 PM
> > To: Joyce Kong (Arm Technology China) 
> > Cc: dev@dpdk.org; nd ; tho...@monjalon.net;
> > jerin.ja...@caviumnetworks.com; hemant.agra...@nxp.com; Honnappa
> > Nagarahalli ; Gavin Hu (Arm Technology
> > China) 
> > Subject: Re: [dpdk-dev] [PATCH v1 0/2] reimplement rwlock and add
> > relevant perf test case
> >
> > On Thu, 13 Dec 2018 11:37:43 +0800
> > Joyce Kong  wrote:
> >
> > > v1: reimplement rwlock with __atomic builtins, and add a rwlock perf test
> > > on all available cores to benchmark the improvement.
> > >
> > > We tested the patches on three arm64 platforms, ThunderX2 gained 20%
> > > performance, Qualcomm gained 36% and the 4-Cortex-A72 Marvell
> > MACCHIATObin gained 19.6%.
> > > Below is the detailed test result on ThunderX2:
> > >
> > > *** rwlock_autotest without __atomic builtins *** Rwlock Perf Test
> > > on
> > > 128 cores...
> > > Core [0] count = 281
> > > Core [1] count = 252
> > > Core [2] count = 290
> > > Core [3] count = 259
> > > Core [4] count = 287
> > > ...
> > > Core [209] count = 3
> > > Core [210] count = 31
> > > Core [211] count = 120
> > > Total count = 18537
> > >
> > > *** rwlock_autotest with __atomic builtins *** Rwlock Perf Test on
> > > 128 cores...
> > > Core [0] count = 346
> > > Core [1] count = 355
> > > Core [2] count = 259
> > > Core [3] count = 285
> > > Core [4] count = 320
> > > ...
> > > Core [209] count = 2
> > > Core [210] count = 23
> > > Core [211] count = 63
> > > Total count = 22194
> > >
> > > Gavin Hu (1):
> > >   rwlock: reimplement with __atomic builtins
> > >
> > > Joyce Kong (1):
> > >   test/rwlock: add perf test case
> > >
> > >  lib/librte_eal/common/include/generic/rte_rwlock.h | 16 ++---
> > >  test/test/test_rwlock.c| 71 
> > > ++
> > >  2 files changed, 79 insertions(+), 8 deletions(-)
> > >
> >
> > Did you consider using a better algorithm not just better primitives.
> > See https://locklessinc.com/articles/locks/ for a more complete
> > discussion of alternatives like ticket locks.
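
For reference, a rough standalone sketch of the kind of __atomic-based
read-lock acquire being discussed; the structure and field name below are
assumptions for illustration, not the actual patch.

#include <stdint.h>

typedef struct {
	volatile int32_t cnt;	/* < 0 when write-locked, otherwise reader count */
} demo_rwlock_t;

static inline void
demo_read_lock(demo_rwlock_t *rwl)
{
	int32_t x;

	for (;;) {
		x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
		if (x < 0)	/* a writer holds the lock, retry */
			continue;
		/* acquire semantics only on the successful increment */
		if (__atomic_compare_exchange_n(&rwl->cnt, &x, x + 1, 1,
				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
			break;
	}
}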


Re: [dpdk-dev] [PATCH v5 0/2] Support request more queues

2018-12-16 Thread Zhang, Qi Z



> -Original Message-
> From: Yan, Zhirun
> Sent: Monday, December 17, 2018 7:11 PM
> To: dev@dpdk.org; Zhang, Qi Z 
> Cc: Yan, Zhirun 
> Subject: [PATCH v5 0/2] Support request more queues
> 
> V5
> -  modify the loop conditions (ret == I40EVF_MSG_ERR || ret ==
> I40EVF_MSG_CMD) if there is no free queue available, just end the loop.
> 
> DPDK VF send VIRTCHNL_OP_REQUEST_QUEUES to kernel PF or DPDK VF for
> requesting more queues, then PF will allocate more queues.
> 
> Zhirun Yan (2):
>   net/i40e: support VF request more queues
>   net/i40e: support PF respond VF request more queues
> 
>  drivers/net/i40e/i40e_ethdev_vf.c | 62 -
>  drivers/net/i40e/i40e_pf.c| 65
> +++
>  2 files changed, 125 insertions(+), 2 deletions(-)
> 
> --
> 2.17.1

Acked-by: Qi Zhang 

Applied to dpdk-next-net-intel.

Thanks
Qi



Re: [dpdk-dev] [RFC] cryptodev/asymm: propose changes to modexp and modinv API

2018-12-16 Thread Verma, Shally
Hi Arek,

Sorry for the late response. Please see my responses inline.

From: Kusztal, ArkadiuszX  
Sent: 13 December 2018 01:56
To: Verma, Shally 
Cc: dev@dpdk.org; Trahe, Fiona ; Doherty, Declan 

Subject: [RFC] cryptodev/asymm: propose changes to modexp and modinv API

External Email
Hi Shally,

I'm implementing a crypto asymmetric PMD and have some concerns about the API 
which I 
will work through over the next few months. Starting with modexp and modinv I 
have
the following questions / suggestions:

  rte_crypto_asym.h:233
 rte_crypto_param modulus;
 /**< modulus
 * Prime modulus of the modexp transform operation 
in octet-string
 * network byte order format.
 */
 [AK] - Why prime? RSA for example use semi-prime 
or "RSA multi-prime".
 It should be just any positive integer.
[Shally] Hmm.. yes, you're right. By its purpose it is a semi-prime input, so 
prime shouldn't be mentioned here.
 [AK] - If session API layer should check if it is 
non-zero and set flag accordingly.
[Shally] Sorry, I didn't get this.. which flag do you mean here? If a modulus 
value of 0 is passed, it should be considered INVALID_PARAM.
 
  rte_crypto_asym.h:253
 rte_crypto_param modulus;
 /**<
 * Pointer to the prime modulus data for modular
 * inverse operation in octet-string network byte
 * order format.
 */
 [AK] - Same situation as for mod exp. Just any 
number.
[Shally] Yes, it should be reworded as modulus data instead of *prime* modulus 
data.

 For example when using with RSA Carmichael and 
Euler Totient function will even
 have composite factors. 
  
  rte_crypto_asym.h:323
 struct rte_crypto_mod_op_param {
 [AK] - There should be a result field. Its 
size should be equal to the size
 of the modulus. The same applies to mod mult inverse. It 
should be the driver's responsibility to check that the result
 will not overflow.
[Shally] So these are in-place operations. Output will be written back to the 
base param. It also implies the length of the allocated array should be >= the 
modulus length, which is passed in the session param.

 [AK] - Any particular reason modulus and exponent 
are in the session? Not saying
 it is wrong, but is it for DH/RSA use cases only?
[Shally] No, that's not the intent. For RSA and DH, respective xforms have been 
defined. It is kept in the session envisioning that modulus and exponent won't 
change frequently across operations, only the base value. 
So once the context is loaded with modulus and exponent, the app can call 
modexp on different base values.
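
To make the intended split concrete, here is a rough sketch; the field names
follow my reading of rte_crypto_asym.h and are assumptions, not verified code.
Modulus and exponent go into the xform used to create the session, while each
op only carries the base, which is overwritten with the result.

#include <rte_cryptodev.h>

/* Hypothetical helper, for illustration only. */
static void
fill_modex(struct rte_crypto_asym_xform *xform, struct rte_crypto_op *op,
	   uint8_t *n, size_t n_len, uint8_t *e, size_t e_len,
	   uint8_t *base, size_t base_len)
{
	/* session-time parameters: modulus and exponent */
	xform->next = NULL;
	xform->xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX;
	xform->modex.modulus.data = n;
	xform->modex.modulus.length = n_len;
	xform->modex.exponent.data = e;
	xform->modex.exponent.length = e_len;

	/* per-op parameter: only the base; the result is written back in
	 * place, so the base array must be allocated up to n_len bytes */
	op->asym->modex.base.data = base;
	op->asym->modex.base.length = base_len;
}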

 rte_crypto.h:39
 enum rte_crypto_op_status {
 [AK] - There will be many more status options in 
asymmetric,
 could we probably create new one for asymmetric 
crypto? Even if asymmetric and symmetric
 overlap?
 For mod exp, mod inv potentially it will be:
    DIVIDING_BY_ZERO_ERROR, INVERSE_NOT_EXISTS_ERROR... 
    
[Shally] So far RTE_CRYPTO_OP_STATUS_INVALID_PARAM has been sufficient for such 
cases. Do you have any use-cases where you need a specific error code to 
indicate an asym-specific failure?

  rte_crypto_asym.h:33
 size_t length;
 /**< length of data in bytes */
 [AK] - Is it guaranteed to be length of actual 
data, not allocated memory (i mean no leading 0ed bytes), so the most 
significant bit will be in data[0]?
[Shally] It should be the length of the actual data, not the length of the 
memory allocated to the array. 
However, it might create a bit of confusion for the modular exponentiation op 
param, as it expects the length passed to describe the actual data length in 
the base array, while the array itself should be allocated up to the modulus 
length.

 [AK] - could it be uint16/32_t instead as size_t 
can have different sizes in different implementations, uint16_t should be enough
 for all algorithms big integer sizes 
[Shally] No hard choices here though. But size_t would never be smaller than 
uint16_t, so it is guaranteed to be large enough on any machine.
  
  rte_crypto_asym.h:74, 250, 257, 351
 /**< Modular Inverse
  

[dpdk-dev] [PATCH] gro: fix overflow of payload length calculation

2018-12-16 Thread Jiayu Hu
When the packet length is smaller than the header length,
the calculated payload length overflows and results in
incorrect reassembly behavior.
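
As an illustration of the wrap (not code from the patch; the values are made
up), the check only works once the payload length can go negative:

#include <stdint.h>

/* Illustrative only: returns 1 if the packet should be rejected. */
static int
payload_len_is_invalid(uint16_t pkt_len, uint16_t hdr_len)
{
	uint16_t dl_u16 = pkt_len - hdr_len;		/* 50 - 54 -> 65532 */
	int32_t dl_s32 = (int32_t)pkt_len - hdr_len;	/* 50 - 54 -> -4 */

	(void)dl_u16;	/* the unsigned value cannot be range-checked */
	return dl_s32 <= 0;
}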

Fixes: 1e4cf4d6d4fb ("gro: cleanup")
Fixes: 9e0b9d2ec0f4 ("gro: support VxLAN GRO")
Cc: sta...@dpdk.org

Signed-off-by: Jiayu Hu 
---
 lib/librte_gro/gro_tcp4.c   | 3 ++-
 lib/librte_gro/gro_vxlan_tcp4.c | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/lib/librte_gro/gro_tcp4.c b/lib/librte_gro/gro_tcp4.c
index 2c0f35c..2fe9aab 100644
--- a/lib/librte_gro/gro_tcp4.c
+++ b/lib/librte_gro/gro_tcp4.c
@@ -198,7 +198,8 @@ gro_tcp4_reassemble(struct rte_mbuf *pkt,
struct ipv4_hdr *ipv4_hdr;
struct tcp_hdr *tcp_hdr;
uint32_t sent_seq;
-   uint16_t tcp_dl, ip_id, hdr_len, frag_off;
+   int32_t tcp_dl;
+   uint16_t ip_id, hdr_len, frag_off;
uint8_t is_atomic;
 
struct tcp4_flow_key key;
diff --git a/lib/librte_gro/gro_vxlan_tcp4.c b/lib/librte_gro/gro_vxlan_tcp4.c
index ca86f01..955ae4b 100644
--- a/lib/librte_gro/gro_vxlan_tcp4.c
+++ b/lib/librte_gro/gro_vxlan_tcp4.c
@@ -295,7 +295,8 @@ gro_vxlan_tcp4_reassemble(struct rte_mbuf *pkt,
struct udp_hdr *udp_hdr;
struct vxlan_hdr *vxlan_hdr;
uint32_t sent_seq;
-   uint16_t tcp_dl, frag_off, outer_ip_id, ip_id;
+   int32_t tcp_dl;
+   uint16_t frag_off, outer_ip_id, ip_id;
uint8_t outer_is_atomic, is_atomic;
 
struct vxlan_tcp4_flow_key key;
-- 
2.7.4



[dpdk-dev] [PATCH] net/ixgbe: enable x550 flexible byte filter

2018-12-16 Thread Zhao Wei
Users need the flexible byte filter on x550.
This patch enables it.

Fixes: 82fb702077f6 ("ixgbe: support new flow director modes for X550")
Fixes: 11777435c727 ("net/ixgbe: parse flow director filter")

Signed-off-by: Wei Zhao 
---
 drivers/net/ixgbe/ixgbe_fdir.c |   9 +-
 drivers/net/ixgbe/ixgbe_flow.c | 274 -
 2 files changed, 195 insertions(+), 88 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index e559f0f..deb9a21 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -307,6 +307,8 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev)
/* flex byte mask */
if (info->mask.flex_bytes_mask == 0)
fdirm |= IXGBE_FDIRM_FLEX;
+   if (info->mask.src_ipv4_mask == 0 && info->mask.dst_ipv4_mask == 0)
+   fdirm |= IXGBE_FDIRM_L3P;
 
IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
 
@@ -356,8 +358,7 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev)
/* mask VM pool and DIPv6 since there are currently not supported
 * mask FLEX byte, it will be set in flex_conf
 */
-   uint32_t fdirm = IXGBE_FDIRM_POOL | IXGBE_FDIRM_DIPv6 |
-IXGBE_FDIRM_FLEX;
+   uint32_t fdirm = IXGBE_FDIRM_POOL | IXGBE_FDIRM_DIPv6;
uint32_t fdiripv6m;
enum rte_fdir_mode mode = dev->data->dev_conf.fdir_conf.mode;
uint16_t mac_mask;
@@ -385,6 +386,10 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev)
return -EINVAL;
}
 
+   /* flex byte mask */
+   if (info->mask.flex_bytes_mask == 0)
+   fdirm |= IXGBE_FDIRM_FLEX;
+
IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
 
	fdiripv6m = ((u32)0xFFFFU << IXGBE_FDIRIP6M_DIPM_SHIFT);
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index f0fafeb..dc210c5 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -1622,9 +1622,9 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
const struct rte_flow_item_raw *raw_mask;
const struct rte_flow_item_raw *raw_spec;
uint8_t j;
-
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
+
if (!pattern) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM_NUM,
@@ -1651,9 +1651,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 * value. So, we need not do anything for the not provided fields later.
 */
memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
-   memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
-   rule->mask.vlan_tci_mask = 0;
-   rule->mask.flex_bytes_mask = 0;
+   memset(&rule->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
 
/**
 * The first not void item should be
@@ -1665,7 +1663,8 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
item->type != RTE_FLOW_ITEM_TYPE_TCP &&
item->type != RTE_FLOW_ITEM_TYPE_UDP &&
-   item->type != RTE_FLOW_ITEM_TYPE_SCTP) {
+   item->type != RTE_FLOW_ITEM_TYPE_SCTP &&
+   item->type != RTE_FLOW_ITEM_TYPE_RAW) {
memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2201,6 +2200,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
}
 
raw_mask = item->mask;
+   rule->b_mask = TRUE;
 
/* check mask */
if (raw_mask->relative != 0x1 ||
@@ -2217,6 +2217,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
}
 
raw_spec = item->spec;
+   rule->b_spec = TRUE;
 
/* check spec */
if (raw_spec->relative != 0 ||
@@ -2323,6 +2324,8 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr 
*attr,
const struct rte_flow_item_eth *eth_mask;
const struct rte_flow_item_vlan *vlan_spec;
const struct rte_flow_item_vlan *vlan_mask;
+   const struct rte_flow_item_raw *raw_mask;
+   const struct rte_flow_item_raw *raw_spec;
uint32_t j;
 
if (!pattern) {
@@ -2351,8 +2354,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr 
*attr,
 * value. So, we need not do anything for the not provided fields later.
 */
memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
-   memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
-   rule->mask.vlan_tci_mask = 0;
+   memset(&rule->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
 
/**
 * The first not void item should be
@@ -2364,7 +2366,8 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr 
*attr,
item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
it

Re: [dpdk-dev] [PATCH] net/ixgbe: enable x550 flexible byte filter

2018-12-16 Thread Zhao1, Wei
Add yuan.p...@intel.com into mail loop

> -Original Message-
> From: Zhao1, Wei
> Sent: Monday, December 17, 2018 1:53 PM
> To: dev@dpdk.org
> Cc: adrien.mazarg...@6wind.com; sta...@dpdk.org; Lu, Wenzhuo
> ; Zhang, Qi Z ; Zhao1, Wei
> 
> Subject: [PATCH] net/ixgbe: enable x550 flexible byte filter
> 
> There is need for users to use flexible byte filter on x550.
> This patch enable it.
> 
> Fixes: 82fb702077f6 ("ixgbe: support new flow director modes for X550")
> Fixes: 11777435c727 ("net/ixgbe: parse flow director filter")
> 
> Signed-off-by: Wei Zhao 
> ---
>  drivers/net/ixgbe/ixgbe_fdir.c |   9 +-
>  drivers/net/ixgbe/ixgbe_flow.c | 274 --
> ---
>  2 files changed, 195 insertions(+), 88 deletions(-)
> 
> diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
> index e559f0f..deb9a21 100644
> --- a/drivers/net/ixgbe/ixgbe_fdir.c
> +++ b/drivers/net/ixgbe/ixgbe_fdir.c
> @@ -307,6 +307,8 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev)
>   /* flex byte mask */
>   if (info->mask.flex_bytes_mask == 0)
>   fdirm |= IXGBE_FDIRM_FLEX;
> + if (info->mask.src_ipv4_mask == 0 && info->mask.dst_ipv4_mask ==
> 0)
> + fdirm |= IXGBE_FDIRM_L3P;
> 
>   IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
> 
> @@ -356,8 +358,7 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev)
>   /* mask VM pool and DIPv6 since there are currently not supported
>* mask FLEX byte, it will be set in flex_conf
>*/
> - uint32_t fdirm = IXGBE_FDIRM_POOL | IXGBE_FDIRM_DIPv6 |
> -  IXGBE_FDIRM_FLEX;
> + uint32_t fdirm = IXGBE_FDIRM_POOL | IXGBE_FDIRM_DIPv6;
>   uint32_t fdiripv6m;
>   enum rte_fdir_mode mode = dev->data->dev_conf.fdir_conf.mode;
>   uint16_t mac_mask;
> @@ -385,6 +386,10 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev)
>   return -EINVAL;
>   }
> 
> + /* flex byte mask */
> + if (info->mask.flex_bytes_mask == 0)
> + fdirm |= IXGBE_FDIRM_FLEX;
> +
>   IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
> 
>   fdiripv6m = ((u32)0xU << IXGBE_FDIRIP6M_DIPM_SHIFT); diff --
> git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c index
> f0fafeb..dc210c5 100644
> --- a/drivers/net/ixgbe/ixgbe_flow.c
> +++ b/drivers/net/ixgbe/ixgbe_flow.c
> @@ -1622,9 +1622,9 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev
> *dev,
>   const struct rte_flow_item_raw *raw_mask;
>   const struct rte_flow_item_raw *raw_spec;
>   uint8_t j;
> -
>   struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> 
> +
>   if (!pattern) {
>   rte_flow_error_set(error, EINVAL,
>   RTE_FLOW_ERROR_TYPE_ITEM_NUM,
> @@ -1651,9 +1651,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev
> *dev,
>* value. So, we need not do anything for the not provided fields
> later.
>*/
>   memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> - memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
> - rule->mask.vlan_tci_mask = 0;
> - rule->mask.flex_bytes_mask = 0;
> + memset(&rule->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
> 
>   /**
>* The first not void item should be
> @@ -1665,7 +1663,8 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev
> *dev,
>   item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
>   item->type != RTE_FLOW_ITEM_TYPE_TCP &&
>   item->type != RTE_FLOW_ITEM_TYPE_UDP &&
> - item->type != RTE_FLOW_ITEM_TYPE_SCTP) {
> + item->type != RTE_FLOW_ITEM_TYPE_SCTP &&
> + item->type != RTE_FLOW_ITEM_TYPE_RAW) {
>   memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
>   rte_flow_error_set(error, EINVAL,
>   RTE_FLOW_ERROR_TYPE_ITEM,
> @@ -2201,6 +2200,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev
> *dev,
>   }
> 
>   raw_mask = item->mask;
> + rule->b_mask = TRUE;
> 
>   /* check mask */
>   if (raw_mask->relative != 0x1 ||
> @@ -2217,6 +2217,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev
> *dev,
>   }
> 
>   raw_spec = item->spec;
> + rule->b_spec = TRUE;
> 
>   /* check spec */
>   if (raw_spec->relative != 0 ||
> @@ -2323,6 +2324,8 @@ ixgbe_parse_fdir_filter_tunnel(const struct
> rte_flow_attr *attr,
>   const struct rte_flow_item_eth *eth_mask;
>   const struct rte_flow_item_vlan *vlan_spec;
>   const struct rte_flow_item_vlan *vlan_mask;
> + const struct rte_flow_item_raw *raw_mask;
> + const struct rte_flow_item_raw *raw_spec;
>   uint32_t j;
> 
>   if (!pattern) {
> @@ -2351,8 +2354,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct
> rte_flow_attr *attr,
>* value. So, we need not do anything for the not provided fields
> later.
>*/
>   memset(rule, 0, sizeof(str

Re: [dpdk-dev] [PATCH v4 30/32] net/ice: support basic RX/TX

2018-12-16 Thread Lu, Wenzhuo
Hi Ferruh,


> -Original Message-
> From: Yigit, Ferruh
> Sent: Friday, December 14, 2018 9:00 PM
> To: Lu, Wenzhuo ; dev@dpdk.org
> Cc: Yang, Qiming ; Li, Xiaoyun
> ; Wu, Jingjing ; Thomas
> Monjalon 
> Subject: Re: [dpdk-dev] [PATCH v4 30/32] net/ice: support basic RX/TX
> 
> On 12/14/2018 8:35 AM, Wenzhuo Lu wrote:
> > Signed-off-by: Wenzhuo Lu 
> > Signed-off-by: Qiming Yang 
> > Signed-off-by: Xiaoyun Li 
> > Signed-off-by: Jingjing Wu 
> 
> <...>
> 
> > +
> > +   /* Check to make sure the last descriptor to clean is done */
> > +   desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
> > +   if (!(txd[desc_to_clean_to].cmd_type_offset_bsz &
> > +   rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
> > +   PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done
> "
> > +   "(port=%d queue=%d) value=0x%lx\n",
> > +   desc_to_clean_to,
> > +   txq->port_id, txq->queue_id,
> > +   txd[desc_to_clean_to].cmd_type_offset_bsz);
> 
> Causing build error for i686 [1], should use PRIx64 for 64bit variables.
> 
> Perhaps we should create a rule in checkpatch to check and warn %lx %lu
> formats `git grep -n '%l[xud]' drivers/net/ice/` shows only this occurrence in
> 'ice' but there are more in other drivers...
> 
> 
> [1]
> In file included from .../i686-native-linuxapp-gcc/include/rte_ethdev.h:150,
>  from 
> .../i686-native-linuxapp-gcc/include/rte_ethdev_driver.h:18,
>  from .../drivers/net/ice/ice_lan_rxtx.c:5:
> .../drivers/net/ice/ice_lan_rxtx.c: In function ‘ice_xmit_cleanup’:
> .../drivers/net/ice/ice_lan_rxtx.c:1776:46: error: format ‘%lx’ expects
> argument of type ‘long unsigned int’, but argument 8 has type ‘uint64_t’
> {aka ‘volatile long long unsigned int’} [-Werror=format=]
>  txd[desc_to_clean_to].cmd_type_offset_bsz);
>   ^
> .../i686-native-linuxapp-gcc/include/rte_log.h:322:25: note: in definition of
> macro ‘RTE_LOG’
> RTE_LOGTYPE_ ## t, # t ": " __VA_ARGS__)
>  ^
> .../drivers/net/ice/ice_lan_rxtx.c:1772:3: note: in expansion of macro
> ‘PMD_TX_FREE_LOG’
>PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
>^~~
Thanks for the check. Will correct it.
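
For reference, a sketch of how the quoted log call could be rewritten with the
portable format specifier (assuming the surrounding variables from the quoted
hunk):

#include <inttypes.h>

	PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
			"(port=%d queue=%d) value=0x%" PRIx64 "\n",
			desc_to_clean_to,
			txq->port_id, txq->queue_id,
			txd[desc_to_clean_to].cmd_type_offset_bsz);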


[dpdk-dev] raw pattern for rte_flow

2018-12-16 Thread Zhao1, Wei
Hi, Adrien
We need to enable the flexible byte filter for ixgbe, but the PMD cannot work 
well with it.
The problem is that in a RTE_FLOW_ITEM_TYPE_RAW pattern, the key parameter 
(const uint8_t *pattern) in struct rte_flow_item_raw that we get from the 
rte_flow command line holds the ASCII characters, not the actual byte values.
For example, if we type in the following command, the PMD will get “0x6463” for 
“cd” instead of the intended byte 0xcd, which makes it hard for the filter to 
match specific values.
This is also related to all types of NICs, not only ixgbe.


Flow create 0 ingress pattern raw relative spec 0 relative mask 1 search spec 0 
search mask 1 offset spec 54 offset mask 0x limit spec 0 limit mask 
0x pattern is cd / end actions queue index 2 / end

struct rte_flow_item_raw {
uint32_t relative:1; /**< Look for pattern after the previous 
item. */
uint32_t search:1; /**< Search pattern from offset (see also 
limit). */
uint32_t reserved:30; /**< Reserved, must be set to zero. */
int32_t offset; /**< Absolute or relative offset for pattern. */
uint16_t limit; /**< Search area limit for start of pattern. */
uint16_t length; /**< Pattern length. */
const uint8_t *pattern; /**< Byte string to look for. */
};


Re: [dpdk-dev] raw pattern for rte_flow

2018-12-16 Thread Zhao1, Wei
More info on this problem:
The old filter types can get the actual byte value from the CLI because there 
is a function xdigit2val(unsigned char c) for it.
Maybe the flow CLI also needs one.
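
For illustration, a hypothetical helper in the spirit of testpmd's
xdigit2val() that would turn the two ASCII characters "cd" into the single
byte 0xcd before it is stored in rte_flow_item_raw.pattern:

#include <ctype.h>
#include <stdint.h>

static uint8_t
hex_pair_to_byte(const char *s)
{
	uint8_t hi = isdigit((unsigned char)s[0]) ?
			(uint8_t)(s[0] - '0') :
			(uint8_t)(tolower((unsigned char)s[0]) - 'a' + 10);
	uint8_t lo = isdigit((unsigned char)s[1]) ?
			(uint8_t)(s[1] - '0') :
			(uint8_t)(tolower((unsigned char)s[1]) - 'a' + 10);

	return (uint8_t)((hi << 4) | lo);	/* "cd" -> 0xcd */
}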

From: Zhao1, Wei
Sent: Monday, December 17, 2018 3:06 PM
To: adrien.mazarg...@6wind.com
Cc: Peng, Yuan ; dev@dpdk.org; Zhang, Qi Z 
; Lu, Wenzhuo 
Subject: raw pattern for rte_flow

Hi, Adrien
We need to enable the flexible byte filter for ixgbe, but the PMD cannot work 
well with it.
The problem is that in a RTE_FLOW_ITEM_TYPE_RAW pattern, the key parameter 
(const uint8_t *pattern) in struct rte_flow_item_raw that we get from the 
rte_flow command line holds the ASCII characters, not the actual byte values.
For example, if we type in the following command, the PMD will get “0x6463” for 
“cd” instead of the intended byte 0xcd, which makes it hard for the filter to 
match specific values.
This is also related to all types of NICs, not only ixgbe.


Flow create 0 ingress pattern raw relative spec 0 relative mask 1 search spec 0 
search mask 1 offset spec 54 offset mask 0x limit spec 0 limit mask 
0x pattern is cd / end actions queue index 2 / end

struct rte_flow_item_raw {
uint32_t relative:1; /**< Look for pattern after the previous 
item. */
uint32_t search:1; /**< Search pattern from offset (see also 
limit). */
uint32_t reserved:30; /**< Reserved, must be set to zero. */
int32_t offset; /**< Absolute or relative offset for pattern. */
uint16_t limit; /**< Search area limit for start of pattern. */
uint16_t length; /**< Pattern length. */
const uint8_t *pattern; /**< Byte string to look for. */
};


Re: [dpdk-dev] [PATCH v2 2/3] eal: add new rte color definition

2018-12-16 Thread Pattan, Reshma


> -Original Message-
> From: Mattias Rönnblom [mailto:mattias.ronnb...@ericsson.com]
> Sent: Saturday, December 15, 2018 2:16 PM
> To: Ananyev, Konstantin ; Pattan, Reshma
> ; dev@dpdk.org; Dumitrescu, Cristian
> ; jerin.ja...@caviumnetworks.com; Singh,
> Jasvinder 
> Subject: Re: [dpdk-dev] [PATCH v2 2/3] eal: add new rte color definition
> 
> On 2018-12-15 00:35, Ananyev, Konstantin wrote:
> > Hi Reshma,
> >
> >> diff --git a/lib/librte_eal/common/include/rte_color.h
> >> b/lib/librte_eal/common/include/rte_color.h
> >> new file mode 100644
> >> index 0..f4387071b
> >> --- /dev/null
> >> +++ b/lib/librte_eal/common/include/rte_color.h
> >> @@ -0,0 +1,18 @@
> >> +/* SPDX-License-Identifier: BSD-3-Clause
> >> + * Copyright(c) 2018 Intel Corporation  */
> >> +
> >> +#ifndef _RTE_COLOR_H_
> >> +#define _RTE_COLOR_H_
> >> +
> >> +/**
> >> + * Color
> >> + */
> >> +enum rte_color {
> >> +  RTE_COLOR_GREEN = 0, /**< Green */
> >> +  RTE_COLOR_YELLOW, /**< Yellow */
> >> +  RTE_COLOR_RED, /**< Red */
> >> +  RTE_COLORS /**< Number of colors */ };
> >
> > Does it really belong to EAL?
> > Konstantin
> >
> 
> If this is supposed to be a generic type, we definitely need RTE_COLOR_BLACK
> as well, or RTE_COLOR_VERY_VERY_DARK_GREY.
> 

Ok, I can add RTE_COLOR_BLACK after RTE_COLOR_RED now. 

Thanks,
Reshma


[dpdk-dev] [PATCH v5 02/31] net/ice/base: add basic structures

2018-12-16 Thread Wenzhuo Lu
From: Paul M Stillwell Jr 

Add the structures required by the NIC.

Signed-off-by: Paul M Stillwell Jr 
---
 drivers/net/ice/base/ice_type.h | 869 
 1 file changed, 869 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_type.h

diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
new file mode 100644
index 000..256bf3f
--- /dev/null
+++ b/drivers/net/ice/base/ice_type.h
@@ -0,0 +1,869 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_TYPE_H_
+#define _ICE_TYPE_H_
+
+#define ETH_ALEN   6
+
+#define ETH_HEADER_LEN 14
+
+#define BIT(a) (1UL << (a))
+#define BIT_ULL(a) (1ULL << (a))
+
+#define BITS_PER_BYTE  8
+
+#define ICE_BYTES_PER_WORD 2
+#define ICE_BYTES_PER_DWORD	4
+#define ICE_MAX_TRAFFIC_CLASS  8
+
+
+#include "ice_status.h"
+#include "ice_hw_autogen.h"
+#include "ice_devids.h"
+#include "ice_osdep.h"
+#include "ice_controlq.h"
+#include "ice_lan_tx_rx.h"
+#include "ice_flex_type.h"
+#include "ice_protocol_type.h"
+
+static inline bool ice_is_tc_ena(ice_bitmap_t bitmap, u8 tc)
+{
+   return ice_is_bit_set(&bitmap, tc);
+}
+
+#ifndef DIV_64BIT
+#define DIV_64BIT(n, d) ((n) / (d))
+#endif /* DIV_64BIT */
+
+static inline u64 round_up_64bit(u64 a, u32 b)
+{
+   return DIV_64BIT(((a) + (b) / 2), (b));
+}
+
+static inline u32 ice_round_to_num(u32 N, u32 R)
+{
+   return ((((N) % (R)) < ((R) / 2)) ? (((N) / (R)) * (R)) :
+   ((((N) + (R) - 1) / (R)) * (R)));
+}
+
+/* Driver always calls main vsi_handle first */
+#define ICE_MAIN_VSI_HANDLE0
+
+/* Switch from ms to the 1usec global time (this is the GTIME resolution) */
+#define ICE_MS_TO_GTIME(time)  ((time) * 1000)
+
+/* Data type manipulation macros. */
+#define ICE_HI_DWORD(x)	((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
+#define ICE_LO_DWORD(x)	((u32)((x) & 0xFFFFFFFF))
+#define ICE_HI_WORD(x)	((u16)(((x) >> 16) & 0xFFFF))
+
+/* debug masks - set these bits in hw->debug_mask to control output */
+#define ICE_DBG_INIT		BIT_ULL(1)
+#define ICE_DBG_RELEASE		BIT_ULL(2)
+
+#define ICE_DBG_LINK		BIT_ULL(4)
+#define ICE_DBG_PHY		BIT_ULL(5)
+#define ICE_DBG_QCTX		BIT_ULL(6)
+#define ICE_DBG_NVM		BIT_ULL(7)
+#define ICE_DBG_LAN		BIT_ULL(8)
+#define ICE_DBG_FLOW		BIT_ULL(9)
+#define ICE_DBG_DCB		BIT_ULL(10)
+#define ICE_DBG_DIAG		BIT_ULL(11)
+#define ICE_DBG_FD		BIT_ULL(12)
+#define ICE_DBG_SW		BIT_ULL(13)
+#define ICE_DBG_SCHED		BIT_ULL(14)
+
+#define ICE_DBG_PKG		BIT_ULL(16)
+#define ICE_DBG_RES		BIT_ULL(17)
+#define ICE_DBG_AQ_MSG		BIT_ULL(24)
+#define ICE_DBG_AQ_DESC		BIT_ULL(25)
+#define ICE_DBG_AQ_DESC_BUF	BIT_ULL(26)
+#define ICE_DBG_AQ_CMD		BIT_ULL(27)
+#define ICE_DBG_AQ		(ICE_DBG_AQ_MSG		| \
+				 ICE_DBG_AQ_DESC	| \
+				 ICE_DBG_AQ_DESC_BUF	| \
+				 ICE_DBG_AQ_CMD)
+
+#define ICE_DBG_USER		BIT_ULL(31)
+#define ICE_DBG_ALL		0xFFFFFFFFFFFFFFFFULL
+
+
+
+
+
+
+enum ice_aq_res_ids {
+   ICE_NVM_RES_ID = 1,
+   ICE_SPD_RES_ID,
+   ICE_CHANGE_LOCK_RES_ID,
+   ICE_GLOBAL_CFG_LOCK_RES_ID
+};
+
+/* FW update timeout definitions are in milliseconds */
+#define ICE_NVM_TIMEOUT			180000
+#define ICE_CHANGE_LOCK_TIMEOUT		1000
+#define ICE_GLOBAL_CFG_LOCK_TIMEOUT	3000
+
+enum ice_aq_res_access_type {
+   ICE_RES_READ = 1,
+   ICE_RES_WRITE
+};
+
+struct ice_driver_ver {
+   u8 major_ver;
+   u8 minor_ver;
+   u8 build_ver;
+   u8 subbuild_ver;
+   u8 driver_string[32];
+};
+
+enum ice_fc_mode {
+   ICE_FC_NONE = 0,
+   ICE_FC_RX_PAUSE,
+   ICE_FC_TX_PAUSE,
+   ICE_FC_FULL,
+   ICE_FC_PFC,
+   ICE_FC_DFLT
+};
+
+enum ice_fec_mode {
+   ICE_FEC_NONE = 0,
+   ICE_FEC_RS,
+   ICE_FEC_BASER,
+   ICE_FEC_AUTO
+};
+
+enum ice_set_fc_aq_failures {
+   ICE_SET_FC_AQ_FAIL_NONE = 0,
+   ICE_SET_FC_AQ_FAIL_GET,
+   ICE_SET_FC_AQ_FAIL_SET,
+   ICE_SET_FC_AQ_FAIL_UPDATE
+};
+
+/* These are structs for managing the hardware information and the operations 
*/
+/* MAC types */
+enum ice_mac_type {
+   ICE_MAC_UNKNOWN = 0,
+   ICE_MAC_GENERIC,
+};
+
+/* Media Types */
+enum ice_media_type {
+   ICE_MEDIA_UNKNOWN = 0,
+   ICE_MEDIA_FIBER,
+   ICE_MEDIA_BASET,
+   ICE_MEDIA_BACKPLANE,
+   ICE_MEDIA_DA,
+};
+
+/* Software VSI types. */
+enum ice_vsi_type {
+   ICE_VSI_PF = 0,
+#ifdef ADQ_SUPPORT
+   ICE_VSI_CHNL = 4,
+#endif /* ADQ_SUPPORT */
+};
+
+struct ice_link_status {
+   /* Refer to ice_aq_phy_type for bits definition */
+   u64 phy_type_low;
+   u64 phy_type_high;
+   u8 topo_med

[dpdk-dev] [PATCH v5 03/31] net/ice/base: add admin queue structures and commands

2018-12-16 Thread Wenzhuo Lu
From: Paul M Stillwell Jr 

Add the commands, error codes, and structures for
the admin queue.

Signed-off-by: Paul M Stillwell Jr 
---
 drivers/net/ice/base/ice_adminq_cmd.h | 1891 +
 1 file changed, 1891 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_adminq_cmd.h

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h 
b/drivers/net/ice/base/ice_adminq_cmd.h
new file mode 100644
index 000..9332f84
--- /dev/null
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -0,0 +1,1891 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_ADMINQ_CMD_H_
+#define _ICE_ADMINQ_CMD_H_
+
+/* This header file defines the Admin Queue commands, error codes and
+ * descriptor format. It is shared between Firmware and Software.
+ */
+
+
+#define ICE_MAX_VSI			768
+#define ICE_AQC_TOPO_MAX_LEVEL_NUM 0x9
+#define ICE_AQ_SET_MAC_FRAME_SIZE_MAX  9728
+
+
+struct ice_aqc_generic {
+   __le32 param0;
+   __le32 param1;
+   __le32 addr_high;
+   __le32 addr_low;
+};
+
+
+/* Get version (direct 0x0001) */
+struct ice_aqc_get_ver {
+   __le32 rom_ver;
+   __le32 fw_build;
+   u8 fw_branch;
+   u8 fw_major;
+   u8 fw_minor;
+   u8 fw_patch;
+   u8 api_branch;
+   u8 api_major;
+   u8 api_minor;
+   u8 api_patch;
+};
+
+
+
+/* Queue Shutdown (direct 0x0003) */
+struct ice_aqc_q_shutdown {
+   __le32 driver_unloading;
+#define ICE_AQC_DRIVER_UNLOADING   BIT(0)
+   u8 reserved[12];
+};
+
+
+
+
+/* Request resource ownership (direct 0x0008)
+ * Release resource ownership (direct 0x0009)
+ */
+struct ice_aqc_req_res {
+   __le16 res_id;
+#define ICE_AQC_RES_ID_NVM 1
+#define ICE_AQC_RES_ID_SDP 2
+#define ICE_AQC_RES_ID_CHNG_LOCK   3
+#define ICE_AQC_RES_ID_GLBL_LOCK   4
+   __le16 access_type;
+#define ICE_AQC_RES_ACCESS_READ1
+#define ICE_AQC_RES_ACCESS_WRITE   2
+
+   /* Upon successful completion, FW writes this value and driver is
+* expected to release resource before timeout. This value is provided
+* in milliseconds.
+*/
+   __le32 timeout;
+#define ICE_AQ_RES_NVM_READ_DFLT_TIMEOUT_MS	3000
+#define ICE_AQ_RES_NVM_WRITE_DFLT_TIMEOUT_MS	180000
+#define ICE_AQ_RES_CHNG_LOCK_DFLT_TIMEOUT_MS   1000
+#define ICE_AQ_RES_GLBL_LOCK_DFLT_TIMEOUT_MS   3000
+   /* For SDP: pin id of the SDP */
+   __le32 res_number;
+   /* Status is only used for ICE_AQC_RES_ID_GLBL_LOCK */
+   __le16 status;
+#define ICE_AQ_RES_GLBL_SUCCESS0
+#define ICE_AQ_RES_GLBL_IN_PROG1
+#define ICE_AQ_RES_GLBL_DONE   2
+   u8 reserved[2];
+};
+
+
+/* Get function capabilities (indirect 0x000A)
+ * Get device capabilities (indirect 0x000B)
+ */
+struct ice_aqc_list_caps {
+   u8 cmd_flags;
+   u8 pf_index;
+   u8 reserved[2];
+   __le32 count;
+   __le32 addr_high;
+   __le32 addr_low;
+};
+
+
+/* Device/Function buffer entry, repeated per reported capability */
+struct ice_aqc_list_caps_elem {
+   __le16 cap;
+#define ICE_AQC_CAPS_VALID_FUNCTIONS   0x0005
+#define ICE_AQC_CAPS_VSI   0x0017
+#define ICE_AQC_CAPS_RSS   0x0040
+#define ICE_AQC_CAPS_RXQS  0x0041
+#define ICE_AQC_CAPS_TXQS  0x0042
+#define ICE_AQC_CAPS_MSIX  0x0043
+#define ICE_AQC_CAPS_MAX_MTU   0x0047
+
+   u8 major_ver;
+   u8 minor_ver;
+   /* Number of resources described by this capability */
+   __le32 number;
+   /* Only meaningful for some types of resources */
+   __le32 logical_id;
+   /* Only meaningful for some types of resources */
+   __le32 phys_id;
+   __le64 rsvd1;
+   __le64 rsvd2;
+};
+
+
+/* Manage MAC address, read command - indirect (0x0107)
+ * This struct is also used for the response
+ */
+struct ice_aqc_manage_mac_read {
+   __le16 flags; /* Zeroed by device driver */
+#define ICE_AQC_MAN_MAC_LAN_ADDR_VALID BIT(4)
+#define ICE_AQC_MAN_MAC_SAN_ADDR_VALID BIT(5)
+#define ICE_AQC_MAN_MAC_PORT_ADDR_VALIDBIT(6)
+#define ICE_AQC_MAN_MAC_WOL_ADDR_VALID BIT(7)
+#define ICE_AQC_MAN_MAC_READ_S 4
+#define ICE_AQC_MAN_MAC_READ_M (0xF << ICE_AQC_MAN_MAC_READ_S)
+   u8 lport_num;
+   u8 lport_num_valid;
+#define ICE_AQC_MAN_MAC_PORT_NUM_IS_VALID  BIT(0)
+   u8 num_addr; /* Used in response */
+   u8 reserved[3];
+   __le32 addr_high;
+   __le32 addr_low;
+};
+
+
+/* Response buffer format for manage MAC read command */
+struct ice_aqc_manage_mac_read_resp {
+   u8 lport_num;
+   u8 addr_type;
+#define ICE_AQC_MAN_MAC_ADDR_TYPE_LAN  0
+#define ICE_AQC_MAN_MAC_ADDR_TYPE_WOL  1
+   u8 mac_addr[ETH_ALEN];
+};
+

[dpdk-dev] [PATCH v5 04/31] net/ice/base: add sideband queue info

2018-12-16 Thread Wenzhuo Lu
From: Paul M Stillwell Jr 

Add the commands, error codes, and structures
for the sideband queue.

Signed-off-by: Paul M Stillwell Jr 
---
 drivers/net/ice/base/ice_sbq_cmd.h | 93 ++
 1 file changed, 93 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_sbq_cmd.h

diff --git a/drivers/net/ice/base/ice_sbq_cmd.h 
b/drivers/net/ice/base/ice_sbq_cmd.h
new file mode 100644
index 000..6dff378
--- /dev/null
+++ b/drivers/net/ice/base/ice_sbq_cmd.h
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SBQ_CMD_H_
+#define _ICE_SBQ_CMD_H_
+
+/* This header file defines the Sideband Queue commands, error codes and
+ * descriptor format. It is shared between Firmware and Software.
+ */
+
+/* Sideband Queue command structure and opcodes */
+enum ice_sbq_opc {
+   /* Sideband Queue commands */
+   ice_sbq_opc_neigh_dev_req   = 0x0C00,
+   ice_sbq_opc_neigh_dev_ev= 0x0C01
+};
+
+/* Sideband Queue descriptor. Indirect command
+ * and non posted
+ */
+struct ice_sbq_cmd_desc {
+   __le16 flags;
+   __le16 opcode;
+   __le16 datalen;
+   __le16 cmd_retval;
+
+   /* Opaque message data */
+   __le32 cookie_high;
+   __le32 cookie_low;
+
+   union {
+   __le16 cmd_len;
+   __le16 cmpl_len;
+   } param0;
+
+   u8 reserved[6];
+   __le32 addr_high;
+   __le32 addr_low;
+};
+
+struct ice_sbq_evt_desc {
+   __le16 flags;
+   __le16 opcode;
+   __le16 datalen;
+   __le16 cmd_retval;
+   u8 data[24];
+};
+
+enum ice_sbq_msg_dev {
+   rmn_0   = 0x02,
+   rmn_1   = 0x03,
+   rmn_2   = 0x04,
+   cgu = 0x06
+};
+
+enum ice_sbq_msg_opcode {
+   ice_sbq_msg_rd  = 0x00,
+   ice_sbq_msg_wr  = 0x01
+};
+
+#define ICE_SBQ_MSG_FLAGS  0x40
+#define ICE_SBQ_MSG_SBE_FBE	0x0F
+
+struct ice_sbq_msg_req {
+   u8 dest_dev;
+   u8 src_dev;
+   u8 opcode;
+   u8 flags;
+   u8 sbe_fbe;
+   u8 func_id;
+   __le16 msg_addr_low;
+   __le32 msg_addr_high;
+   __le32 data;
+};
+
+struct ice_sbq_msg_cmpl {
+   u8 dest_dev;
+   u8 src_dev;
+   u8 opcode;
+   u8 flags;
+   __le32 data;
+};
+
+/* Internal struct */
+struct ice_sbq_msg_input {
+   u8 dest_dev;
+   u8 opcode;
+   u16 msg_addr_low;
+   u32 msg_addr_high;
+   u32 data;
+};
+#endif /* _ICE_SBQ_CMD_H_ */
-- 
1.9.3



[dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE

2018-12-16 Thread Wenzhuo Lu
This patch set adds the support of a new net PMD,
Intel® Ethernet Network Adapters E810, also
called ice.

Below features are enabled by this patch set,

Basic features:
1, Basic device operations: probe, initialization, start/stop, configure, info 
get.
2, RX/TX queue operations: setup/release, start/stop, info get.
3, RX/TX.

HW Offload features:
1, CRC Stripping/insertion.
2, L2/L3 checksum strip/insertion.
3, PVID set.
4, TPID change.
5, TSO (LRO/RSC not supported).

Stats:
1, statistics & xstats.

Switch functions:
1, MAC Filter Add/Delete.
2, VLAN Filter Add/Delete.

Power saving:
1, RX interrupt mode.

Misc:
1, Interrupt For Link Status.
2, firmware info query.
3, Jumbo Frame Support.
4, ptype check.
5, EEPROM check and set.

---
v2:
 - Fix shared lib compile issue.
 - Add meson build support.
 - Update documents.
 - Fix more checkpatch issues.

v3:
 - Removed the support of secondary process.
 - Splitted the base code to more patches.
 - Pass NULL to rte_zmalloc.
 - Changed some magic numbers to macros.
 - Fixed the wrong implementation of a specific bitmap.

v4:
 - Moved meson build forward.
 - Updated and splitted the document to related patches.
 - Updated the device info.
 - Removed unnecessary compile config.
 - Removed the code of ops rx_descriptor_done.
 - Adjusted the order of the functions.
 - Added error print for MAC setting.

v5:
 - Removed ice_dcb.c/h.
 - Fixed compile error of icc and i686.
 - Announced dependence of uio and vfio.

Paul M Stillwell Jr (13):
  net/ice/base: add registers for Intel(R) E800 Series NIC
  net/ice/base: add basic structures
  net/ice/base: add admin queue structures and commands
  net/ice/base: add sideband queue info
  net/ice/base: add device IDs for Intel(r) E800 Series NICs
  net/ice/base: add control queue information
  net/ice/base: add basic transmit scheduler
  net/ice/base: add virtual switch code
  net/ice/base: add code to work with the NVM
  net/ice/base: add common functions
  net/ice/base: add various headers
  net/ice/base: add protocol structures and defines
  net/ice/base: add structures for RX/TX queues

Wenzhuo Lu (18):
  net/ice/base: add OS specific implementation
  net/ice: support device initialization
  net/ice: support device and queue ops
  net/ice: support getting device information
  net/ice: support packet type getting
  net/ice: support link update
  net/ice: support MTU setting
  net/ice: support MAC ops
  net/ice: support VLAN ops
  net/ice: support RSS
  net/ice: support RX queue interruption
  net/ice: support FW version getting
  net/ice: support EEPROM information getting
  net/ice: support statistics
  net/ice: support queue information getting
  net/ice: support basic RX/TX
  net/ice: support advance RX/TX
  net/ice: support descriptor ops

 MAINTAINERS  |8 +
 config/common_base   |9 +
 doc/guides/nics/features/ice.ini |   38 +
 doc/guides/nics/ice.rst  |  104 +
 doc/guides/nics/index.rst|1 +
 doc/guides/rel_notes/release_19_02.rst   |5 +
 drivers/net/Makefile |1 +
 drivers/net/ice/Makefile |   55 +
 drivers/net/ice/base/README  |   22 +
 drivers/net/ice/base/ice_adminq_cmd.h| 1891 ++
 drivers/net/ice/base/ice_alloc.h |   22 +
 drivers/net/ice/base/ice_common.c| 3521 +++
 drivers/net/ice/base/ice_common.h|  186 +
 drivers/net/ice/base/ice_controlq.c  | 1098 
 drivers/net/ice/base/ice_controlq.h  |   97 +
 drivers/net/ice/base/ice_devids.h|   17 +
 drivers/net/ice/base/ice_flex_type.h |   19 +
 drivers/net/ice/base/ice_flow.h  |8 +
 drivers/net/ice/base/ice_hw_autogen.h| 9815 ++
 drivers/net/ice/base/ice_lan_tx_rx.h | 2291 +++
 drivers/net/ice/base/ice_nvm.c   |  387 ++
 drivers/net/ice/base/ice_osdep.h |  524 ++
 drivers/net/ice/base/ice_protocol_type.h |  248 +
 drivers/net/ice/base/ice_sbq_cmd.h   |   93 +
 drivers/net/ice/base/ice_sched.c | 5380 
 drivers/net/ice/base/ice_sched.h |  210 +
 drivers/net/ice/base/ice_status.h|   45 +
 drivers/net/ice/base/ice_switch.c| 2812 +
 drivers/net/ice/base/ice_switch.h|  333 +
 drivers/net/ice/base/ice_type.h  |  869 +++
 drivers/net/ice/base/meson.build |   27 +
 drivers/net/ice/ice_ethdev.c | 3243 ++
 drivers/net/ice/ice_ethdev.h |  318 +
 drivers/net/ice/ice_lan_rxtx.c   | 2872 +
 drivers/net/ice/ice_logs.h   |   45 +
 drivers/net/ice/ice_rxtx.h   |  154 +
 drivers/net/ice/meson.build  |   13 +
 drivers/net/ice/rte_pmd_ice_version.map  |4 +
 drivers/net/meson.build  |1 +
 mk/rte.app.mk|1 +
 40 files changed, 36787 insertions(+)
 create mode 100644 doc/guides/nics/features/i

[dpdk-dev] [PATCH v5 06/31] net/ice/base: add control queue information

2018-12-16 Thread Wenzhuo Lu
From: Paul M Stillwell Jr 

Add the structures for the control queues.

Signed-off-by: Paul M Stillwell Jr 
---
 drivers/net/ice/base/ice_controlq.c | 1098 +++
 drivers/net/ice/base/ice_controlq.h |   97 
 2 files changed, 1195 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_controlq.c
 create mode 100644 drivers/net/ice/base/ice_controlq.h

diff --git a/drivers/net/ice/base/ice_controlq.c 
b/drivers/net/ice/base/ice_controlq.c
new file mode 100644
index 000..fb82c23
--- /dev/null
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -0,0 +1,1098 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+
+
+#define ICE_CQ_INIT_REGS(qinfo, prefix)\
+do {   \
+   (qinfo)->sq.head = prefix##_ATQH;   \
+   (qinfo)->sq.tail = prefix##_ATQT;   \
+   (qinfo)->sq.len = prefix##_ATQLEN;  \
+   (qinfo)->sq.bah = prefix##_ATQBAH;  \
+   (qinfo)->sq.bal = prefix##_ATQBAL;  \
+   (qinfo)->sq.len_mask = prefix##_ATQLEN_ATQLEN_M;\
+   (qinfo)->sq.len_ena_mask = prefix##_ATQLEN_ATQENABLE_M; \
+   (qinfo)->sq.head_mask = prefix##_ATQH_ATQH_M;   \
+   (qinfo)->rq.head = prefix##_ARQH;   \
+   (qinfo)->rq.tail = prefix##_ARQT;   \
+   (qinfo)->rq.len = prefix##_ARQLEN;  \
+   (qinfo)->rq.bah = prefix##_ARQBAH;  \
+   (qinfo)->rq.bal = prefix##_ARQBAL;  \
+   (qinfo)->rq.len_mask = prefix##_ARQLEN_ARQLEN_M;\
+   (qinfo)->rq.len_ena_mask = prefix##_ARQLEN_ARQENABLE_M; \
+   (qinfo)->rq.head_mask = prefix##_ARQH_ARQH_M;   \
+} while (0)
+
+/**
+ * ice_adminq_init_regs - Initialize AdminQ registers
+ * @hw: pointer to the hardware structure
+ *
+ * This assumes the alloc_sq and alloc_rq functions have already been called
+ */
+static void ice_adminq_init_regs(struct ice_hw *hw)
+{
+   struct ice_ctl_q_info *cq = &hw->adminq;
+
+   ICE_CQ_INIT_REGS(cq, PF_FW);
+}
+
+/**
+ * ice_mailbox_init_regs - Initialize Mailbox registers
+ * @hw: pointer to the hardware structure
+ *
+ * This assumes the alloc_sq and alloc_rq functions have already been called
+ */
+static void ice_mailbox_init_regs(struct ice_hw *hw)
+{
+   struct ice_ctl_q_info *cq = &hw->mailboxq;
+
+   ICE_CQ_INIT_REGS(cq, PF_MBX);
+}
+
+
+/**
+ * ice_check_sq_alive
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Returns true if Queue is enabled else false.
+ */
+bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+   /* check both queue-length and queue-enable fields */
+   if (cq->sq.len && cq->sq.len_mask && cq->sq.len_ena_mask)
+   return (rd32(hw, cq->sq.len) & (cq->sq.len_mask |
+   cq->sq.len_ena_mask)) ==
+   (cq->num_sq_entries | cq->sq.len_ena_mask);
+
+   return false;
+}
+
+/**
+ * ice_alloc_ctrlq_sq_ring - Allocate Control Transmit Queue (ATQ) rings
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_ctrlq_sq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+   size_t size = cq->num_sq_entries * sizeof(struct ice_aq_desc);
+
+   cq->sq.desc_buf.va = ice_alloc_dma_mem(hw, &cq->sq.desc_buf, size);
+   if (!cq->sq.desc_buf.va)
+   return ICE_ERR_NO_MEMORY;
+
+   cq->sq.cmd_buf = ice_calloc(hw, cq->num_sq_entries,
+   sizeof(struct ice_sq_cd));
+   if (!cq->sq.cmd_buf) {
+   ice_free_dma_mem(hw, &cq->sq.desc_buf);
+   return ICE_ERR_NO_MEMORY;
+   }
+
+   return ICE_SUCCESS;
+}
+
+/**
+ * ice_alloc_ctrlq_rq_ring - Allocate Control Receive Queue (ARQ) rings
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_ctrlq_rq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+   size_t size = cq->num_rq_entries * sizeof(struct ice_aq_desc);
+
+   cq->rq.desc_buf.va = ice_alloc_dma_mem(hw, &cq->rq.desc_buf, size);
+   if (!cq->rq.desc_buf.va)
+   return ICE_ERR_NO_MEMORY;
+   return ICE_SUCCESS;
+}
+
+/**
+ * ice_free_cq_ring - Free control queue ring
+ * @hw: pointer to the hardware structure
+ * @ring: pointer to the specific control queue ring
+ *
+ * This assumes the posted buffers have already been cleaned
+ * and de-allocated
+ */
+static void ice_free_cq_ring(struct ice_hw *hw, struct ice_ctl_q_ring *ring)
+{
+   ice_free_dma_mem(hw, &ring->desc_buf);
+}
+
+/**
+ * ice_alloc_rq_bufs - Allocate pre-posted buffers for the ARQ
+ * @hw: pointer to the hard

[dpdk-dev] [PATCH v5 05/31] net/ice/base: add device IDs for Intel(r) E800 Series NICs

2018-12-16 Thread Wenzhuo Lu
From: Paul M Stillwell Jr 

Add all the device IDs that represent the NIC.

Signed-off-by: Paul M Stillwell Jr 
---
 drivers/net/ice/base/ice_devids.h | 17 +
 1 file changed, 17 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_devids.h

diff --git a/drivers/net/ice/base/ice_devids.h 
b/drivers/net/ice/base/ice_devids.h
new file mode 100644
index 000..87f17ab
--- /dev/null
+++ b/drivers/net/ice/base/ice_devids.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_DEVIDS_H_
+#define _ICE_DEVIDS_H_
+
+
+/* Device IDs */
+/* Intel(R) Ethernet Controller E810-C for backplane */
+#define ICE_DEV_ID_E810C_BACKPLANE 0x1591
+/* Intel(R) Ethernet Controller E810-C for QSFP */
+#define ICE_DEV_ID_E810C_QSFP  0x1592
+/* Intel(R) Ethernet Controller E810-C for SFP */
+#define ICE_DEV_ID_E810C_SFP   0x1593
+
+#endif /* _ICE_DEVIDS_H_ */
-- 
1.9.3



[dpdk-dev] [PATCH v5 09/31] net/ice/base: add code to work with the NVM

2018-12-16 Thread Wenzhuo Lu
From: Paul M Stillwell Jr 

Add code to read/write/query the NVM image.

Signed-off-by: Paul M Stillwell Jr 
---
 drivers/net/ice/base/ice_nvm.c | 387 +
 1 file changed, 387 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_nvm.c

diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
new file mode 100644
index 000..25a2ca4
--- /dev/null
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -0,0 +1,387 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+
+
+/**
+ * ice_aq_read_nvm
+ * @hw: pointer to the hw struct
+ * @module_typeid: module pointer location in words from the NVM beginning
+ * @offset: byte offset from the module beginning
+ * @length: length of the section to be read (in bytes from the offset)
+ * @data: command buffer (size [bytes] = length)
+ * @last_command: tells if this is the last command in a series
+ * @cd: pointer to command details structure or NULL
+ *
+ * Read the NVM using the admin queue commands (0x0701)
+ */
+static enum ice_status
+ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length,
+   void *data, bool last_command, struct ice_sq_cd *cd)
+{
+   struct ice_aq_desc desc;
+   struct ice_aqc_nvm *cmd;
+
+   ice_debug(hw, ICE_DBG_TRACE, "ice_aq_read_nvm");
+
+   cmd = &desc.params.nvm;
+
+   /* In offset the highest byte must be zeroed. */
+   if (offset & 0xFF000000)
+   return ICE_ERR_PARAM;
+
+   ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_nvm_read);
+
+   /* If this is the last command in a series, set the proper flag. */
+   if (last_command)
+   cmd->cmd_flags |= ICE_AQC_NVM_LAST_CMD;
+   cmd->module_typeid = CPU_TO_LE16(module_typeid);
+   cmd->offset_low = CPU_TO_LE16(offset & 0xFFFF);
+   cmd->offset_high = (offset >> 16) & 0xFF;
+   cmd->length = CPU_TO_LE16(length);
+
+   return ice_aq_send_cmd(hw, &desc, data, length, cd);
+}
+
+/**
+ * ice_check_sr_access_params - verify params for Shadow RAM R/W operations.
+ * @hw: pointer to the HW structure
+ * @offset: offset in words from module start
+ * @words: number of words to access
+ */
+static enum ice_status
+ice_check_sr_access_params(struct ice_hw *hw, u32 offset, u16 words)
+{
+   if ((offset + words) > hw->nvm.sr_words) {
+   ice_debug(hw, ICE_DBG_NVM,
+ "NVM error: offset beyond SR lmt.\n");
+   return ICE_ERR_PARAM;
+   }
+
+   if (words > ICE_SR_SECTOR_SIZE_IN_WORDS) {
+   /* We can access only up to 4KB (one sector), in one AQ write */
+   ice_debug(hw, ICE_DBG_NVM,
+ "NVM error: tried to access %d words, limit is %d.\n",
+ words, ICE_SR_SECTOR_SIZE_IN_WORDS);
+   return ICE_ERR_PARAM;
+   }
+
+   if (((offset + (words - 1)) / ICE_SR_SECTOR_SIZE_IN_WORDS) !=
+   (offset / ICE_SR_SECTOR_SIZE_IN_WORDS)) {
+   /* A single access cannot spread over two sectors */
+   ice_debug(hw, ICE_DBG_NVM,
+ "NVM error: cannot spread over two sectors.\n");
+   return ICE_ERR_PARAM;
+   }
+
+   return ICE_SUCCESS;
+}
+
+/**
+ * ice_read_sr_aq - Read Shadow RAM.
+ * @hw: pointer to the HW structure
+ * @offset: offset in words from module start
+ * @words: number of words to read
+ * @data: buffer for words reads from Shadow RAM
+ * @last_command: tells the AdminQ that this is the last command
+ *
+ * Reads 16-bit word buffers from the Shadow RAM using the admin command.
+ */
+static enum ice_status
+ice_read_sr_aq(struct ice_hw *hw, u32 offset, u16 words, u16 *data,
+  bool last_command)
+{
+   enum ice_status status;
+
+   ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_aq");
+
+   status = ice_check_sr_access_params(hw, offset, words);
+
+   /* values in "offset" and "words" parameters are sized as words
+* (16 bits) but ice_aq_read_nvm expects these values in bytes.
+* So do this conversion while calling ice_aq_read_nvm.
+*/
+   if (!status)
+   status = ice_aq_read_nvm(hw, 0, 2 * offset, 2 * words, data,
+last_command, NULL);
+
+   return status;
+}
+
+/**
+ * ice_read_sr_word_aq - Reads Shadow RAM via AQ
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x00 - 0x001FFF)
+ * @data: word read from the Shadow RAM
+ *
+ * Reads one 16 bit word from the Shadow RAM using the ice_read_sr_aq method.
+ */
+static enum ice_status
+ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
+{
+   enum ice_status status;
+
+   ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_word_aq");
+
+   status = ice_read_sr_aq(hw, offset, 1, data, true);
+   if (!status)
+   *data = LE16_TO_CPU(*(__le16 *)data);
+
+   

[dpdk-dev] [PATCH v5 08/31] net/ice/base: add virtual switch code

2018-12-16 Thread Wenzhuo Lu
From: Paul M Stillwell Jr 

Add code to handle the virtual switch within the NIC.

Signed-off-by: Paul M Stillwell Jr 
---
 drivers/net/ice/base/ice_switch.c | 2812 +
 drivers/net/ice/base/ice_switch.h |  333 +
 2 files changed, 3145 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_switch.c
 create mode 100644 drivers/net/ice/base/ice_switch.h

diff --git a/drivers/net/ice/base/ice_switch.c 
b/drivers/net/ice/base/ice_switch.c
new file mode 100644
index 000..0379cd0
--- /dev/null
+++ b/drivers/net/ice/base/ice_switch.c
@@ -0,0 +1,2812 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_switch.h"
+
+
+#define ICE_ETH_DA_OFFSET  0
+#define ICE_ETH_ETHTYPE_OFFSET 12
+#define ICE_ETH_VLAN_TCI_OFFSET 14
+#define ICE_MAX_VLAN_ID 0xFFF
+
+/* Dummy ethernet header needed in the ice_aqc_sw_rules_elem
+ * struct to configure any switch filter rules.
+ * {DA (6 bytes), SA(6 bytes),
+ * Ether type (2 bytes for header without VLAN tag) OR
+ * VLAN tag (4 bytes for header with VLAN tag) }
+ *
+ * Word on Hardcoded values
+ * byte 0 = 0x2: to identify it as locally administered DA MAC
+ * byte 6 = 0x2: to identify it as locally administered SA MAC
+ * byte 12 = 0x81 & byte 13 = 0x00:
+ * In case of VLAN filter first two bytes defines ether type (0x8100)
+ * and remaining two bytes are placeholder for programming a given VLAN id
+ * In case of Ether type filter it is treated as header without VLAN tag
+ * and byte 12 and 13 is used to program a given Ether type instead
+ */
+#define DUMMY_ETH_HDR_LEN  16
+static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0,
+   0x2, 0, 0, 0, 0, 0,
+   0x81, 0, 0, 0};
+
+#define ICE_SW_RULE_RX_TX_ETH_HDR_SIZE \
+   (sizeof(struct ice_aqc_sw_rules_elem) - \
+sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+sizeof(struct ice_sw_rule_lkup_rx_tx) + DUMMY_ETH_HDR_LEN - 1)
+#define ICE_SW_RULE_RX_TX_NO_HDR_SIZE \
+   (sizeof(struct ice_aqc_sw_rules_elem) - \
+sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+sizeof(struct ice_sw_rule_lkup_rx_tx) - 1)
+#define ICE_SW_RULE_LG_ACT_SIZE(n) \
+   (sizeof(struct ice_aqc_sw_rules_elem) - \
+sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+sizeof(struct ice_sw_rule_lg_act) - \
+sizeof(((struct ice_sw_rule_lg_act *)0)->act) + \
+((n) * sizeof(((struct ice_sw_rule_lg_act *)0)->act)))
+#define ICE_SW_RULE_VSI_LIST_SIZE(n) \
+   (sizeof(struct ice_aqc_sw_rules_elem) - \
+sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+sizeof(struct ice_sw_rule_vsi_list) - \
+sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi) + \
+((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi)))
+
+
+/**
+ * ice_init_def_sw_recp - initialize the recipe book keeping tables
+ * @hw: pointer to the hw struct
+ *
+ * Allocate memory for the entire recipe table and initialize the structures/
+ * entries corresponding to basic recipes.
+ */
+enum ice_status ice_init_def_sw_recp(struct ice_hw *hw)
+{
+   struct ice_sw_recipe *recps;
+   u8 i;
+
+   recps = (struct ice_sw_recipe *)
+   ice_calloc(hw, ICE_MAX_NUM_RECIPES, sizeof(*recps));
+   if (!recps)
+   return ICE_ERR_NO_MEMORY;
+
+   for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+   recps[i].root_rid = i;
+   INIT_LIST_HEAD(&recps[i].filt_rules);
+   INIT_LIST_HEAD(&recps[i].filt_replay_rules);
+   ice_init_lock(&recps[i].filt_rule_lock);
+   }
+
+   hw->switch_info->recp_list = recps;
+
+   return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_get_sw_cfg - get switch configuration
+ * @hw: pointer to the hardware structure
+ * @buf: pointer to the result buffer
+ * @buf_size: length of the buffer available for response
+ * @req_desc: pointer to requested descriptor
+ * @num_elems: pointer to number of elements
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get switch configuration (0x0200) to be placed in 'buff'.
+ * This admin command returns information such as initial VSI/port number
+ * and switch ID it belongs to.
+ *
+ * NOTE: *req_desc is both an input/output parameter.
+ * The caller of this function first calls this function with *request_desc set
+ * to 0. If the response from f/w has *req_desc set to 0, all the switch
+ * configuration information has been returned; if non-zero (meaning not all
+ * the information was returned), the caller should call this function again
+ * with *req_desc set to the previous value returned by f/w to get the
+ * next block of switch configuration information.
+ *
+ * *num_elems is an output-only parameter. This reflects the number of elements
+ * in respo
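
The NOTE above describes an iterative retrieval protocol; a minimal sketch of
the calling loop is shown here. The exact prototype of ice_aq_get_sw_cfg() and
its buffer type are assumptions based on the comment, not confirmed by this
excerpt, and a caller like this would typically live in ice_switch.c itself:

static enum ice_status example_get_all_sw_cfg(struct ice_hw *hw, void *buf,
					      u16 buf_size)
{
	u16 req_desc = 0;	/* first call must pass 0 */
	u16 num_elems = 0;
	enum ice_status status;

	do {
		/* assumed prototype: (hw, buf, buf_size, &req_desc, &num_elems, cd) */
		status = ice_aq_get_sw_cfg(hw, buf, buf_size,
					   &req_desc, &num_elems, NULL);
		if (status)
			return status;
		/* consume 'num_elems' switch config elements from 'buf' here */
	} while (req_desc);	/* non-zero: firmware has more blocks to return */

	return ICE_SUCCESS;
}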

[dpdk-dev] [PATCH v5 12/31] net/ice/base: add protocol structures and defines

2018-12-16 Thread Wenzhuo Lu
From: Paul M Stillwell Jr 

Add the structures and defines that define what
protocols the NIC can handle.

Signed-off-by: Paul M Stillwell Jr 
---
 drivers/net/ice/base/ice_protocol_type.h | 248 +++
 1 file changed, 248 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_protocol_type.h

diff --git a/drivers/net/ice/base/ice_protocol_type.h 
b/drivers/net/ice/base/ice_protocol_type.h
new file mode 100644
index 000..7b92c71
--- /dev/null
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -0,0 +1,248 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_PROTOCOL_TYPE_H_
+#define _ICE_PROTOCOL_TYPE_H_
+#include "ice_flex_type.h"
+#define ICE_IPV6_ADDR_LENGTH 16
+
+/* Each recipe can match up to 5 different fields. Fields to match can be meta-
+ * data, values extracted from packet headers, or results from other recipes.
+ * One of the 5 fields is reserved for matching the switch ID. So, up to 4
+ * recipes can provide intermediate results to another one through chaining,
+ * e.g. recipes 0, 1, 2, and 3 can provide intermediate results to recipe 4.
+ */
+#define ICE_NUM_WORDS_RECIPE 4
+
+/* Max recipes that can be chained */
+#define ICE_MAX_CHAIN_RECIPE 5
+
+/* 1 word reserved for switch id from allowed 5 words.
+ * So a recipe can have max 4 words. And you can chain 5 such recipes
+ * together. So maximum words that can be programmed for look up is 5 * 4.
+ */
+#define ICE_MAX_CHAIN_WORDS (ICE_NUM_WORDS_RECIPE * ICE_MAX_CHAIN_RECIPE)
+
+/* Field vector index corresponding to chaining */
+#define ICE_CHAIN_FV_INDEX_START 47
+
+enum ice_protocol_type {
+   ICE_MAC_OFOS = 0,
+   ICE_MAC_IL,
+   ICE_IPV4_OFOS,
+   ICE_IPV4_IL,
+   ICE_IPV6_IL,
+   ICE_IPV6_OFOS,
+   ICE_TCP_IL,
+   ICE_UDP_ILOS,
+   ICE_SCTP_IL,
+   ICE_VXLAN,
+   ICE_GENEVE,
+   ICE_VXLAN_GPE,
+   ICE_NVGRE,
+   ICE_PROTOCOL_LAST
+};
+
+enum ice_sw_tunnel_type {
+   ICE_NON_TUN,
+   ICE_SW_TUN_VXLAN_GPE,
+   ICE_SW_TUN_GENEVE,
+   ICE_SW_TUN_VXLAN,
+   ICE_SW_TUN_NVGRE,
+   ICE_SW_TUN_UDP, /* This means all "UDP" tunnel types: VXLAN-GPE, VXLAN
+* and GENEVE
+*/
+   ICE_ALL_TUNNELS /* All tunnel types including NVGRE */
+};
+
+/* Decoders for ice_prot_id:
+ * - F: First
+ * - I: Inner
+ * - L: Last
+ * - O: Outer
+ * - S: Single
+ */
+enum ice_prot_id {
+   ICE_PROT_ID_INVAL   = 0,
+   ICE_PROT_MAC_OF_OR_S= 1,
+   ICE_PROT_MAC_O2 = 2,
+   ICE_PROT_MAC_IL = 4,
+   ICE_PROT_MAC_IN_MAC = 7,
+   ICE_PROT_ETYPE_OL   = 9,
+   ICE_PROT_ETYPE_IL   = 10,
+   ICE_PROT_PAY= 15,
+   ICE_PROT_EVLAN_O= 16,
+   ICE_PROT_VLAN_O = 17,
+   ICE_PROT_VLAN_IF= 18,
+   ICE_PROT_MPLS_OL_MINUS_1 = 27,
+   ICE_PROT_MPLS_OL_OR_OS  = 28,
+   ICE_PROT_MPLS_IL= 29,
+   ICE_PROT_IPV4_OF_OR_S   = 32,
+   ICE_PROT_IPV4_IL= 33,
+   ICE_PROT_IPV6_OF_OR_S   = 40,
+   ICE_PROT_IPV6_IL= 41,
+   ICE_PROT_IPV6_FRAG  = 47,
+   ICE_PROT_TCP_IL = 49,
+   ICE_PROT_UDP_OF = 52,
+   ICE_PROT_UDP_IL_OR_S= 53,
+   ICE_PROT_GRE_OF = 64,
+   ICE_PROT_NSH_F  = 84,
+   ICE_PROT_ESP_F  = 88,
+   ICE_PROT_ESP_2  = 89,
+   ICE_PROT_SCTP_IL= 96,
+   ICE_PROT_ICMP_IL= 98,
+   ICE_PROT_ICMPV6_IL  = 100,
+   ICE_PROT_VRRP_F = 101,
+   ICE_PROT_OSPF   = 102,
+   ICE_PROT_ATAOE_OF   = 114,
+   ICE_PROT_CTRL_OF= 116,
+   ICE_PROT_LLDP_OF= 117,
+   ICE_PROT_ARP_OF = 118,
+   ICE_PROT_EAPOL_OF   = 120,
+   ICE_PROT_META_ID= 255, /* when offset == metadata */
+   ICE_PROT_INVALID= 255  /* when offset == 0xFF */
+};
+
+
+#define ICE_MAC_OFOS_HW 1
+#define ICE_MAC_IL_HW  4
+#define ICE_IPV4_OFOS_HW   32
+#define ICE_IPV4_IL_HW 33
+#define ICE_IPV6_OFOS_HW   40
+#define ICE_IPV6_IL_HW 41
+#define ICE_TCP_IL_HW  49
+#define ICE_UDP_ILOS_HW 53
+#define ICE_SCTP_IL_HW 96
+
+/* ICE_UDP_OF is used to identify all 3 tunnel types
+ * VXLAN, GENEVE and VXLAN_GPE. To differentiate further
+ * need to use flags from the field vector
+ */
+#define ICE_UDP_OF_HW  52 /* UDP Tunnels */
+#define ICE_GRE_OF_HW  64 /* NVGRE */
+#define ICE_META_DATA_ID_HW 255 /* this is used for tunnel type */
+
+#define ICE_TUN_FLAG_MASK 0xFF
+#define ICE_TUN_FLAG_FV_IND 2
+
+#define ICE_PROTOCOL_MAX_ENTRIES 16
+
+/* Mapping of software defined protocol id to hardware defined protocol id */
+struct ice_protocol_entry {
+   enum ice_protocol_type type;
+   u8 protocol_id;
+};
+
+
+struct ice_ether_hdr {
+   u8 dst_addr[ETH_ALEN];
+   u8 src_addr[ETH_ALEN];
+   u

[dpdk-dev] [PATCH v5 10/31] net/ice/base: add common functions

2018-12-16 Thread Wenzhuo Lu
From: Paul M Stillwell Jr 

Add code that multiple other features use.

Signed-off-by: Paul M Stillwell Jr 
---
 drivers/net/ice/base/ice_common.c | 3521 +
 drivers/net/ice/base/ice_common.h |  186 ++
 2 files changed, 3707 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_common.c
 create mode 100644 drivers/net/ice/base/ice_common.h

diff --git a/drivers/net/ice/base/ice_common.c 
b/drivers/net/ice/base/ice_common.c
new file mode 100644
index 000..d49264d
--- /dev/null
+++ b/drivers/net/ice/base/ice_common.c
@@ -0,0 +1,3521 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+#include "ice_sched.h"
+#include "ice_adminq_cmd.h"
+
+#include "ice_flow.h"
+#include "ice_switch.h"
+
+#define ICE_PF_RESET_WAIT_COUNT 200
+
+#define ICE_PROG_FLEX_ENTRY(hw, rxdid, mdid, idx) \
+   wr32((hw), GLFLXP_RXDID_FLX_WRD_##idx(rxdid), \
+((ICE_RX_OPC_MDID << \
+  GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_S) & \
+ GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_M) | \
+(((mdid) << GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_S) & \
+ GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_M))
+
+#define ICE_PROG_FLG_ENTRY(hw, rxdid, flg_0, flg_1, flg_2, flg_3, idx) \
+   wr32((hw), GLFLXP_RXDID_FLAGS(rxdid, idx), \
+(((flg_0) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S) & \
+ GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M) | \
+(((flg_1) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_S) & \
+ GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_M) | \
+(((flg_2) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_S) & \
+ GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_M) | \
+(((flg_3) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_S) & \
+ GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_M))
+
+
+/**
+ * ice_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the MAC type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ */
+static enum ice_status ice_set_mac_type(struct ice_hw *hw)
+{
+   enum ice_status status = ICE_SUCCESS;
+
+   ice_debug(hw, ICE_DBG_TRACE, "ice_set_mac_type\n");
+
+   if (hw->vendor_id == ICE_INTEL_VENDOR_ID) {
+   switch (hw->device_id) {
+   default:
+   hw->mac_type = ICE_MAC_GENERIC;
+   break;
+   }
+   } else {
+   status = ICE_ERR_DEVICE_NOT_SUPPORTED;
+   }
+
+   ice_debug(hw, ICE_DBG_INIT, "found mac_type: %d, status: %d\n",
+ hw->mac_type, status);
+
+   return status;
+}
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+void ice_dev_onetime_setup(struct ice_hw *hw)
+{
+   /* configure Rx - set non pxe mode */
+   wr32(hw, GLLAN_RCTL_0, 0x1);
+
+
+
+}
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+/**
+ * ice_clear_pf_cfg - Clear PF configuration
+ * @hw: pointer to the hardware structure
+ *
+ * Clears any existing PF configuration (VSIs, VSI lists, switch rules, port
+ * configuration, flow director filters, etc.).
+ */
+enum ice_status ice_clear_pf_cfg(struct ice_hw *hw)
+{
+   struct ice_aq_desc desc;
+
+   ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pf_cfg);
+
+   return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_manage_mac_read - manage MAC address read command
+ * @hw: pointer to the hw struct
+ * @buf: a virtual buffer to hold the manage MAC read response
+ * @buf_size: Size of the virtual buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function is used to return per PF station MAC address (0x0107).
+ * NOTE: Upon successful completion of this command, MAC address information
+ * is returned in user specified buffer. Please interpret user specified
+ * buffer as "manage_mac_read" response.
+ * Response such as various MAC addresses are stored in HW struct (port.mac)
+ * ice_aq_discover_caps is expected to be called before this function is called.
+ */
+static enum ice_status
+ice_aq_manage_mac_read(struct ice_hw *hw, void *buf, u16 buf_size,
+  struct ice_sq_cd *cd)
+{
+   struct ice_aqc_manage_mac_read_resp *resp;
+   struct ice_aqc_manage_mac_read *cmd;
+   struct ice_aq_desc desc;
+   enum ice_status status;
+   u16 flags;
+   u8 i;
+
+   cmd = &desc.params.mac_read;
+
+   if (buf_size < sizeof(*resp))
+   return ICE_ERR_BUF_TOO_SHORT;
+
+   ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_read);
+
+   status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+   if (status)
+   return status;
+
+   resp = (struct ice_aqc_manage_mac_read_resp *)buf;
+   flags = LE16_TO_CPU(cmd->flags) & ICE_AQC_MAN_MAC_READ_M;
+
+   if (!(flags & ICE_AQC_MAN_MAC_LAN_ADDR_VALID)) {
+   ice_debug(hw, ICE_DBG_LAN, "got invalid MAC address\n");
+   

[dpdk-dev] [PATCH v5 13/31] net/ice/base: add structures for RX/TX queues

2018-12-16 Thread Wenzhuo Lu
From: Paul M Stillwell Jr 

Add the structures that define how the RX/TX queues
are used.

Signed-off-by: Paul M Stillwell Jr 
---
 drivers/net/ice/base/ice_lan_tx_rx.h | 2291 ++
 1 file changed, 2291 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_lan_tx_rx.h

diff --git a/drivers/net/ice/base/ice_lan_tx_rx.h 
b/drivers/net/ice/base/ice_lan_tx_rx.h
new file mode 100644
index 000..d27045f
--- /dev/null
+++ b/drivers/net/ice/base/ice_lan_tx_rx.h
@@ -0,0 +1,2291 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_LAN_TX_RX_H_
+#define _ICE_LAN_TX_RX_H_
+#include "ice_osdep.h"
+
+/* RX Descriptors */
+union ice_16byte_rx_desc {
+   struct {
+   __le64 pkt_addr; /* Packet buffer address */
+   __le64 hdr_addr; /* Header buffer address */
+   } read;
+   struct {
+   struct {
+   struct {
+   __le16 mirroring_status;
+   __le16 l2tag1;
+   } lo_dword;
+   union {
+   __le32 rss; /* RSS Hash */
+   __le32 fd_id; /* Flow Director filter id */
+   } hi_dword;
+   } qword0;
+   struct {
+   /* ext status/error/PTYPE/length */
+   __le64 status_error_len;
+   } qword1;
+   } wb;  /* writeback */
+};
+
+union ice_32byte_rx_desc {
+   struct {
+   __le64 pkt_addr; /* Packet buffer address */
+   __le64 hdr_addr; /* Header buffer address */
+   /* bit 0 of hdr_addr is DD bit */
+   __le64 rsvd1;
+   __le64 rsvd2;
+   } read;
+   struct {
+   struct {
+   struct {
+   __le16 mirroring_status;
+   __le16 l2tag1;
+   } lo_dword;
+   union {
+   __le32 rss; /* RSS Hash */
+   __le32 fd_id; /* Flow Director filter id */
+   } hi_dword;
+   } qword0;
+   struct {
+   /* status/error/PTYPE/length */
+   __le64 status_error_len;
+   } qword1;
+   struct {
+   __le16 ext_status; /* extended status */
+   __le16 rsvd;
+   __le16 l2tag2_1;
+   __le16 l2tag2_2;
+   } qword2;
+   struct {
+   __le32 reserved;
+   __le32 fd_id;
+   } qword3;
+   } wb; /* writeback */
+};
+
+struct ice_fltr_desc {
+   __le64 qidx_compq_space_stat;
+   __le64 dtype_cmd_vsi_fdid;
+};
+
+#define ICE_FXD_FLTR_QW0_QINDEX_S  0
+#define ICE_FXD_FLTR_QW0_QINDEX_M  (0x7FFULL << ICE_FXD_FLTR_QW0_QINDEX_S)
+#define ICE_FXD_FLTR_QW0_COMP_Q_S  11
+#define ICE_FXD_FLTR_QW0_COMP_Q_M  BIT_ULL(ICE_FXD_FLTR_QW0_COMP_Q_S)
+#define ICE_FXD_FLTR_QW0_COMP_Q_ZERO   0x0ULL
+#define ICE_FXD_FLTR_QW0_COMP_Q_QINDX  0x1ULL
+
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_S 12
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_M \
+   (0x3ULL << ICE_FXD_FLTR_QW0_COMP_REPORT_S)
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_NONE  0x0ULL
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_SW_FAIL   0x1ULL
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_SW0x2ULL
+
+#define ICE_FXD_FLTR_QW0_FD_SPACE_S14
+#define ICE_FXD_FLTR_QW0_FD_SPACE_M(0x3ULL << ICE_FXD_FLTR_QW0_FD_SPACE_S)
+#define ICE_FXD_FLTR_QW0_FD_SPACE_GUAR 0x0ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_BEST_EFFORT  0x1ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_GUAR_BEST0x2ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_BEST_GUAR0x3ULL
+
+#define ICE_FXD_FLTR_QW0_STAT_CNT_S16
+#define ICE_FXD_FLTR_QW0_STAT_CNT_M\
+   (0x1FFFULL << ICE_FXD_FLTR_QW0_STAT_CNT_S)
+#define ICE_FXD_FLTR_QW0_STAT_ENA_S29
+#define ICE_FXD_FLTR_QW0_STAT_ENA_M(0x3ULL << ICE_FXD_FLTR_QW0_STAT_ENA_S)
+#define ICE_FXD_FLTR_QW0_STAT_ENA_NONE 0x0ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_PKTS 0x1ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_BYTES0x2ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_PKTS_BYTES   0x3ULL
+
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_S   31
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_M   BIT_ULL(ICE_FXD_FLTR_QW0_EVICT_ENA_S)
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_FALSE   0x0ULL
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_TRUE0x1ULL
+
+#define ICE_FXD_FLTR_QW0_TO_Q_S32
+#define ICE_FXD_FLTR_QW0_TO_Q_M (0x7ULL << ICE_FXD_FLTR_QW0_TO_Q_S)
+#define ICE_FXD_FLTR_QW0_TO_Q_EQUALS_QINDEX0x0ULL
+
+#define ICE_FXD_FLTR_QW0_TO_Q_PRI_S35
+#define ICE_FXD_FLTR_QW0_TO_Q_PRI_M   

[dpdk-dev] [PATCH v5 11/31] net/ice/base: add various headers

2018-12-16 Thread Wenzhuo Lu
From: Paul M Stillwell Jr 

Add various headers that define status codes and
basic defines for use in the code.

Signed-off-by: Paul M Stillwell Jr 
---
 drivers/net/ice/base/ice_alloc.h | 22 ++
 drivers/net/ice/base/ice_flex_type.h | 19 +++
 drivers/net/ice/base/ice_flow.h  |  8 +++
 drivers/net/ice/base/ice_status.h| 45 
 4 files changed, 94 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_alloc.h
 create mode 100644 drivers/net/ice/base/ice_flex_type.h
 create mode 100644 drivers/net/ice/base/ice_flow.h
 create mode 100644 drivers/net/ice/base/ice_status.h

diff --git a/drivers/net/ice/base/ice_alloc.h b/drivers/net/ice/base/ice_alloc.h
new file mode 100644
index 000..7883104
--- /dev/null
+++ b/drivers/net/ice/base/ice_alloc.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_ALLOC_H_
+#define _ICE_ALLOC_H_
+
+/* Memory types */
+enum ice_memset_type {
+   ICE_NONDMA_MEM = 0,
+   ICE_DMA_MEM
+};
+
+/* Memcpy types */
+enum ice_memcpy_type {
+   ICE_NONDMA_TO_NONDMA = 0,
+   ICE_NONDMA_TO_DMA,
+   ICE_DMA_TO_DMA,
+   ICE_DMA_TO_NONDMA
+};
+
+#endif /* _ICE_ALLOC_H_ */
diff --git a/drivers/net/ice/base/ice_flex_type.h 
b/drivers/net/ice/base/ice_flex_type.h
new file mode 100644
index 000..84a38cb
--- /dev/null
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_FLEX_TYPE_H_
+#define _ICE_FLEX_TYPE_H_
+
+/* Extraction Sequence (Field Vector) Table */
+struct ice_fv_word {
+   u8 prot_id;
+   u8 off; /* Offset within the protocol header */
+};
+
+#define ICE_MAX_FV_WORDS 48
+struct ice_fv {
+   struct ice_fv_word ew[ICE_MAX_FV_WORDS];
+};
+
+#endif /* _ICE_FLEX_TYPE_H_ */
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
new file mode 100644
index 000..228a2c0
--- /dev/null
+++ b/drivers/net/ice/base/ice_flow.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_FLOW_H_
+#define _ICE_FLOW_H_
+
+#endif /* _ICE_FLOW_H_ */
diff --git a/drivers/net/ice/base/ice_status.h 
b/drivers/net/ice/base/ice_status.h
new file mode 100644
index 000..898bfa6
--- /dev/null
+++ b/drivers/net/ice/base/ice_status.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_STATUS_H_
+#define _ICE_STATUS_H_
+
+/* Error Codes */
+enum ice_status {
+   ICE_SUCCESS = 0,
+
+   /* Generic codes : Range -1..-49 */
+   ICE_ERR_PARAM   = -1,
+   ICE_ERR_NOT_IMPL= -2,
+   ICE_ERR_NOT_READY   = -3,
+   ICE_ERR_BAD_PTR = -5,
+   ICE_ERR_INVAL_SIZE  = -6,
+   ICE_ERR_DEVICE_NOT_SUPPORTED= -8,
+   ICE_ERR_RESET_FAILED= -9,
+   ICE_ERR_FW_API_VER  = -10,
+   ICE_ERR_NO_MEMORY   = -11,
+   ICE_ERR_CFG = -12,
+   ICE_ERR_OUT_OF_RANGE= -13,
+   ICE_ERR_ALREADY_EXISTS  = -14,
+   ICE_ERR_DOES_NOT_EXIST  = -15,
+   ICE_ERR_IN_USE  = -16,
+   ICE_ERR_MAX_LIMIT   = -17,
+   ICE_ERR_RESET_ONGOING   = -18,
+   ICE_ERR_HW_TABLE= -19,
+
+   /* NVM specific error codes: Range -50..-59 */
+   ICE_ERR_NVM = -50,
+   ICE_ERR_NVM_CHECKSUM= -51,
+   ICE_ERR_BUF_TOO_SHORT   = -52,
+   ICE_ERR_NVM_BLANK_MODE  = -53,
+
+   /* ARQ/ASQ specific error codes. Range -100..-109 */
+   ICE_ERR_AQ_ERROR= -100,
+   ICE_ERR_AQ_TIMEOUT  = -101,
+   ICE_ERR_AQ_FULL = -102,
+   ICE_ERR_AQ_NO_WORK  = -103,
+   ICE_ERR_AQ_EMPTY= -104,
+};
+
+#endif /* _ICE_STATUS_H_ */
-- 
1.9.3



[dpdk-dev] [PATCH v5 14/31] net/ice/base: add OS specific implementation

2018-12-16 Thread Wenzhuo Lu
Add some macro definitions and small functions that are specific to DPDK.
Add a readme too.

Signed-off-by: Wenzhuo Lu 
---
 drivers/net/ice/base/README  |  22 ++
 drivers/net/ice/base/ice_osdep.h | 524 +++
 2 files changed, 546 insertions(+)
 create mode 100644 drivers/net/ice/base/README
 create mode 100644 drivers/net/ice/base/ice_osdep.h

diff --git a/drivers/net/ice/base/README b/drivers/net/ice/base/README
new file mode 100644
index 000..708f607
--- /dev/null
+++ b/drivers/net/ice/base/README
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+Intel® ICE driver
+=================
+
+This directory contains the source code of the FreeBSD ice driver, version
+2018.12.11, released by the team which develops the base drivers for the
+ice NIC. The base/ directory contains the original source package.
+This driver is valid for the product(s) listed below:
+
+* Intel® Ethernet Network Adapters E810
+
+Updating the driver
+===================
+
+NOTE: The source code in this directory should not be modified apart from
+the following file(s):
+
+ice_osdep.h
diff --git a/drivers/net/ice/base/ice_osdep.h b/drivers/net/ice/base/ice_osdep.h
new file mode 100644
index 000..dd25b75
--- /dev/null
+++ b/drivers/net/ice/base/ice_osdep.h
@@ -0,0 +1,524 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_OSDEP_H_
+#define _ICE_OSDEP_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../ice_logs.h"
+
+#define INLINE inline
+#define STATIC static
+
+typedef uint8_t  u8;
+typedef int8_t   s8;
+typedef uint16_t u16;
+typedef int16_t  s16;
+typedef uint32_t u32;
+typedef int32_t  s32;
+typedef uint64_t u64;
+typedef uint64_t s64;
+
+#define __iomem
+#define hw_dbg(hw, S, A...) do {} while (0)
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+#define low_16_bits(x)   ((x) & 0xFFFF)
+#define high_16_bits(x)  (((x) & 0xFFFF0000) >> 16)
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN  6
+#endif
+
+#ifndef __le16
+#define __le16  uint16_t
+#endif
+#ifndef __le32
+#define __le32  uint32_t
+#endif
+#ifndef __le64
+#define __le64  uint64_t
+#endif
+#ifndef __be16
+#define __be16  uint16_t
+#endif
+#ifndef __be32
+#define __be32  uint32_t
+#endif
+#ifndef __be64
+#define __be64  uint64_t
+#endif
+
+#ifndef __always_unused
+#define __always_unused  __attribute__((unused))
+#endif
+#ifndef __maybe_unused
+#define __maybe_unused  __attribute__((unused))
+#endif
+#ifndef __packed
+#define __packed  __attribute__((packed))
+#endif
+
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif
+
+#define FALSE   0
+#define TRUE1
+#define false   0
+#define true1
+
+#define min(a, b) RTE_MIN(a, b)
+#define max(a, b) RTE_MAX(a, b)
+
+#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof(arr[0]))
+#define FIELD_SIZEOF(t, f) (sizeof(((t *)0)->f))
+#define MAKEMASK(m, s) ((m) << (s))
+
+#define DEBUGOUT(S, A...) PMD_DRV_LOG_RAW(DEBUG, S, ##A)
+#define DEBUGFUNC(F) PMD_DRV_LOG_RAW(DEBUG, F)
+
+#define ice_debug(h, m, s, ...)\
+do {   \
+   if (((m) & (h)->debug_mask))\
+   PMD_DRV_LOG_RAW(DEBUG, "ice %02x.%x " s,\
+   (h)->bus.device, (h)->bus.func, \
+   ##__VA_ARGS__); \
+} while (0)
+
+#define ice_info(hw, fmt, args...) ice_debug(hw, ICE_DBG_ALL, fmt, ##args)
+#define ice_warn(hw, fmt, args...) ice_debug(hw, ICE_DBG_ALL, fmt, ##args)
+#define ice_debug_array(hw, type, rowsize, groupsize, buf, len) \
+do {   \
+   struct ice_hw *hw_l = hw;   \
+   u16 len_l = len;\
+   u8 *buf_l = buf;\
+   int i;  \
+   for (i = 0; i < len_l; i += 8)  \
+   ice_debug(hw_l, type,   \
+ "0x%04X  0x%016"PRIx64"\n",   \
+ i, *((u64 *)((buf_l) + i)));  \
+} while (0)
+#define ice_snprintf snprintf
+#ifndef SNPRINTF
+#define SNPRINTF ice_snprintf
+#endif
+
+#define ICE_PCI_REG(reg) rte_read32(reg)
+#define ICE_PCI_REG_ADDR(a, reg) \
+   ((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+static inline uint32_t ice

[dpdk-dev] [PATCH v5 16/31] net/ice: support device and queue ops

2018-12-16 Thread Wenzhuo Lu
Normally, when starting/stopping the device, its queues should be
started/stopped too. Support both in this patch.

Below ops are added,
dev_configure
dev_start
dev_stop
dev_close
dev_reset
rx_queue_start
rx_queue_stop
tx_queue_start
tx_queue_stop
rx_queue_setup
rx_queue_release
tx_queue_setup
tx_queue_release
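
For reference, a minimal sketch of the generic ethdev calls that end up in
these new ops. Port id, descriptor counts and the mempool are illustrative
values supplied by the application, not part of this patch:

#include <rte_ethdev.h>
#include <rte_mempool.h>

static int example_start_port(uint16_t port, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf conf = { 0 };	/* defaults are enough for a smoke test */

	if (rte_eth_dev_configure(port, 1, 1, &conf) < 0)	/* -> dev_configure */
		return -1;
	if (rte_eth_rx_queue_setup(port, 0, 1024, rte_eth_dev_socket_id(port),
				   NULL, mb_pool) < 0)		/* -> rx_queue_setup */
		return -1;
	if (rte_eth_tx_queue_setup(port, 0, 1024, rte_eth_dev_socket_id(port),
				   NULL) < 0)			/* -> tx_queue_setup */
		return -1;
	return rte_eth_dev_start(port);	/* -> dev_start, which starts the queues */
}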

Signed-off-by: Wenzhuo Lu 
Signed-off-by: Qiming Yang 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Jingjing Wu 
---
 config/common_base   |   2 +
 doc/guides/nics/features/ice.ini |   1 +
 doc/guides/nics/ice.rst  |   8 +
 drivers/net/ice/Makefile |   3 +-
 drivers/net/ice/ice_ethdev.c | 198 -
 drivers/net/ice/ice_lan_rxtx.c   | 927 +++
 drivers/net/ice/ice_rxtx.h   |  20 +
 drivers/net/ice/meson.build  |   3 +-
 8 files changed, 1159 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/ice/ice_lan_rxtx.c

diff --git a/config/common_base b/config/common_base
index 872f440..a342760 100644
--- a/config/common_base
+++ b/config/common_base
@@ -303,6 +303,8 @@ CONFIG_RTE_LIBRTE_ICE_PMD=y
 CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_ICE_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n
+CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC=y
+CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=n
 
 # Compile burst-oriented AVF PMD driver
 #
diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 085e848..a43a9cd 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Queue start/stop = Y
 BSD nic_uio  = Y
 Linux UIO= Y
 Linux VFIO   = Y
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 946ed04..96a594f 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -38,6 +38,14 @@ Please note that enabling debugging options may affect system performance.
 
   Toggle display of generic debugging messages.
 
+- ``CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC`` (default ``y``)
+
+  Toggle bulk allocation for RX.
+
+- ``CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC`` (default ``n``)
+
+  Toggle to use a 16-byte RX descriptor; by default the RX descriptor is 32 bytes.
+
 Runtime Config Options
 ~~
 
diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile
index 70f23e3..ff93800 100644
--- a/drivers/net/ice/Makefile
+++ b/drivers/net/ice/Makefile
@@ -11,7 +11,7 @@ LIB = librte_pmd_ice.a
 CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
 
-LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci
+LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci -lrte_mempool
 
 EXPORT_MAP := rte_pmd_ice_version.map
 
@@ -50,5 +50,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_switch.c
 SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_nvm.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_lan_rxtx.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 4f0c819..2c86b3d 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -14,6 +14,12 @@
 int ice_logtype_init;
 int ice_logtype_driver;
 
+static int ice_dev_configure(struct rte_eth_dev *dev);
+static int ice_dev_start(struct rte_eth_dev *dev);
+static void ice_dev_stop(struct rte_eth_dev *dev);
+static void ice_dev_close(struct rte_eth_dev *dev);
+static int ice_dev_reset(struct rte_eth_dev *dev);
+
 static const struct rte_pci_id pci_id_ice_map[] = {
{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_QSFP) },
@@ -22,7 +28,19 @@
 };
 
 static const struct eth_dev_ops ice_eth_dev_ops = {
-   .dev_configure= NULL,
+   .dev_configure= ice_dev_configure,
+   .dev_start= ice_dev_start,
+   .dev_stop = ice_dev_stop,
+   .dev_close= ice_dev_close,
+   .dev_reset= ice_dev_reset,
+   .rx_queue_start   = ice_rx_queue_start,
+   .rx_queue_stop= ice_rx_queue_stop,
+   .tx_queue_start   = ice_tx_queue_start,
+   .tx_queue_stop= ice_tx_queue_stop,
+   .rx_queue_setup   = ice_rx_queue_setup,
+   .rx_queue_release = ice_rx_queue_release,
+   .tx_queue_setup   = ice_tx_queue_setup,
+   .tx_queue_release = ice_tx_queue_release,
 };
 
 static void
@@ -560,11 +578,41 @@
 }
 
 static void
+ice_dev_stop(struct rte_eth_dev *dev)
+{
+   struct rte_eth_dev_data *data = dev->data;
+   struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+   uint16_t i;
+
+   /* avoid stopping again */
+   if (pf->adapter_stopped)
+   return;
+
+   /* stop and clear all Rx queues */
+  

[dpdk-dev] [PATCH v5 18/31] net/ice: support packet type getting

2018-12-16 Thread Wenzhuo Lu
Add ops dev_supported_ptypes_get.
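
For reference, a hedged sketch of how an application reads the advertised
packet types back through the generic API (the 64-entry buffer is an arbitrary
size chosen for the example):

#include <stdio.h>
#include <rte_mbuf_ptype.h>
#include <rte_ethdev.h>

static void example_dump_ptypes(uint16_t port)
{
	uint32_t ptypes[64];
	int i, num;

	/* ends up in ice_dev_supported_ptypes_get() */
	num = rte_eth_dev_get_supported_ptypes(port, RTE_PTYPE_ALL_MASK,
					       ptypes, 64);
	for (i = 0; i < num && i < 64; i++)
		printf("port %u supports ptype 0x%08x\n", port, ptypes[i]);
}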

Signed-off-by: Wei Zhao 
Signed-off-by: Wenzhuo Lu 
---
 drivers/net/ice/ice_ethdev.c   |   2 +
 drivers/net/ice/ice_lan_rxtx.c | 601 +
 drivers/net/ice/ice_rxtx.h |   2 +
 3 files changed, 605 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index c572ba6..c916bf2 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -44,6 +44,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
.tx_queue_setup   = ice_tx_queue_setup,
.tx_queue_release = ice_tx_queue_release,
.dev_infos_get= ice_dev_info_get,
+   .dev_supported_ptypes_get = ice_dev_supported_ptypes_get,
 };
 
 static void
@@ -493,6 +494,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 
dev->dev_ops = &ice_eth_dev_ops;
 
+   ice_set_default_ptype_table(dev);
pci_dev = RTE_DEV_TO_PCI(dev->device);
 
pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 5c2301a..8230bb2 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -884,6 +884,42 @@
rte_free(q);
 }
 
+const uint32_t *
+ice_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+   static const uint32_t ptypes[] = {
+   /* refers to ice_get_default_pkt_type() */
+   RTE_PTYPE_L2_ETHER,
+   RTE_PTYPE_L2_ETHER_LLDP,
+   RTE_PTYPE_L2_ETHER_ARP,
+   RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+   RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+   RTE_PTYPE_L4_FRAG,
+   RTE_PTYPE_L4_ICMP,
+   RTE_PTYPE_L4_NONFRAG,
+   RTE_PTYPE_L4_SCTP,
+   RTE_PTYPE_L4_TCP,
+   RTE_PTYPE_L4_UDP,
+   RTE_PTYPE_TUNNEL_GRENAT,
+   RTE_PTYPE_TUNNEL_IP,
+   RTE_PTYPE_INNER_L2_ETHER,
+   RTE_PTYPE_INNER_L2_ETHER_VLAN,
+   RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN,
+   RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN,
+   RTE_PTYPE_INNER_L4_FRAG,
+   RTE_PTYPE_INNER_L4_ICMP,
+   RTE_PTYPE_INNER_L4_NONFRAG,
+   RTE_PTYPE_INNER_L4_SCTP,
+   RTE_PTYPE_INNER_L4_TCP,
+   RTE_PTYPE_INNER_L4_UDP,
+   RTE_PTYPE_TUNNEL_GTPC,
+   RTE_PTYPE_TUNNEL_GTPU,
+   RTE_PTYPE_UNKNOWN
+   };
+
+   return ptypes;
+}
+
 void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
@@ -925,3 +961,568 @@
}
dev->data->nb_tx_queues = 0;
 }
+
+/* The hardware datasheet explains what each value means in more detail.
+ *
+ * @note: update ice_dev_supported_ptypes_get() if anything changes here.
+ */
+static inline uint32_t
+ice_get_default_pkt_type(uint16_t ptype)
+{
+   static const uint32_t type_table[ICE_MAX_PKT_TYPE]
+   __rte_cache_aligned = {
+   /* L2 types */
+   /* [0] reserved */
+   [1] = RTE_PTYPE_L2_ETHER,
+   /* [2] - [5] reserved */
+   [6] = RTE_PTYPE_L2_ETHER_LLDP,
+   /* [7] - [10] reserved */
+   [11] = RTE_PTYPE_L2_ETHER_ARP,
+   /* [12] - [21] reserved */
+
+   /* Non tunneled IPv4 */
+   [22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+  RTE_PTYPE_L4_FRAG,
+   [23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+  RTE_PTYPE_L4_NONFRAG,
+   [24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+  RTE_PTYPE_L4_UDP,
+   /* [25] reserved */
+   [26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+  RTE_PTYPE_L4_TCP,
+   [27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+  RTE_PTYPE_L4_SCTP,
+   [28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+  RTE_PTYPE_L4_ICMP,
+
+   /* IPv4 --> IPv4 */
+   [29] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+  RTE_PTYPE_TUNNEL_IP |
+  RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+  RTE_PTYPE_INNER_L4_FRAG,
+   [30] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+  RTE_PTYPE_TUNNEL_IP |
+  RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+  RTE_PTYPE_INNER_L4_NONFRAG,
+   [31] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+  RTE_PTYPE_TUNNEL_IP |
+  RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+  RTE_PTYPE_INNER_L4_UDP,
+   /* [32] reserved */
+   [33] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |

[dpdk-dev] [PATCH v5 17/31] net/ice: support getting device information

2018-12-16 Thread Wenzhuo Lu
Add ops dev_infos_get.
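
For reference, a small sketch of reading back the limits and capabilities this
op reports; nothing below is specific to this patch beyond the fields it fills:

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

static void example_print_limits(uint16_t port)
{
	struct rte_eth_dev_info info;

	rte_eth_dev_info_get(port, &info);	/* ends up in ice_dev_info_get() */
	printf("port %u: max rxq %u, max txq %u, reta size %u, rx offloads 0x%" PRIx64 "\n",
	       port, info.max_rx_queues, info.max_tx_queues,
	       info.reta_size, info.rx_offload_capa);
}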

Signed-off-by: Wenzhuo Lu 
Signed-off-by: Qiming Yang 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Jingjing Wu 
---
 doc/guides/nics/features/ice.ini |   1 +
 drivers/net/ice/ice_ethdev.c | 103 +++
 drivers/net/ice/ice_ethdev.h |  13 +
 3 files changed, 117 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index a43a9cd..af8f0d3 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = Y
 Queue start/stop = Y
 BSD nic_uio  = Y
 Linux UIO= Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 2c86b3d..c572ba6 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -19,6 +19,8 @@
 static void ice_dev_stop(struct rte_eth_dev *dev);
 static void ice_dev_close(struct rte_eth_dev *dev);
 static int ice_dev_reset(struct rte_eth_dev *dev);
+static void ice_dev_info_get(struct rte_eth_dev *dev,
+struct rte_eth_dev_info *dev_info);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -41,6 +43,7 @@
.rx_queue_release = ice_rx_queue_release,
.tx_queue_setup   = ice_tx_queue_setup,
.tx_queue_release = ice_tx_queue_release,
+   .dev_infos_get= ice_dev_info_get,
 };
 
 static void
@@ -790,6 +793,106 @@ static int ice_init_rss(struct ice_pf *pf)
return 0;
 }
 
+static void
+ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+   struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+   struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+   struct ice_vsi *vsi = pf->main_vsi;
+   struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
+
+   dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
+   dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
+   dev_info->max_rx_queues = vsi->nb_qps;
+   dev_info->max_tx_queues = vsi->nb_qps;
+   dev_info->max_mac_addrs = vsi->max_macaddrs;
+   dev_info->max_vfs = pci_dev->max_vfs;
+
+   dev_info->rx_offload_capa =
+   DEV_RX_OFFLOAD_VLAN_STRIP |
+   DEV_RX_OFFLOAD_IPV4_CKSUM |
+   DEV_RX_OFFLOAD_UDP_CKSUM |
+   DEV_RX_OFFLOAD_TCP_CKSUM |
+   DEV_RX_OFFLOAD_QINQ_STRIP |
+   DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+   DEV_RX_OFFLOAD_VLAN_EXTEND |
+   DEV_RX_OFFLOAD_JUMBO_FRAME |
+   DEV_RX_OFFLOAD_KEEP_CRC |
+   DEV_RX_OFFLOAD_SCATTER |
+   DEV_RX_OFFLOAD_VLAN_FILTER;
+   dev_info->tx_offload_capa =
+   DEV_TX_OFFLOAD_VLAN_INSERT |
+   DEV_TX_OFFLOAD_QINQ_INSERT |
+   DEV_TX_OFFLOAD_IPV4_CKSUM |
+   DEV_TX_OFFLOAD_UDP_CKSUM |
+   DEV_TX_OFFLOAD_TCP_CKSUM |
+   DEV_TX_OFFLOAD_SCTP_CKSUM |
+   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+   DEV_TX_OFFLOAD_TCP_TSO |
+   DEV_TX_OFFLOAD_MULTI_SEGS |
+   DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+   dev_info->rx_queue_offload_capa = 0;
+   dev_info->tx_queue_offload_capa = 0;
+
+   dev_info->reta_size = hw->func_caps.common_cap.rss_table_size;
+   dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
+   dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
+
+   dev_info->default_rxconf = (struct rte_eth_rxconf) {
+   .rx_thresh = {
+   .pthresh = ICE_DEFAULT_RX_PTHRESH,
+   .hthresh = ICE_DEFAULT_RX_HTHRESH,
+   .wthresh = ICE_DEFAULT_RX_WTHRESH,
+   },
+   .rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
+   .rx_drop_en = 0,
+   .offloads = 0,
+   };
+
+   dev_info->default_txconf = (struct rte_eth_txconf) {
+   .tx_thresh = {
+   .pthresh = ICE_DEFAULT_TX_PTHRESH,
+   .hthresh = ICE_DEFAULT_TX_HTHRESH,
+   .wthresh = ICE_DEFAULT_TX_WTHRESH,
+   },
+   .tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
+   .tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
+   .offloads = 0,
+   };
+
+   dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+   .nb_max = ICE_MAX_RING_DESC,
+   .nb_min = ICE_MIN_RING_DESC,
+   .nb_align = ICE_ALIGN_RING_DESC,
+   };
+
+   dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+   .nb_max = ICE_MAX_RING_DESC,
+   .nb_min = ICE_MIN_RING_DESC,
+   .nb_align = ICE_ALIGN_RING_DESC,
+   };
+
+   dev_inf

[dpdk-dev] [PATCH v5 21/31] net/ice: support MAC ops

2018-12-16 Thread Wenzhuo Lu
Add below ops,
mac_addr_set
mac_addr_add
mac_addr_remove
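
For reference, a hedged sketch of the ethdev calls that reach these ops; the
address used is an arbitrary locally administered example:

#include <rte_ethdev.h>
#include <rte_ether.h>

static int example_mac_ops(uint16_t port)
{
	struct ether_addr addr = { .addr_bytes = { 0x02, 0, 0, 0, 0, 0x01 } };

	if (rte_eth_dev_default_mac_addr_set(port, &addr) != 0)	/* -> mac_addr_set */
		return -1;
	if (rte_eth_dev_mac_addr_add(port, &addr, 0) != 0)	/* -> mac_addr_add */
		return -1;
	return rte_eth_dev_mac_addr_remove(port, &addr);	/* -> mac_addr_remove */
}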

Signed-off-by: Wenzhuo Lu 
Signed-off-by: Qiming Yang 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Jingjing Wu 
---
 doc/guides/nics/features/ice.ini |   2 +
 drivers/net/ice/ice_ethdev.c | 236 +++
 2 files changed, 238 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index fab6442..759a036 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -10,6 +10,8 @@ Link status event= Y
 Queue start/stop = Y
 MTU update   = Y
 Jumbo frame  = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
 BSD nic_uio  = Y
 Linux UIO= Y
 Linux VFIO   = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 0c0efce..29840fd 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -24,6 +24,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 static int ice_link_update(struct rte_eth_dev *dev,
   int wait_to_complete);
 static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+static int ice_macaddr_set(struct rte_eth_dev *dev,
+  struct ether_addr *mac_addr);
+static int ice_macaddr_add(struct rte_eth_dev *dev,
+  struct ether_addr *mac_addr,
+  __rte_unused uint32_t index,
+  uint32_t pool);
+static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -50,6 +57,9 @@ static int ice_link_update(struct rte_eth_dev *dev,
.dev_supported_ptypes_get = ice_dev_supported_ptypes_get,
.link_update  = ice_link_update,
.mtu_set  = ice_mtu_set,
+   .mac_addr_set = ice_macaddr_set,
+   .mac_addr_add = ice_macaddr_add,
+   .mac_addr_remove  = ice_macaddr_remove,
 };
 
 static void
@@ -336,6 +346,130 @@ static int ice_link_update(struct rte_eth_dev *dev,
return 0;
 }
 
+/* Find out specific MAC filter */
+static struct ice_mac_filter *
+ice_find_mac_filter(struct ice_vsi *vsi, struct ether_addr *macaddr)
+{
+   struct ice_mac_filter *f;
+
+   TAILQ_FOREACH(f, &vsi->mac_list, next) {
+   if (is_same_ether_addr(macaddr, &f->mac_info.mac_addr))
+   return f;
+   }
+
+   return NULL;
+}
+
+static int
+ice_add_mac_filter(struct ice_vsi *vsi, struct ether_addr *mac_addr)
+{
+   struct ice_fltr_list_entry *m_list_itr = NULL;
+   struct ice_mac_filter *f;
+   struct LIST_HEAD_TYPE list_head;
+   struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+   int ret = 0;
+
+   /* If it's added and configured, return */
+   f = ice_find_mac_filter(vsi, mac_addr);
+   if (f) {
+   PMD_DRV_LOG(INFO, "This MAC filter already exists.");
+   return 0;
+   }
+
+   INIT_LIST_HEAD(&list_head);
+
+   m_list_itr = (struct ice_fltr_list_entry *)
+   ice_malloc(hw, sizeof(*m_list_itr));
+   if (!m_list_itr) {
+   ret = -ENOMEM;
+   goto DONE;
+   }
+   ice_memcpy(m_list_itr->fltr_info.l_data.mac.mac_addr,
+  mac_addr, ETH_ALEN, ICE_NONDMA_TO_NONDMA);
+   m_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+   m_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+   m_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+   m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+   m_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+   LIST_ADD(&m_list_itr->list_entry, &list_head);
+
+   /* Add the mac */
+   ret = ice_add_mac(hw, &list_head);
+   if (ret != ICE_SUCCESS) {
+   PMD_DRV_LOG(ERR, "Failed to add MAC filter");
+   ret = -EINVAL;
+   goto DONE;
+   }
+   /* Add the mac addr into mac list */
+   f = rte_zmalloc(NULL, sizeof(*f), 0);
+   if (!f) {
+   PMD_DRV_LOG(ERR, "failed to allocate memory");
+   ret = -ENOMEM;
+   goto DONE;
+   }
+   rte_memcpy(&f->mac_info.mac_addr, mac_addr, ETH_ADDR_LEN);
+   TAILQ_INSERT_TAIL(&vsi->mac_list, f, next);
+   vsi->mac_num++;
+
+   ret = 0;
+
+DONE:
+   rte_free(m_list_itr);
+   return ret;
+}
+
+static int
+ice_remove_mac_filter(struct ice_vsi *vsi, struct ether_addr *mac_addr)
+{
+   struct ice_fltr_list_entry *m_list_itr = NULL;
+   struct ice_mac_filter *f;
+   struct LIST_HEAD_TYPE list_head;
+   struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+   int ret = 0;
+
+   /* Can't find it, return an error */
+   f = ice_find_mac_filter(vsi, mac_addr);
+   if (!f)
+   return -EINVAL;
+
+   INIT_LIST_HEAD(&list_head);
+
+  

[dpdk-dev] [PATCH v5 23/31] net/ice: support RSS

2018-12-16 Thread Wenzhuo Lu
Add below ops,
reta_update
reta_query
rss_hash_update
rss_hash_conf_get
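
For reference, a hedged sketch of a redirection table update through the
generic API, spreading the entries evenly over nb_queues (port id and queue
count come from the application):

#include <string.h>
#include <rte_ethdev.h>

static int example_spread_reta(uint16_t port, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64 reta[ETH_RSS_RETA_SIZE_512 /
					     RTE_RETA_GROUP_SIZE];
	struct rte_eth_dev_info info;
	uint16_t i;

	rte_eth_dev_info_get(port, &info);	/* reta_size is reported by the PMD */
	if (info.reta_size > ETH_RSS_RETA_SIZE_512)
		return -1;
	memset(reta, 0, sizeof(reta));
	for (i = 0; i < info.reta_size; i++) {
		reta[i / RTE_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_RETA_GROUP_SIZE);
		reta[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
			i % nb_queues;
	}
	/* -> ice_rss_reta_update() */
	return rte_eth_dev_rss_reta_update(port, reta, info.reta_size);
}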

Signed-off-by: Wenzhuo Lu 
Signed-off-by: Qiming Yang 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Jingjing Wu 
---
 doc/guides/nics/features/ice.ini |   3 +
 drivers/net/ice/ice_ethdev.c | 242 +++
 2 files changed, 245 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 5ac8e56..953a869 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -12,6 +12,9 @@ MTU update   = Y
 Jumbo frame  = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
+RSS hash = Y
+RSS key update   = Y
+RSS reta update  = Y
 VLAN filter  = Y
 VLAN offload = Y
 QinQ offload = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 1d3cc7a..28d0282 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -28,6 +28,16 @@ static int ice_link_update(struct rte_eth_dev *dev,
 static int ice_vlan_tpid_set(struct rte_eth_dev *dev,
 enum rte_vlan_type vlan_type,
 uint16_t tpid);
+static int ice_rss_reta_update(struct rte_eth_dev *dev,
+  struct rte_eth_rss_reta_entry64 *reta_conf,
+  uint16_t reta_size);
+static int ice_rss_reta_query(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size);
+static int ice_rss_hash_update(struct rte_eth_dev *dev,
+  struct rte_eth_rss_conf *rss_conf);
+static int ice_rss_hash_conf_get(struct rte_eth_dev *dev,
+struct rte_eth_rss_conf *rss_conf);
 static int ice_vlan_filter_set(struct rte_eth_dev *dev,
   uint16_t vlan_id,
   int on);
@@ -72,6 +82,10 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
.vlan_filter_set  = ice_vlan_filter_set,
.vlan_offload_set = ice_vlan_offload_set,
.vlan_tpid_set= ice_vlan_tpid_set,
+   .reta_update  = ice_rss_reta_update,
+   .reta_query   = ice_rss_reta_query,
+   .rss_hash_update  = ice_rss_hash_update,
+   .rss_hash_conf_get= ice_rss_hash_conf_get,
.vlan_pvid_set= ice_vlan_pvid_set,
 };
 
@@ -2006,6 +2020,234 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 }
 
 static int
+ice_get_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
+{
+   struct ice_pf *pf = ICE_VSI_TO_PF(vsi);
+   struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+   int ret;
+
+   if (!lut)
+   return -EINVAL;
+
+   if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
+   ret = ice_aq_get_rss_lut(hw, vsi->idx, TRUE,
+lut, lut_size);
+   if (ret) {
+   PMD_DRV_LOG(ERR, "Failed to get RSS lookup table");
+   return -EINVAL;
+   }
+   } else {
+   uint64_t *lut_dw = (uint64_t *)lut;
+   uint16_t i, lut_size_dw = lut_size / 4;
+
+   for (i = 0; i < lut_size_dw; i++)
+   lut_dw[i] = ICE_READ_REG(hw, PFQF_HLUT(i));
+   }
+
+   return 0;
+}
+
+static int
+ice_set_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
+{
+   struct ice_pf *pf = ICE_VSI_TO_PF(vsi);
+   struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+   int ret;
+
+   if (!vsi || !lut)
+   return -EINVAL;
+
+   if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
+   ret = ice_aq_set_rss_lut(hw, vsi->idx, TRUE,
+lut, lut_size);
+   if (ret) {
+   PMD_DRV_LOG(ERR, "Failed to set RSS lookup table");
+   return -EINVAL;
+   }
+   } else {
+   uint64_t *lut_dw = (uint64_t *)lut;
+   uint16_t i, lut_size_dw = lut_size / 4;
+
+   for (i = 0; i < lut_size_dw; i++)
+   ICE_WRITE_REG(hw, PFQF_HLUT(i), lut_dw[i]);
+
+   ice_flush(hw);
+   }
+
+   return 0;
+}
+
+static int
+ice_rss_reta_update(struct rte_eth_dev *dev,
+   struct rte_eth_rss_reta_entry64 *reta_conf,
+   uint16_t reta_size)
+{
+   struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+   struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+   uint16_t i, lut_size = hw->func_caps.common_cap.rss_table_size;
+   uint16_t idx, shift;
+   uint8_t *lut;
+   int ret;
+
+   if (reta_size != lut_size ||
+   reta_size > ETH_RSS_RETA_SIZE_512) {
+   PMD_DRV_LOG(ERR,
+   "The

[dpdk-dev] [PATCH v5 25/31] net/ice: support FW version getting

2018-12-16 Thread Wenzhuo Lu
Add ops fw_version_get.
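
For reference, a short usage sketch; following the ethdev convention, the op
returns the number of bytes needed (including the terminating null) when the
caller's buffer is too small:

#include <stdio.h>
#include <rte_ethdev.h>

static void example_print_fw_version(uint16_t port)
{
	char fw[64];	/* arbitrary, comfortably large buffer */
	int ret = rte_eth_dev_fw_version_get(port, fw, sizeof(fw));

	if (ret == 0)
		printf("port %u firmware: %s\n", port, fw);
	else if (ret > 0)
		printf("port %u: need a %d-byte buffer for the FW version\n",
		       port, ret);
}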

Signed-off-by: Wenzhuo Lu 
Signed-off-by: Qiming Yang 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Jingjing Wu 
---
 doc/guides/nics/features/ice.ini |  1 +
 drivers/net/ice/ice_ethdev.c | 21 +
 2 files changed, 22 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 2844f4c..4867433 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -19,6 +19,7 @@ RSS reta update  = Y
 VLAN filter  = Y
 VLAN offload = Y
 QinQ offload = Y
+FW version   = Y
 BSD nic_uio  = Y
 Linux UIO= Y
 Linux VFIO   = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 568d8a4..13d233a 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -52,6 +52,8 @@ static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id);
 static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 uint16_t queue_id);
+static int ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version,
+ size_t fw_size);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 uint16_t pvid, int on);
 
@@ -92,6 +94,7 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
.rss_hash_conf_get= ice_rss_hash_conf_get,
.rx_queue_intr_enable = ice_rx_queue_intr_enable,
.rx_queue_intr_disable= ice_rx_queue_intr_disable,
+   .fw_version_get   = ice_fw_version_get,
.vlan_pvid_set= ice_vlan_pvid_set,
 };
 
@@ -2478,6 +2481,24 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 }
 
 static int
+ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size)
+{
+   struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+   int ret;
+
+   ret = snprintf(fw_version, fw_size, "%d.%d.%05d %d.%d",
+  hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
+  hw->api_maj_ver, hw->api_min_ver);
+
+   /* add the size of '\0' */
+   ret += 1;
+   if (fw_size < (u32)ret)
+   return ret;
+   else
+   return 0;
+}
+
+static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
struct ice_hw *hw;
-- 
1.9.3



[dpdk-dev] [PATCH v5 22/31] net/ice: support VLAN ops

2018-12-16 Thread Wenzhuo Lu
Add below ops,
ice_vlan_filter_set
ice_vlan_offload_set
ice_vlan_tpid_set
ice_vlan_pvid_set
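
For reference, a hedged sketch of driving these ops through the generic ethdev
API; VLAN id 10 mirrors the testpmd example added to ice.rst below:

#include <rte_ethdev.h>

static int example_vlan_setup(uint16_t port)
{
	int mask = rte_eth_dev_get_vlan_offload(port);

	/* enable VLAN filtering and stripping -> ice_vlan_offload_set() */
	mask |= ETH_VLAN_FILTER_OFFLOAD | ETH_VLAN_STRIP_OFFLOAD;
	if (rte_eth_dev_set_vlan_offload(port, mask) != 0)
		return -1;

	/* accept VLAN 10 -> ice_vlan_filter_set() */
	return rte_eth_dev_vlan_filter(port, 10, 1);
}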

Signed-off-by: Wenzhuo Lu 
Signed-off-by: Qiming Yang 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Jingjing Wu 
---
 doc/guides/nics/features/ice.ini |   3 +
 doc/guides/nics/ice.rst  |  16 ++
 drivers/net/ice/ice_ethdev.c | 590 +++
 3 files changed, 609 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 759a036..5ac8e56 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -12,6 +12,9 @@ MTU update   = Y
 Jumbo frame  = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
+VLAN filter  = Y
+VLAN offload = Y
+QinQ offload = Y
 BSD nic_uio  = Y
 Linux UIO= Y
 Linux VFIO   = Y
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 96a594f..466af55 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -64,6 +64,22 @@ Driver compilation and testing
 Refer to the document :ref:`compiling and testing a PMD for a NIC 
`
 for details.
 
+Sample Application Notes
+------------------------
+
+VLAN filter
+~~~~~~~~~~~
+
+VLAN filtering only works when promiscuous mode is off.
+
+To start ``testpmd`` and add VLAN 10 to port 0:
+
+.. code-block:: console
+
+    ./app/testpmd -l 0-15 -n 4 -- -i
+    ...
+
+    testpmd> rx_vlan add 10 0
 
 Limitations or Known issues
 ---
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 29840fd..1d3cc7a 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -24,6 +24,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 static int ice_link_update(struct rte_eth_dev *dev,
   int wait_to_complete);
 static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+static int ice_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int ice_vlan_tpid_set(struct rte_eth_dev *dev,
+enum rte_vlan_type vlan_type,
+uint16_t tpid);
+static int ice_vlan_filter_set(struct rte_eth_dev *dev,
+  uint16_t vlan_id,
+  int on);
 static int ice_macaddr_set(struct rte_eth_dev *dev,
   struct ether_addr *mac_addr);
 static int ice_macaddr_add(struct rte_eth_dev *dev,
@@ -31,6 +38,8 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
   __rte_unused uint32_t index,
   uint32_t pool);
 static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
+static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
+uint16_t pvid, int on);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -60,6 +69,10 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
.mac_addr_set = ice_macaddr_set,
.mac_addr_add = ice_macaddr_add,
.mac_addr_remove  = ice_macaddr_remove,
+   .vlan_filter_set  = ice_vlan_filter_set,
+   .vlan_offload_set = ice_vlan_offload_set,
+   .vlan_tpid_set= ice_vlan_tpid_set,
+   .vlan_pvid_set= ice_vlan_pvid_set,
 };
 
 static void
@@ -470,6 +483,297 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
return ret;
 }
 
+/* Find out specific VLAN filter */
+static struct ice_vlan_filter *
+ice_find_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+   struct ice_vlan_filter *f;
+
+   TAILQ_FOREACH(f, &vsi->vlan_list, next) {
+   if (vlan_id == f->vlan_info.vlan_id)
+   return f;
+   }
+
+   return NULL;
+}
+
+static int
+ice_add_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+   struct ice_fltr_list_entry *v_list_itr = NULL;
+   struct ice_vlan_filter *f;
+   struct LIST_HEAD_TYPE list_head;
+   struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+   int ret = 0;
+
+   if (!vsi || vlan_id > ETHER_MAX_VLAN_ID)
+   return -EINVAL;
+
+   /* If it's added and configured, return. */
+   f = ice_find_vlan_filter(vsi, vlan_id);
+   if (f) {
+   PMD_DRV_LOG(INFO, "This VLAN filter already exists.");
+   return 0;
+   }
+
+   if (!vsi->vlan_anti_spoof_on && !vsi->vlan_filter_on)
+   return 0;
+
+   INIT_LIST_HEAD(&list_head);
+
+   v_list_itr = (struct ice_fltr_list_entry *)
+ ice_malloc(hw, sizeof(*v_list_itr));
+   if (!v_list_itr) {
+   ret = -ENOMEM;
+   goto DONE;
+   }
+   v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan_id;
+   v_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+   v_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+ 

[dpdk-dev] [PATCH v5 15/31] net/ice: support device initialization

2018-12-16 Thread Wenzhuo Lu
Update the documents too.

Signed-off-by: Wenzhuo Lu 
Signed-off-by: Qiming Yang 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Jingjing Wu 
---
 MAINTAINERS |   2 +
 config/common_base  |   7 +
 doc/guides/nics/features/ice.ini|  11 +
 doc/guides/nics/ice.rst |  80 
 doc/guides/nics/index.rst   |   1 +
 doc/guides/rel_notes/release_19_02.rst  |   5 +
 drivers/net/Makefile|   1 +
 drivers/net/ice/Makefile|  54 +++
 drivers/net/ice/base/meson.build|  27 ++
 drivers/net/ice/ice_ethdev.c| 636 
 drivers/net/ice/ice_ethdev.h| 305 +++
 drivers/net/ice/ice_logs.h  |  45 +++
 drivers/net/ice/ice_rxtx.h  | 117 ++
 drivers/net/ice/meson.build |  12 +
 drivers/net/ice/rte_pmd_ice_version.map |   4 +
 drivers/net/meson.build |   1 +
 mk/rte.app.mk   |   1 +
 17 files changed, 1309 insertions(+)
 create mode 100644 doc/guides/nics/features/ice.ini
 create mode 100644 doc/guides/nics/ice.rst
 create mode 100644 drivers/net/ice/Makefile
 create mode 100644 drivers/net/ice/base/meson.build
 create mode 100644 drivers/net/ice/ice_ethdev.c
 create mode 100644 drivers/net/ice/ice_ethdev.h
 create mode 100644 drivers/net/ice/ice_logs.h
 create mode 100644 drivers/net/ice/ice_rxtx.h
 create mode 100644 drivers/net/ice/meson.build
 create mode 100644 drivers/net/ice/rte_pmd_ice_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 37f3bf7..cdb18e0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -598,6 +598,8 @@ M: Qiming Yang 
 M: Wenzhuo Lu 
 T: git://dpdk.org/next/dpdk-next-net-intel
 F: drivers/net/ice/
+F: doc/guides/nics/ice.rst
+F: doc/guides/nics/features/ice.ini
 
 Marvell mvpp2
 M: Tomasz Duszynski 
diff --git a/config/common_base b/config/common_base
index d12ae98..872f440 100644
--- a/config/common_base
+++ b/config/common_base
@@ -297,6 +297,13 @@ CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
 CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 
 #
+# Compile burst-oriented ICE PMD driver
+#
+CONFIG_RTE_LIBRTE_ICE_PMD=y
+CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_ICE_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n
+
 # Compile burst-oriented AVF PMD driver
 #
 CONFIG_RTE_LIBRTE_AVF_PMD=y
diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
new file mode 100644
index 000..085e848
--- /dev/null
+++ b/doc/guides/nics/features/ice.ini
@@ -0,0 +1,11 @@
+;
+; Supported features of the 'ice' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+BSD nic_uio  = Y
+Linux UIO= Y
+Linux VFIO   = Y
+x86-32   = Y
+x86-64   = Y
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
new file mode 100644
index 000..946ed04
--- /dev/null
+++ b/doc/guides/nics/ice.rst
@@ -0,0 +1,80 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+Copyright(c) 2018 Intel Corporation.
+
+ICE Poll Mode Driver
+====================
+
+The ice PMD (librte_pmd_ice) provides poll mode driver support for
+10/25 Gbps Intel® Ethernet 810 Series Network Adapters based on
+the Intel Ethernet Controller E810.
+
+
+Prerequisites
+-
+
+- Identifying your adapter using `Intel Support
+  `_ and getting the latest NVM/FW images.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux ` to setup 
the basic DPDK environment.
+
+- To get better performance on Intel platforms, please follow the "How to get 
best performance with NICs on Intel platforms"
+  section of the :ref:`Getting Started Guide for Linux `.
+
+
+Pre-Installation Configuration
+--
+
+Config File Options
+~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``)
+
+  Toggle compilation of the ``librte_pmd_ice`` driver.
+
+- ``CONFIG_RTE_LIBRTE_ICE_DEBUG_*`` (default ``n``)
+
+  Toggle display of generic debugging messages.
+
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``Maximum Number of Queue Pairs``
+
+  The maximum number of queue pairs is determined by the hardware. If it is
+  not configured, the application uses the number reported by the hardware.
+  Users can check the number by calling the API ``rte_eth_dev_info_get``.
+  If users want to limit the number of queues, they can set a smaller number
+  using an EAL parameter like ``max_queue_pair_num=n``.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC `
+for details.
+
+
+Limitations or Known issues
+---------------------------
+
+19.02 limitation
+~~~~~~~~~~~~~~~~
+
+The ice code released in 19.02 is for evaluation only.
+
+
+Promiscuous mode not supported
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
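As an aside on the ``max_queue_pair_num`` runtime option documented above, here is a minimal sketch of the application-side sequence it interacts with. Port 0, the cap of 4 queue pairs and the helper name are hypothetical; only rte_eth_dev_info_get() and rte_eth_dev_configure() are standard ethdev calls, and the comment about the devargs capping the reported limit is an assumption based on the guide text above.

#include <string.h>
#include <rte_common.h>
#include <rte_ethdev.h>

static int
configure_port0(void)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf port_conf;
	uint16_t nb_q;

	memset(&port_conf, 0, sizeof(port_conf));

	/* queue-pair limit reported by the PMD (assumed to reflect
	 * max_queue_pair_num when that devargs is given) */
	rte_eth_dev_info_get(0, &dev_info);

	/* use at most 4 queue pairs, fewer if the device supports fewer */
	nb_q = RTE_MIN(4, RTE_MIN(dev_info.max_rx_queues, dev_info.max_tx_queues));

	return rte_eth_dev_configure(0, nb_q, nb_q, &port_conf);
}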

[dpdk-dev] [PATCH v5 19/31] net/ice: support link update

2018-12-16 Thread Wenzhuo Lu
Add ops link_update.
LSC interrupt is also enabled in this patch.

Signed-off-by: Wenzhuo Lu 
Signed-off-by: Qiming Yang 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Jingjing Wu 
---
 doc/guides/nics/features/ice.ini |   2 +
 drivers/net/ice/ice_ethdev.c | 332 +++
 2 files changed, 334 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index af8f0d3..eb852ff 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -5,6 +5,8 @@
 ;
 [Features]
 Speed capabilities   = Y
+Link status  = Y
+Link status event= Y
 Queue start/stop = Y
 BSD nic_uio  = Y
 Linux UIO= Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index c916bf2..3118b05 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -21,6 +21,8 @@
 static int ice_dev_reset(struct rte_eth_dev *dev);
 static void ice_dev_info_get(struct rte_eth_dev *dev,
 struct rte_eth_dev_info *dev_info);
+static int ice_link_update(struct rte_eth_dev *dev,
+  int wait_to_complete);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -45,6 +47,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
.tx_queue_release = ice_tx_queue_release,
.dev_infos_get= ice_dev_info_get,
.dev_supported_ptypes_get = ice_dev_supported_ptypes_get,
+   .link_update  = ice_link_update,
 };
 
 static void
@@ -331,6 +334,187 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
return 0;
 }
 
+/* Enable IRQ0 */
+static void
+ice_pf_enable_irq0(struct ice_hw *hw)
+{
+   /* reset the registers */
+   ICE_WRITE_REG(hw, PFINT_OICR_ENA, 0);
+   ICE_READ_REG(hw, PFINT_OICR);
+
+#ifdef ICE_LSE_SPT
+   ICE_WRITE_REG(hw, PFINT_OICR_ENA,
+ (uint32_t)(PFINT_OICR_ENA_INT_ENA_M &
+(~PFINT_OICR_LINK_STAT_CHANGE_M)));
+
+   ICE_WRITE_REG(hw, PFINT_OICR_CTL,
+ (0 & PFINT_OICR_CTL_MSIX_INDX_M) |
+ ((0 << PFINT_OICR_CTL_ITR_INDX_S) &
+  PFINT_OICR_CTL_ITR_INDX_M) |
+ PFINT_OICR_CTL_CAUSE_ENA_M);
+
+   ICE_WRITE_REG(hw, PFINT_FW_CTL,
+ (0 & PFINT_FW_CTL_MSIX_INDX_M) |
+ ((0 << PFINT_FW_CTL_ITR_INDX_S) &
+  PFINT_FW_CTL_ITR_INDX_M) |
+ PFINT_FW_CTL_CAUSE_ENA_M);
+#else
+   ICE_WRITE_REG(hw, PFINT_OICR_ENA, PFINT_OICR_ENA_INT_ENA_M);
+#endif
+
+   ICE_WRITE_REG(hw, GLINT_DYN_CTL(0),
+ GLINT_DYN_CTL_INTENA_M |
+ GLINT_DYN_CTL_CLEARPBA_M |
+ GLINT_DYN_CTL_ITR_INDX_M);
+
+   ice_flush(hw);
+}
+
+/* Disable IRQ0 */
+static void
+ice_pf_disable_irq0(struct ice_hw *hw)
+{
+   /* Disable all interrupt types */
+   ICE_WRITE_REG(hw, GLINT_DYN_CTL(0), GLINT_DYN_CTL_WB_ON_ITR_M);
+   ice_flush(hw);
+}
+
+#ifdef ICE_LSE_SPT
+static void
+ice_handle_aq_msg(struct rte_eth_dev *dev)
+{
+   struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+   struct ice_ctl_q_info *cq = &hw->adminq;
+   struct ice_rq_event_info event;
+   uint16_t pending, opcode;
+   int ret;
+
+   event.buf_len = ICE_AQ_MAX_BUF_LEN;
+   event.msg_buf = rte_zmalloc(NULL, event.buf_len, 0);
+   if (!event.msg_buf) {
+   PMD_DRV_LOG(ERR, "Failed to allocate mem");
+   return;
+   }
+
+   pending = 1;
+   while (pending) {
+   ret = ice_clean_rq_elem(hw, cq, &event, &pending);
+
+   if (ret != ICE_SUCCESS) {
+   PMD_DRV_LOG(INFO,
+   "Failed to read msg from AdminQ, "
+   "adminq_err: %u",
+   hw->adminq.sq_last_status);
+   break;
+   }
+   opcode = rte_le_to_cpu_16(event.desc.opcode);
+
+   switch (opcode) {
+   case ice_aqc_opc_get_link_status:
+   ret = ice_link_update(dev, 0);
+   if (!ret)
+   _rte_eth_dev_callback_process
+   (dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+   break;
+   default:
+   PMD_DRV_LOG(DEBUG, "Request %u is not supported yet",
+   opcode);
+   break;
+   }
+   }
+   rte_free(event.msg_buf);
+}
+#endif
+
+/**
+ * Interrupt handler triggered by NIC for handling
+ * specific interrupt.
+ *
+ * @param handle
+ *  Pointer to interrupt handle.
+ * @param param
+ *  The address of parameter (struct rte_eth_
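For reference, a small sketch of how an application would consume the LSC event enabled by this patch, through the generic ethdev callback API; the callback body and the port handling are illustrative only, and dev_conf.intr_conf.lsc is assumed to be set to 1 before the port is started.

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>

static int
lsc_event_cb(uint16_t port_id, enum rte_eth_event_type type,
	     void *cb_arg, void *ret_param)
{
	struct rte_eth_link link;

	RTE_SET_USED(type);
	RTE_SET_USED(cb_arg);
	RTE_SET_USED(ret_param);

	rte_eth_link_get_nowait(port_id, &link);
	printf("port %u link %s, speed %u Mbps\n", port_id,
	       link.link_status ? "up" : "down", link.link_speed);
	return 0;
}

static int
register_lsc(uint16_t port_id)
{
	/* register before rte_eth_dev_start() */
	return rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
					     lsc_event_cb, NULL);
}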

[dpdk-dev] [PATCH v5 20/31] net/ice: support MTU setting

2018-12-16 Thread Wenzhuo Lu
Add ops mtu_set.

Signed-off-by: Wenzhuo Lu 
Signed-off-by: Qiming Yang 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Jingjing Wu 
---
 doc/guides/nics/features/ice.ini |  2 ++
 drivers/net/ice/ice_ethdev.c | 34 ++
 2 files changed, 36 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index eb852ff..fab6442 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -8,6 +8,8 @@ Speed capabilities   = Y
 Link status  = Y
 Link status event= Y
 Queue start/stop = Y
+MTU update   = Y
+Jumbo frame  = Y
 BSD nic_uio  = Y
 Linux UIO= Y
 Linux VFIO   = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 3118b05..0c0efce 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -23,6 +23,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 struct rte_eth_dev_info *dev_info);
 static int ice_link_update(struct rte_eth_dev *dev,
   int wait_to_complete);
+static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -48,6 +49,7 @@ static int ice_link_update(struct rte_eth_dev *dev,
.dev_infos_get= ice_dev_info_get,
.dev_supported_ptypes_get = ice_dev_supported_ptypes_get,
.link_update  = ice_link_update,
+   .mtu_set  = ice_mtu_set,
 };
 
 static void
@@ -1228,6 +1230,38 @@ static int ice_init_rss(struct ice_pf *pf)
 }
 
 static int
+ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+   struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+   struct rte_eth_dev_data *dev_data = pf->dev_data;
+   uint32_t frame_size = mtu + ETHER_HDR_LEN
+ + ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE;
+
+   /* check if mtu is within the allowed range */
+   if (mtu < ETHER_MIN_MTU || frame_size > ICE_FRAME_SIZE_MAX)
+   return -EINVAL;
+
+   /* MTU setting is forbidden if the port is started */
+   if (dev_data->dev_started) {
+   PMD_DRV_LOG(ERR,
+   "port %d must be stopped before configuration",
+   dev_data->port_id);
+   return -EBUSY;
+   }
+
+   if (frame_size > ETHER_MAX_LEN)
+   dev_data->dev_conf.rxmode.offloads |=
+   DEV_RX_OFFLOAD_JUMBO_FRAME;
+   else
+   dev_data->dev_conf.rxmode.offloads &=
+   ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+   dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+   return 0;
+}
+
+static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
  struct rte_pci_device *pci_dev)
 {
-- 
1.9.3
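A short, illustrative sketch of the new ops from the application side; as the check above enforces, the port has to be stopped before the MTU can be changed. The port id and the 9000-byte MTU are arbitrary examples. A frame size above ETHER_MAX_LEN makes the PMD turn on the jumbo frame Rx offload, as the code above shows.

#include <rte_ethdev.h>

static int
set_jumbo_mtu(uint16_t port_id)
{
	int ret;

	/* ice returns -EBUSY if mtu_set is called on a started port */
	rte_eth_dev_stop(port_id);

	ret = rte_eth_dev_set_mtu(port_id, 9000);
	if (ret != 0)
		return ret;

	return rte_eth_dev_start(port_id);
}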



[dpdk-dev] [PATCH v5 24/31] net/ice: support RX queue interruption

2018-12-16 Thread Wenzhuo Lu
Add the following ops:
rx_queue_intr_enable
rx_queue_intr_disable

Signed-off-by: Wenzhuo Lu 
Signed-off-by: Qiming Yang 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Jingjing Wu 
---
 doc/guides/nics/features/ice.ini |   1 +
 drivers/net/ice/ice_ethdev.c | 230 +++
 2 files changed, 231 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 953a869..2844f4c 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -7,6 +7,7 @@
 Speed capabilities   = Y
 Link status  = Y
 Link status event= Y
+Rx interrupt = Y
 Queue start/stop = Y
 MTU update   = Y
 Jumbo frame  = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 28d0282..568d8a4 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -48,6 +48,10 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
   __rte_unused uint32_t index,
   uint32_t pool);
 static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
+static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
+   uint16_t queue_id);
+static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
+uint16_t queue_id);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 uint16_t pvid, int on);
 
@@ -86,6 +90,8 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
.reta_query   = ice_rss_reta_query,
.rss_hash_update  = ice_rss_hash_update,
.rss_hash_conf_get= ice_rss_hash_conf_get,
+   .rx_queue_intr_enable = ice_rx_queue_intr_enable,
+   .rx_queue_intr_disable= ice_rx_queue_intr_disable,
.vlan_pvid_set= ice_vlan_pvid_set,
 };
 
@@ -1258,10 +1264,39 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 }
 
 static void
+ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
+{
+   struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+   struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+   struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+   struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+   uint16_t msix_intr, i;
+
+   /* disable interrupts and also clear all the existing config */
+   for (i = 0; i < vsi->nb_qps; i++) {
+   ICE_WRITE_REG(hw, QINT_TQCTL(vsi->base_queue + i), 0);
+   ICE_WRITE_REG(hw, QINT_RQCTL(vsi->base_queue + i), 0);
+   rte_wmb();
+   }
+
+   if (rte_intr_allow_others(intr_handle))
+   /* vfio-pci */
+   for (i = 0; i < vsi->nb_msix; i++) {
+   msix_intr = vsi->msix_intr + i;
+   ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr),
+ GLINT_DYN_CTL_WB_ON_ITR_M);
+   }
+   else
+   /* igb_uio */
+   ICE_WRITE_REG(hw, GLINT_DYN_CTL(0), GLINT_DYN_CTL_WB_ON_ITR_M);
+}
+
+static void
 ice_dev_stop(struct rte_eth_dev *dev)
 {
struct rte_eth_dev_data *data = dev->data;
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+   struct ice_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
uint16_t i;
@@ -1278,6 +1313,9 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
for (i = 0; i < data->nb_tx_queues; i++)
ice_tx_queue_stop(dev, i);
 
+   /* disable all queue interrupts */
+   ice_vsi_disable_queues_intr(main_vsi);
+
/* Clear all queues and release mbufs */
ice_clear_queues(dev);
 
@@ -1405,6 +1443,158 @@ static int ice_init_rss(struct ice_pf *pf)
return 0;
 }
 
+static void
+__vsi_queues_bind_intr(struct ice_vsi *vsi, uint16_t msix_vect,
+  int base_queue, int nb_queue)
+{
+   struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+   uint32_t val, val_tx;
+   int i;
+
+   for (i = 0; i < nb_queue; i++) {
+   /* do actual bind */
+   val = (msix_vect & QINT_RQCTL_MSIX_INDX_M) |
+ (0 << QINT_RQCTL_ITR_INDX_S) | QINT_RQCTL_CAUSE_ENA_M;
+   val_tx = (msix_vect & QINT_TQCTL_MSIX_INDX_M) |
+(0 << QINT_TQCTL_ITR_INDX_S) | QINT_TQCTL_CAUSE_ENA_M;
+
+   PMD_DRV_LOG(INFO, "queue %d is binding to vect %d",
+   base_queue + i, msix_vect);
+   /* set ITR0 value */
+   ICE_WRITE_REG(hw, GLINT_ITR(0, msix_vect), 0x10);
+   ICE_WRITE_REG(hw, QINT_RQCTL(base_queue + i), val);
+   ICE_WRITE_REG(hw, QINT_TQCTL(base_queue + i), val_tx);
+   }
+}
+
+static void
+ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
+{
+   struct rte_eth_dev *dev = vs
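For completeness, a rough sketch of the application-side pattern these two ops enable: arm the per-queue interrupt before sleeping on the interrupt fd, disarm it when polling resumes. The helper names are hypothetical, the epoll wait itself is omitted, and the port is assumed to have been configured with dev_conf.intr_conf.rxq = 1.

#include <rte_ethdev.h>

static int
rxq_sleep(uint16_t port_id, uint16_t queue_id)
{
	/* arm the per-queue interrupt, then block on the interrupt fd
	 * (e.g. with rte_epoll_wait()) until the NIC signals new packets */
	return rte_eth_dev_rx_intr_enable(port_id, queue_id);
}

static int
rxq_wake(uint16_t port_id, uint16_t queue_id)
{
	/* disarm it again before going back to pure polling */
	return rte_eth_dev_rx_intr_disable(port_id, queue_id);
}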

[dpdk-dev] [PATCH v5 26/31] net/ice: support EEPROM information getting

2018-12-16 Thread Wenzhuo Lu
Add the following ops:
get_eeprom_length
get_eeprom

Signed-off-by: Wei Zhao 
Signed-off-by: Wenzhuo Lu 
---
 doc/guides/nics/features/ice.ini |  1 +
 drivers/net/ice/ice_ethdev.c | 45 
 2 files changed, 46 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 4867433..c939b52 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -20,6 +20,7 @@ VLAN filter  = Y
 VLAN offload = Y
 QinQ offload = Y
 FW version   = Y
+Module EEPROM dump   = Y
 BSD nic_uio  = Y
 Linux UIO= Y
 Linux VFIO   = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 13d233a..42460a4 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -56,6 +56,9 @@ static int ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version,
  size_t fw_size);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 uint16_t pvid, int on);
+static int ice_get_eeprom_length(struct rte_eth_dev *dev);
+static int ice_get_eeprom(struct rte_eth_dev *dev,
+ struct rte_dev_eeprom_info *eeprom);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -96,6 +99,8 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
.rx_queue_intr_disable= ice_rx_queue_intr_disable,
.fw_version_get   = ice_fw_version_get,
.vlan_pvid_set= ice_vlan_pvid_set,
+   .get_eeprom_length= ice_get_eeprom_length,
+   .get_eeprom   = ice_get_eeprom,
 };
 
 static void
@@ -2581,6 +2586,46 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 }
 
 static int
+ice_get_eeprom_length(struct rte_eth_dev *dev)
+{
+   struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+   /* Convert word count to byte count */
+   return hw->nvm.sr_words << 1;
+}
+
+static int
+ice_get_eeprom(struct rte_eth_dev *dev,
+  struct rte_dev_eeprom_info *eeprom)
+{
+   struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+   uint16_t *data = eeprom->data;
+   uint16_t offset, length, i;
+   enum ice_status ret_code = ICE_SUCCESS;
+
+   offset = eeprom->offset >> 1;
+   length = eeprom->length >> 1;
+
+   if (offset > hw->nvm.sr_words ||
+   offset + length > hw->nvm.sr_words) {
+   PMD_DRV_LOG(ERR, "Requested EEPROM bytes out of range.");
+   return -EINVAL;
+   }
+
+   eeprom->magic = hw->vendor_id | (hw->device_id << 16);
+
+   for (i = 0; i < length; i++) {
+   ret_code = ice_read_sr_word(hw, offset + i, &data[i]);
+   if (ret_code != ICE_SUCCESS) {
+   PMD_DRV_LOG(ERR, "EEPROM read failed.");
+   return -EIO;
+   }
+   }
+
+   return 0;
+}
+
+static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
  struct rte_pci_device *pci_dev)
 {
-- 
1.9.3
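A small sketch of how these ops are reached through the public ethdev API; the buffer handling and the port id are illustrative only.

#include <errno.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static int
dump_nvm(uint16_t port_id)
{
	struct rte_dev_eeprom_info info;
	int len = rte_eth_dev_get_eeprom_length(port_id);
	int ret;

	if (len <= 0)
		return len;

	info.data = malloc(len);
	if (info.data == NULL)
		return -ENOMEM;

	info.offset = 0;
	info.length = len;	/* read the whole shadow RAM, word by word */
	info.magic = 0;

	ret = rte_eth_dev_get_eeprom(port_id, &info);
	/* ... use info.data ... */
	free(info.data);
	return ret;
}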



[dpdk-dev] [PATCH v5 29/31] net/ice: support basic RX/TX

2018-12-16 Thread Wenzhuo Lu
Signed-off-by: Wenzhuo Lu 
Signed-off-by: Qiming Yang 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Jingjing Wu 
---
 doc/guides/nics/features/ice.ini |   5 +
 drivers/net/ice/ice_ethdev.c |   5 +
 drivers/net/ice/ice_lan_rxtx.c   | 568 ++-
 drivers/net/ice/ice_rxtx.h   |   8 +
 4 files changed, 584 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 67fd044..19655f1 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -11,14 +11,19 @@ Rx interrupt = Y
 Queue start/stop = Y
 MTU update   = Y
 Jumbo frame  = Y
+TSO  = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
 RSS hash = Y
 RSS key update   = Y
 RSS reta update  = Y
 VLAN filter  = Y
+CRC offload  = Y
 VLAN offload = Y
 QinQ offload = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Packet type parsing  = Y
 Basic stats  = Y
 Extended stats   = Y
 FW version   = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 3235d01..ab8fe3b 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1260,6 +1260,9 @@ struct ice_xstats_name_off {
int ret;
 
dev->dev_ops = &ice_eth_dev_ops;
+   dev->rx_pkt_burst = ice_recv_pkts;
+   dev->tx_pkt_burst = ice_xmit_pkts;
+   dev->tx_pkt_prepare = ice_prep_pkts;
 
ice_set_default_ptype_table(dev);
pci_dev = RTE_DEV_TO_PCI(dev->device);
@@ -1732,6 +1735,8 @@ static int ice_init_rss(struct ice_pf *pf)
goto rx_err;
}
 
+   ice_set_rx_function(dev);
+
+   /* enable Rx interrupt and map Rx queue to interrupt vector */
if (ice_rxq_intr_setup(dev))
return -EIO;
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index fed12b4..c0ee7c5 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -884,8 +884,81 @@
rte_free(q);
 }
 
+/* Translate the rx descriptor status to pkt flags */
+static inline uint64_t
+ice_rxd_status_to_pkt_flags(uint64_t qword)
+{
+   uint64_t flags;
+
+   /* Check if RSS_HASH */
+   flags = (((qword >> ICE_RX_DESC_STATUS_FLTSTAT_S) &
+ ICE_RX_DESC_FLTSTAT_RSS_HASH) ==
+ICE_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+
+   return flags;
+}
+
+/* Rx L3/L4 checksum */
+static inline uint64_t
+ice_rxd_error_to_pkt_flags(uint64_t qword)
+{
+   uint64_t flags = 0;
+   uint64_t error_bits = (qword >> ICE_RXD_QW1_ERROR_S);
+
+   if (likely((error_bits & ICE_RX_ERR_BITS) == 0)) {
+   flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+   return flags;
+   }
+
+   if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_IPE_S)))
+   flags |= PKT_RX_IP_CKSUM_BAD;
+   else
+   flags |= PKT_RX_IP_CKSUM_GOOD;
+
+   if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_L4E_S)))
+   flags |= PKT_RX_L4_CKSUM_BAD;
+   else
+   flags |= PKT_RX_L4_CKSUM_GOOD;
+
+   if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_EIPE_S)))
+   flags |= PKT_RX_EIP_CKSUM_BAD;
+
+   return flags;
+}
+
+static inline void
+ice_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union ice_rx_desc *rxdp)
+{
+   if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+   (1 << ICE_RX_DESC_STATUS_L2TAG1P_S)) {
+   mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+   mb->vlan_tci =
+   rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+   PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
+  rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1));
+   } else {
+   mb->vlan_tci = 0;
+   }
+
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+   if (rte_le_to_cpu_16(rxdp->wb.qword2.ext_status) &
+   (1 << ICE_RX_DESC_EXT_STATUS_L2TAG2P_S)) {
+   mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+   PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+   mb->vlan_tci_outer = mb->vlan_tci;
+   mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2);
+   PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+  rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_1),
+  rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2));
+   } else {
+   mb->vlan_tci_outer = 0;
+   }
+#endif
+   PMD_RX_LOG(DEBUG, "Mbuf vlan_tci: %u, vlan_tci_outer: %u",
+  mb->vlan_tci, mb->vlan_tci_outer);
+}
 const uint32_t *
-ice_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
static const uint32_t ptypes[] = {
/* re
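On the application side, the flags assembled above end up in the mbuf ol_flags; below is a minimal sketch of reading them after rte_eth_rx_burst(). Port/queue 0, the burst size and the printouts are illustrative only.

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
rx_and_check(void)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb_rx = rte_eth_rx_burst(0, 0, pkts, 32);
	uint16_t i;

	for (i = 0; i < nb_rx; i++) {
		struct rte_mbuf *m = pkts[i];

		/* checksum verdicts from ice_rxd_error_to_pkt_flags() */
		if (m->ol_flags & (PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD))
			printf("bad checksum on packet %u\n", i);
		/* stripped VLAN tag and RSS hash from the Rx path above */
		if (m->ol_flags & PKT_RX_VLAN_STRIPPED)
			printf("vlan tci %u\n", m->vlan_tci);
		if (m->ol_flags & PKT_RX_RSS_HASH)
			printf("rss hash 0x%" PRIx32 "\n", m->hash.rss);

		rte_pktmbuf_free(m);
	}
}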

[dpdk-dev] [PATCH v5 27/31] net/ice: support statistics

2018-12-16 Thread Wenzhuo Lu
Add the following ops:
stats_get
stats_reset
xstats_get
xstats_get_names
xstats_reset

Signed-off-by: Wenzhuo Lu 
Signed-off-by: Jia Guo 
---
 doc/guides/nics/features/ice.ini |   2 +
 drivers/net/ice/ice_ethdev.c | 566 +++
 2 files changed, 568 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index c939b52..67fd044 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -19,6 +19,8 @@ RSS reta update  = Y
 VLAN filter  = Y
 VLAN offload = Y
 QinQ offload = Y
+Basic stats  = Y
+Extended stats   = Y
 FW version   = Y
 Module EEPROM dump   = Y
 BSD nic_uio  = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 42460a4..0b11a42 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -59,6 +59,14 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 static int ice_get_eeprom_length(struct rte_eth_dev *dev);
 static int ice_get_eeprom(struct rte_eth_dev *dev,
  struct rte_dev_eeprom_info *eeprom);
+static int ice_stats_get(struct rte_eth_dev *dev,
+struct rte_eth_stats *stats);
+static void ice_stats_reset(struct rte_eth_dev *dev);
+static int ice_xstats_get(struct rte_eth_dev *dev,
+ struct rte_eth_xstat *xstats, unsigned int n);
+static int ice_xstats_get_names(struct rte_eth_dev *dev,
+   struct rte_eth_xstat_name *xstats_names,
+   unsigned int limit);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -101,8 +109,92 @@ static int ice_get_eeprom(struct rte_eth_dev *dev,
.vlan_pvid_set= ice_vlan_pvid_set,
.get_eeprom_length= ice_get_eeprom_length,
.get_eeprom   = ice_get_eeprom,
+   .stats_get= ice_stats_get,
+   .stats_reset  = ice_stats_reset,
+   .xstats_get   = ice_xstats_get,
+   .xstats_get_names = ice_xstats_get_names,
+   .xstats_reset = ice_stats_reset,
 };
 
+/* store statistics names and its offset in stats structure */
+struct ice_xstats_name_off {
+   char name[RTE_ETH_XSTATS_NAME_SIZE];
+   unsigned int offset;
+};
+
+static const struct ice_xstats_name_off ice_stats_strings[] = {
+   {"rx_unicast_packets", offsetof(struct ice_eth_stats, rx_unicast)},
+   {"rx_multicast_packets", offsetof(struct ice_eth_stats, rx_multicast)},
+   {"rx_broadcast_packets", offsetof(struct ice_eth_stats, rx_broadcast)},
+   {"rx_dropped", offsetof(struct ice_eth_stats, rx_discards)},
+   {"rx_unknown_protocol_packets", offsetof(struct ice_eth_stats,
+   rx_unknown_protocol)},
+   {"tx_unicast_packets", offsetof(struct ice_eth_stats, tx_unicast)},
+   {"tx_multicast_packets", offsetof(struct ice_eth_stats, tx_multicast)},
+   {"tx_broadcast_packets", offsetof(struct ice_eth_stats, tx_broadcast)},
+   {"tx_dropped", offsetof(struct ice_eth_stats, tx_discards)},
+};
+
+#define ICE_NB_ETH_XSTATS (sizeof(ice_stats_strings) / \
+   sizeof(ice_stats_strings[0]))
+
+static const struct ice_xstats_name_off ice_hw_port_strings[] = {
+   {"tx_link_down_dropped", offsetof(struct ice_hw_port_stats,
+   tx_dropped_link_down)},
+   {"rx_crc_errors", offsetof(struct ice_hw_port_stats, crc_errors)},
+   {"rx_illegal_byte_errors", offsetof(struct ice_hw_port_stats,
+   illegal_bytes)},
+   {"rx_error_bytes", offsetof(struct ice_hw_port_stats, error_bytes)},
+   {"mac_local_errors", offsetof(struct ice_hw_port_stats,
+   mac_local_faults)},
+   {"mac_remote_errors", offsetof(struct ice_hw_port_stats,
+   mac_remote_faults)},
+   {"rx_len_errors", offsetof(struct ice_hw_port_stats,
+   rx_len_errors)},
+   {"tx_xon_packets", offsetof(struct ice_hw_port_stats, link_xon_tx)},
+   {"rx_xon_packets", offsetof(struct ice_hw_port_stats, link_xon_rx)},
+   {"tx_xoff_packets", offsetof(struct ice_hw_port_stats, link_xoff_tx)},
+   {"rx_xoff_packets", offsetof(struct ice_hw_port_stats, link_xoff_rx)},
+   {"rx_size_64_packets", offsetof(struct ice_hw_port_stats, rx_size_64)},
+   {"rx_size_65_to_127_packets", offsetof(struct ice_hw_port_stats,
+   rx_size_127)},
+   {"rx_size_128_to_255_packets", offsetof(struct ice_hw_port_stats,
+   rx_size_255)},
+   {"rx_size_256_to_511_packets", offsetof(struct ice_hw_port_stats,
+   rx_size_511)},
+   {"rx_size_512_to_1023_packets", offsetof(struct ice_hw_port_stats,
+   rx_size_1023)},
+   {"rx_size_1024_to_1522_packets", offsetof(struct ice_hw_port_stats,
+ 
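A brief sketch of retrieving these extended statistics through the ethdev API, using the usual two-pass sizing pattern; error handling is trimmed and the port id is illustrative.

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static void
print_xstats(uint16_t port_id)
{
	struct rte_eth_xstat_name *names = NULL;
	struct rte_eth_xstat *values = NULL;
	int n = rte_eth_xstats_get_names(port_id, NULL, 0);
	int i;

	if (n <= 0)
		return;

	names = calloc(n, sizeof(*names));
	values = calloc(n, sizeof(*values));
	if (names == NULL || values == NULL)
		goto out;

	rte_eth_xstats_get_names(port_id, names, n);
	n = rte_eth_xstats_get(port_id, values, n);

	for (i = 0; i < n; i++)
		printf("%s: %" PRIu64 "\n",
		       names[values[i].id].name, values[i].value);
out:
	free(names);
	free(values);
}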

[dpdk-dev] [PATCH v5 30/31] net/ice: support advance RX/TX

2018-12-16 Thread Wenzhuo Lu
Add the scattered and bulk-allocation RX functions.
Add the simple TX function.

Signed-off-by: Wenzhuo Lu 
Signed-off-by: Qiming Yang 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Jingjing Wu 
---
 doc/guides/nics/features/ice.ini |   1 +
 drivers/net/ice/ice_lan_rxtx.c   | 660 ++-
 2 files changed, 659 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 19655f1..300eced 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -11,6 +11,7 @@ Rx interrupt = Y
 Queue start/stop = Y
 MTU update   = Y
 Jumbo frame  = Y
+Scattered Rx = Y
 TSO  = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index c0ee7c5..b328a96 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -957,6 +957,431 @@
PMD_RX_LOG(DEBUG, "Mbuf vlan_tci: %u, vlan_tci_outer: %u",
   mb->vlan_tci, mb->vlan_tci_outer);
 }
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+#define ICE_LOOK_AHEAD 8
+#if (ICE_LOOK_AHEAD != 8)
+#error "PMD ICE: ICE_LOOK_AHEAD must be 8\n"
+#endif
+static inline int
+ice_rx_scan_hw_ring(struct ice_rx_queue *rxq)
+{
+   volatile union ice_rx_desc *rxdp;
+   struct ice_rx_entry *rxep;
+   struct rte_mbuf *mb;
+   uint16_t pkt_len;
+   uint64_t qword1;
+   uint32_t rx_status;
+   int32_t s[ICE_LOOK_AHEAD], nb_dd;
+   int32_t i, j, nb_rx = 0;
+   uint64_t pkt_flags = 0;
+   uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+
+   rxdp = &rxq->rx_ring[rxq->rx_tail];
+   rxep = &rxq->sw_ring[rxq->rx_tail];
+
+   qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+   rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S;
+
+   /* Make sure there is at least 1 packet to receive */
+   if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+   return 0;
+
+   /**
+* Scan LOOK_AHEAD descriptors at a time to determine which
+* descriptors reference packets that are ready to be received.
+*/
+   for (i = 0; i < ICE_RX_MAX_BURST; i += ICE_LOOK_AHEAD,
+rxdp += ICE_LOOK_AHEAD, rxep += ICE_LOOK_AHEAD) {
+   /* Read desc statuses backwards to avoid race condition */
+   for (j = ICE_LOOK_AHEAD - 1; j >= 0; j--) {
+   qword1 = rte_le_to_cpu_64(
+   rxdp[j].wb.qword1.status_error_len);
+   s[j] = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+  ICE_RXD_QW1_STATUS_S;
+   }
+
+   rte_smp_rmb();
+
+   /* Compute how many status bits were set */
+   for (j = 0, nb_dd = 0; j < ICE_LOOK_AHEAD; j++)
+   nb_dd += s[j] & (1 << ICE_RX_DESC_STATUS_DD_S);
+
+   nb_rx += nb_dd;
+
+   /* Translate descriptor info to mbuf parameters */
+   for (j = 0; j < nb_dd; j++) {
+   mb = rxep[j].mbuf;
+   qword1 = rte_le_to_cpu_64(
+   rxdp[j].wb.qword1.status_error_len);
+   pkt_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+  ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
+   mb->data_len = pkt_len;
+   mb->pkt_len = pkt_len;
+   mb->ol_flags = 0;
+   pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+   pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+   if (pkt_flags & PKT_RX_RSS_HASH)
+   mb->hash.rss =
+   rte_le_to_cpu_32(
+   rxdp[j].wb.qword0.hi_dword.rss);
+   mb->packet_type = ptype_tbl[(uint8_t)(
+   (qword1 &
+ICE_RXD_QW1_PTYPE_M) >>
+   ICE_RXD_QW1_PTYPE_S)];
+   ice_rxd_to_vlan_tci(mb, &rxdp[j]);
+
+   mb->ol_flags |= pkt_flags;
+   }
+
+   for (j = 0; j < ICE_LOOK_AHEAD; j++)
+   rxq->rx_stage[i + j] = rxep[j].mbuf;
+
+   if (nb_dd != ICE_LOOK_AHEAD)
+   break;
+   }
+
+   /* Clear software ring entries */
+   for (i = 0; i < nb_rx; i++)
+   rxq->sw_ring[rxq->rx_tail + i].mbuf = NULL;
+
+   PMD_RX_LOG(DEBUG, "ice_rx_scan_hw_ring: "
+  "port_id=%u, queue_id=%u, nb_rx=%d",
+  rxq->port_id, rxq->queue_id, nb_rx);
+
+   return nb_rx;
+}
+
+static inline uint16_t
+ice_rx_fill_from_stage(struct ice_rx_queue *rxq,
+  st

[dpdk-dev] [PATCH v5 28/31] net/ice: support queue information getting

2018-12-16 Thread Wenzhuo Lu
Add the following ops:
rxq_info_get
txq_info_get
rx_queue_count

Signed-off-by: Wenzhuo Lu 
Signed-off-by: Qiming Yang 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Jingjing Wu 
---
 drivers/net/ice/ice_ethdev.c   |  3 ++
 drivers/net/ice/ice_lan_rxtx.c | 66 ++
 drivers/net/ice/ice_rxtx.h |  5 
 3 files changed, 74 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 0b11a42..3235d01 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -107,8 +107,11 @@ static int ice_xstats_get_names(struct rte_eth_dev *dev,
.rx_queue_intr_disable= ice_rx_queue_intr_disable,
.fw_version_get   = ice_fw_version_get,
.vlan_pvid_set= ice_vlan_pvid_set,
+   .rxq_info_get = ice_rxq_info_get,
+   .txq_info_get = ice_txq_info_get,
.get_eeprom_length= ice_get_eeprom_length,
.get_eeprom   = ice_get_eeprom,
+   .rx_queue_count   = ice_rx_queue_count,
.stats_get= ice_stats_get,
.stats_reset  = ice_stats_reset,
.xstats_get   = ice_xstats_get,
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 8230bb2..fed12b4 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -921,6 +921,72 @@
 }
 
 void
+ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+struct rte_eth_rxq_info *qinfo)
+{
+   struct ice_rx_queue *rxq;
+
+   rxq = dev->data->rx_queues[queue_id];
+
+   qinfo->mp = rxq->mp;
+   qinfo->scattered_rx = dev->data->scattered_rx;
+   qinfo->nb_desc = rxq->nb_rx_desc;
+
+   qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+   qinfo->conf.rx_drop_en = rxq->drop_en;
+   qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+struct rte_eth_txq_info *qinfo)
+{
+   struct ice_tx_queue *txq;
+
+   txq = dev->data->tx_queues[queue_id];
+
+   qinfo->nb_desc = txq->nb_tx_desc;
+
+   qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+   qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+   qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+
+   qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+   qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
+   qinfo->conf.offloads = txq->offloads;
+   qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
+uint32_t
+ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+#define ICE_RXQ_SCAN_INTERVAL 4
+   volatile union ice_rx_desc *rxdp;
+   struct ice_rx_queue *rxq;
+   uint16_t desc = 0;
+
+   rxq = dev->data->rx_queues[rx_queue_id];
+   rxdp = &rxq->rx_ring[rxq->rx_tail];
+   while ((desc < rxq->nb_rx_desc) &&
+  ((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S) &
+  (1 << ICE_RX_DESC_STATUS_DD_S)) {
+   /**
+* Check the DD bit of an Rx descriptor for each group of 4, to avoid
+* checking too frequently and degrading performance too much.
+*/
+   desc += ICE_RXQ_SCAN_INTERVAL;
+   rxdp += ICE_RXQ_SCAN_INTERVAL;
+   if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
+   rxdp = &(rxq->rx_ring[rxq->rx_tail +
+desc - rxq->nb_rx_desc]);
+   }
+
+   return desc;
+}
+
+void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
uint16_t i;
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 871646f..bad2b89 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,6 +134,11 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
 void ice_tx_queue_release(void *txq);
 void ice_clear_queues(struct rte_eth_dev *dev);
 void ice_free_queues(struct rte_eth_dev *dev);
+uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
 #endif /* _ICE_RXTX_H_ */
-- 
1.9.3
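A quick sketch of the corresponding application-side calls; port 0, queue 0 and the printed fields are illustrative only.

#include <stdio.h>
#include <rte_ethdev.h>

static void
show_queue0(void)
{
	struct rte_eth_rxq_info rx_qinfo;
	struct rte_eth_txq_info tx_qinfo;

	if (rte_eth_rx_queue_info_get(0, 0, &rx_qinfo) == 0)
		printf("rxq0: %u descriptors, free_thresh %u\n",
		       rx_qinfo.nb_desc, rx_qinfo.conf.rx_free_thresh);

	if (rte_eth_tx_queue_info_get(0, 0, &tx_qinfo) == 0)
		printf("txq0: %u descriptors, rs_thresh %u\n",
		       tx_qinfo.nb_desc, tx_qinfo.conf.tx_rs_thresh);

	/* approximate count of Rx descriptors with the DD bit set,
	 * as computed by ice_rx_queue_count() above */
	printf("rxq0 used descriptors: %d\n", rte_eth_rx_queue_count(0, 0));
}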



[dpdk-dev] [PATCH v5 31/31] net/ice: support descriptor ops

2018-12-16 Thread Wenzhuo Lu
Add the following ops:
rx_descriptor_status
tx_descriptor_status

Signed-off-by: Wenzhuo Lu 
Signed-off-by: Qiming Yang 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Jingjing Wu 
---
 doc/guides/nics/features/ice.ini |  2 ++
 drivers/net/ice/ice_ethdev.c |  2 ++
 drivers/net/ice/ice_lan_rxtx.c   | 58 
 drivers/net/ice/ice_rxtx.h   |  2 ++
 4 files changed, 64 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 300eced..196b8d5 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -25,6 +25,8 @@ QinQ offload = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
 Basic stats  = Y
 Extended stats   = Y
 FW version   = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index ab8fe3b..86db69d 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -112,6 +112,8 @@ static int ice_xstats_get_names(struct rte_eth_dev *dev,
.get_eeprom_length= ice_get_eeprom_length,
.get_eeprom   = ice_get_eeprom,
.rx_queue_count   = ice_rx_queue_count,
+   .rx_descriptor_status = ice_rx_descriptor_status,
+   .tx_descriptor_status = ice_tx_descriptor_status,
.stats_get= ice_stats_get,
.stats_reset  = ice_stats_reset,
.xstats_get   = ice_xstats_get,
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index b328a96..c481aed 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -1490,6 +1490,64 @@
return desc;
 }
 
+int
+ice_rx_descriptor_status(void *rx_queue, uint16_t offset)
+{
+   struct ice_rx_queue *rxq = rx_queue;
+   volatile uint64_t *status;
+   uint64_t mask;
+   uint32_t desc;
+
+   if (unlikely(offset >= rxq->nb_rx_desc))
+   return -EINVAL;
+
+   if (offset >= rxq->nb_rx_desc - rxq->nb_rx_hold)
+   return RTE_ETH_RX_DESC_UNAVAIL;
+
+   desc = rxq->rx_tail + offset;
+   if (desc >= rxq->nb_rx_desc)
+   desc -= rxq->nb_rx_desc;
+
+   status = &rxq->rx_ring[desc].wb.qword1.status_error_len;
+   mask = rte_cpu_to_le_64((1ULL << ICE_RX_DESC_STATUS_DD_S) <<
+   ICE_RXD_QW1_STATUS_S);
+   if (*status & mask)
+   return RTE_ETH_RX_DESC_DONE;
+
+   return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
+{
+   struct ice_tx_queue *txq = tx_queue;
+   volatile uint64_t *status;
+   uint64_t mask, expect;
+   uint32_t desc;
+
+   if (unlikely(offset >= txq->nb_tx_desc))
+   return -EINVAL;
+
+   desc = txq->tx_tail + offset;
+   /* go to next desc that has the RS bit */
+   desc = ((desc + txq->tx_rs_thresh - 1) / txq->tx_rs_thresh) *
+   txq->tx_rs_thresh;
+   if (desc >= txq->nb_tx_desc) {
+   desc -= txq->nb_tx_desc;
+   if (desc >= txq->nb_tx_desc)
+   desc -= txq->nb_tx_desc;
+   }
+
+   status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+   mask = rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M);
+   expect = rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE <<
+ ICE_TXD_QW1_DTYPE_S);
+   if ((*status & mask) == expect)
+   return RTE_ETH_TX_DESC_DONE;
+
+   return RTE_ETH_TX_DESC_FULL;
+}
+
 void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index e0218b3..a0aa8f9 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -143,6 +143,8 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
   uint16_t nb_pkts);
 void ice_set_tx_function(struct rte_eth_dev *dev);
 uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_rx_descriptor_status(void *rx_queue, uint16_t offset);
+int ice_tx_descriptor_status(void *tx_queue, uint16_t offset);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
-- 
1.9.3
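Finally, a minimal sketch of querying these descriptor status ops from an application; the offsets and the port/queue ids are arbitrary examples. The ethdev wrappers return the RTE_ETH_*_DESC_* codes seen above, or a negative errno.

#include <stdio.h>
#include <rte_ethdev.h>

static void
probe_ring_fill(uint16_t port_id, uint16_t queue_id)
{
	int rx_st, tx_st;

	/* is the packet 32 descriptors past the Rx tail already received? */
	rx_st = rte_eth_rx_descriptor_status(port_id, queue_id, 32);
	if (rx_st == RTE_ETH_RX_DESC_DONE)
		printf("at least 32 packets are waiting in rxq %u\n", queue_id);

	/* has the descriptor 64 entries past the Tx tail been completed? */
	tx_st = rte_eth_tx_descriptor_status(port_id, queue_id, 64);
	if (tx_st == RTE_ETH_TX_DESC_FULL)
		printf("txq %u still owns that descriptor\n", queue_id);
}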