Re: [dpdk-dev] [pull-request] next-eventdev 18.08 RC1

2018-07-12 Thread Thomas Monjalon
06/07/2018 07:18, Jerin Jacob:
>   http://dpdk.org/git/next/dpdk-next-eventdev 

Pulled, thanks




Re: [dpdk-dev] [PATCH v2] eal/service: improve error checking of coremasks

2018-07-12 Thread Thomas Monjalon
Hi Harry,

What is the status of this patch?


21/05/2018 11:41, Varghese, Vipin:
> Hi Harry,
> 
> This looks OK to me; apart from one warning rewrite, it is an ACK from my end.
> 
> > -Original Message-
> > From: Van Haaren, Harry
> > Sent: Tuesday, May 15, 2018 9:26 PM
> > To: dev@dpdk.org
> > Cc: Van Haaren, Harry ; tho...@monjalon.net;
> > Varghese, Vipin 
> > Subject: [PATCH v2] eal/service: improve error checking of coremasks
> > 
> > This commit improves the error checking performed on the core masks (or
> > lists) of the service cores, in particular with respect to the
> > data-plane (RTE) cores of DPDK.
> > 
> > With this commit, invalid configurations are detected at runtime, and
> > warning messages are printed to inform the user.
> > 
> > For example, specifying the coremask as 0xf and the service coremask as
> > 0xff00 is invalid, as not all service cores are contained within the
> > coremask. A warning is now printed to inform the user.
> > 
> > Reported-by: Vipin Varghese 
> > Signed-off-by: Harry van Haaren 
> > 
> > ---
> > 
> > v2, thanks for review:
> > - Consistency in message endings - vs . (Thomas)
> > - Wrap lines as they're very long otherwise (Thomas)
> > 
> > Cc: tho...@monjalon.net
> > Cc: vipin.vargh...@intel.com
> > 
> > @Thomas, please consider this patch for RC4, it adds checks and prints
> > warnings, better usability, no functional changes.
> > ---
> >  lib/librte_eal/common/eal_common_options.c | 43 ++
> >  1 file changed, 43 insertions(+)
> > 
> > diff --git a/lib/librte_eal/common/eal_common_options.c
> > b/lib/librte_eal/common/eal_common_options.c
> > index ecebb29..9f3a484 100644
> > --- a/lib/librte_eal/common/eal_common_options.c
> > +++ b/lib/librte_eal/common/eal_common_options.c
> > @@ -315,6 +315,7 @@ eal_parse_service_coremask(const char *coremask)
> > unsigned int count = 0;
> > char c;
> > int val;
> > +   uint32_t taken_lcore_count = 0;
> > 
> > if (coremask == NULL)
> > return -1;
> > @@ -358,6 +359,10 @@ eal_parse_service_coremask(const char *coremask)
> > "lcore %u unavailable\n", idx);
> > return -1;
> > }
> > +
> > +   if (cfg->lcore_role[idx] == ROLE_RTE)
> > +   taken_lcore_count++;
> > +
> > lcore_config[idx].core_role = ROLE_SERVICE;
> > count++;
> > }
> > @@ -374,11 +379,28 @@ eal_parse_service_coremask(const char *coremask)
> > if (count == 0)
> > return -1;
> > 
> > +   if (core_parsed && taken_lcore_count != count) {
> > +   RTE_LOG(ERR, EAL,
> > +   "Warning: not all service cores were in the coremask. "
> > +   "Please ensure -c or -l includes service cores\n");
> 
> The current execution will print the warning message 'Warning: not all
> service cores were in the coremask. Please ensure -c or -l includes
> service cores'.
> 
> 1) Should we rewrite this as 'RTE_LOG(WARN, EAL,' and remove 'Warning: '?
> 2) Should the warning message be "service cores not in data plane core mask"?
> 3) If we tell the user "Please ensure -c or -l includes service cores\n",
> isn't rte_panic expected instead? If so, should we remove this line?
> 
> > +   }
> > +
> > cfg->service_lcore_count = count;
> > return 0;
> >  }
> > 
> >  static int
> > +eal_service_cores_parsed(void)
> > +{
> > +   int idx;
> > +   for (idx = 0; idx < RTE_MAX_LCORE; idx++) {
> > +   if (lcore_config[idx].core_role == ROLE_SERVICE)
> > +   return 1;
> > +   }
> > +   return 0;
> > +}
> > +
> > +static int
> >  eal_parse_coremask(const char *coremask)  {
> > struct rte_config *cfg = rte_eal_get_configuration(); @@ -387,6
> > +409,11 @@ eal_parse_coremask(const char *coremask)
> > char c;
> > int val;
> > 
> > +   if (eal_service_cores_parsed())
> > +   RTE_LOG(ERR, EAL,
> > +   "Warning: Service cores parsed before dataplane cores. "
> > +   "Please ensure -c is before -s or -S.\n");
> > +
> > if (coremask == NULL)
> > return -1;
> > /* Remove all blank characters ahead and after .
> > @@ -418,6 +445,7 @@ eal_parse_coremask(const char *coremask)
> > "unavailable\n", idx);
> > return -1;
> > }
> > +
> > cfg->lcore_role[idx] = ROLE_RTE;
> > lcore_config[idx].core_index = count;
> > count++;
> > @@ -449,6 +477,7 @@ eal_parse_service_corelist(const char *corelist)
> > unsigned count = 0;
> > char *end = NULL;
> > int min, max;
> > +   uint32_t taken_lcore_count = 0;
> > 
> > if (corelist == NULL)
> > retu

Re: [dpdk-dev] [PATCH v11 07/25] eal: introduce device class abstraction

2018-07-12 Thread Gaëtan Rivet
On Thu, Jul 12, 2018 at 12:19:09PM +0530, Shreyansh Jain wrote:
> On Thursday 12 July 2018 03:14 AM, Gaetan Rivet wrote:
> > This abstraction exists since the infancy of DPDK.
> > It needs to be fleshed out however, to allow a generic
> > description of devices properties and capabilities.
> > 
> > A device class is the northbound interface of the device, intended
> > for applications to know what it can be used for.
> > 
> > It is conceptually just above buses.
> > 
> > Signed-off-by: Gaetan Rivet 
> > ---
> 
> [...]
> 
> > --- a/lib/librte_eal/rte_eal_version.map
> > +++ b/lib/librte_eal/rte_eal_version.map
> > @@ -244,6 +244,8 @@ DPDK_18.05 {
> >   EXPERIMENTAL {
> > global:
> > +   rte_class_register;
> > +   rte_class_unregister;
> > rte_ctrl_thread_create;
> > rte_dev_event_callback_register;
> > rte_dev_event_callback_unregister;
> > 
> 
> Any reason you don't want the rte_class_find and rte_class_find_by_name as
> exposed APIs? There is no experimental tag on these APIs either.
> 

No actually I just overlooked that part! Thanks for catching this, I
think it should be exposed and tagged experimental.
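For reference, the agreed fix would presumably extend the same EXPERIMENTAL block quoted above; a sketch, with the surrounding symbols elided and the exact alphabetical placement assumed:

```
EXPERIMENTAL {
	global:

	rte_class_find;
	rte_class_find_by_name;
	rte_class_register;
	rte_class_unregister;
	rte_ctrl_thread_create;
	...
};
```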

-- 
Gaëtan Rivet
6WIND


Re: [dpdk-dev] [PATCH] hash: validate hash bucket entries while compiling

2018-07-12 Thread Thomas Monjalon
Review please?

31/05/2018 17:30, Honnappa Nagarahalli:
> Validate RTE_HASH_BUCKET_ENTRIES during compilation instead of
> run time.
> 
> Signed-off-by: Honnappa Nagarahalli 
> Reviewed-by: Gavin Hu 
> ---
>  lib/librte_eal/common/include/rte_common.h | 5 +
>  lib/librte_hash/rte_cuckoo_hash.c  | 1 -
>  lib/librte_hash/rte_cuckoo_hash.h  | 4 
>  3 files changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/librte_eal/common/include/rte_common.h 
> b/lib/librte_eal/common/include/rte_common.h
> index 434adfd45..a9df7c161 100644
> --- a/lib/librte_eal/common/include/rte_common.h
> +++ b/lib/librte_eal/common/include/rte_common.h
> @@ -293,6 +293,11 @@ rte_combine64ms1b(register uint64_t v)
>  
>  /*** Macros to work with powers of 2 /
>  
> +/**
> + * Macro to return 1 if n is a power of 2, 0 otherwise
> + */
> +#define RTE_IS_POWER_OF_2(n) ((n) && !(((n) - 1) & (n)))
> +
>  /**
>   * Returns true if n is a power of 2
>   * @param n
> diff --git a/lib/librte_hash/rte_cuckoo_hash.c 
> b/lib/librte_hash/rte_cuckoo_hash.c
> index a07543a29..375e7d208 100644
> --- a/lib/librte_hash/rte_cuckoo_hash.c
> +++ b/lib/librte_hash/rte_cuckoo_hash.c
> @@ -107,7 +107,6 @@ rte_hash_create(const struct rte_hash_parameters *params)
>   /* Check for valid parameters */
>   if ((params->entries > RTE_HASH_ENTRIES_MAX) ||
>   (params->entries < RTE_HASH_BUCKET_ENTRIES) ||
> - !rte_is_power_of_2(RTE_HASH_BUCKET_ENTRIES) ||
>   (params->key_len == 0)) {
>   rte_errno = EINVAL;
>   RTE_LOG(ERR, HASH, "rte_hash_create has invalid parameters\n");
> diff --git a/lib/librte_hash/rte_cuckoo_hash.h 
> b/lib/librte_hash/rte_cuckoo_hash.h
> index 7a54e5557..bd6ad1bd6 100644
> --- a/lib/librte_hash/rte_cuckoo_hash.h
> +++ b/lib/librte_hash/rte_cuckoo_hash.h
> @@ -97,6 +97,10 @@ enum add_key_case {
>  /** Number of items per bucket. */
>  #define RTE_HASH_BUCKET_ENTRIES  8
>  
> +#if !RTE_IS_POWER_OF_2(RTE_HASH_BUCKET_ENTRIES)
> +#error RTE_HASH_BUCKET_ENTRIES must be a power of 2
> +#endif
> +
>  #define NULL_SIGNATURE   0
>  
>  #define EMPTY_SLOT   0
> 







Re: [dpdk-dev] [PATCH 0/3] bpf: extend validation of input BPF programs

2018-07-12 Thread Thomas Monjalon
> Konstantin Ananyev (3):
>   bpf: add extra information for external symbol definitions
>   bpf: add extra validation for input BPF program
>   test/bpf: add new test-case for external function call

Applied, thanks




Re: [dpdk-dev] [PATCH] examples: make Linux environment check consistent

2018-07-12 Thread Thomas Monjalon
06/06/2018 15:50, Thomas Monjalon:
> Some Makefiles are using CONFIG_RTE_EXEC_ENV and others
> are using CONFIG_RTE_EXEC_ENV_LINUXAPP.
> Use the latter one for consistency.
> We could remove CONFIG_RTE_EXEC_ENV later if considered useless.
> 
> Signed-off-by: Thomas Monjalon 

Applied





Re: [dpdk-dev] [PATCH v5 00/10] net/mlx5: add port representor support

2018-07-12 Thread Shahaf Shuler
Tuesday, July 10, 2018 7:05 PM, Adrien Mazarguil:
> Subject: [PATCH v5 00/10] net/mlx5: add port representor support
> 
> This series adds support for port (VF) representors to the mlx5 PMD, which
> can be instantiated using the standard "representor" device parameter.
> 
> Note the PMD only probes existing representors which exist as Verbs
> devices; their creation is part of the host system configuration.
> 

Applied to next-net-mlx, except for the last patch:
[v5,10/10] net/mlx5: support negative identifiers for port representors

As agreed.

Thanks!

> v5 changes:
> 
> - Fixed and added missing HAVE_* definitions to Makefile for systems that
> do
>   not expose them. Series now compiles fine down to RHEL 7.2 inclusive.
> 
> v4 changes:
> 
> - Fixed domain ID release that did not work, see relevant patch.
> - Rebased series.
> 
> v3 changes:
> 
> - Added the following patches:
>   - net/mlx5: drop useless support for several Verbs ports
>   - net/mlx5: probe port representors in natural order
>   - net/mlx5: support negative identifiers for port representors
> - See individual patches for details.
> - Rebased series.
> 
> v2 changes:
> 
> - See individual patches for details.
> - Rebased series.
> 
> Adrien Mazarguil (10):
>   net/mlx5: rename confusing object in probe code
>   net/mlx5: remove redundant objects in probe code
>   net/mlx5: drop useless support for several Verbs ports
>   net/mlx5: split PCI from generic probing code
>   net/mlx5: re-indent generic probing function
>   net/mlx5: add port representor awareness
>   net/mlx5: probe all port representors
>   net/mlx5: probe port representors in natural order
>   net/mlx5: add parameter for port representors
>   net/mlx5: support negative identifiers for port representors
> 
>  doc/guides/nics/mlx5.rst|   12 +
>  doc/guides/prog_guide/poll_mode_drv.rst |2 +
>  drivers/net/mlx5/Makefile   |   45 ++
>  drivers/net/mlx5/mlx5.c | 1108 --
>  drivers/net/mlx5/mlx5.h |   29 +-
>  drivers/net/mlx5/mlx5_ethdev.c  |  135 +++-
>  drivers/net/mlx5/mlx5_mac.c |2 +-
>  drivers/net/mlx5/mlx5_nl.c  |  308 ++-
>  drivers/net/mlx5/mlx5_stats.c   |6 +-
>  drivers/net/mlx5/mlx5_txq.c |2 +-
>  10 files changed, 1175 insertions(+), 474 deletions(-)
> 
> --
> 2.11.0


Re: [dpdk-dev] [PATCH 1/2] examples/ethtool: add to meson build

2018-07-12 Thread Thomas Monjalon
29/03/2018 16:04, Bruce Richardson:
> Add the ethtool example to the meson build. This example is more
> complicated than the previously added ones as it has files in two
> subdirectories. An ethtool "wrapper lib" in one, used by the actual
> example "ethtool app" in the other.
> 
> Rather than using recursive operation, like is done with the makefiles,
> we instead can just special-case the building of the library from the
> single .c file, and then use that as a dependency when building the app
> proper.
> 
> Signed-off-by: Bruce Richardson 

It does not compile because of experimental function:
examples/ethtool/lib/rte_ethtool.c:186:2: error:
‘rte_eth_dev_get_module_info’ is deprecated: Symbol is not yet part of stable 
ABI





Re: [dpdk-dev] [PATCH v2] app/testpmd: fix little perf drop with XL710

2018-07-12 Thread Li, Xiaoyun
OK. I will modify the commit log and name and send v3 later. Thanks.

> -Original Message-
> From: Lu, Wenzhuo
> Sent: Thursday, July 12, 2018 13:56
> To: Li, Xiaoyun ; Zhang, Qi Z 
> Cc: dev@dpdk.org; sta...@dpdk.org
> Subject: RE: [PATCH v2] app/testpmd: fix little perf drop with XL710
> 
> Hi Xiaoyun,
> 
> 
> > -Original Message-
> > From: Li, Xiaoyun
> > Sent: Wednesday, July 11, 2018 10:16 AM
> > To: Zhang, Qi Z ; Lu, Wenzhuo
> > 
> > Cc: dev@dpdk.org; Li, Xiaoyun ; sta...@dpdk.org
> > Subject: [PATCH v2] app/testpmd: fix little perf drop with XL710
> >
> > There is about 1.8M perf drop with XL710. And it is because of a
> > bitrate
> What does 1.8M mean? BPS? PPS?
> It looks like this patch fixes a CPU consumption problem and has nothing to
> do with a specific NIC.
> Two suggestions:
> just omit XL710, also in the title, and
> better to mention the percentage drop rather than an exact
> number.
> 
> Except that, Acked-by: Wenzhuo Lu 
> 
> 
> > calculation in the datapath. So improve it by maintaining an array of
> > port indexes in testpmd, which is updated with ethdev events.
> >
> > Fixes: 8728ccf37615 ("fix ethdev ports enumeration")
> > Cc: sta...@dpdk.org
> >
> > Signed-off-by: Xiaoyun Li 



Re: [dpdk-dev] [PATCH] hash: validate hash bucket entries while compiling

2018-07-12 Thread De Lara Guarch, Pablo



> -Original Message-
> From: Thomas Monjalon [mailto:tho...@monjalon.net]
> Sent: Thursday, July 12, 2018 8:42 AM
> To: Richardson, Bruce ; De Lara Guarch, Pablo
> 
> Cc: dev@dpdk.org; Honnappa Nagarahalli 
> Subject: Re: [dpdk-dev] [PATCH] hash: validate hash bucket entries while
> compiling
> 
> Review please?
> 
> 31/05/2018 17:30, Honnappa Nagarahalli:
> > Validate RTE_HASH_BUCKET_ENTRIES during compilation instead of run
> > time.
> >
> > Signed-off-by: Honnappa Nagarahalli 
> > Reviewed-by: Gavin Hu 
> > ---

Acked-by: Pablo de Lara 
 



Re: [dpdk-dev] [PATCH] ethdev: fix device info getting

2018-07-12 Thread Andrew Rybchenko

On 12.07.2018 08:27, Wenzhuo Lu wrote:

The device information cannot be retrieved correctly before
the configuration is set, because on some NICs the
information depends on the configuration.

Fixes: 3be82f5cc5e3 ("ethdev: support PMD-tuned Tx/Rx parameters")
Signed-off-by: Wenzhuo Lu 
---
  lib/librte_ethdev/rte_ethdev.c | 47 +-
  1 file changed, 24 insertions(+), 23 deletions(-)

diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 3d556a8..9d60bea 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -1017,28 +1017,6 @@ struct rte_eth_dev *
  
  	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
  
-	dev = &rte_eth_devices[port_id];

-
-   RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
-   RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
-
-   rte_eth_dev_info_get(port_id, &dev_info);
-
-   /* If number of queues specified by application for both Rx and Tx is
-* zero, use driver preferred values. This cannot be done individually
-* as it is valid for either Tx or Rx (but not both) to be zero.
-* If driver does not provide any preferred valued, fall back on
-* EAL defaults.
-*/
-   if (nb_rx_q == 0 && nb_tx_q == 0) {
-   nb_rx_q = dev_info.default_rxportconf.nb_queues;
-   if (nb_rx_q == 0)
-   nb_rx_q = RTE_ETH_DEV_FALLBACK_RX_NBQUEUES;
-   nb_tx_q = dev_info.default_txportconf.nb_queues;
-   if (nb_tx_q == 0)
-   nb_tx_q = RTE_ETH_DEV_FALLBACK_TX_NBQUEUES;
-   }
-
if (nb_rx_q > RTE_MAX_QUEUES_PER_PORT) {
RTE_ETHDEV_LOG(ERR,
"Number of RX queues requested (%u) is greater than max 
supported(%d)\n",
@@ -1053,6 +1031,11 @@ struct rte_eth_dev *
return -EINVAL;
}
  
+	dev = &rte_eth_devices[port_id];

+
+   RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+   RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
+
if (dev->data->dev_started) {
RTE_ETHDEV_LOG(ERR,
"Port %u must be stopped to allow configuration\n",
@@ -1060,8 +1043,26 @@ struct rte_eth_dev *
return -EBUSY;
}
  
-	/* Copy the dev_conf parameter into the dev structure */

+   /* Copy the dev_conf parameter into the dev structure,
+* then get the info.
+*/
memcpy(&dev->data->dev_conf, &local_conf, sizeof(dev->data->dev_conf));
+   rte_eth_dev_info_get(port_id, &dev_info);
+
+   /* If number of queues specified by application for both Rx and Tx is
+* zero, use driver preferred values. This cannot be done individually
+* as it is valid for either Tx or Rx (but not both) to be zero.
+* If driver does not provide any preferred valued, fall back on
+* EAL defaults.
+*/
+   if (nb_rx_q == 0 && nb_tx_q == 0) {
+   nb_rx_q = dev_info.default_rxportconf.nb_queues;
+   if (nb_rx_q == 0)
+   nb_rx_q = RTE_ETH_DEV_FALLBACK_RX_NBQUEUES;
+   nb_tx_q = dev_info.default_txportconf.nb_queues;
+   if (nb_tx_q == 0)
+   nb_tx_q = RTE_ETH_DEV_FALLBACK_TX_NBQUEUES;


Values assigned in this branch are not checked against
RTE_MAX_QUEUES_PER_PORT now.


+   }
  
  	/*

 * Check that the numbers of RX and TX queues are not greater




[dpdk-dev] [PATCH v3] app/testpmd: fix little perf drop

2018-07-12 Thread Xiaoyun Li
There is an approximately 3% performance drop, caused by a bitrate
calculation in the datapath. Improve it by maintaining an array
of port indexes in testpmd, which is updated with ethdev events.

Fixes: 8728ccf37615 ("fix ethdev ports enumeration")
Cc: sta...@dpdk.org

Signed-off-by: Xiaoyun Li 
---
v3:
* Modify the commit log and patch name.
v2:
* Update ports_ids when user attach or detach a port.
---
 app/test-pmd/testpmd.c | 26 ++
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index dde7d43..e4f39be 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -127,6 +127,8 @@ portid_t nb_ports; /**< Number of probed 
ethernet ports. */
 struct fwd_lcore **fwd_lcores; /**< For all probed logical cores. */
 lcoreid_t nb_lcores;   /**< Number of probed logical cores. */
 
+portid_t ports_ids[RTE_MAX_ETHPORTS]; /**< Store all port ids. */
+
 /*
  * Test Forwarding Configuration.
  *nb_fwd_lcores <= nb_cfg_lcores <= nb_lcores
@@ -1147,8 +1149,9 @@ run_pkt_fwd_on_lcore(struct fwd_lcore *fc, packet_fwd_t 
pkt_fwd)
uint64_t tics_per_1sec;
uint64_t tics_datum;
uint64_t tics_current;
-   uint16_t idx_port;
+   uint16_t i, cnt_ports;
 
+   cnt_ports = nb_ports;
tics_datum = rte_rdtsc();
tics_per_1sec = rte_get_timer_hz();
 #endif
@@ -1163,9 +1166,9 @@ run_pkt_fwd_on_lcore(struct fwd_lcore *fc, packet_fwd_t 
pkt_fwd)
tics_current = rte_rdtsc();
if (tics_current - tics_datum >= tics_per_1sec) {
/* Periodic bitrate calculation */
-   RTE_ETH_FOREACH_DEV(idx_port)
+   for (i = 0; i < cnt_ports; i++)
rte_stats_bitrate_calc(bitrate_data,
-   idx_port);
+   ports_ids[i]);
tics_datum = tics_current;
}
}
@@ -1968,6 +1971,7 @@ attach_port(char *identifier)
reconfig(pi, socket_id);
rte_eth_promiscuous_enable(pi);
 
+   ports_ids[nb_ports] = pi;
nb_ports = rte_eth_dev_count_avail();
 
ports[pi].port_status = RTE_PORT_STOPPED;
@@ -1982,6 +1986,7 @@ void
 detach_port(portid_t port_id)
 {
char name[RTE_ETH_NAME_MAX_LEN];
+   uint16_t i;
 
printf("Detaching a port...\n");
 
@@ -1998,6 +2003,13 @@ detach_port(portid_t port_id)
return;
}
 
+   for (i = 0; i < nb_ports; i++) {
+   if (ports_ids[i] == port_id) {
+   ports_ids[i] = ports_ids[nb_ports-1];
+   ports_ids[nb_ports-1] = 0;
+   break;
+   }
+   }
nb_ports = rte_eth_dev_count_avail();
 
update_fwd_ports(RTE_MAX_ETHPORTS);
@@ -2649,6 +2661,7 @@ main(int argc, char** argv)
 {
int diag;
portid_t port_id;
+   uint16_t count;
int ret;
 
signal(SIGINT, signal_handler);
@@ -2668,7 +2681,12 @@ main(int argc, char** argv)
rte_pdump_init(NULL);
 #endif
 
-   nb_ports = (portid_t) rte_eth_dev_count_avail();
+   count = 0;
+   RTE_ETH_FOREACH_DEV(port_id) {
+   ports_ids[count] = port_id;
+   count++;
+   }
+   nb_ports = (portid_t) count;
if (nb_ports == 0)
TESTPMD_LOG(WARNING, "No probed ethernet devices\n");
 
-- 
2.7.4



Re: [dpdk-dev] [PATCH v2] crypto/qat: fix checks for 3gpp algo bit params

2018-07-12 Thread Dmitry Eremin-Solenikov
On 11 July 2018 at 21:02, Fiona Trahe  wrote:
> The QAT driver checks byte alignment for KASUMI/SNOW 3G/ZUC algorithms using
> cipher/auth_param, which are not yet initialized at that point. Use the
> operation params instead.
>
> Signed-off-by: Fiona Trahe 

Thanks, this should fix the issue.

-- 
With best wishes
Dmitry


Re: [dpdk-dev] [PATCH v12 00/19] enable hotplug on multi-process

2018-07-12 Thread Thomas Monjalon
12/07/2018 03:14, Qi Zhang:
> v13:
> - Since rte_eth_dev_attach/rte_eth_dev_detach will be deprecated,
>   so, modify the sample code to use rte_eal_hotplug_add and
>   rte_eal_hotplug_remove to attach/detach device.

Yes, this is what I tried to explain to you.

I think it is now too late for 18.08.
We see that this patchset deserves more reviews.





[dpdk-dev] [PATCH v2] add sample functions for packet forwarding

2018-07-12 Thread Jananee Parthasarathy
Add sample test functions for packet forwarding.
These can be used for unit test cases for
LatencyStats and BitrateStats libraries.

Signed-off-by: Chaitanya Babu Talluri 
Reviewed-by: Reshma Pattan 
---
v2: SOCKET0 is removed and NUM_QUEUES is used accordingly
---
 test/test/Makefile|  1 +
 test/test/sample_packet_forward.c | 80 +++
 test/test/sample_packet_forward.h | 22 +++
 3 files changed, 103 insertions(+)
 create mode 100644 test/test/sample_packet_forward.c
 create mode 100644 test/test/sample_packet_forward.h

diff --git a/test/test/Makefile b/test/test/Makefile
index eccc8efcf..1e69f37a1 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -133,6 +133,7 @@ SRCS-y += test_version.c
 SRCS-y += test_func_reentrancy.c
 
 SRCS-y += test_service_cores.c
+SRCS-y += sample_packet_forward.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_CMDLINE) += test_cmdline.c
 SRCS-$(CONFIG_RTE_LIBRTE_CMDLINE) += test_cmdline_num.c
diff --git a/test/test/sample_packet_forward.c 
b/test/test/sample_packet_forward.c
new file mode 100644
index 0..ec79f7e6e
--- /dev/null
+++ b/test/test/sample_packet_forward.c
@@ -0,0 +1,80 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+#include "sample_packet_forward.h"
+#include "test.h"
+#include 
+
+#define NB_MBUF 512
+
+static struct rte_mempool *mp;
+uint16_t tx_portid, rx_portid;
+
+/* Sample test to create virtual rings and tx,rx portid from rings */
+int
+test_ring_setup(void)
+{
+   uint16_t socket_id = rte_socket_id();
+   struct rte_ring *rxtx[NUM_RINGS];
+   rxtx[0] = rte_ring_create("R0", RING_SIZE, socket_id,
+   RING_F_SP_ENQ|RING_F_SC_DEQ);
+   if (rxtx[0] == NULL) {
+   printf("%s() line %u: rte_ring_create R0 failed",
+   __func__, __LINE__);
+   return TEST_FAILED;
+   }
+   rxtx[1] = rte_ring_create("R1", RING_SIZE, socket_id,
+   RING_F_SP_ENQ|RING_F_SC_DEQ);
+   if (rxtx[1] == NULL) {
+   printf("%s() line %u: rte_ring_create R1 failed",
+   __func__, __LINE__);
+   return TEST_FAILED;
+   }
+   tx_portid = rte_eth_from_rings("net_ringa", rxtx, NUM_QUEUES, rxtx,
+   NUM_QUEUES, socket_id);
+   rx_portid = rte_eth_from_rings("net_ringb", rxtx, NUM_QUEUES, rxtx,
+   NUM_QUEUES, socket_id);
+
+   return TEST_SUCCESS;
+}
+
+/* Sample test to forward packets using virtual portids */
+int
+test_packet_forward(void)
+{
+   struct rte_mbuf *pbuf[NUM_PACKETS];
+
+   mp = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 32, 0,
+   RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+   if (mp == NULL)
+   return -1;
+   if (rte_pktmbuf_alloc_bulk(mp, pbuf, NUM_PACKETS) != 0)
+   printf("%s() line %u: rte_pktmbuf_alloc_bulk failed"
+   , __func__, __LINE__);
+   /* send and receive packet and check for stats update */
+   if (rte_eth_tx_burst(tx_portid, 0, pbuf, NUM_PACKETS) !=
+   NUM_PACKETS) {
+   printf("%s() line %u: Error sending packet to"
+   " port %d\n", __func__, __LINE__,
+   tx_portid);
+   return TEST_FAILED;
+   }
+   if (rte_eth_rx_burst(rx_portid, 0, pbuf, NUM_PACKETS) !=
+   NUM_PACKETS) {
+   printf("%s() line %u: Error receiving packet from"
+   " port %d\n", __func__, __LINE__,
+   rx_portid);
+   return TEST_FAILED;
+   }
+   return TEST_SUCCESS;
+}
diff --git a/test/test/sample_packet_forward.h 
b/test/test/sample_packet_forward.h
new file mode 100644
index 0..f6226e34b
--- /dev/null
+++ b/test/test/sample_packet_forward.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _SAMPLE_PACKET_FORWARD_H_
+#define _SAMPLE_PACKET_FORWARD_H_
+
+/* MACROS to support virtual ring creation */
+#define RING_SIZE 256
+#define NUM_RINGS 2
+#define NUM_QUEUES 1
+
+#define NUM_PACKETS 10
+
+/* Sample test to create virtual rings and tx,rx portid from rings */
+int test_ring_setup(void);
+
+/* Sample test to forward packet using virtual port id */
+int test_packet_forward(void);
+
+#endif /* _SAMPLE_PACKET_FORWARD_H_ */
+
-- 
2.13.6



Re: [dpdk-dev] [PATCH v12 00/19] enable hotplug on multi-process

2018-07-12 Thread Zhang, Qi Z



> -Original Message-
> From: Thomas Monjalon [mailto:tho...@monjalon.net]
> Sent: Thursday, July 12, 2018 4:30 PM
> To: Zhang, Qi Z 
> Cc: dev@dpdk.org; Burakov, Anatoly ; Ananyev,
> Konstantin ; Richardson, Bruce
> ; Yigit, Ferruh ; Shelton,
> Benjamin H ; Vangati, Narender
> 
> Subject: Re: [dpdk-dev] [PATCH v12 00/19] enable hotplug on multi-process
> 
> 12/07/2018 03:14, Qi Zhang:
> > v13:
> > - Since rte_eth_dev_attach/rte_eth_dev_detach will be deprecated,
> >   so, modify the sample code to use rte_eal_hotplug_add and
> >   rte_eal_hotplug_remove to attach/detach device.
> 
> Yes, this is what I tried to explain to you.
> 
> I think it is now too late for 18.08.

Understood, but patches 2, 3 and 4 could probably be considered for 18.08,
since they fix general issues, not just ones for hotplug multi-process.
What do you think?


> We see that this patchset deserves more reviews.
> 
> 



Re: [dpdk-dev] [PATCH v2] vfio: fix workaround of BAR0 mapping

2018-07-12 Thread Burakov, Anatoly

On 12-Jul-18 4:08 AM, Takeshi Yoshimura wrote:

The BAR0 mapping workaround does not work if the BAR0 area is smaller
than the page size (64KB on ppc). In addition, the workaround is no
longer needed on recent Linux kernels, because VFIO allows mapping the
MSI-X BAR (*). This fix simply skips the workaround when BAR0 is
smaller than a page.

(*): "vfio-pci: Allow mapping MSIX BAR",
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=a32295c612c57990d17fb0f41e7134394b2f35f6

Fixes: 90a1633b2347 ("eal/linux: allow to map BARs with MSI-X tables")

Signed-off-by: Takeshi Yoshimura 
---


The minimum supported kernel version in DPDK is 3.2; we cannot rely on
functionality provided by recent kernel versions.


It would be better if you modified the check at line 350 instead (or 
added a new check, specifically testing for whether BAR size is less 
than page size).


--
Thanks,
Anatoly


Re: [dpdk-dev] [PATCH v12 00/19] enable hotplug on multi-process

2018-07-12 Thread Thomas Monjalon
12/07/2018 11:11, Zhang, Qi Z:
> From: Thomas Monjalon [mailto:tho...@monjalon.net]
> > 12/07/2018 03:14, Qi Zhang:
> > > v13:
> > > - Since rte_eth_dev_attach/rte_eth_dev_detach will be deprecated,
> > >   so, modify the sample code to use rte_eal_hotplug_add and
> > >   rte_eal_hotplug_remove to attach/detach device.
> > 
> > Yes, this is what I tried to explain to you.
> > 
> > I think it is now too late for 18.08.
> 
> Understood, but patches 2, 3 and 4 could probably be considered for 18.08,
> since they fix general issues, not just ones for hotplug multi-process.
> What do you think?

Yes, you are right.

Please send a separate patchset and try to get reviews.
Gaetan, Anatoly, please review patches 2 and 3.




Re: [dpdk-dev] [PATCH v13 02/19] bus/pci: fix PCI address compare

2018-07-12 Thread Burakov, Anatoly

On 12-Jul-18 2:14 AM, Qi Zhang wrote:

When memcmp is used to compare two PCI addresses, sizeof(struct
rte_pci_addr) is 4-byte aligned, so it is 8, while only 7 bytes of
struct rte_pci_addr are valid. Comparing the 8th byte can therefore give
an unexpected result, which happens when repeatedly attaching/detaching
a device.

Fixes: c752998b5e2e ("pci: introduce library and driver")
Cc: sta...@dpdk.org

Signed-off-by: Qi Zhang 
---
  drivers/bus/pci/linux/pci_vfio.c | 13 -
  1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index aeeaa9ed8..dd25c3542 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -43,6 +43,17 @@ static struct rte_tailq_elem rte_vfio_tailq = {
  };
  EAL_REGISTER_TAILQ(rte_vfio_tailq)
  
+/* Compare two PCI addresses */

+static int pci_addr_cmp(struct rte_pci_addr *addr1, struct rte_pci_addr *addr2)
+{
+   if (addr1->domain == addr2->domain &&
+   addr1->bus == addr2->bus &&
+   addr1->devid == addr2->devid &&
+   addr1->function == addr2->function)
+   return 0;
+   return 1;
+}


Generally, change looks OK to me, but I think we already have this 
function in PCI library - rte_pci_addr_cmp(). Is there a specific reason 
to reimplement it here?



+
  int
  pci_vfio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offs)
@@ -642,7 +653,7 @@ pci_vfio_unmap_resource(struct rte_pci_device *dev)
vfio_res_list = RTE_TAILQ_CAST(rte_vfio_tailq.head, 
mapped_pci_res_list);
/* Get vfio_res */
TAILQ_FOREACH(vfio_res, vfio_res_list, next) {
-   if (memcmp(&vfio_res->pci_addr, &dev->addr, sizeof(dev->addr)))
+   if (pci_addr_cmp(&vfio_res->pci_addr, &dev->addr))
continue;
break;
}




--
Thanks,
Anatoly


Re: [dpdk-dev] [PATCH v3 1/2] librte_lpm: Improve performance of the delete and add functions

2018-07-12 Thread Alex Kiselev



> On Wed, 11 Jul 2018 20:53:46 +0300
> Alex Kiselev  wrote:

>> librte_lpm: Improve lpm6 performance

...

>>  
>>   /* LPM Tables. */
>> - struct rte_lpm6_rule *rules_tbl; /**< LPM rules. */
>> + struct rte_mempool *rules_pool; /**< LPM rules mempool. */
>> + struct rte_hash *rules_tbl; /**< LPM rules. */
>>   struct rte_lpm6_tbl_entry tbl24[RTE_LPM6_TBL24_NUM_ENTRIES]
>>   __rte_cache_aligned; /**< LPM tbl24 table. */
>>   struct rte_lpm6_tbl_entry tbl8[0]
>> @@ -93,22 +106,81 @@ struct rte_lpm6 {
>>   * and set the rest to 0.


> What is the increased memory overhead of having a hash table?
Compared to the current rules array, it is about 2x, since a prefix is
stored both in a rule (mempool) and in a rule key (hashtable).
I am only talking about the rule storage here.

And I've just realised it doesn't have to be this way: I don't need the
rules mempool anymore. I only need the rules hashtable, since it can
contain everything a rule needs. A rule prefix is stored in the hash key,
and a next-hop index can be stored in the hash value. That would
eliminate the memory overhead.

I'll try this approach in the next patch series.

> Wouldn't it make more sense to use something like tree, and use left/right
> in the rules entry. That way the memory is spread and scales with the number
> of rules.
Maybe. But there is no tree library in DPDK, so I chose
a fast and simple way to implement the
rules storage using the existing hashtable library.
And it gives good performance results.

Anyway, this is not the data plane; add/delete operations are
not executed very often, so finding the most memory-efficient
approach is not critical; a good one is enough.

> Remember that on an internet router, it is not unusual to have 2M or more rules.

> Also, please run the checkpatch shell script on your patches. For example,
> there should be a blank line between declarations and code.
I have. It didn't give me any warnings.




-- 
Alex



[dpdk-dev] [PATCH v4 00/21] net/mlx5: flow rework

2018-07-12 Thread Nelio Laranjeiro
Re-work flow engine to support port redirection actions through TC.

This first series depends on [1], which is implemented in commit
"net/mlx5: support inner RSS computation", and on [2].
The next series will bring port redirection, as announced in [3].

[1] https://mails.dpdk.org/archives/dev/2018-July/107378.html
[2] https://mails.dpdk.org/archives/dev/2018-June/104192.html
[3] https://mails.dpdk.org/archives/dev/2018-May/103043.html

Changes in v4:

- fix compilation on redhat 7.5 without Mellanox OFED.
- avoid multiple pattern parsing for the expansion.

Changes in v3:

- remove redundant parameters in drop queues internal API.
- simplify the RSS expansion by only adding missing items in the pattern.
- document all functions.


Nelio Laranjeiro (21):
  net/mlx5: remove flow support
  net/mlx5: handle drop queues as regular queues
  net/mlx5: replace verbs priorities by flow
  net/mlx5: support flow Ethernet item along with drop action
  net/mlx5: add flow queue action
  net/mlx5: add flow stop/start
  net/mlx5: add flow VLAN item
  net/mlx5: add flow IPv4 item
  net/mlx5: add flow IPv6 item
  net/mlx5: add flow UDP item
  net/mlx5: add flow TCP item
  net/mlx5: add mark/flag flow action
  net/mlx5: use a macro for the RSS key size
  net/mlx5: add RSS flow action
  net/mlx5: remove useless arguments in hrxq API
  net/mlx5: support inner RSS computation
  net/mlx5: add flow VXLAN item
  net/mlx5: add flow VXLAN-GPE item
  net/mlx5: add flow GRE item
  net/mlx5: add flow MPLS item
  net/mlx5: add count flow action

 drivers/net/mlx5/mlx5.c|   22 +-
 drivers/net/mlx5/mlx5.h|   18 +-
 drivers/net/mlx5/mlx5_ethdev.c |   14 +-
 drivers/net/mlx5/mlx5_flow.c   | 4821 
 drivers/net/mlx5/mlx5_prm.h|3 +
 drivers/net/mlx5/mlx5_rss.c|7 +-
 drivers/net/mlx5/mlx5_rxq.c|  281 +-
 drivers/net/mlx5/mlx5_rxtx.h   |   21 +-
 8 files changed, 2640 insertions(+), 2547 deletions(-)

-- 
2.18.0



[dpdk-dev] [PATCH v4 02/21] net/mlx5: handle drop queues as regular queues

2018-07-12 Thread Nelio Laranjeiro
Drop queues only exist in flows because of the Verbs API; whether the
fate of a flow is a drop is already recorded in the flow itself. Because
of this, drop queues can be fully mapped onto regular queues.

Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5.c  |  24 ++--
 drivers/net/mlx5/mlx5.h  |  14 ++-
 drivers/net/mlx5/mlx5_flow.c |  94 +++---
 drivers/net/mlx5/mlx5_rxq.c  | 232 +++
 drivers/net/mlx5/mlx5_rxtx.h |   6 +
 5 files changed, 308 insertions(+), 62 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index df7f39844..e9780ac8f 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -261,7 +261,6 @@ mlx5_dev_close(struct rte_eth_dev *dev)
priv->txqs_n = 0;
priv->txqs = NULL;
}
-   mlx5_flow_delete_drop_queue(dev);
mlx5_mprq_free_mp(dev);
mlx5_mr_release(dev);
if (priv->pd != NULL) {
@@ -1139,22 +1138,15 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
mlx5_link_update(eth_dev, 0);
/* Store device configuration on private structure. */
priv->config = config;
-   /* Create drop queue. */
-   err = mlx5_flow_create_drop_queue(eth_dev);
-   if (err) {
-   DRV_LOG(ERR, "port %u drop queue allocation failed: %s",
-   eth_dev->data->port_id, strerror(rte_errno));
-   err = rte_errno;
-   goto error;
-   }
/* Supported Verbs flow priority number detection. */
-   if (verb_priorities == 0)
-   verb_priorities = mlx5_get_max_verbs_prio(eth_dev);
-   if (verb_priorities < MLX5_VERBS_FLOW_PRIO_8) {
-   DRV_LOG(ERR, "port %u wrong Verbs flow priorities: %u",
-   eth_dev->data->port_id, verb_priorities);
-   err = ENOTSUP;
-   goto error;
+   if (verb_priorities == 0) {
+   err = mlx5_verbs_max_prio(eth_dev);
+   if (err < 0) {
+   DRV_LOG(ERR, "port %u wrong Verbs flow priorities",
+   eth_dev->data->port_id);
+   goto error;
+   }
+   verb_priorities = err;
}
priv->config.max_verbs_prio = verb_priorities;
/*
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index cc01310e0..227429848 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -139,9 +139,6 @@ enum mlx5_verbs_alloc_type {
MLX5_VERBS_ALLOC_TYPE_RX_QUEUE,
 };
 
-/* 8 Verbs priorities. */
-#define MLX5_VERBS_FLOW_PRIO_8 8
-
 /**
  * Verbs allocator needs a context to know in the callback which kind of
  * resources it is allocating.
@@ -153,6 +150,12 @@ struct mlx5_verbs_alloc_ctx {
 
 LIST_HEAD(mlx5_mr_list, mlx5_mr);
 
+/* Flow drop context necessary due to Verbs API. */
+struct mlx5_drop {
+   struct mlx5_hrxq *hrxq; /* Hash Rx queue queue. */
+   struct mlx5_rxq_ibv *rxq; /* Verbs Rx queue. */
+};
+
 struct priv {
LIST_ENTRY(priv) mem_event_cb; /* Called by memory event callback. */
struct rte_eth_dev_data *dev_data;  /* Pointer to device data. */
@@ -182,7 +185,7 @@ struct priv {
struct rte_intr_handle intr_handle; /* Interrupt handler. */
unsigned int (*reta_idx)[]; /* RETA index table. */
unsigned int reta_idx_n; /* RETA index size. */
-   struct mlx5_hrxq_drop *flow_drop_queue; /* Flow drop queue. */
+   struct mlx5_drop drop_queue; /* Flow drop queues. */
struct mlx5_flows flows; /* RTE Flow rules. */
struct mlx5_flows ctrl_flows; /* Control flow rules. */
struct {
@@ -314,7 +317,8 @@ int mlx5_traffic_restart(struct rte_eth_dev *dev);
 
 /* mlx5_flow.c */
 
-unsigned int mlx5_get_max_verbs_prio(struct rte_eth_dev *dev);
+int mlx5_verbs_max_prio(struct rte_eth_dev *dev);
+void mlx5_flow_print(struct rte_flow *flow);
 int mlx5_flow_validate(struct rte_eth_dev *dev,
   const struct rte_flow_attr *attr,
   const struct rte_flow_item items[],
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index a45cb06e1..5e325be37 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -75,6 +75,58 @@ struct ibv_spec_header {
uint16_t size;
 };
 
+ /**
+  * Get the maximum number of priority available.
+  *
+  * @param[in] dev
+  *   Pointer to Ethernet device.
+  *
+  * @return
+  *   number of supported Verbs flow priority on success, a negative errno
+  *   value otherwise and rte_errno is set.
+  */
+int
+mlx5_verbs_max_prio(struct rte_eth_dev *dev)
+{
+   struct {
+   struct ibv_flow_attr attr;
+   struct ibv_flow_spec_eth eth;
+   struct ibv_flow_spec_action_drop drop;
+   } flow_attr = {
+   .attr = {
+   .num_of_specs = 2,
+   },
+   
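The lifetime management this patch introduces, where the drop objects are
created on first use and shared by every drop flow, can be sketched as a
reference-counted singleton. The structure and function names below are
illustrative stand-ins, not the driver's actual API.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for the driver's drop hash-Rx-queue object:
 * allocated on first request, shared by every drop flow, and freed
 * when the last reference is released. */
struct drop_hrxq {
	int refcnt;
};

static struct drop_hrxq *drop_singleton;

struct drop_hrxq *
drop_hrxq_get(void)
{
	if (!drop_singleton) {
		drop_singleton = calloc(1, sizeof(*drop_singleton));
		if (!drop_singleton)
			return NULL;
	}
	drop_singleton->refcnt++;
	return drop_singleton;
}

void
drop_hrxq_release(struct drop_hrxq *h)
{
	if (--h->refcnt == 0) {
		free(h);
		drop_singleton = NULL; /* next get() re-creates it */
	}
}
```

Each drop flow takes a reference when applied and releases it when
removed, so the Verbs resources exist only while at least one drop flow
is active.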

[dpdk-dev] [PATCH v4 04/21] net/mlx5: support flow Ethernet item along with drop action

2018-07-12 Thread Nelio Laranjeiro
Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5.c  |   1 +
 drivers/net/mlx5/mlx5_flow.c | 664 +--
 2 files changed, 627 insertions(+), 38 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 74248f098..6d3421fae 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -242,6 +242,7 @@ mlx5_dev_close(struct rte_eth_dev *dev)
/* In case mlx5_dev_stop() has not been called. */
mlx5_dev_interrupt_handler_uninstall(dev);
mlx5_traffic_disable(dev);
+   mlx5_flow_flush(dev, NULL);
/* Prevent crashes when queues are still in use. */
dev->rx_pkt_burst = removed_rx_burst;
dev->tx_pkt_burst = removed_tx_burst;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 8fdc6d7bb..036a8d440 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -35,11 +35,50 @@
 extern const struct eth_dev_ops mlx5_dev_ops;
 extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 
+/* Pattern Layer bits. */
+#define MLX5_FLOW_LAYER_OUTER_L2 (1u << 0)
+#define MLX5_FLOW_LAYER_OUTER_L3_IPV4 (1u << 1)
+#define MLX5_FLOW_LAYER_OUTER_L3_IPV6 (1u << 2)
+#define MLX5_FLOW_LAYER_OUTER_L4_UDP (1u << 3)
+#define MLX5_FLOW_LAYER_OUTER_L4_TCP (1u << 4)
+#define MLX5_FLOW_LAYER_OUTER_VLAN (1u << 5)
+/* Masks. */
+#define MLX5_FLOW_LAYER_OUTER_L3 \
+   (MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
+#define MLX5_FLOW_LAYER_OUTER_L4 \
+   (MLX5_FLOW_LAYER_OUTER_L4_UDP | MLX5_FLOW_LAYER_OUTER_L4_TCP)
+
+/* Actions that modify the fate of matching traffic. */
+#define MLX5_FLOW_FATE_DROP (1u << 0)
+
+/** Handles information leading to a drop fate. */
+struct mlx5_flow_verbs {
+   unsigned int size; /**< Size of the attribute. */
+   struct {
+   struct ibv_flow_attr *attr;
+   /**< Pointer to the Specification buffer. */
+   uint8_t *specs; /**< Pointer to the specifications. */
+   };
+   struct ibv_flow *flow; /**< Verbs flow pointer. */
+   struct mlx5_hrxq *hrxq; /**< Hash Rx queue object. */
+};
+
+/* Flow structure. */
 struct rte_flow {
TAILQ_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
+   struct rte_flow_attr attributes; /**< User flow attribute. */
+   uint32_t layers;
+   /**< Bit-fields of present layers see MLX5_FLOW_LAYER_*. */
+   uint32_t fate;
+   /**< Bit-fields of present fate see MLX5_FLOW_FATE_*. */
+   struct mlx5_flow_verbs verbs; /* Verbs drop flow. */
 };
 
 static const struct rte_flow_ops mlx5_flow_ops = {
+   .validate = mlx5_flow_validate,
+   .create = mlx5_flow_create,
+   .destroy = mlx5_flow_destroy,
+   .flush = mlx5_flow_flush,
.isolate = mlx5_flow_isolate,
 };
 
@@ -128,13 +167,415 @@ mlx5_flow_discover_priorities(struct rte_eth_dev *dev)
 }
 
 /**
- * Convert a flow.
+ * Verify the @p attributes will be correctly understood by the NIC and store
+ * them in the @p flow if everything is correct.
  *
- * @param dev
+ * @param[in] dev
  *   Pointer to Ethernet device.
- * @param list
- *   Pointer to a TAILQ flow list.
- * @param[in] attr
+ * @param[in] attributes
+ *   Pointer to flow attributes
+ * @param[in, out] flow
+ *   Pointer to the rte_flow structure.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_attributes(struct rte_eth_dev *dev,
+const struct rte_flow_attr *attributes,
+struct rte_flow *flow,
+struct rte_flow_error *error)
+{
+   uint32_t priority_max =
+   ((struct priv *)dev->data->dev_private)->config.flow_prio;
+
+   if (attributes->group)
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
+ NULL,
+ "groups is not supported");
+   if (attributes->priority >= priority_max)
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+ NULL,
+ "priority out of range");
+   if (attributes->egress)
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+ NULL,
+ "egress is not supported");
+   if (attributes->transfer)
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER,
+ NULL,
+ "transfer is not support
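The layer bookkeeping this patch introduces can be illustrated with a
small, self-contained sketch: each parsed item sets a bit in the flow's
layer mask, and later items are validated against the bits already
present. The bit values mirror the patch; the parsing functions are
simplified stand-ins for the real item converters.

```c
#include <assert.h>
#include <stdint.h>

/* Layer bits, mirroring the MLX5_FLOW_LAYER_OUTER_* defines. */
#define LAYER_OUTER_L2       (1u << 0)
#define LAYER_OUTER_L3_IPV4  (1u << 1)
#define LAYER_OUTER_L3_IPV6  (1u << 2)
#define LAYER_OUTER_L4_UDP   (1u << 3)
#define LAYER_OUTER_L4_TCP   (1u << 4)
#define LAYER_OUTER_L3 (LAYER_OUTER_L3_IPV4 | LAYER_OUTER_L3_IPV6)
#define LAYER_OUTER_L4 (LAYER_OUTER_L4_UDP | LAYER_OUTER_L4_TCP)

/* Returns 0 and records the L2 layer if an Ethernet item is
 * acceptable, -1 if an L3/L4 layer was already matched before it. */
int
parse_eth_item(uint32_t *layers)
{
	if (*layers & (LAYER_OUTER_L3 | LAYER_OUTER_L4))
		return -1; /* L2 cannot follow an L3/L4 layer */
	*layers |= LAYER_OUTER_L2;
	return 0;
}

int
parse_ipv4_item(uint32_t *layers)
{
	if (*layers & LAYER_OUTER_L3)
		return -1; /* multiple L3 layers not supported */
	if (*layers & LAYER_OUTER_L4)
		return -1; /* L3 cannot follow an L4 layer */
	*layers |= LAYER_OUTER_L3_IPV4;
	return 0;
}
```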

[dpdk-dev] [PATCH v4 01/21] net/mlx5: remove flow support

2018-07-12 Thread Nelio Laranjeiro
This starts a series to re-work the flow engine in mlx5 to easily support
flow conversion to Verbs or TC.  This is necessary to handle both regular
flows and representor flows.

As the full file needs to be cleaned up to re-write all items/actions
processing, this patch starts by disabling the regular code and only lets
the PMD start in isolated mode.

After this patch the flow API will not be usable.

Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5_flow.c | 3095 +-
 drivers/net/mlx5/mlx5_rxtx.h |1 -
 2 files changed, 80 insertions(+), 3016 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 45207d70e..a45cb06e1 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -31,2406 +31,49 @@
 #include "mlx5_prm.h"
 #include "mlx5_glue.h"
 
-/* Flow priority for control plane flows. */
-#define MLX5_CTRL_FLOW_PRIORITY 1
-
-/* Internet Protocol versions. */
-#define MLX5_IPV4 4
-#define MLX5_IPV6 6
-#define MLX5_GRE 47
-
-#ifndef HAVE_IBV_DEVICE_COUNTERS_SET_SUPPORT
-struct ibv_flow_spec_counter_action {
-   int dummy;
-};
-#endif
-
-/* Dev ops structure defined in mlx5.c */
-extern const struct eth_dev_ops mlx5_dev_ops;
-extern const struct eth_dev_ops mlx5_dev_ops_isolate;
-
-/** Structure give to the conversion functions. */
-struct mlx5_flow_data {
-   struct rte_eth_dev *dev; /** Ethernet device. */
-   struct mlx5_flow_parse *parser; /** Parser context. */
-   struct rte_flow_error *error; /** Error context. */
-};
-
-static int
-mlx5_flow_create_eth(const struct rte_flow_item *item,
-const void *default_mask,
-struct mlx5_flow_data *data);
-
-static int
-mlx5_flow_create_vlan(const struct rte_flow_item *item,
- const void *default_mask,
- struct mlx5_flow_data *data);
-
-static int
-mlx5_flow_create_ipv4(const struct rte_flow_item *item,
- const void *default_mask,
- struct mlx5_flow_data *data);
-
-static int
-mlx5_flow_create_ipv6(const struct rte_flow_item *item,
- const void *default_mask,
- struct mlx5_flow_data *data);
-
-static int
-mlx5_flow_create_udp(const struct rte_flow_item *item,
-const void *default_mask,
-struct mlx5_flow_data *data);
-
-static int
-mlx5_flow_create_tcp(const struct rte_flow_item *item,
-const void *default_mask,
-struct mlx5_flow_data *data);
-
-static int
-mlx5_flow_create_vxlan(const struct rte_flow_item *item,
-  const void *default_mask,
-  struct mlx5_flow_data *data);
-
-static int
-mlx5_flow_create_vxlan_gpe(const struct rte_flow_item *item,
-  const void *default_mask,
-  struct mlx5_flow_data *data);
-
-static int
-mlx5_flow_create_gre(const struct rte_flow_item *item,
-const void *default_mask,
-struct mlx5_flow_data *data);
-
-static int
-mlx5_flow_create_mpls(const struct rte_flow_item *item,
- const void *default_mask,
- struct mlx5_flow_data *data);
-
-struct mlx5_flow_parse;
-
-static void
-mlx5_flow_create_copy(struct mlx5_flow_parse *parser, void *src,
- unsigned int size);
-
-static int
-mlx5_flow_create_flag_mark(struct mlx5_flow_parse *parser, uint32_t mark_id);
-
-static int
-mlx5_flow_create_count(struct rte_eth_dev *dev, struct mlx5_flow_parse 
*parser);
-
-/* Hash RX queue types. */
-enum hash_rxq_type {
-   HASH_RXQ_TCPV4,
-   HASH_RXQ_UDPV4,
-   HASH_RXQ_IPV4,
-   HASH_RXQ_TCPV6,
-   HASH_RXQ_UDPV6,
-   HASH_RXQ_IPV6,
-   HASH_RXQ_ETH,
-   HASH_RXQ_TUNNEL,
-};
-
-/* Initialization data for hash RX queue. */
-struct hash_rxq_init {
-   uint64_t hash_fields; /* Fields that participate in the hash. */
-   uint64_t dpdk_rss_hf; /* Matching DPDK RSS hash fields. */
-   unsigned int flow_priority; /* Flow priority to use. */
-   unsigned int ip_version; /* Internet protocol. */
-};
-
-/* Initialization data for hash RX queues. */
-const struct hash_rxq_init hash_rxq_init[] = {
-   [HASH_RXQ_TCPV4] = {
-   .hash_fields = (IBV_RX_HASH_SRC_IPV4 |
-   IBV_RX_HASH_DST_IPV4 |
-   IBV_RX_HASH_SRC_PORT_TCP |
-   IBV_RX_HASH_DST_PORT_TCP),
-   .dpdk_rss_hf = ETH_RSS_NONFRAG_IPV4_TCP,
-   .flow_priority = 0,
-   .ip_version = MLX5_IPV4,
-   },
-   [HASH_RXQ_UDPV4] = {
-   .hash_fields = (IBV_RX_HASH_SRC_IPV4 |
-   IBV_RX_HASH_DST_IPV4 |
-   IBV_RX_HASH_SRC_PORT_UDP |
-   IBV_RX_HASH_DST_PORT_UDP),
-   .dpd

[dpdk-dev] [PATCH v4 03/21] net/mlx5: replace verbs priorities by flow

2018-07-12 Thread Nelio Laranjeiro
Previous work introduced Verbs priorities, whereas the PMD actually
translates flow priorities into Verbs ones.  Rename things accordingly to
better reflect what the PMD has to translate.

Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5.c  | 15 ---
 drivers/net/mlx5/mlx5.h  |  4 ++--
 drivers/net/mlx5/mlx5_flow.c | 24 
 3 files changed, 18 insertions(+), 25 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index e9780ac8f..74248f098 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -717,7 +717,6 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
unsigned int tunnel_en = 0;
unsigned int mpls_en = 0;
unsigned int swp = 0;
-   unsigned int verb_priorities = 0;
unsigned int mprq = 0;
unsigned int mprq_min_stride_size_n = 0;
unsigned int mprq_max_stride_size_n = 0;
@@ -1139,16 +1138,10 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
/* Store device configuration on private structure. */
priv->config = config;
/* Supported Verbs flow priority number detection. */
-   if (verb_priorities == 0) {
-   err = mlx5_verbs_max_prio(eth_dev);
-   if (err < 0) {
-   DRV_LOG(ERR, "port %u wrong Verbs flow priorities",
-   eth_dev->data->port_id);
-   goto error;
-   }
-   verb_priorities = err;
-   }
-   priv->config.max_verbs_prio = verb_priorities;
+   err = mlx5_flow_discover_priorities(eth_dev);
+   if (err < 0)
+   goto error;
+   priv->config.flow_prio = err;
/*
 * Once the device is added to the list of memory event
 * callback, its global MR cache table cannot be expanded
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 227429848..9949cd3fa 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -122,7 +122,7 @@ struct mlx5_dev_config {
unsigned int min_rxqs_num;
/* Rx queue count threshold to enable MPRQ. */
} mprq; /* Configurations for Multi-Packet RQ. */
-   unsigned int max_verbs_prio; /* Number of Verb flow priorities. */
+   unsigned int flow_prio; /* Number of flow priorities. */
unsigned int tso_max_payload_sz; /* Maximum TCP payload for TSO. */
unsigned int ind_table_max_size; /* Maximum indirection table size. */
int txq_inline; /* Maximum packet size for inlining. */
@@ -317,7 +317,7 @@ int mlx5_traffic_restart(struct rte_eth_dev *dev);
 
 /* mlx5_flow.c */
 
-int mlx5_verbs_max_prio(struct rte_eth_dev *dev);
+int mlx5_flow_discover_priorities(struct rte_eth_dev *dev);
 void mlx5_flow_print(struct rte_flow *flow);
 int mlx5_flow_validate(struct rte_eth_dev *dev,
   const struct rte_flow_attr *attr,
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 5e325be37..8fdc6d7bb 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -76,17 +76,17 @@ struct ibv_spec_header {
 };
 
  /**
-  * Get the maximum number of priority available.
+  * Discover the maximum number of priority available.
   *
   * @param[in] dev
   *   Pointer to Ethernet device.
   *
   * @return
-  *   number of supported Verbs flow priority on success, a negative errno
-  *   value otherwise and rte_errno is set.
+  *   number of supported flow priority on success, a negative errno value
+  *   otherwise and rte_errno is set.
   */
 int
-mlx5_verbs_max_prio(struct rte_eth_dev *dev)
+mlx5_flow_discover_priorities(struct rte_eth_dev *dev)
 {
struct {
struct ibv_flow_attr attr;
@@ -106,25 +106,25 @@ mlx5_verbs_max_prio(struct rte_eth_dev *dev)
},
};
struct ibv_flow *flow;
-   uint32_t verb_priorities;
struct mlx5_hrxq *drop = mlx5_hrxq_drop_new(dev);
+   uint16_t vprio[] = { 8, 16 };
+   int i;
 
if (!drop) {
rte_errno = ENOTSUP;
return -rte_errno;
}
-   for (verb_priorities = 0; 1; verb_priorities++) {
-   flow_attr.attr.priority = verb_priorities;
-   flow = mlx5_glue->create_flow(drop->qp,
- &flow_attr.attr);
+   for (i = 0; i != RTE_DIM(vprio); i++) {
+   flow_attr.attr.priority = vprio[i] - 1;
+   flow = mlx5_glue->create_flow(drop->qp, &flow_attr.attr);
if (!flow)
break;
claim_zero(mlx5_glue->destroy_flow(flow));
}
mlx5_hrxq_drop_release(dev);
DRV_LOG(INFO, "port %u flow maximum priority: %d",
-   dev->data->port_id, verb_priorities);
-   return verb_priorities;
+   dev->data->port_id, vprio[i - 1]);
+   return vprio[i - 1];
 }
 
 /**
@@ -318,7 +318,7 @@ mlx5_ctrl_flow_vlan
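The discovery logic above probes candidate priority counts by attempting
to create a flow at the highest priority each count would allow. A
simplified, self-contained model of that loop, with a stub standing in
for the Verbs flow creation call:

```c
#include <assert.h>
#include <stdint.h>

/* Stub device capability; the real code asks the kernel by actually
 * creating and destroying a drop flow at the candidate priority. */
static int supported_priorities = 16;

static int
create_flow(uint32_t priority)
{
	return priority < (uint32_t)supported_priorities ? 0 : -1;
}

/* Mirrors mlx5_flow_discover_priorities(): walk the candidate counts
 * and keep the last one whose highest priority is accepted. Like the
 * real code, this assumes the first candidate (8) always succeeds. */
int
discover_priorities(void)
{
	const uint16_t vprio[] = { 8, 16 };
	unsigned int i;

	for (i = 0; i != sizeof(vprio) / sizeof(vprio[0]); i++) {
		/* Highest priority value for this candidate count. */
		if (create_flow(vprio[i] - 1) != 0)
			break;
	}
	return vprio[i - 1];
}
```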

[dpdk-dev] [PATCH v4 05/21] net/mlx5: add flow queue action

2018-07-12 Thread Nelio Laranjeiro
Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5_flow.c | 97 
 1 file changed, 86 insertions(+), 11 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 036a8d440..6041a4573 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -50,6 +50,7 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 
 /* Actions that modify the fate of matching traffic. */
 #define MLX5_FLOW_FATE_DROP (1u << 0)
+#define MLX5_FLOW_FATE_QUEUE (1u << 1)
 
 /** Handles information leading to a drop fate. */
 struct mlx5_flow_verbs {
@@ -71,7 +72,8 @@ struct rte_flow {
/**< Bit-fields of present layers see MLX5_FLOW_LAYER_*. */
uint32_t fate;
/**< Bit-fields of present fate see MLX5_FLOW_FATE_*. */
-   struct mlx5_flow_verbs verbs; /* Verbs drop flow. */
+   struct mlx5_flow_verbs verbs; /* Verbs flow. */
+   uint16_t queue; /**< Destination queue to redirect traffic to. */
 };
 
 static const struct rte_flow_ops mlx5_flow_ops = {
@@ -492,6 +494,52 @@ mlx5_flow_action_drop(const struct rte_flow_action *action,
return size;
 }
 
+/**
+ * Convert the @p action into @p flow after ensuring the NIC will understand
+ * and process it correctly.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device structure.
+ * @param[in] action
+ *   Action configuration.
+ * @param[in, out] flow
+ *   Pointer to flow structure.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_action_queue(struct rte_eth_dev *dev,
+  const struct rte_flow_action *action,
+  struct rte_flow *flow,
+  struct rte_flow_error *error)
+{
+   struct priv *priv = dev->data->dev_private;
+   const struct rte_flow_action_queue *queue = action->conf;
+
+   if (flow->fate)
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ action,
+ "multiple fate actions are not"
+ " supported");
+   if (queue->index >= priv->rxqs_n)
+   return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ &queue->index,
+ "queue index out of range");
+   if (!(*priv->rxqs)[queue->index])
+   return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ &queue->index,
+ "queue is not configured");
+   flow->queue = queue->index;
+   flow->fate |= MLX5_FLOW_FATE_QUEUE;
+   return 0;
+}
+
 /**
  * Convert the @p action into @p flow after ensuring the NIC will understand
  * and process it correctly.
@@ -520,7 +568,7 @@ mlx5_flow_action_drop(const struct rte_flow_action *action,
  *   On error, a negative errno value is returned and rte_errno is set.
  */
 static int
-mlx5_flow_actions(struct rte_eth_dev *dev __rte_unused,
+mlx5_flow_actions(struct rte_eth_dev *dev,
  const struct rte_flow_action actions[],
  struct rte_flow *flow, const size_t flow_size,
  struct rte_flow_error *error)
@@ -537,6 +585,9 @@ mlx5_flow_actions(struct rte_eth_dev *dev __rte_unused,
ret = mlx5_flow_action_drop(actions, flow, remain,
error);
break;
+   case RTE_FLOW_ACTION_TYPE_QUEUE:
+   ret = mlx5_flow_action_queue(dev, actions, flow, error);
+   break;
default:
return rte_flow_error_set(error, ENOTSUP,
  RTE_FLOW_ERROR_TYPE_ACTION,
@@ -661,7 +712,10 @@ mlx5_flow_remove(struct rte_eth_dev *dev, struct rte_flow 
*flow)
}
}
if (flow->verbs.hrxq) {
-   mlx5_hrxq_drop_release(dev);
+   if (flow->fate & MLX5_FLOW_FATE_DROP)
+   mlx5_hrxq_drop_release(dev);
+   else if (flow->fate & MLX5_FLOW_FATE_QUEUE)
+   mlx5_hrxq_release(dev, flow->verbs.hrxq);
flow->verbs.hrxq = NULL;
}
 }
@@ -683,17 +737,38 @@ static int
 mlx5_flow_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
struct rte_flow_error *error)
 {
-   flow->verbs.hrxq = mlx5_hrxq_drop_new(dev);
-   if (!flow->verbs.hrxq)
-   return rte_flow_error_set
-   (error, errno,
-RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-
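The checks added by mlx5_flow_action_queue() can be sketched in
isolation: a flow may carry only one fate action, and the queue index
must refer to a configured Rx queue. The structures below are simplified
stand-ins for the driver's, keeping only the fate bit-field and the
destination queue.

```c
#include <assert.h>
#include <stdint.h>

/* Fate bits, mirroring the MLX5_FLOW_FATE_* defines. */
#define FATE_DROP  (1u << 0)
#define FATE_QUEUE (1u << 1)

struct toy_flow {
	uint32_t fate;  /* bit-field of fate actions already seen */
	uint16_t queue; /* destination queue to redirect traffic to */
};

/* Returns 0 and records the queue fate, -1 on an invalid action. */
int
flow_action_queue(struct toy_flow *flow, uint16_t index, uint16_t rxqs_n)
{
	if (flow->fate)
		return -1; /* multiple fate actions are not supported */
	if (index >= rxqs_n)
		return -1; /* queue index out of range */
	flow->queue = index;
	flow->fate |= FATE_QUEUE;
	return 0;
}
```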

[dpdk-dev] [PATCH v4 06/21] net/mlx5: add flow stop/start

2018-07-12 Thread Nelio Laranjeiro
Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5_flow.c | 24 
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 6041a4573..77f1bd5cc 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -908,9 +908,12 @@ mlx5_flow_list_flush(struct rte_eth_dev *dev, struct 
mlx5_flows *list)
  *   Pointer to a TAILQ flow list.
  */
 void
-mlx5_flow_stop(struct rte_eth_dev *dev __rte_unused,
-  struct mlx5_flows *list __rte_unused)
+mlx5_flow_stop(struct rte_eth_dev *dev, struct mlx5_flows *list)
 {
+   struct rte_flow *flow;
+
+   TAILQ_FOREACH_REVERSE(flow, list, mlx5_flows, next)
+   mlx5_flow_remove(dev, flow);
 }
 
 /**
@@ -925,10 +928,23 @@ mlx5_flow_stop(struct rte_eth_dev *dev __rte_unused,
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 int
-mlx5_flow_start(struct rte_eth_dev *dev __rte_unused,
-   struct mlx5_flows *list __rte_unused)
+mlx5_flow_start(struct rte_eth_dev *dev, struct mlx5_flows *list)
 {
+   struct rte_flow *flow;
+   struct rte_flow_error error;
+   int ret = 0;
+
+   TAILQ_FOREACH(flow, list, next) {
+   ret = mlx5_flow_apply(dev, flow, &error);
+   if (ret < 0)
+   goto error;
+   }
return 0;
+error:
+   ret = rte_errno; /* Save rte_errno before cleanup. */
+   mlx5_flow_stop(dev, list);
+   rte_errno = ret; /* Restore rte_errno. */
+   return -rte_errno;
 }
 
 /**
-- 
2.18.0
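The start semantics of this patch can be modeled compactly: applying the
flow list rolls back the already-applied flows if any apply fails, so
the port is never left half-started. flow_apply() and flow_remove()
below are stubs, not the driver functions.

```c
#include <assert.h>

#define NFLOWS 4

static int applied[NFLOWS];
static int fail_at = -1; /* index that should fail, -1 for none */

static int
flow_apply(int i)
{
	if (i == fail_at)
		return -1;
	applied[i] = 1;
	return 0;
}

static void
flow_remove(int i)
{
	applied[i] = 0;
}

/* Mirrors mlx5_flow_start(): apply every flow, undoing all previous
 * applies on the first failure, as the real code does by calling
 * mlx5_flow_stop() in its error path. */
int
flow_start(void)
{
	int i;

	for (i = 0; i < NFLOWS; i++)
		if (flow_apply(i) < 0)
			goto error;
	return 0;
error:
	while (--i >= 0) /* roll back in reverse order */
		flow_remove(i);
	return -1;
}
```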



[dpdk-dev] [PATCH v4 07/21] net/mlx5: add flow VLAN item

2018-07-12 Thread Nelio Laranjeiro
Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5_flow.c | 127 +++
 1 file changed, 127 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 77f1bd5cc..659979283 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -382,6 +382,130 @@ mlx5_flow_item_eth(const struct rte_flow_item *item, 
struct rte_flow *flow,
return size;
 }
 
+/**
+ * Update the VLAN tag in the Verbs Ethernet specification.
+ *
+ * @param[in, out] attr
+ *   Pointer to Verbs attributes structure.
+ * @param[in] eth
+ *   Verbs structure containing the VLAN information to copy.
+ */
+static void
+mlx5_flow_item_vlan_update(struct ibv_flow_attr *attr,
+  struct ibv_flow_spec_eth *eth)
+{
+   unsigned int i;
+   enum ibv_flow_spec_type search = IBV_FLOW_SPEC_ETH;
+   struct ibv_spec_header *hdr = (struct ibv_spec_header *)
+   ((uint8_t *)attr + sizeof(struct ibv_flow_attr));
+
+   for (i = 0; i != attr->num_of_specs; ++i) {
+   if (hdr->type == search) {
+   struct ibv_flow_spec_eth *e =
+   (struct ibv_flow_spec_eth *)hdr;
+
+   e->val.vlan_tag = eth->val.vlan_tag;
+   e->mask.vlan_tag = eth->mask.vlan_tag;
+   e->val.ether_type = eth->val.ether_type;
+   e->mask.ether_type = eth->mask.ether_type;
+   break;
+   }
+   hdr = (struct ibv_spec_header *)((uint8_t *)hdr + hdr->size);
+   }
+}
+
+/**
+ * Convert the @p item into @p flow (or by updating the already present
+ * Ethernet Verbs) specification after ensuring the NIC will understand and
+ * process it correctly.
+ * If the necessary size for the conversion is greater than the @p flow_size,
+ * nothing is written in @p flow, the validation is still performed.
+ *
+ * @param[in] item
+ *   Item specification.
+ * @param[in, out] flow
+ *   Pointer to flow structure.
+ * @param[in] flow_size
+ *   Size in bytes of the available space in @p flow, if too small, nothing is
+ *   written.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   On success the number of bytes consumed/necessary, if the returned value
+ *   is lesser or equal to @p flow_size, the @p item has fully been converted,
+ *   otherwise another call with this returned memory size should be done.
+ *   On error, a negative errno value is returned and rte_errno is set.
+ */
+static int
+mlx5_flow_item_vlan(const struct rte_flow_item *item, struct rte_flow *flow,
+   const size_t flow_size, struct rte_flow_error *error)
+{
+   const struct rte_flow_item_vlan *spec = item->spec;
+   const struct rte_flow_item_vlan *mask = item->mask;
+   const struct rte_flow_item_vlan nic_mask = {
+   .tci = RTE_BE16(0x0fff),
+   .inner_type = RTE_BE16(0x),
+   };
+   unsigned int size = sizeof(struct ibv_flow_spec_eth);
+   struct ibv_flow_spec_eth eth = {
+   .type = IBV_FLOW_SPEC_ETH,
+   .size = size,
+   };
+   int ret;
+   const uint32_t l34m = MLX5_FLOW_LAYER_OUTER_L3 |
+   MLX5_FLOW_LAYER_OUTER_L4;
+   const uint32_t vlanm = MLX5_FLOW_LAYER_OUTER_VLAN;
+   const uint32_t l2m = MLX5_FLOW_LAYER_OUTER_L2;
+
+   if (flow->layers & vlanm)
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "VLAN layer already configured");
+   else if ((flow->layers & l34m) != 0)
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "L2 layer cannot follow L3/L4 layer");
+   if (!mask)
+   mask = &rte_flow_item_vlan_mask;
+   ret = mlx5_flow_item_acceptable
+   (item, (const uint8_t *)mask,
+(const uint8_t *)&nic_mask,
+sizeof(struct rte_flow_item_vlan), error);
+   if (ret)
+   return ret;
+   if (spec) {
+   eth.val.vlan_tag = spec->tci;
+   eth.mask.vlan_tag = mask->tci;
+   eth.val.vlan_tag &= eth.mask.vlan_tag;
+   eth.val.ether_type = spec->inner_type;
+   eth.mask.ether_type = mask->inner_type;
+   eth.val.ether_type &= eth.mask.ether_type;
+   }
+   /*
+* From verbs perspective an empty VLAN is equivalent
+* to a packet without VLAN layer.
+*/
+   if (!eth.mask.vlan_tag)
+   return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC,
+   
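The in-place update performed by mlx5_flow_item_vlan_update(), walking
the packed Verbs specifications to patch the already-emitted Ethernet
spec, can be sketched with toy structures. The {type, size} header walk
is the essential part; the layouts are stand-ins for the ibv_* types.

```c
#include <assert.h>
#include <stdint.h>

enum spec_type { SPEC_ETH = 1, SPEC_IPV4 = 2 };

/* Every spec starts with this header, as ibv_spec_header does. */
struct spec_hdr {
	uint16_t type;
	uint16_t size; /* total size of this spec in bytes */
};

struct spec_eth {
	struct spec_hdr hdr;
	uint16_t vlan_tag;
};

/* Walk the packed spec buffer and patch the VLAN tag into the first
 * Ethernet spec found, instead of appending a new spec. */
void
vlan_update(uint8_t *specs, unsigned int num_of_specs, uint16_t tci)
{
	struct spec_hdr *h = (struct spec_hdr *)specs;
	unsigned int i;

	for (i = 0; i != num_of_specs; ++i) {
		if (h->type == SPEC_ETH) {
			((struct spec_eth *)h)->vlan_tag = tci;
			break;
		}
		h = (struct spec_hdr *)((uint8_t *)h + h->size);
	}
}
```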

[dpdk-dev] [PATCH v4 09/21] net/mlx5: add flow IPv6 item

2018-07-12 Thread Nelio Laranjeiro
Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5_flow.c | 115 +++
 1 file changed, 115 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index c05b8498d..513f70d40 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -593,6 +593,118 @@ mlx5_flow_item_ipv4(const struct rte_flow_item *item, 
struct rte_flow *flow,
return size;
 }
 
+/**
+ * Convert the @p item into a Verbs specification after ensuring the NIC
+ * will understand and process it correctly.
+ * If the necessary size for the conversion is greater than the @p flow_size,
+ * nothing is written in @p flow, the validation is still performed.
+ *
+ * @param[in] item
+ *   Item specification.
+ * @param[in, out] flow
+ *   Pointer to flow structure.
+ * @param[in] flow_size
+ *   Size in bytes of the available space in @p flow, if too small, nothing is
+ *   written.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   On success the number of bytes consumed/necessary, if the returned value
+ *   is lesser or equal to @p flow_size, the @p item has fully been converted,
+ *   otherwise another call with this returned memory size should be done.
+ *   On error, a negative errno value is returned and rte_errno is set.
+ */
+static int
+mlx5_flow_item_ipv6(const struct rte_flow_item *item, struct rte_flow *flow,
+   const size_t flow_size, struct rte_flow_error *error)
+{
+   const struct rte_flow_item_ipv6 *spec = item->spec;
+   const struct rte_flow_item_ipv6 *mask = item->mask;
+   const struct rte_flow_item_ipv6 nic_mask = {
+   .hdr = {
+   .src_addr =
+   "\xff\xff\xff\xff\xff\xff\xff\xff"
+   "\xff\xff\xff\xff\xff\xff\xff\xff",
+   .dst_addr =
+   "\xff\xff\xff\xff\xff\xff\xff\xff"
+   "\xff\xff\xff\xff\xff\xff\xff\xff",
+   .vtc_flow = RTE_BE32(0x),
+   .proto = 0xff,
+   .hop_limits = 0xff,
+   },
+   };
+   unsigned int size = sizeof(struct ibv_flow_spec_ipv6);
+   struct ibv_flow_spec_ipv6 ipv6 = {
+   .type = IBV_FLOW_SPEC_IPV6,
+   .size = size,
+   };
+   int ret;
+
+   if (flow->layers & MLX5_FLOW_LAYER_OUTER_L3)
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "multiple L3 layers not supported");
+   else if (flow->layers & MLX5_FLOW_LAYER_OUTER_L4)
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "L3 cannot follow an L4 layer.");
+   if (!mask)
+   mask = &rte_flow_item_ipv6_mask;
+   ret = mlx5_flow_item_acceptable
+   (item, (const uint8_t *)mask,
+(const uint8_t *)&nic_mask,
+sizeof(struct rte_flow_item_ipv6), error);
+   if (ret < 0)
+   return ret;
+   flow->layers |= MLX5_FLOW_LAYER_OUTER_L3_IPV6;
+   if (size > flow_size)
+   return size;
+   if (spec) {
+   unsigned int i;
+   uint32_t vtc_flow_val;
+   uint32_t vtc_flow_mask;
+
+   memcpy(&ipv6.val.src_ip, spec->hdr.src_addr,
+  RTE_DIM(ipv6.val.src_ip));
+   memcpy(&ipv6.val.dst_ip, spec->hdr.dst_addr,
+  RTE_DIM(ipv6.val.dst_ip));
+   memcpy(&ipv6.mask.src_ip, mask->hdr.src_addr,
+  RTE_DIM(ipv6.mask.src_ip));
+   memcpy(&ipv6.mask.dst_ip, mask->hdr.dst_addr,
+  RTE_DIM(ipv6.mask.dst_ip));
+   vtc_flow_val = rte_be_to_cpu_32(spec->hdr.vtc_flow);
+   vtc_flow_mask = rte_be_to_cpu_32(mask->hdr.vtc_flow);
+   ipv6.val.flow_label =
+   rte_cpu_to_be_32((vtc_flow_val & IPV6_HDR_FL_MASK) >>
+IPV6_HDR_FL_SHIFT);
+   ipv6.val.traffic_class = (vtc_flow_val & IPV6_HDR_TC_MASK) >>
+IPV6_HDR_TC_SHIFT;
+   ipv6.val.next_hdr = spec->hdr.proto;
+   ipv6.val.hop_limit = spec->hdr.hop_limits;
+   ipv6.mask.flow_label =
+   rte_cpu_to_be_32((vtc_flow_mask & IPV6_HDR_FL_MASK) >>
+IPV6_HDR_FL_SHIFT);
+   ipv6.mask.traffic_class = (vtc_flow_mask & IPV6_HDR_TC_MASK) >>
+ IPV6_HDR_TC_SHIFT;
+   ipv6.mask.next_hdr = mask->hdr.proto;

[dpdk-dev] [PATCH v4 08/21] net/mlx5: add flow IPv4 item

2018-07-12 Thread Nelio Laranjeiro
Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5_flow.c | 90 
 1 file changed, 90 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 659979283..c05b8498d 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -506,6 +506,93 @@ mlx5_flow_item_vlan(const struct rte_flow_item *item, struct rte_flow *flow,
return size;
 }
 
+/**
+ * Convert the @p item into a Verbs specification after ensuring the NIC
+ * will understand and process it correctly.
+ * If the necessary size for the conversion is greater than the @p flow_size,
+ * nothing is written in @p flow; the validation is still performed.
+ *
+ * @param[in] item
+ *   Item specification.
+ * @param[in, out] flow
+ *   Pointer to flow structure.
+ * @param[in] flow_size
+ *   Size in bytes of the available space in @p flow, if too small, nothing is
+ *   written.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   On success the number of bytes consumed/necessary. If the returned value
+ *   is less than or equal to @p flow_size, the @p item has been fully
+ *   converted, otherwise another call with the returned size should be made.
+ *   On error, a negative errno value is returned and rte_errno is set.
+ */
+static int
+mlx5_flow_item_ipv4(const struct rte_flow_item *item, struct rte_flow *flow,
+   const size_t flow_size, struct rte_flow_error *error)
+{
+   const struct rte_flow_item_ipv4 *spec = item->spec;
+   const struct rte_flow_item_ipv4 *mask = item->mask;
+   const struct rte_flow_item_ipv4 nic_mask = {
+   .hdr = {
+   .src_addr = RTE_BE32(0x),
+   .dst_addr = RTE_BE32(0x),
+   .type_of_service = 0xff,
+   .next_proto_id = 0xff,
+   },
+   };
+   unsigned int size = sizeof(struct ibv_flow_spec_ipv4_ext);
+   struct ibv_flow_spec_ipv4_ext ipv4 = {
+   .type = IBV_FLOW_SPEC_IPV4_EXT,
+   .size = size,
+   };
+   int ret;
+
+   if (flow->layers & MLX5_FLOW_LAYER_OUTER_L3)
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "multiple L3 layers not supported");
+   else if (flow->layers & MLX5_FLOW_LAYER_OUTER_L4)
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "L3 cannot follow an L4 layer.");
+   if (!mask)
+   mask = &rte_flow_item_ipv4_mask;
+   ret = mlx5_flow_item_acceptable
+   (item, (const uint8_t *)mask,
+(const uint8_t *)&nic_mask,
+sizeof(struct rte_flow_item_ipv4), error);
+   if (ret < 0)
+   return ret;
+   flow->layers |= MLX5_FLOW_LAYER_OUTER_L3_IPV4;
+   if (size > flow_size)
+   return size;
+   if (spec) {
+   ipv4.val = (struct ibv_flow_ipv4_ext_filter){
+   .src_ip = spec->hdr.src_addr,
+   .dst_ip = spec->hdr.dst_addr,
+   .proto = spec->hdr.next_proto_id,
+   .tos = spec->hdr.type_of_service,
+   };
+   ipv4.mask = (struct ibv_flow_ipv4_ext_filter){
+   .src_ip = mask->hdr.src_addr,
+   .dst_ip = mask->hdr.dst_addr,
+   .proto = mask->hdr.next_proto_id,
+   .tos = mask->hdr.type_of_service,
+   };
+   /* Remove unwanted bits from values. */
+   ipv4.val.src_ip &= ipv4.mask.src_ip;
+   ipv4.val.dst_ip &= ipv4.mask.dst_ip;
+   ipv4.val.proto &= ipv4.mask.proto;
+   ipv4.val.tos &= ipv4.mask.tos;
+   }
+   mlx5_flow_spec_verbs_add(flow, &ipv4, size);
+   return size;
+}
+
 /**
  * Convert the @p pattern into Verbs specifications after ensuring the NIC
  * will understand and process it correctly.
@@ -551,6 +638,9 @@ mlx5_flow_items(const struct rte_flow_item pattern[],
case RTE_FLOW_ITEM_TYPE_VLAN:
ret = mlx5_flow_item_vlan(pattern, flow, remain, error);
break;
+   case RTE_FLOW_ITEM_TYPE_IPV4:
+   ret = mlx5_flow_item_ipv4(pattern, flow, remain, error);
+   break;
default:
return rte_flow_error_set(error, ENOTSUP,
  RTE_FLOW_ERROR_TYPE_ITEM,
-- 
2.18.0



[dpdk-dev] [PATCH v4 10/21] net/mlx5: add flow UDP item

2018-07-12 Thread Nelio Laranjeiro
Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5_flow.c | 97 +---
 1 file changed, 91 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 513f70d40..0096ed8a2 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -52,6 +52,9 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 #define MLX5_FLOW_FATE_DROP (1u << 0)
 #define MLX5_FLOW_FATE_QUEUE (1u << 1)
 
+/* Possible L3 layer protocols for filtering. */
+#define MLX5_IP_PROTOCOL_UDP 17
+
 /** Handles information leading to a drop fate. */
 struct mlx5_flow_verbs {
unsigned int size; /**< Size of the attribute. */
@@ -68,10 +71,12 @@ struct mlx5_flow_verbs {
 struct rte_flow {
TAILQ_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
struct rte_flow_attr attributes; /**< User flow attribute. */
+   uint32_t l3_protocol_en:1; /**< Protocol filtering requested. */
uint32_t layers;
/**< Bit-fields of present layers see MLX5_FLOW_LAYER_*. */
uint32_t fate;
/**< Bit-fields of present fate see MLX5_FLOW_FATE_*. */
+   uint8_t l3_protocol; /**< valid when l3_protocol_en is set. */
struct mlx5_flow_verbs verbs; /* Verbs flow. */
uint16_t queue; /**< Destination queue to redirect traffic to. */
 };
@@ -568,8 +573,6 @@ mlx5_flow_item_ipv4(const struct rte_flow_item *item, struct rte_flow *flow,
if (ret < 0)
return ret;
flow->layers |= MLX5_FLOW_LAYER_OUTER_L3_IPV4;
-   if (size > flow_size)
-   return size;
if (spec) {
ipv4.val = (struct ibv_flow_ipv4_ext_filter){
.src_ip = spec->hdr.src_addr,
@@ -589,7 +592,10 @@ mlx5_flow_item_ipv4(const struct rte_flow_item *item, struct rte_flow *flow,
ipv4.val.proto &= ipv4.mask.proto;
ipv4.val.tos &= ipv4.mask.tos;
}
-   mlx5_flow_spec_verbs_add(flow, &ipv4, size);
+   flow->l3_protocol_en = !!ipv4.mask.proto;
+   flow->l3_protocol = ipv4.val.proto;
+   if (size <= flow_size)
+   mlx5_flow_spec_verbs_add(flow, &ipv4, size);
return size;
 }
 
@@ -660,8 +666,6 @@ mlx5_flow_item_ipv6(const struct rte_flow_item *item, struct rte_flow *flow,
if (ret < 0)
return ret;
flow->layers |= MLX5_FLOW_LAYER_OUTER_L3_IPV6;
-   if (size > flow_size)
-   return size;
if (spec) {
unsigned int i;
uint32_t vtc_flow_val;
@@ -701,7 +705,85 @@ mlx5_flow_item_ipv6(const struct rte_flow_item *item, struct rte_flow *flow,
ipv6.val.next_hdr &= ipv6.mask.next_hdr;
ipv6.val.hop_limit &= ipv6.mask.hop_limit;
}
-   mlx5_flow_spec_verbs_add(flow, &ipv6, size);
+   flow->l3_protocol_en = !!ipv6.mask.next_hdr;
+   flow->l3_protocol = ipv6.val.next_hdr;
+   if (size <= flow_size)
+   mlx5_flow_spec_verbs_add(flow, &ipv6, size);
+   return size;
+}
+
+/**
+ * Convert the @p item into a Verbs specification after ensuring the NIC
+ * will understand and process it correctly.
+ * If the necessary size for the conversion is greater than the @p flow_size,
+ * nothing is written in @p flow; the validation is still performed.
+ *
+ * @param[in] item
+ *   Item specification.
+ * @param[in, out] flow
+ *   Pointer to flow structure.
+ * @param[in] flow_size
+ *   Size in bytes of the available space in @p flow, if too small, nothing is
+ *   written.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   On success the number of bytes consumed/necessary, if the returned value
+ *   is lesser or equal to @p flow_size, the @p item has fully been converted,
+ *   otherwise another call with this returned memory size should be done.
+ *   On error, a negative errno value is returned and rte_errno is set.
+ */
+static int
+mlx5_flow_item_udp(const struct rte_flow_item *item, struct rte_flow *flow,
+  const size_t flow_size, struct rte_flow_error *error)
+{
+   const struct rte_flow_item_udp *spec = item->spec;
+   const struct rte_flow_item_udp *mask = item->mask;
+   unsigned int size = sizeof(struct ibv_flow_spec_tcp_udp);
+   struct ibv_flow_spec_tcp_udp udp = {
+   .type = IBV_FLOW_SPEC_UDP,
+   .size = size,
+   };
+   int ret;
+
+   if (!(flow->layers & MLX5_FLOW_LAYER_OUTER_L3))
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "L3 is mandatory to filter on L4");
+   if (flow->layers & MLX5_FLOW_LAYER_OUTER_L4)
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ 

[dpdk-dev] [PATCH v4 12/21] net/mlx5: add mark/flag flow action

2018-07-12 Thread Nelio Laranjeiro
Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5_flow.c | 252 +++
 drivers/net/mlx5/mlx5_rxtx.h |   1 +
 2 files changed, 253 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index f646eee01..1280db486 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -52,6 +52,10 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 #define MLX5_FLOW_FATE_DROP (1u << 0)
 #define MLX5_FLOW_FATE_QUEUE (1u << 1)
 
+/* Modify a packet. */
+#define MLX5_FLOW_MOD_FLAG (1u << 0)
+#define MLX5_FLOW_MOD_MARK (1u << 1)
+
 /* possible L3 layers protocols filtering. */
 #define MLX5_IP_PROTOCOL_TCP 6
 #define MLX5_IP_PROTOCOL_UDP 17
@@ -75,6 +79,8 @@ struct rte_flow {
uint32_t l3_protocol_en:1; /**< Protocol filtering requested. */
uint32_t layers;
/**< Bit-fields of present layers see MLX5_FLOW_LAYER_*. */
+   uint32_t modifier;
+   /**< Bit-fields of present modifier see MLX5_FLOW_MOD_*. */
uint32_t fate;
/**< Bit-fields of present fate see MLX5_FLOW_FATE_*. */
uint8_t l3_protocol; /**< valid when l3_protocol_en is set. */
@@ -984,6 +990,12 @@ mlx5_flow_action_drop(const struct rte_flow_action *action,
  action,
  "multiple fate actions are not"
  " supported");
+   if (flow->modifier & (MLX5_FLOW_MOD_FLAG | MLX5_FLOW_MOD_MARK))
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ action,
+ "drop is not compatible with"
+ " flag/mark action");
if (size < flow_size)
mlx5_flow_spec_verbs_add(flow, &drop, size);
flow->fate |= MLX5_FLOW_FATE_DROP;
@@ -1036,6 +1048,161 @@ mlx5_flow_action_queue(struct rte_eth_dev *dev,
return 0;
 }
 
+/**
+ * Convert the @p action into a Verbs specification after ensuring the NIC
+ * will understand and process it correctly.
+ * If the necessary size for the conversion is greater than the @p flow_size,
+ * nothing is written in @p flow; the validation is still performed.
+ *
+ * @param[in] action
+ *   Action configuration.
+ * @param[in, out] flow
+ *   Pointer to flow structure.
+ * @param[in] flow_size
+ *   Size in bytes of the available space in @p flow, if too small, nothing is
+ *   written.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   On success the number of bytes consumed/necessary. If the returned value
+ *   is less than or equal to @p flow_size, the @p action has been fully
+ *   converted, otherwise another call with the returned size should be made.
+ *   On error, a negative errno value is returned and rte_errno is set.
+ */
+static int
+mlx5_flow_action_flag(const struct rte_flow_action *action,
+ struct rte_flow *flow, const size_t flow_size,
+ struct rte_flow_error *error)
+{
+   unsigned int size = sizeof(struct ibv_flow_spec_action_tag);
+   struct ibv_flow_spec_action_tag tag = {
+   .type = IBV_FLOW_SPEC_ACTION_TAG,
+   .size = size,
+   .tag_id = mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT),
+   };
+
+   if (flow->modifier & MLX5_FLOW_MOD_FLAG)
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ action,
+ "flag action already present");
+   if (flow->fate & MLX5_FLOW_FATE_DROP)
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ action,
+ "flag is not compatible with drop"
+ " action");
+   if (flow->modifier & MLX5_FLOW_MOD_MARK)
+   return 0;
+   flow->modifier |= MLX5_FLOW_MOD_FLAG;
+   if (size <= flow_size)
+   mlx5_flow_spec_verbs_add(flow, &tag, size);
+   return size;
+}
+
+/**
+ * Update verbs specification to modify the flag to mark.
+ *
+ * @param[in, out] flow
+ *   Pointer to the rte_flow structure.
+ * @param[in] mark_id
+ *   Mark identifier to replace the flag.
+ */
+static void
+mlx5_flow_verbs_mark_update(struct rte_flow *flow, uint32_t mark_id)
+{
+   struct ibv_spec_header *hdr;
+   int i;
+
+   /* Update Verbs specification. */
+   hdr = (struct ibv_spec_header *)flow->verbs.specs;
+   if (!hdr)
+   return;
+   for (i = 0; i != flow->verbs.attr->num_of_specs; ++i) {
+   if (hdr->type == IBV_FLOW_SPEC_ACTION_TAG) {
+   stru

[dpdk-dev] [PATCH v4 13/21] net/mlx5: use a macro for the RSS key size

2018-07-12 Thread Nelio Laranjeiro
ConnectX-4 and ConnectX-5 support only 40-byte RSS keys; since the size
is fixed, carrying it in a run-time variable is not necessary.

Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5_ethdev.c | 14 +++---
 drivers/net/mlx5/mlx5_flow.c   |  4 ++--
 drivers/net/mlx5/mlx5_prm.h|  3 +++
 drivers/net/mlx5/mlx5_rss.c|  7 ---
 drivers/net/mlx5/mlx5_rxq.c| 12 +++-
 drivers/net/mlx5/mlx5_rxtx.h   |  1 -
 6 files changed, 23 insertions(+), 18 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 05f66f7b6..6e44d5ff0 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -377,15 +377,15 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 
if (use_app_rss_key &&
(dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len !=
-rss_hash_default_key_len)) {
-   DRV_LOG(ERR, "port %u RSS key len must be %zu Bytes long",
-   dev->data->port_id, rss_hash_default_key_len);
+MLX5_RSS_HASH_KEY_LEN)) {
+   DRV_LOG(ERR, "port %u RSS key len must be %s Bytes long",
+   dev->data->port_id, RTE_STR(MLX5_RSS_HASH_KEY_LEN));
rte_errno = EINVAL;
return -rte_errno;
}
priv->rss_conf.rss_key =
rte_realloc(priv->rss_conf.rss_key,
-   rss_hash_default_key_len, 0);
+   MLX5_RSS_HASH_KEY_LEN, 0);
if (!priv->rss_conf.rss_key) {
DRV_LOG(ERR, "port %u cannot allocate RSS hash key memory (%u)",
dev->data->port_id, rxqs_n);
@@ -396,8 +396,8 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
   use_app_rss_key ?
   dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key :
   rss_hash_default_key,
-  rss_hash_default_key_len);
-   priv->rss_conf.rss_key_len = rss_hash_default_key_len;
+  MLX5_RSS_HASH_KEY_LEN);
+   priv->rss_conf.rss_key_len = MLX5_RSS_HASH_KEY_LEN;
priv->rss_conf.rss_hf = dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
priv->rxqs = (void *)dev->data->rx_queues;
priv->txqs = (void *)dev->data->tx_queues;
@@ -515,7 +515,7 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
info->if_index = if_nametoindex(ifname);
info->reta_size = priv->reta_idx_n ?
priv->reta_idx_n : config->ind_table_max_size;
-   info->hash_key_size = rss_hash_default_key_len;
+   info->hash_key_size = MLX5_RSS_HASH_KEY_LEN;
info->speed_capa = priv->link_speed_capa;
info->flow_type_rss_offloads = ~MLX5_RSS_HF_MASK;
mlx5_set_default_params(dev, info);
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 1280db486..77483bd1f 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1493,11 +1493,11 @@ mlx5_flow_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
struct mlx5_hrxq *hrxq;
 
hrxq = mlx5_hrxq_get(dev, rss_hash_default_key,
-rss_hash_default_key_len, 0,
+MLX5_RSS_HASH_KEY_LEN, 0,
 &flow->queue, 1, 0, 0);
if (!hrxq)
hrxq = mlx5_hrxq_new(dev, rss_hash_default_key,
-rss_hash_default_key_len, 0,
+MLX5_RSS_HASH_KEY_LEN, 0,
 &flow->queue, 1, 0, 0);
if (!hrxq)
return rte_flow_error_set(error, rte_errno,
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index f9fae1e50..0870d32fd 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -21,6 +21,9 @@
 #include 
 #include "mlx5_autoconf.h"
 
+/* RSS hash key size. */
+#define MLX5_RSS_HASH_KEY_LEN 40
+
 /* Get CQE owner bit. */
 #define MLX5_CQE_OWNER(op_own) ((op_own) & MLX5_CQE_OWNER_MASK)
 
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index d69b4c09e..b95778a8c 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -50,10 +50,11 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev,
return -rte_errno;
}
if (rss_conf->rss_key && rss_conf->rss_key_len) {
-   if (rss_conf->rss_key_len != rss_hash_default_key_len) {
+   if (rss_conf->rss_key_len != MLX5_RSS_HASH_KEY_LEN) {
DRV_LOG(ERR,
-   "port %u RSS key len must be %zu Bytes long",
-   dev->data->port_id, rss_hash_default_key_len);
+   "port %u RSS key len must be %s Bytes long",
+   dev->data->port_id,
+   RTE_STR(MLX5_RSS_HASH_KEY_LEN)

[dpdk-dev] [PATCH v4 11/21] net/mlx5: add flow TCP item

2018-07-12 Thread Nelio Laranjeiro
Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5_flow.c | 79 
 1 file changed, 79 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 0096ed8a2..f646eee01 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -53,6 +53,7 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 #define MLX5_FLOW_FATE_QUEUE (1u << 1)
 
 /* Possible L3 layer protocols for filtering. */
+#define MLX5_IP_PROTOCOL_TCP 6
 #define MLX5_IP_PROTOCOL_UDP 17
 
 /** Handles information leading to a drop fate. */
@@ -787,6 +788,81 @@ mlx5_flow_item_udp(const struct rte_flow_item *item, struct rte_flow *flow,
return size;
 }
 
+/**
+ * Convert the @p item into a Verbs specification after ensuring the NIC
+ * will understand and process it correctly.
+ * If the necessary size for the conversion is greater than the @p flow_size,
+ * nothing is written in @p flow; the validation is still performed.
+ *
+ * @param[in] item
+ *   Item specification.
+ * @param[in, out] flow
+ *   Pointer to flow structure.
+ * @param[in] flow_size
+ *   Size in bytes of the available space in @p flow, if too small, nothing is
+ *   written.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   On success the number of bytes consumed/necessary. If the returned value
+ *   is less than or equal to @p flow_size, the @p item has been fully
+ *   converted, otherwise another call with the returned size should be made.
+ *   On error, a negative errno value is returned and rte_errno is set.
+ */
+static int
+mlx5_flow_item_tcp(const struct rte_flow_item *item, struct rte_flow *flow,
+  const size_t flow_size, struct rte_flow_error *error)
+{
+   const struct rte_flow_item_tcp *spec = item->spec;
+   const struct rte_flow_item_tcp *mask = item->mask;
+   unsigned int size = sizeof(struct ibv_flow_spec_tcp_udp);
+   struct ibv_flow_spec_tcp_udp tcp = {
+   .type = IBV_FLOW_SPEC_TCP,
+   .size = size,
+   };
+   int ret;
+
+   if (flow->l3_protocol_en && flow->l3_protocol != MLX5_IP_PROTOCOL_TCP)
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "protocol filtering not compatible"
+ " with TCP layer");
+   if (!(flow->layers & MLX5_FLOW_LAYER_OUTER_L3))
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "L3 is mandatory to filter on L4");
+   if (flow->layers & MLX5_FLOW_LAYER_OUTER_L4)
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "L4 layer is already present");
+   if (!mask)
+   mask = &rte_flow_item_tcp_mask;
+   ret = mlx5_flow_item_acceptable
+   (item, (const uint8_t *)mask,
+(const uint8_t *)&rte_flow_item_tcp_mask,
+sizeof(struct rte_flow_item_tcp), error);
+   if (ret < 0)
+   return ret;
+   flow->layers |= MLX5_FLOW_LAYER_OUTER_L4_TCP;
+   if (size > flow_size)
+   return size;
+   if (spec) {
+   tcp.val.dst_port = spec->hdr.dst_port;
+   tcp.val.src_port = spec->hdr.src_port;
+   tcp.mask.dst_port = mask->hdr.dst_port;
+   tcp.mask.src_port = mask->hdr.src_port;
+   /* Remove unwanted bits from values. */
+   tcp.val.src_port &= tcp.mask.src_port;
+   tcp.val.dst_port &= tcp.mask.dst_port;
+   }
+   mlx5_flow_spec_verbs_add(flow, &tcp, size);
+   return size;
+}
+
 /**
  * Convert the @p pattern into Verbs specifications after ensuring the NIC
  * will understand and process it correctly.
@@ -841,6 +917,9 @@ mlx5_flow_items(const struct rte_flow_item pattern[],
case RTE_FLOW_ITEM_TYPE_UDP:
ret = mlx5_flow_item_udp(pattern, flow, remain, error);
break;
+   case RTE_FLOW_ITEM_TYPE_TCP:
+   ret = mlx5_flow_item_tcp(pattern, flow, remain, error);
+   break;
default:
return rte_flow_error_set(error, ENOTSUP,
  RTE_FLOW_ERROR_TYPE_ITEM,
-- 
2.18.0



[dpdk-dev] [PATCH v4 14/21] net/mlx5: add RSS flow action

2018-07-12 Thread Nelio Laranjeiro
Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5_flow.c | 686 +++
 1 file changed, 541 insertions(+), 145 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 77483bd1f..758c611a6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -51,6 +51,7 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 /* Actions that modify the fate of matching traffic. */
 #define MLX5_FLOW_FATE_DROP (1u << 0)
 #define MLX5_FLOW_FATE_QUEUE (1u << 1)
+#define MLX5_FLOW_FATE_RSS (1u << 2)
 
 /* Modify a packet. */
 #define MLX5_FLOW_MOD_FLAG (1u << 0)
@@ -60,8 +61,68 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 #define MLX5_IP_PROTOCOL_TCP 6
 #define MLX5_IP_PROTOCOL_UDP 17
 
+/* Priority reserved for default flows. */
+#define MLX5_FLOW_PRIO_RSVD ((uint32_t)-1)
+
+enum mlx5_expansion {
+   MLX5_EXPANSION_ROOT,
+   MLX5_EXPANSION_ETH,
+   MLX5_EXPANSION_IPV4,
+   MLX5_EXPANSION_IPV4_UDP,
+   MLX5_EXPANSION_IPV4_TCP,
+   MLX5_EXPANSION_IPV6,
+   MLX5_EXPANSION_IPV6_UDP,
+   MLX5_EXPANSION_IPV6_TCP,
+};
+
+/** Supported expansion of items. */
+static const struct rte_flow_expand_node mlx5_support_expansion[] = {
+   [MLX5_EXPANSION_ROOT] = {
+   .next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_ETH,
+MLX5_EXPANSION_IPV4,
+MLX5_EXPANSION_IPV6),
+   .type = RTE_FLOW_ITEM_TYPE_END,
+   },
+   [MLX5_EXPANSION_ETH] = {
+   .next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4,
+MLX5_EXPANSION_IPV6),
+   .type = RTE_FLOW_ITEM_TYPE_ETH,
+   },
+   [MLX5_EXPANSION_IPV4] = {
+   .next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4_UDP,
+MLX5_EXPANSION_IPV4_TCP),
+   .type = RTE_FLOW_ITEM_TYPE_IPV4,
+   .rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
+   ETH_RSS_NONFRAG_IPV4_OTHER,
+   },
+   [MLX5_EXPANSION_IPV4_UDP] = {
+   .type = RTE_FLOW_ITEM_TYPE_UDP,
+   .rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+   },
+   [MLX5_EXPANSION_IPV4_TCP] = {
+   .type = RTE_FLOW_ITEM_TYPE_TCP,
+   .rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+   },
+   [MLX5_EXPANSION_IPV6] = {
+   .next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV6_UDP,
+MLX5_EXPANSION_IPV6_TCP),
+   .type = RTE_FLOW_ITEM_TYPE_IPV6,
+   .rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
+   ETH_RSS_NONFRAG_IPV6_OTHER,
+   },
+   [MLX5_EXPANSION_IPV6_UDP] = {
+   .type = RTE_FLOW_ITEM_TYPE_UDP,
+   .rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+   },
+   [MLX5_EXPANSION_IPV6_TCP] = {
+   .type = RTE_FLOW_ITEM_TYPE_TCP,
+   .rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+   },
+};
+
 /** Handles information leading to a drop fate. */
 struct mlx5_flow_verbs {
+   LIST_ENTRY(mlx5_flow_verbs) next;
unsigned int size; /**< Size of the attribute. */
struct {
struct ibv_flow_attr *attr;
@@ -70,6 +131,7 @@ struct mlx5_flow_verbs {
};
struct ibv_flow *flow; /**< Verbs flow pointer. */
struct mlx5_hrxq *hrxq; /**< Hash Rx queue object. */
+   uint64_t hash_fields; /**< Verbs hash Rx queue hash fields. */
 };
 
 /* Flow structure. */
@@ -84,8 +146,12 @@ struct rte_flow {
uint32_t fate;
/**< Bit-fields of present fate see MLX5_FLOW_FATE_*. */
uint8_t l3_protocol; /**< valid when l3_protocol_en is set. */
-   struct mlx5_flow_verbs verbs; /* Verbs flow. */
-   uint16_t queue; /**< Destination queue to redirect traffic to. */
+   LIST_HEAD(verbs, mlx5_flow_verbs) verbs; /**< Verbs flows list. */
+   struct mlx5_flow_verbs *cur_verbs;
+   /**< Current Verbs flow structure being filled. */
+   struct rte_flow_action_rss rss;/**< RSS context. */
+   uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
+   uint16_t (*queue)[]; /**< Destination queues to redirect traffic to. */
 };
 
 static const struct rte_flow_ops mlx5_flow_ops = {
@@ -128,16 +194,38 @@ struct ibv_spec_header {
uint16_t size;
 };
 
- /**
-  * Discover the maximum number of priority available.
-  *
-  * @param[in] dev
-  *   Pointer to Ethernet device.
-  *
-  * @return
-  *   number of supported flow priority on success, a negative errno value
-  *   otherwise and rte_errno is set.
-  */
+/*
+ * Number of sub priorities.
+ * For each kind of pattern matching, i.e. L2, L3, L4, to behave correctly
+ * on the NIC (firmware dependent), L4 must have the highest priority,
+ * followed by L3 and ending with L2.

[dpdk-dev] [PATCH v4 15/21] net/mlx5: remove useless arguments in hrxq API

2018-07-12 Thread Nelio Laranjeiro
The RSS level is only necessary to add a bit in the hash_fields, which is
already provided through this API. As for the tunnel, such a queue must be
requested to compute the checksum on the innermost packet, and this should
always be activated.

Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5_flow.c |  4 ++--
 drivers/net/mlx5/mlx5_rxq.c  | 39 +---
 drivers/net/mlx5/mlx5_rxtx.h |  8 ++--
 3 files changed, 13 insertions(+), 38 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 758c611a6..730360b22 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1875,13 +1875,13 @@ mlx5_flow_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
 MLX5_RSS_HASH_KEY_LEN,
 verbs->hash_fields,
 (*flow->queue),
-flow->rss.queue_num, 0, 0);
+flow->rss.queue_num);
if (!hrxq)
hrxq = mlx5_hrxq_new(dev, flow->key,
 MLX5_RSS_HASH_KEY_LEN,
 verbs->hash_fields,
 (*flow->queue),
-flow->rss.queue_num, 0, 0);
+flow->rss.queue_num);
if (!hrxq) {
rte_flow_error_set
(error, rte_errno,
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index d50b82c69..071740b6d 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1740,10 +1740,6 @@ mlx5_ind_table_ibv_verify(struct rte_eth_dev *dev)
  *   first queue index will be taken for the indirection table.
  * @param queues_n
  *   Number of queues.
- * @param tunnel
- *   Tunnel type, implies tunnel offloading like inner checksum if available.
- * @param rss_level
- *   RSS hash on tunnel level.
  *
  * @return
  *   The Verbs object initialised, NULL otherwise and rte_errno is set.
@@ -1752,17 +1748,13 @@ struct mlx5_hrxq *
 mlx5_hrxq_new(struct rte_eth_dev *dev,
  const uint8_t *rss_key, uint32_t rss_key_len,
  uint64_t hash_fields,
- const uint16_t *queues, uint32_t queues_n,
- uint32_t tunnel, uint32_t rss_level)
+ const uint16_t *queues, uint32_t queues_n)
 {
struct priv *priv = dev->data->dev_private;
struct mlx5_hrxq *hrxq;
struct mlx5_ind_table_ibv *ind_tbl;
struct ibv_qp *qp;
int err;
-#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
-   struct mlx5dv_qp_init_attr qp_init_attr = {0};
-#endif
 
queues_n = hash_fields ? queues_n : 1;
ind_tbl = mlx5_ind_table_ibv_get(dev, queues, queues_n);
@@ -1777,11 +1769,6 @@ mlx5_hrxq_new(struct rte_eth_dev *dev,
rss_key = rss_hash_default_key;
}
 #ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
-   if (tunnel) {
-   qp_init_attr.comp_mask =
-   MLX5DV_QP_INIT_ATTR_MASK_QP_CREATE_FLAGS;
-   qp_init_attr.create_flags = MLX5DV_QP_CREATE_TUNNEL_OFFLOADS;
-   }
qp = mlx5_glue->dv_create_qp
(priv->ctx,
 &(struct ibv_qp_init_attr_ex){
@@ -1797,14 +1784,17 @@ mlx5_hrxq_new(struct rte_eth_dev *dev,
.rx_hash_key = rss_key ?
   (void *)(uintptr_t)rss_key :
   rss_hash_default_key,
-   .rx_hash_fields_mask = hash_fields |
-   (tunnel && rss_level > 1 ?
-   (uint32_t)IBV_RX_HASH_INNER : 0),
+   .rx_hash_fields_mask = hash_fields,
},
.rwq_ind_tbl = ind_tbl->ind_table,
.pd = priv->pd,
 },
-&qp_init_attr);
+&(struct mlx5dv_qp_init_attr){
+   .comp_mask = (hash_fields & IBV_RX_HASH_INNER) ?
+MLX5DV_QP_INIT_ATTR_MASK_QP_CREATE_FLAGS :
+0,
+   .create_flags = MLX5DV_QP_CREATE_TUNNEL_OFFLOADS,
+});
 #else
qp = mlx5_glue->create_qp_ex
(priv->ctx,
@@ -1838,8 +1828,6 @@ mlx5_hrxq_new(struct rte_eth_dev *dev,
hrxq->qp = qp;
hrxq->rss_key_len = rss_key_len;
hrxq->hash_fields = hash_fields;
-   hrxq->tunnel = tunnel;
-   hrxq->rss_level = rss_level;
memcpy(hrxq->rss_key, rss_key, rss_key_len);
rte_atomic32_inc(&hrxq->re

[dpdk-dev] [PATCH v4 16/21] net/mlx5: support inner RSS computation

2018-07-12 Thread Nelio Laranjeiro
Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5_flow.c | 245 ++-
 1 file changed, 185 insertions(+), 60 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 730360b22..84bd99b3e 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -35,18 +35,42 @@
 extern const struct eth_dev_ops mlx5_dev_ops;
 extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 
-/* Pattern Layer bits. */
+/* Pattern outer Layer bits. */
 #define MLX5_FLOW_LAYER_OUTER_L2 (1u << 0)
 #define MLX5_FLOW_LAYER_OUTER_L3_IPV4 (1u << 1)
 #define MLX5_FLOW_LAYER_OUTER_L3_IPV6 (1u << 2)
 #define MLX5_FLOW_LAYER_OUTER_L4_UDP (1u << 3)
 #define MLX5_FLOW_LAYER_OUTER_L4_TCP (1u << 4)
 #define MLX5_FLOW_LAYER_OUTER_VLAN (1u << 5)
-/* Masks. */
+
+/* Pattern inner Layer bits. */
+#define MLX5_FLOW_LAYER_INNER_L2 (1u << 6)
+#define MLX5_FLOW_LAYER_INNER_L3_IPV4 (1u << 7)
+#define MLX5_FLOW_LAYER_INNER_L3_IPV6 (1u << 8)
+#define MLX5_FLOW_LAYER_INNER_L4_UDP (1u << 9)
+#define MLX5_FLOW_LAYER_INNER_L4_TCP (1u << 10)
+#define MLX5_FLOW_LAYER_INNER_VLAN (1u << 11)
+
+/* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
 #define MLX5_FLOW_LAYER_OUTER_L4 \
(MLX5_FLOW_LAYER_OUTER_L4_UDP | MLX5_FLOW_LAYER_OUTER_L4_TCP)
+#define MLX5_FLOW_LAYER_OUTER \
+   (MLX5_FLOW_LAYER_OUTER_L2 | MLX5_FLOW_LAYER_OUTER_L3 | \
+MLX5_FLOW_LAYER_OUTER_L4)
+
+/* Tunnel Masks. */
+#define MLX5_FLOW_LAYER_TUNNEL 0
+
+/* Inner Masks. */
+#define MLX5_FLOW_LAYER_INNER_L3 \
+   (MLX5_FLOW_LAYER_INNER_L3_IPV4 | MLX5_FLOW_LAYER_INNER_L3_IPV6)
+#define MLX5_FLOW_LAYER_INNER_L4 \
+   (MLX5_FLOW_LAYER_INNER_L4_UDP | MLX5_FLOW_LAYER_INNER_L4_TCP)
+#define MLX5_FLOW_LAYER_INNER \
+   (MLX5_FLOW_LAYER_INNER_L2 | MLX5_FLOW_LAYER_INNER_L3 | \
+MLX5_FLOW_LAYER_INNER_L4)
 
 /* Actions that modify the fate of matching traffic. */
 #define MLX5_FLOW_FATE_DROP (1u << 0)
@@ -66,6 +90,14 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 
 enum mlx5_expansion {
MLX5_EXPANSION_ROOT,
+   MLX5_EXPANSION_ROOT_OUTER,
+   MLX5_EXPANSION_OUTER_ETH,
+   MLX5_EXPANSION_OUTER_IPV4,
+   MLX5_EXPANSION_OUTER_IPV4_UDP,
+   MLX5_EXPANSION_OUTER_IPV4_TCP,
+   MLX5_EXPANSION_OUTER_IPV6,
+   MLX5_EXPANSION_OUTER_IPV6_UDP,
+   MLX5_EXPANSION_OUTER_IPV6_TCP,
MLX5_EXPANSION_ETH,
MLX5_EXPANSION_IPV4,
MLX5_EXPANSION_IPV4_UDP,
@@ -83,6 +115,50 @@ static const struct rte_flow_expand_node mlx5_support_expansion[] = {
 MLX5_EXPANSION_IPV6),
.type = RTE_FLOW_ITEM_TYPE_END,
},
+   [MLX5_EXPANSION_ROOT_OUTER] = {
+   .next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_OUTER_ETH,
+MLX5_EXPANSION_OUTER_IPV4,
+MLX5_EXPANSION_OUTER_IPV6),
+   .type = RTE_FLOW_ITEM_TYPE_END,
+   },
+   [MLX5_EXPANSION_OUTER_ETH] = {
+   .next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_OUTER_IPV4,
+MLX5_EXPANSION_OUTER_IPV6),
+   .type = RTE_FLOW_ITEM_TYPE_ETH,
+   .rss_types = 0,
+   },
+   [MLX5_EXPANSION_OUTER_IPV4] = {
+   .next = RTE_FLOW_EXPAND_RSS_NEXT
+   (MLX5_EXPANSION_OUTER_IPV4_UDP,
+MLX5_EXPANSION_OUTER_IPV4_TCP),
+   .type = RTE_FLOW_ITEM_TYPE_IPV4,
+   .rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
+   ETH_RSS_NONFRAG_IPV4_OTHER,
+   },
+   [MLX5_EXPANSION_OUTER_IPV4_UDP] = {
+   .type = RTE_FLOW_ITEM_TYPE_UDP,
+   .rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+   },
+   [MLX5_EXPANSION_OUTER_IPV4_TCP] = {
+   .type = RTE_FLOW_ITEM_TYPE_TCP,
+   .rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+   },
+   [MLX5_EXPANSION_OUTER_IPV6] = {
+   .next = RTE_FLOW_EXPAND_RSS_NEXT
+   (MLX5_EXPANSION_OUTER_IPV6_UDP,
+MLX5_EXPANSION_OUTER_IPV6_TCP),
+   .type = RTE_FLOW_ITEM_TYPE_IPV6,
+   .rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
+   ETH_RSS_NONFRAG_IPV6_OTHER,
+   },
+   [MLX5_EXPANSION_OUTER_IPV6_UDP] = {
+   .type = RTE_FLOW_ITEM_TYPE_UDP,
+   .rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+   },
+   [MLX5_EXPANSION_OUTER_IPV6_TCP] = {
+   .type = RTE_FLOW_ITEM_TYPE_TCP,
+   .rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+   },
[MLX5_EXPANSION_ETH] = {
.next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4,
 MLX5_EXPANSION_IPV6),
@@ -453,6 +529,34 @@ mlx5_flow_spec
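The layer bits and masks defined above drive the later item validation: each parsed pattern item sets its bit in the flow's `layers` field, and the combined masks answer questions such as "is there already an outer L3 header?". A minimal standalone sketch of that pattern — the values below are local stand-ins, not the driver's actual defines:

```c
#include <stdint.h>

/* Local stand-ins for the MLX5_FLOW_LAYER_* bits above (illustrative only). */
#define LAYER_OUTER_L2       (1u << 0)
#define LAYER_OUTER_L3_IPV4  (1u << 1)
#define LAYER_OUTER_L3_IPV6  (1u << 2)
#define LAYER_OUTER_L4_UDP   (1u << 3)
#define LAYER_OUTER_L4_TCP   (1u << 4)

/* Combined masks, mirroring the style of the defines in the patch. */
#define LAYER_OUTER_L3 (LAYER_OUTER_L3_IPV4 | LAYER_OUTER_L3_IPV6)
#define LAYER_OUTER_L4 (LAYER_OUTER_L4_UDP | LAYER_OUTER_L4_TCP)

/* An L3 item is acceptable only once and must not follow an L4 item. */
static int
l3_item_ok(uint32_t layers)
{
	if (layers & LAYER_OUTER_L3)
		return 0; /* multiple L3 layers are not allowed */
	if (layers & LAYER_OUTER_L4)
		return 0; /* L3 cannot follow an L4 layer */
	return 1;
}
```

Each item conversion function in the patch performs exactly this kind of test before OR-ing its own bit into `flow->layers`.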

[dpdk-dev] [PATCH v4 17/21] net/mlx5: add flow VXLAN item

2018-07-12 Thread Nelio Laranjeiro
Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5_flow.c | 190 +--
 drivers/net/mlx5/mlx5_rxtx.h |   1 +
 2 files changed, 163 insertions(+), 28 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 84bd99b3e..7eb5d7da3 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -51,6 +51,9 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 #define MLX5_FLOW_LAYER_INNER_L4_TCP (1u << 10)
 #define MLX5_FLOW_LAYER_INNER_VLAN (1u << 11)
 
+/* Pattern tunnel Layer bits. */
+#define MLX5_FLOW_LAYER_VXLAN (1u << 12)
+
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
@@ -61,7 +64,7 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 MLX5_FLOW_LAYER_OUTER_L4)
 
 /* Tunnel Masks. */
-#define MLX5_FLOW_LAYER_TUNNEL 0
+#define MLX5_FLOW_LAYER_TUNNEL MLX5_FLOW_LAYER_VXLAN
 
 /* Inner Masks. */
 #define MLX5_FLOW_LAYER_INNER_L3 \
@@ -98,6 +101,7 @@ enum mlx5_expansion {
MLX5_EXPANSION_OUTER_IPV6,
MLX5_EXPANSION_OUTER_IPV6_UDP,
MLX5_EXPANSION_OUTER_IPV6_TCP,
+   MLX5_EXPANSION_VXLAN,
MLX5_EXPANSION_ETH,
MLX5_EXPANSION_IPV4,
MLX5_EXPANSION_IPV4_UDP,
@@ -136,6 +140,7 @@ static const struct rte_flow_expand_node 
mlx5_support_expansion[] = {
ETH_RSS_NONFRAG_IPV4_OTHER,
},
[MLX5_EXPANSION_OUTER_IPV4_UDP] = {
+   .next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN),
.type = RTE_FLOW_ITEM_TYPE_UDP,
.rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
},
@@ -152,6 +157,7 @@ static const struct rte_flow_expand_node 
mlx5_support_expansion[] = {
ETH_RSS_NONFRAG_IPV6_OTHER,
},
[MLX5_EXPANSION_OUTER_IPV6_UDP] = {
+   .next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN),
.type = RTE_FLOW_ITEM_TYPE_UDP,
.rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
},
@@ -159,6 +165,10 @@ static const struct rte_flow_expand_node 
mlx5_support_expansion[] = {
.type = RTE_FLOW_ITEM_TYPE_TCP,
.rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
},
+   [MLX5_EXPANSION_VXLAN] = {
+   .next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_ETH),
+   .type = RTE_FLOW_ITEM_TYPE_VXLAN,
+   },
[MLX5_EXPANSION_ETH] = {
.next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4,
 MLX5_EXPANSION_IPV6),
@@ -226,6 +236,8 @@ struct rte_flow {
struct mlx5_flow_verbs *cur_verbs;
/**< Current Verbs flow structure being filled. */
struct rte_flow_action_rss rss;/**< RSS context. */
+   uint32_t tunnel_ptype;
+   /**< Store tunnel packet type data to store in Rx queue. */
uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
uint16_t (*queue)[]; /**< Destination queues to redirect traffic to. */
 };
@@ -1160,6 +1172,103 @@ mlx5_flow_item_tcp(const struct rte_flow_item *item, 
struct rte_flow *flow,
return size;
 }
 
+/**
+ * Convert the @p item into a Verbs specification after ensuring the NIC
+ * will understand and process it correctly.
+ * If the necessary size for the conversion is greater than the @p flow_size,
+ * nothing is written in @p flow, the validation is still performed.
+ *
+ * @param[in] item
+ *   Item specification.
+ * @param[in, out] flow
+ *   Pointer to flow structure.
+ * @param[in] flow_size
+ *   Size in bytes of the available space in @p flow, if too small, nothing is
+ *   written.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   On success the number of bytes consumed/necessary, if the returned value
+ *   is lesser or equal to @p flow_size, the @p item has fully been converted,
+ *   otherwise another call with this returned memory size should be done.
+ *   On error, a negative errno value is returned and rte_errno is set.
+ */
+static int
+mlx5_flow_item_vxlan(const struct rte_flow_item *item, struct rte_flow *flow,
+const size_t flow_size, struct rte_flow_error *error)
+{
+   const struct rte_flow_item_vxlan *spec = item->spec;
+   const struct rte_flow_item_vxlan *mask = item->mask;
+   unsigned int size = sizeof(struct ibv_flow_spec_tunnel);
+   struct ibv_flow_spec_tunnel vxlan = {
+   .type = IBV_FLOW_SPEC_VXLAN_TUNNEL,
+   .size = size,
+   };
+   int ret;
+   union vni {
+   uint32_t vlan_id;
+   uint8_t vni[4];
+   } id = { .vlan_id = 0, };
+
+   if (flow->layers & MLX5_FLOW_LAYER_TUNNEL)
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+  
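The calling convention documented above — return the number of bytes needed, and write into the flow only when they fit in `flow_size` — is a two-pass "measure, then fill" serialization pattern: a first call with a zero size validates and measures, a second call with an adequately sized buffer actually writes. A tiny generic sketch of the idea (not the driver code itself):

```c
#include <stddef.h>
#include <string.h>

/* Copy a record into buf only when it fits; always return the size the
 * record needs, so a first call with buf_size == 0 measures without
 * writing anything. */
static int
emit(void *buf, size_t buf_size, const void *rec, size_t rec_size)
{
	if (rec_size <= buf_size)
		memcpy(buf, rec, rec_size);
	return (int)rec_size;
}
```

The mlx5 item converters follow the same contract: a return value greater than `flow_size` tells the caller to retry with at least that much space.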

[dpdk-dev] [PATCH v4 20/21] net/mlx5: add flow MPLS item

2018-07-12 Thread Nelio Laranjeiro
Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5_flow.c | 102 ++-
 drivers/net/mlx5/mlx5_rxtx.h |   2 +-
 2 files changed, 101 insertions(+), 3 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index b05e30204..1d7b72ac1 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -55,6 +55,7 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 #define MLX5_FLOW_LAYER_VXLAN (1u << 12)
 #define MLX5_FLOW_LAYER_VXLAN_GPE (1u << 13)
 #define MLX5_FLOW_LAYER_GRE (1u << 14)
+#define MLX5_FLOW_LAYER_MPLS (1u << 15)
 
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
@@ -68,7 +69,7 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 /* Tunnel Masks. */
 #define MLX5_FLOW_LAYER_TUNNEL \
(MLX5_FLOW_LAYER_VXLAN | MLX5_FLOW_LAYER_VXLAN_GPE | \
-MLX5_FLOW_LAYER_GRE)
+MLX5_FLOW_LAYER_GRE | MLX5_FLOW_LAYER_MPLS)
 
 /* Inner Masks. */
 #define MLX5_FLOW_LAYER_INNER_L3 \
@@ -92,6 +93,7 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 #define MLX5_IP_PROTOCOL_TCP 6
 #define MLX5_IP_PROTOCOL_UDP 17
 #define MLX5_IP_PROTOCOL_GRE 47
+#define MLX5_IP_PROTOCOL_MPLS 147
 
 /* Priority reserved for default flows. */
 #define MLX5_FLOW_PRIO_RSVD ((uint32_t)-1)
@@ -109,6 +111,7 @@ enum mlx5_expansion {
MLX5_EXPANSION_VXLAN,
MLX5_EXPANSION_VXLAN_GPE,
MLX5_EXPANSION_GRE,
+   MLX5_EXPANSION_MPLS,
MLX5_EXPANSION_ETH,
MLX5_EXPANSION_IPV4,
MLX5_EXPANSION_IPV4_UDP,
@@ -134,7 +137,8 @@ static const struct rte_flow_expand_node 
mlx5_support_expansion[] = {
},
[MLX5_EXPANSION_OUTER_ETH] = {
.next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_OUTER_IPV4,
-MLX5_EXPANSION_OUTER_IPV6),
+MLX5_EXPANSION_OUTER_IPV6,
+MLX5_EXPANSION_MPLS),
.type = RTE_FLOW_ITEM_TYPE_ETH,
.rss_types = 0,
},
@@ -189,6 +193,11 @@ static const struct rte_flow_expand_node 
mlx5_support_expansion[] = {
.next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4),
.type = RTE_FLOW_ITEM_TYPE_GRE,
},
+   [MLX5_EXPANSION_MPLS] = {
+   .next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4,
+MLX5_EXPANSION_IPV6),
+   .type = RTE_FLOW_ITEM_TYPE_MPLS,
+   },
[MLX5_EXPANSION_ETH] = {
.next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4,
 MLX5_EXPANSION_IPV6),
@@ -341,6 +350,14 @@ static struct mlx5_flow_tunnel_info tunnels_info[] = {
.tunnel = MLX5_FLOW_LAYER_GRE,
.ptype = RTE_PTYPE_TUNNEL_GRE,
},
+   {
+   .tunnel = MLX5_FLOW_LAYER_MPLS | MLX5_FLOW_LAYER_OUTER_L4_UDP,
+   .ptype = RTE_PTYPE_TUNNEL_MPLS_IN_GRE | RTE_PTYPE_L4_UDP,
+   },
+   {
+   .tunnel = MLX5_FLOW_LAYER_MPLS,
+   .ptype = RTE_PTYPE_TUNNEL_MPLS_IN_GRE,
+   },
 };
 
 /**
@@ -1593,6 +1610,84 @@ mlx5_flow_item_gre(const struct rte_flow_item *item,
return size;
 }
 
+/**
+ * Convert the @p item into a Verbs specification after ensuring the NIC
+ * will understand and process it correctly.
+ * If the necessary size for the conversion is greater than the @p flow_size,
+ * nothing is written in @p flow, the validation is still performed.
+ *
+ * @param[in] item
+ *   Item specification.
+ * @param[in, out] flow
+ *   Pointer to flow structure.
+ * @param[in] flow_size
+ *   Size in bytes of the available space in @p flow, if too small, nothing is
+ *   written.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   On success the number of bytes consumed/necessary, if the returned value
+ *   is lesser or equal to @p flow_size, the @p item has fully been converted,
+ *   otherwise another call with this returned memory size should be done.
+ *   On error, a negative errno value is returned and rte_errno is set.
+ */
+static int
+mlx5_flow_item_mpls(const struct rte_flow_item *item __rte_unused,
+   struct rte_flow *flow __rte_unused,
+   const size_t flow_size __rte_unused,
+   struct rte_flow_error *error)
+{
+#ifdef HAVE_IBV_DEVICE_MPLS_SUPPORT
+   const struct rte_flow_item_mpls *spec = item->spec;
+   const struct rte_flow_item_mpls *mask = item->mask;
+   unsigned int size = sizeof(struct ibv_flow_spec_mpls);
+   struct ibv_flow_spec_mpls mpls = {
+   .type = IBV_FLOW_SPEC_MPLS,
+   .size = size,
+   };
+   int ret;
+
+   if (flow->l3_protocol_en && flow->l3_protocol != MLX5_IP_PROTOCOL_MPLS)
+   return rte_flow_error_set(error, ENOTSUP,
+ 

[dpdk-dev] [PATCH v4 18/21] net/mlx5: add flow VXLAN-GPE item

2018-07-12 Thread Nelio Laranjeiro
Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5_flow.c | 219 ---
 drivers/net/mlx5/mlx5_rxtx.h |   5 +-
 2 files changed, 209 insertions(+), 15 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 7eb5d7da3..5d0ad4a04 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -53,6 +53,7 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 
 /* Pattern tunnel Layer bits. */
 #define MLX5_FLOW_LAYER_VXLAN (1u << 12)
+#define MLX5_FLOW_LAYER_VXLAN_GPE (1u << 13)
 
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
@@ -64,7 +65,8 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 MLX5_FLOW_LAYER_OUTER_L4)
 
 /* Tunnel Masks. */
-#define MLX5_FLOW_LAYER_TUNNEL MLX5_FLOW_LAYER_VXLAN
+#define MLX5_FLOW_LAYER_TUNNEL \
+   (MLX5_FLOW_LAYER_VXLAN | MLX5_FLOW_LAYER_VXLAN_GPE)
 
 /* Inner Masks. */
 #define MLX5_FLOW_LAYER_INNER_L3 \
@@ -102,6 +104,7 @@ enum mlx5_expansion {
MLX5_EXPANSION_OUTER_IPV6_UDP,
MLX5_EXPANSION_OUTER_IPV6_TCP,
MLX5_EXPANSION_VXLAN,
+   MLX5_EXPANSION_VXLAN_GPE,
MLX5_EXPANSION_ETH,
MLX5_EXPANSION_IPV4,
MLX5_EXPANSION_IPV4_UDP,
@@ -140,7 +143,8 @@ static const struct rte_flow_expand_node 
mlx5_support_expansion[] = {
ETH_RSS_NONFRAG_IPV4_OTHER,
},
[MLX5_EXPANSION_OUTER_IPV4_UDP] = {
-   .next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN),
+   .next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
+MLX5_EXPANSION_VXLAN_GPE),
.type = RTE_FLOW_ITEM_TYPE_UDP,
.rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
},
@@ -157,7 +161,8 @@ static const struct rte_flow_expand_node 
mlx5_support_expansion[] = {
ETH_RSS_NONFRAG_IPV6_OTHER,
},
[MLX5_EXPANSION_OUTER_IPV6_UDP] = {
-   .next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN),
+   .next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
+MLX5_EXPANSION_VXLAN_GPE),
.type = RTE_FLOW_ITEM_TYPE_UDP,
.rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
},
@@ -169,6 +174,12 @@ static const struct rte_flow_expand_node 
mlx5_support_expansion[] = {
.next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_ETH),
.type = RTE_FLOW_ITEM_TYPE_VXLAN,
},
+   [MLX5_EXPANSION_VXLAN_GPE] = {
+   .next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_ETH,
+MLX5_EXPANSION_IPV4,
+MLX5_EXPANSION_IPV6),
+   .type = RTE_FLOW_ITEM_TYPE_VXLAN_GPE,
+   },
[MLX5_EXPANSION_ETH] = {
.next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4,
 MLX5_EXPANSION_IPV6),
@@ -236,8 +247,6 @@ struct rte_flow {
struct mlx5_flow_verbs *cur_verbs;
/**< Current Verbs flow structure being filled. */
struct rte_flow_action_rss rss;/**< RSS context. */
-   uint32_t tunnel_ptype;
-   /**< Store tunnel packet type data to store in Rx queue. */
uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
uint16_t (*queue)[]; /**< Destination queues to redirect traffic to. */
 };
@@ -304,6 +313,23 @@ static const uint32_t 
priority_map_5[][MLX5_PRIORITY_MAP_MAX] = {
{ 9, 10, 11 }, { 12, 13, 14 },
 };
 
+/* Tunnel information. */
+struct mlx5_flow_tunnel_info {
+   uint32_t tunnel; /**< Tunnel bit (see MLX5_FLOW_*). */
+   uint32_t ptype; /**< Tunnel Ptype (see RTE_PTYPE_*). */
+};
+
+static struct mlx5_flow_tunnel_info tunnels_info[] = {
+   {
+   .tunnel = MLX5_FLOW_LAYER_VXLAN,
+   .ptype = RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_L4_UDP,
+   },
+   {
+   .tunnel = MLX5_FLOW_LAYER_VXLAN_GPE,
+   .ptype = RTE_PTYPE_TUNNEL_VXLAN_GPE | RTE_PTYPE_L4_UDP,
+   },
+};
+
 /**
  * Discover the maximum number of priority available.
  *
@@ -1265,7 +1291,119 @@ mlx5_flow_item_vxlan(const struct rte_flow_item *item, 
struct rte_flow *flow,
flow->cur_verbs->attr->priority = MLX5_PRIORITY_MAP_L2;
}
flow->layers |= MLX5_FLOW_LAYER_VXLAN;
-   flow->tunnel_ptype = RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_L4_UDP;
+   return size;
+}
+
+/**
+ * Convert the @p item into a Verbs specification after ensuring the NIC
+ * will understand and process it correctly.
+ * If the necessary size for the conversion is greater than the @p flow_size,
+ * nothing is written in @p flow, the validation is still performed.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param[in] item
+ *   Item specification.
+ * @param[in, out] flow
+ *   Pointer to flow structure.
+ * @param[in]

[dpdk-dev] [PATCH v4 19/21] net/mlx5: add flow GRE item

2018-07-12 Thread Nelio Laranjeiro
Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5_flow.c | 193 ++-
 drivers/net/mlx5/mlx5_rxtx.h |   2 +-
 2 files changed, 192 insertions(+), 3 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 5d0ad4a04..b05e30204 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -54,6 +54,7 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 /* Pattern tunnel Layer bits. */
 #define MLX5_FLOW_LAYER_VXLAN (1u << 12)
 #define MLX5_FLOW_LAYER_VXLAN_GPE (1u << 13)
+#define MLX5_FLOW_LAYER_GRE (1u << 14)
 
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
@@ -66,7 +67,8 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 
 /* Tunnel Masks. */
 #define MLX5_FLOW_LAYER_TUNNEL \
-   (MLX5_FLOW_LAYER_VXLAN | MLX5_FLOW_LAYER_VXLAN_GPE)
+   (MLX5_FLOW_LAYER_VXLAN | MLX5_FLOW_LAYER_VXLAN_GPE | \
+MLX5_FLOW_LAYER_GRE)
 
 /* Inner Masks. */
 #define MLX5_FLOW_LAYER_INNER_L3 \
@@ -89,6 +91,7 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 /* possible L3 layers protocols filtering. */
 #define MLX5_IP_PROTOCOL_TCP 6
 #define MLX5_IP_PROTOCOL_UDP 17
+#define MLX5_IP_PROTOCOL_GRE 47
 
 /* Priority reserved for default flows. */
 #define MLX5_FLOW_PRIO_RSVD ((uint32_t)-1)
@@ -105,6 +108,7 @@ enum mlx5_expansion {
MLX5_EXPANSION_OUTER_IPV6_TCP,
MLX5_EXPANSION_VXLAN,
MLX5_EXPANSION_VXLAN_GPE,
+   MLX5_EXPANSION_GRE,
MLX5_EXPANSION_ETH,
MLX5_EXPANSION_IPV4,
MLX5_EXPANSION_IPV4_UDP,
@@ -137,7 +141,8 @@ static const struct rte_flow_expand_node 
mlx5_support_expansion[] = {
[MLX5_EXPANSION_OUTER_IPV4] = {
.next = RTE_FLOW_EXPAND_RSS_NEXT
(MLX5_EXPANSION_OUTER_IPV4_UDP,
-MLX5_EXPANSION_OUTER_IPV4_TCP),
+MLX5_EXPANSION_OUTER_IPV4_TCP,
+MLX5_EXPANSION_GRE),
.type = RTE_FLOW_ITEM_TYPE_IPV4,
.rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
ETH_RSS_NONFRAG_IPV4_OTHER,
@@ -180,6 +185,10 @@ static const struct rte_flow_expand_node 
mlx5_support_expansion[] = {
 MLX5_EXPANSION_IPV6),
.type = RTE_FLOW_ITEM_TYPE_VXLAN_GPE,
},
+   [MLX5_EXPANSION_GRE] = {
+   .next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4),
+   .type = RTE_FLOW_ITEM_TYPE_GRE,
+   },
[MLX5_EXPANSION_ETH] = {
.next = RTE_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4,
 MLX5_EXPANSION_IPV6),
@@ -328,6 +337,10 @@ static struct mlx5_flow_tunnel_info tunnels_info[] = {
.tunnel = MLX5_FLOW_LAYER_VXLAN_GPE,
.ptype = RTE_PTYPE_TUNNEL_VXLAN_GPE | RTE_PTYPE_L4_UDP,
},
+   {
+   .tunnel = MLX5_FLOW_LAYER_GRE,
+   .ptype = RTE_PTYPE_TUNNEL_GRE,
+   },
 };
 
 /**
@@ -968,6 +981,18 @@ mlx5_flow_item_ipv6(const struct rte_flow_item *item, 
struct rte_flow *flow,
  RTE_FLOW_ERROR_TYPE_ITEM,
  item,
  "L3 cannot follow an L4 layer.");
+   /*
+* IPv6 is not recognised by the NIC inside a GRE tunnel.
+* Such support has to be disabled as such rules would be
+* accepted without ever matching. Issue reproduced with
+* Mellanox OFED 4.3-3.0.2.1 and Mellanox OFED 4.4-1.0.0.0.
+*/
+   if (tunnel && flow->layers & MLX5_FLOW_LAYER_GRE)
+   return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "IPv6 inside a GRE tunnel is"
+ " not recognised.");
if (!mask)
mask = &rte_flow_item_ipv6_mask;
ret = mlx5_flow_item_acceptable
@@ -1407,6 +1432,167 @@ mlx5_flow_item_vxlan_gpe(struct rte_eth_dev *dev,
return size;
 }
 
+/**
+ * Update the protocol in Verbs IPv4/IPv6 spec.
+ *
+ * @param[in, out] attr
+ *   Pointer to Verbs attributes structure.
+ * @param[in] search
+ *   Specification type to search in order to update the IP protocol.
+ * @param[in] protocol
+ *   Protocol value to set if none is present in the specification.
+ */
+static void
+mlx5_flow_item_gre_ip_protocol_update(struct ibv_flow_attr *attr,
+ enum ibv_flow_spec_type search,
+ uint8_t protocol)
+{
+   unsigned int i;
+   struct ibv_spec_header *hdr = (struct ibv_spec_header *)
+   ((uint8_t *)attr + sizeof(struct ibv_flow_attr));
+
+   if (!attr)
+   return;
+   for (i = 0; i != attr->num_of_specs; ++i) {
+   if (hd
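The helper above scans the trailing array of variable-size specifications attached to a Verbs attribute by hopping from header to header via each spec's size field. A generic, self-contained sketch of that walk — the types and names here are local stand-ins, not the real Verbs API:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Local stand-in for ibv_spec_header: every spec starts with these fields. */
struct spec_header {
	uint32_t type;
	uint16_t size; /* total size of this spec in bytes */
};

/* Return a pointer to the first spec of the wanted type among n specs
 * packed back-to-back in buf, or NULL if absent. */
static struct spec_header *
find_spec(void *buf, unsigned int n, uint32_t wanted)
{
	uint8_t *p = buf;
	unsigned int i;

	for (i = 0; i != n; ++i) {
		struct spec_header *hdr = (struct spec_header *)p;

		if (hdr->type == wanted)
			return hdr;
		p += hdr->size; /* hop to the next spec */
	}
	return NULL;
}
```

The driver's version does the same hop, but patches the IP protocol field in place when it finds an IPv4/IPv6 spec with no protocol set.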

[dpdk-dev] [PATCH v4 21/21] net/mlx5: add count flow action

2018-07-12 Thread Nelio Laranjeiro
This is only supported by Mellanox OFED.

Signed-off-by: Nelio Laranjeiro 
Acked-by: Yongseok Koh 
---
 drivers/net/mlx5/mlx5.h  |   2 +
 drivers/net/mlx5/mlx5_flow.c | 242 +++
 2 files changed, 244 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9949cd3fa..131be334c 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -188,6 +188,8 @@ struct priv {
struct mlx5_drop drop_queue; /* Flow drop queues. */
struct mlx5_flows flows; /* RTE Flow rules. */
struct mlx5_flows ctrl_flows; /* Control flow rules. */
+   LIST_HEAD(counters, mlx5_flow_counter) flow_counters;
+   /* Flow counters. */
struct {
uint32_t dev_gen; /* Generation number to flush local caches. */
rte_rwlock_t rwlock; /* MR Lock. */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 1d7b72ac1..89bfc670f 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -88,6 +88,7 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 /* Modify a packet. */
 #define MLX5_FLOW_MOD_FLAG (1u << 0)
 #define MLX5_FLOW_MOD_MARK (1u << 1)
+#define MLX5_FLOW_MOD_COUNT (1u << 2)
 
 /* possible L3 layers protocols filtering. */
 #define MLX5_IP_PROTOCOL_TCP 6
@@ -249,6 +250,17 @@ struct mlx5_flow_verbs {
uint64_t hash_fields; /**< Verbs hash Rx queue hash fields. */
 };
 
+/* Counters information. */
+struct mlx5_flow_counter {
+   LIST_ENTRY(mlx5_flow_counter) next; /**< Pointer to the next counter. */
+   uint32_t shared:1; /**< Share counter ID with other flow rules. */
+   uint32_t ref_cnt:31; /**< Reference counter. */
+   uint32_t id; /**< Counter ID. */
+   struct ibv_counter_set *cs; /**< Holds the counters for the rule. */
+   uint64_t hits; /**< Number of packets matched by the rule. */
+   uint64_t bytes; /**< Number of bytes matched by the rule. */
+};
+
 /* Flow structure. */
 struct rte_flow {
TAILQ_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
@@ -264,6 +276,7 @@ struct rte_flow {
LIST_HEAD(verbs, mlx5_flow_verbs) verbs; /**< Verbs flows list. */
struct mlx5_flow_verbs *cur_verbs;
/**< Current Verbs flow structure being filled. */
+   struct mlx5_flow_counter *counter; /**< Holds Verbs flow counter. */
struct rte_flow_action_rss rss;/**< RSS context. */
uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
uint16_t (*queue)[]; /**< Destination queues to redirect traffic to. */
@@ -275,6 +288,7 @@ static const struct rte_flow_ops mlx5_flow_ops = {
.destroy = mlx5_flow_destroy,
.flush = mlx5_flow_flush,
.isolate = mlx5_flow_isolate,
+   .query = mlx5_flow_query,
 };
 
 /* Convert FDIR request to Generic flow. */
@@ -454,6 +468,80 @@ mlx5_flow_adjust_priority(struct rte_eth_dev *dev, struct 
rte_flow *flow)
flow->cur_verbs->attr->priority = priority;
 }
 
+/**
+ * Get a flow counter.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device.
+ * @param[in] shared
+ *   Indicate if this counter is shared with other flows.
+ * @param[in] id
+ *   Counter identifier.
+ *
+ * @return
+ *   A pointer to the counter, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_flow_counter *
+mlx5_flow_counter_new(struct rte_eth_dev *dev, uint32_t shared, uint32_t id)
+{
+   struct priv *priv = dev->data->dev_private;
+   struct mlx5_flow_counter *cnt;
+
+   LIST_FOREACH(cnt, &priv->flow_counters, next) {
+   if (cnt->shared != shared)
+   continue;
+   if (cnt->id != id)
+   continue;
+   cnt->ref_cnt++;
+   return cnt;
+   }
+#ifdef HAVE_IBV_DEVICE_COUNTERS_SET_SUPPORT
+
+   struct mlx5_flow_counter tmpl = {
+   .shared = shared,
+   .id = id,
+   .cs = mlx5_glue->create_counter_set
+   (priv->ctx,
+&(struct ibv_counter_set_init_attr){
+.counter_set_id = id,
+}),
+   .hits = 0,
+   .bytes = 0,
+   };
+
+   if (!tmpl.cs) {
+   rte_errno = errno;
+   return NULL;
+   }
+   cnt = rte_calloc(__func__, 1, sizeof(*cnt), 0);
+   if (!cnt) {
+   rte_errno = ENOMEM;
+   return NULL;
+   }
+   *cnt = tmpl;
+   LIST_INSERT_HEAD(&priv->flow_counters, cnt, next);
+   return cnt;
+#endif
+   rte_errno = ENOTSUP;
+   return NULL;
+}
+
+/**
+ * Release a flow counter.
+ *
+ * @param[in] counter
+ *   Pointer to the counter handler.
+ */
+static void
+mlx5_flow_counter_release(struct mlx5_flow_counter *counter)
+{
+   if (--counter->ref_cnt == 0) {
+   claim_zero(mlx5_glue->destroy_counter_set(counter->cs));
+   LIST_REMOVE(counter,
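The counter helpers above implement a classic lookup-or-create pattern with reference counting: a list hit on the (shared, id) pair bumps `ref_cnt`, a miss allocates a new object and links it in, and release unlinks and frees only when the last reference drops. A stripped-down sketch of the same pattern, with no Verbs or DPDK dependencies:

```c
#include <stdlib.h>

struct counter {
	struct counter *next;
	unsigned int shared;
	unsigned int ref_cnt;
	unsigned int id;
};

static struct counter *counters; /* head of a singly linked list */

/* Look up a counter by (shared, id); create and link it if absent. */
static struct counter *
counter_get(unsigned int shared, unsigned int id)
{
	struct counter *cnt;

	for (cnt = counters; cnt != NULL; cnt = cnt->next) {
		if (cnt->shared == shared && cnt->id == id) {
			cnt->ref_cnt++;
			return cnt;
		}
	}
	cnt = calloc(1, sizeof(*cnt));
	if (cnt == NULL)
		return NULL;
	cnt->shared = shared;
	cnt->id = id;
	cnt->ref_cnt = 1;
	cnt->next = counters;
	counters = cnt;
	return cnt;
}

/* Drop one reference; unlink and free on the last one. */
static void
counter_release(struct counter *cnt)
{
	if (--cnt->ref_cnt == 0) {
		struct counter **p = &counters;

		while (*p != cnt)
			p = &(*p)->next;
		*p = cnt->next;
		free(cnt);
	}
}
```

In the patch, the creation path additionally allocates the hardware counter set (`ibv_counter_set`) and the release path destroys it via the glue layer.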

Re: [dpdk-dev] [PATCH v13 02/19] bus/pci: fix PCI address compare

2018-07-12 Thread Gaëtan Rivet
Hi,

On Thu, Jul 12, 2018 at 10:24:44AM +0100, Burakov, Anatoly wrote:
> On 12-Jul-18 2:14 AM, Qi Zhang wrote:
> > When using memcmp to compare two PCI addresses, sizeof(struct rte_pci_addr)
> > is 4-byte aligned, i.e. 8, while only 7 bytes of struct rte_pci_addr
> > are valid. Comparing the 8th (padding) byte causes unexpected results,
> > which happens when repeatedly attaching/detaching a device.
> > 
> > Fixes: c752998b5e2e ("pci: introduce library and driver")

Shouldn't the original commit be

Fixes: 94c0776b1bad ("vfio: support hotplug")

instead?

> > Cc: sta...@dpdk.org
> > 
> > Signed-off-by: Qi Zhang 
> > ---
> >   drivers/bus/pci/linux/pci_vfio.c | 13 -
> >   1 file changed, 12 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/bus/pci/linux/pci_vfio.c 
> > b/drivers/bus/pci/linux/pci_vfio.c
> > index aeeaa9ed8..dd25c3542 100644
> > --- a/drivers/bus/pci/linux/pci_vfio.c
> > +++ b/drivers/bus/pci/linux/pci_vfio.c
> > @@ -43,6 +43,17 @@ static struct rte_tailq_elem rte_vfio_tailq = {
> >   };
> >   EAL_REGISTER_TAILQ(rte_vfio_tailq)
> > +/* Compare two PCI addresses */
> > +static int pci_addr_cmp(struct rte_pci_addr *addr1, struct rte_pci_addr 
> > *addr2)
> > +{
> > +   if (addr1->domain == addr2->domain &&
> > +   addr1->bus == addr2->bus &&
> > +   addr1->devid == addr2->devid &&
> > +   addr1->function == addr2->function)
> > +   return 0;
> > +   return 1;
> > +}
> 
> Generally, change looks OK to me, but I think we already have this function
> in PCI library - rte_pci_addr_cmp(). Is there a specific reason to
> reimplement it here?
> 

+1

> > +
> >   int
> >   pci_vfio_read_config(const struct rte_intr_handle *intr_handle,
> > void *buf, size_t len, off_t offs)
> > @@ -642,7 +653,7 @@ pci_vfio_unmap_resource(struct rte_pci_device *dev)
> > vfio_res_list = RTE_TAILQ_CAST(rte_vfio_tailq.head, 
> > mapped_pci_res_list);
> > /* Get vfio_res */
> > TAILQ_FOREACH(vfio_res, vfio_res_list, next) {
> > -   if (memcmp(&vfio_res->pci_addr, &dev->addr, sizeof(dev->addr)))
> > +   if (pci_addr_cmp(&vfio_res->pci_addr, &dev->addr))
> > continue;
> > break;
> > }
> > 
> 
> 
> -- 
> Thanks,
> Anatoly

-- 
Gaëtan Rivet
6WIND
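The bug under discussion comes from structure padding: on common ABIs sizeof(struct rte_pci_addr) is 8 while only 7 bytes carry data, so memcmp() also compares one indeterminate padding byte. A standalone sketch showing why the field-wise compare is the safe choice — the struct below merely mirrors the layout and is not the real rte_pci_addr:

```c
#include <stdint.h>
#include <string.h>

/* Mirrors the layout of rte_pci_addr: 7 data bytes, padded to 8. */
struct addr {
	uint32_t domain;
	uint8_t bus;
	uint8_t devid;
	uint8_t function;
};

/* Field-wise compare ignores the padding byte; returns 0 on equality,
 * matching the pci_addr_cmp() convention in the patch. */
static int
addr_cmp(const struct addr *a, const struct addr *b)
{
	if (a->domain == b->domain &&
	    a->bus == b->bus &&
	    a->devid == b->devid &&
	    a->function == b->function)
		return 0;
	return 1;
}
```

As Anatoly and Gaëtan note, the PCI library's rte_pci_addr_cmp() already provides this, so reimplementing it in pci_vfio.c is redundant.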


Re: [dpdk-dev] [PATCH] net/ixgbe: fix missing NULL point check

2018-07-12 Thread Remy Horton

Patch doesn't apply to latest master, but otherwise seems fine to me.

On 02/07/2018 05:18, Qi Zhang wrote:

Add missing NULL pointer check in ixgbe_pf_host_uninit, or it may cause a
segmentation fault when detaching a device.

Fixes: cf80ba6e2038 ("net/ixgbe: add support for representor ports")
Cc: sta...@dpdk.org

Signed-off-by: Qi Zhang 


Re: [dpdk-dev] [PATCH v11 08/25] devargs: add function to parse device layers

2018-07-12 Thread Shreyansh Jain

On Thursday 12 July 2018 03:14 AM, Gaetan Rivet wrote:

This function is private to the EAL.
It is used to parse each layers in a device description string,
and store the result in an rte_devargs structure.

Signed-off-by: Gaetan Rivet 
---
  lib/librte_eal/common/eal_common_devargs.c  | 144 
  lib/librte_eal/common/eal_private.h |  34 +
  lib/librte_eal/common/include/rte_devargs.h |  13 +-
  3 files changed, 188 insertions(+), 3 deletions(-)



[...]

Acked-by: Shreyansh Jain 



Re: [dpdk-dev] [PATCH v11 01/19] ethdev: add function to release port in local process

2018-07-12 Thread Andrew Rybchenko

On 12.07.2018 03:23, Zhang, Qi Z wrote:



-Original Message-
From: Andrew Rybchenko [mailto:arybche...@solarflare.com]
Sent: Thursday, July 12, 2018 12:05 AM
To: Zhang, Qi Z ; tho...@monjalon.net; Burakov,
Anatoly 
Cc: Ananyev, Konstantin ; dev@dpdk.org;
Richardson, Bruce ; Yigit, Ferruh
; Shelton, Benjamin H
; Vangati, Narender

Subject: Re: [dpdk-dev] [PATCH v11 01/19] ethdev: add function to release port
in local process

On 11.07.2018 15:30, Zhang, Qi Z wrote:

-Original Message-
From: Andrew Rybchenko [mailto:arybche...@solarflare.com]
Sent: Wednesday, July 11, 2018 5:27 PM
To: Zhang, Qi Z ; tho...@monjalon.net; Burakov,
Anatoly 
Cc: Ananyev, Konstantin ; dev@dpdk.org;
Richardson, Bruce ; Yigit, Ferruh
; Shelton, Benjamin H
; Vangati, Narender

Subject: Re: [dpdk-dev] [PATCH v11 01/19] ethdev: add function to
release port in local process

On 11.07.2018 06:08, Qi Zhang wrote:

Add driver API rte_eth_release_port_private to support the case when
an ethdev need to be detached on a secondary process.
Local state is set to unused and shared data will not be reset so
the primary process can still use it.

Signed-off-by: Qi Zhang 
Reviewed-by: Andrew Rybchenko 
Acked-by: Remy Horton 
---

<...>

+   /**
+* PCI device can only be globally detached directly by a
+* primary process. In secondary process, we only need to
+* release port.
+*/
+   if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+   return rte_eth_dev_release_port_private(eth_dev);

I've realized that some uninit functions which will not be called
anymore in secondary processes have checks for the process type and
handling of the secondary process case. It makes the code inconsistent
and should be fixed.

Good point, I did a scan and checked all the places where
rte_eth_dev_pci_generic_remove is involved.

I found that only the sfc driver (sfc_eth_dev_uninit) will call some
cleanup on a secondary process, as below.

The patch makes it impossible for dev_uninit to be executed for a secondary
process in all cases where rte_eth_dev_pci_generic_remove() is used. However,
many drivers still check for the process type. Yes, sfc does cleanup, but some
drivers return -EPERM and some return 0. In fact it does not matter: it leaves
dead code which is really confusing.

OK, I can do a cleanup in a separate patchset if this one is merged.


For now, I'd like to revoke my Reviewed-by. I'll review once again when
complete solution is suggested.


[dpdk-dev] [PATCH v2] test: add unit tests for bitrate library

2018-07-12 Thread Jananee Parthasarathy
Unit test cases for the bitrate library.
This depends on the patch
"add sample functions for packet forwarding":
http://patches.dpdk.org/patch/42946/

Signed-off-by: Chaitanya Babu Talluri 
Reviewed-by: Reshma Pattan 
---
v2: corrected data type for tx_portid and rx_portid
---
 test/test/Makefile|   1 +
 test/test/autotest_data.py|   6 ++
 test/test/test_bitratestats.c | 187 ++
 3 files changed, 194 insertions(+)
 create mode 100644 test/test/test_bitratestats.c

diff --git a/test/test/Makefile b/test/test/Makefile
index eccc8efcf..039b6ac6d 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -179,6 +179,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring_perf.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_blockcipher.c
 SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
+SRCS-$(CONFIG_RTE_LIBRTE_BITRATE) += test_bitratestats.c
 
 ifeq ($(CONFIG_RTE_COMPRESSDEV_TEST),y)
 SRCS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += test_compressdev.c
diff --git a/test/test/autotest_data.py b/test/test/autotest_data.py
index aacfe0a66..419520342 100644
--- a/test/test/autotest_data.py
+++ b/test/test/autotest_data.py
@@ -293,6 +293,12 @@ def per_sockets(num):
 "Tests":
 [
 {
+"Name":"Bitratestats autotest",
+"Command": "bitratestats_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
 "Name":"PMD ring autotest",
 "Command": "ring_pmd_autotest",
 "Func":default_autotest,
diff --git a/test/test/test_bitratestats.c b/test/test/test_bitratestats.c
new file mode 100644
index 0..093bd275e
--- /dev/null
+++ b/test/test/test_bitratestats.c
@@ -0,0 +1,187 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "test.h"
+#include "sample_packet_forward.h"
+
+uint16_t tx_portid, rx_portid;
+struct rte_stats_bitrates *bitrate_data;
+
+/* To test whether rte_stats_bitrate_create is successful */
+static int
+test_stats_bitrate_create(void)
+{
+   bitrate_data = rte_stats_bitrate_create();
+   TEST_ASSERT(bitrate_data != NULL, "rte_stats_bitrate_create failed");
+
+   return TEST_SUCCESS;
+}
+
+/* To test bit rate registration */
+static int
+test_stats_bitrate_reg(void)
+{
+   int ret = 0;
+
+   /* Test to register bit rate without metrics init */
+   ret = rte_stats_bitrate_reg(bitrate_data);
+   TEST_ASSERT(ret < 0, "Test Failed: rte_stats_bitrate_reg succeeded "
+   "without metrics init, ret:%d", ret);
+
+   /* Metrics initialization */
+   rte_metrics_init(rte_socket_id());
+
+   /* Test to register bit rate after metrics init */
+   ret = rte_stats_bitrate_reg(bitrate_data);
+   TEST_ASSERT((ret >= 0), "Test Failed: rte_stats_bitrate_reg %d", ret);
+
+   return TEST_SUCCESS;
+}
+
+/* To test the bit rate registration with invalid pointer */
+static int
+test_stats_bitrate_reg_invalidpointer(void)
+{
+   int ret = 0;
+
+   ret = rte_stats_bitrate_reg(NULL);
+   TEST_ASSERT(ret < 0, "Test Failed: Expected failure < 0 but "
+   "got %d", ret);
+
+   return TEST_SUCCESS;
+}
+
+/* To test bit rate calculation with invalid bit rate data pointer */
+static int
+test_stats_bitrate_calc_invalid_bitrate_data(void)
+{
+   int ret = 0;
+
+   ret = rte_stats_bitrate_calc(NULL, tx_portid);
+   TEST_ASSERT(ret < 0, "Test Failed: rte_stats_bitrate_calc "
+   "ret:%d", ret);
+
+   return TEST_SUCCESS;
+}
+
+/* To test the bit rate calculation with invalid portid
+ * (higher than max ports)
+ */
+static int
+test_stats_bitrate_calc_invalid_portid_1(void)
+{
+   int ret = 0;
+
+   ret = rte_stats_bitrate_calc(bitrate_data, 33);
+   TEST_ASSERT(ret == -EINVAL, "Test Failed: Expected -%d for higher "
+   "portid rte_stats_bitrate_calc ret:%d", EINVAL, ret);
+
+   return TEST_SUCCESS;
+}
+
+/* To test the bit rate calculation with invalid portid (lesser than 0) */
+static int
+test_stats_bitrate_calc_invalid_portid_2(void)
+{
+   int ret = 0;
+
+   ret = rte_stats_bitrate_calc(bitrate_data, -1);
+   TEST_ASSERT(ret == -EINVAL, "Test Failed: Expected -%d for invalid "
+   "portid rte_stats_bitrate_calc ret:%d", EINVAL, ret);
+
+   return TEST_SUCCESS;
+}
+
+/* To test the bit rate calculation with non-existing portid */
+static int
+test_stats_bitrate_calc_non_existing_portid(void)
+{
+   int ret = 0;
+
+   ret = rte_stats_bitrate_calc(bitrate_data, 31);
+   TEST_ASSERT(ret ==  -EINVAL, "Test Failed: Expected -%d for "
+   "non-existing portid rte_stats_bitrate_calc ret:%d",
+   EINVAL, ret);
+
+   return TEST_SUCCESS;
+}

[dpdk-dev] [PATCH v2] test: add unit tests for latencystats library

2018-07-12 Thread Jananee Parthasarathy
Unit Test Cases added for latencystats library.
Dependency patch is
"add sample functions for packet forwarding"
Patch Link is http://patches.dpdk.org/patch/42946/

Signed-off-by: Agalya Babu RadhaKrishnan 
Reviewed-by: Reshma Pattan 
---
v2:
  Latency test is added to autotest_data.py.
  NUM_STATS is added to latencystats test file.
---
 test/test/Makefile|   3 +
 test/test/autotest_data.py|   6 ++
 test/test/test_latencystats.c | 179 ++
 3 files changed, 188 insertions(+)
 create mode 100644 test/test/test_latencystats.c

diff --git a/test/test/Makefile b/test/test/Makefile
index eccc8efcf..5b73dab0e 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -134,6 +134,7 @@ SRCS-y += test_func_reentrancy.c
 
 SRCS-y += test_service_cores.c
 
+
 SRCS-$(CONFIG_RTE_LIBRTE_CMDLINE) += test_cmdline.c
 SRCS-$(CONFIG_RTE_LIBRTE_CMDLINE) += test_cmdline_num.c
 SRCS-$(CONFIG_RTE_LIBRTE_CMDLINE) += test_cmdline_etheraddr.c
@@ -180,6 +181,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring_perf.c
 SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_blockcipher.c
 SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_LATENCY_STATS) += test_latencystats.c
+
 ifeq ($(CONFIG_RTE_COMPRESSDEV_TEST),y)
 SRCS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += test_compressdev.c
 endif
diff --git a/test/test/autotest_data.py b/test/test/autotest_data.py
index aacfe0a66..646ca832d 100644
--- a/test/test/autotest_data.py
+++ b/test/test/autotest_data.py
@@ -293,6 +293,12 @@ def per_sockets(num):
 "Tests":
 [
 {
+"Name":"Latency Stats Autotest",
+"Command": "latencystats_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
 "Name":"PMD ring autotest",
 "Command": "ring_pmd_autotest",
 "Func":default_autotest,
diff --git a/test/test/test_latencystats.c b/test/test/test_latencystats.c
new file mode 100644
index 0..39b8b734d
--- /dev/null
+++ b/test/test/test_latencystats.c
@@ -0,0 +1,179 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "test.h"
+#include "sample_packet_forward.h"
+#define NUM_STATS 4
+
+struct rte_metric_name lat_stats_strings[] = {
+   {"min_latency_ns"},
+   {"avg_latency_ns"},
+   {"max_latency_ns"},
+   {"jitter_ns"},
+};
+
+/* Test case for latency init with metrics init */
+static int
+test_latency_init(void)
+{
+   int ret = 0;
+
+/* Metrics Initialization */
+   rte_metrics_init(rte_socket_id());
+
+   ret = rte_latencystats_init(1, NULL);
+   TEST_ASSERT(ret >= 0, "Test Failed: rte_latencystats_init failed");
+
+   return TEST_SUCCESS;
+}
+
+/* Test case to update the latency stats */
+static int
+test_latency_update(void)
+{
+   int ret = 0;
+
+   ret = rte_latencystats_update();
+   TEST_ASSERT(ret >= 0, "Test Failed: rte_latencystats_update failed");
+
+   return TEST_SUCCESS;
+}
+
+/* Test case to uninit latency stats */
+static int
+test_latency_uninit(void)
+{
+   int ret = 0;
+
+   ret = rte_latencystats_uninit();
+   TEST_ASSERT(ret >= 0, "Test Failed: rte_latencystats_uninit failed");
+
+   return TEST_SUCCESS;
+}
+
+/* Test case to get names of latency stats */
+static int
+test_latencystats_get_names(void)
+{
+   int ret = 0;
+   int size = 0;
+   struct rte_metric_name names[NUM_STATS] = {0};
+   struct rte_metric_name wrongnames[NUM_STATS-2] = {0};
+
+   /* Success Test: Valid names and size */
+   size = NUM_STATS;
+   ret = rte_latencystats_get_names(names, size);
+   for (int i = 0; i < NUM_STATS; i++) {
+   if (strcmp(lat_stats_strings[i].name, names[i].name) == 0)
+   printf(" %s\n", names[i].name);
+   else
+   printf("Failed: Names are not matched\n");
+   }
+   TEST_ASSERT((ret == NUM_STATS), "Test Failed to get metrics names");
+
+   /* Failure Test: Invalid names and valid size */
+   ret = rte_latencystats_get_names(NULL, size);
+   TEST_ASSERT((ret == NUM_STATS), "Test Failed to get the metrics count,"
+   "Actual: %d Expected: %d", ret, NUM_STATS);
+
+   /* Failure Test: Valid names and invalid size */
+   size = 0;
+   ret = rte_latencystats_get_names(names, size);
+   TEST_ASSERT((ret == NUM_STATS), "Test Failed to get the metrics count,"
+   "Actual: %d Expected: %d", ret, NUM_STATS);
+
+   /* Failure Test: Invalid names (array size lesser than size) */
+   size = NUM_STATS + 1;
+   ret = rte_latencystats_get_names(wrongnames, size);
+   TEST_ASSERT((ret == NUM_STATS), "Test Failed to get m

Re: [dpdk-dev] [PATCH v2 0/8] Enable 32-bit native builds with meson

2018-07-12 Thread Thomas Monjalon
03/07/2018 12:31, Bruce Richardson:
> This patchset enables building DPDK on 32-bit systems, and has been tested
> using debian 32-bit on x86 i.e. doing an "i686" build in the old build
> system.
> 
> v2:
>  - fixed LIB_LIBRTE_KNI to RTE_LIBRTE_KNI in examples/kni patch [Ferruh]
>  - added patch to make same change in drivers/net/kni [Ferruh]
> 
> Bruce Richardson (8):
>   kni: disable for 32-bit meson builds
>   bpf: fix 32-bit build support with meson
>   net/sfc: disable for 32-bit builds
>   build: disable pointer to int conversion warnings for 32-bit
>   dpaa2: fix default IOVA build setting for meson builds
>   examples/kni: fix dependency check for building with meson
>   net/avp: fix 32-bit meson builds
>   net/kni: fix check for meson build

Applied, thanks




Re: [dpdk-dev] [PATCH v4 00/21] net/mlx5: flow rework

2018-07-12 Thread Shahaf Shuler
Thursday, July 12, 2018 12:31 PM, Nelio Laranjeiro:
> Subject: [dpdk-dev] [PATCH v4 00/21] net/mlx5: flow rework
> 
> Re-work flow engine to support port redirection actions through TC.
> 
> This first series depends on [1] which is implemented in commit
> "net/mlx5: support inner RSS computation" and on [2].
> Next series will bring the port redirection as announced[3].
> 
> [1] https://mails.dpdk.org/archives/dev/2018-July/107378.html
> [2] https://mails.dpdk.org/archives/dev/2018-June/104192.html
> [3] https://mails.dpdk.org/archives/dev/2018-May/103043.html
> 
> Changes in v4:
> 
> - fix compilation on redhat 7.5 without Mellanox OFED.
> - avoid multiple pattern parsing for the expansion.
> 
> Changes in v3:
> 
> - remove redundant parameters in drop queues internal API.
> - simplify the RSS expansion by only adding missing items in the pattern.
> - document all functions.
> 
> 
> Nelio Laranjeiro (21):
>   net/mlx5: remove flow support
>   net/mlx5: handle drop queues as regular queues
>   net/mlx5: replace verbs priorities by flow
>   net/mlx5: support flow Ethernet item along with drop action
>   net/mlx5: add flow queue action
>   net/mlx5: add flow stop/start
>   net/mlx5: add flow VLAN item
>   net/mlx5: add flow IPv4 item
>   net/mlx5: add flow IPv6 item
>   net/mlx5: add flow UDP item
>   net/mlx5: add flow TCP item
>   net/mlx5: add mark/flag flow action
>   net/mlx5: use a macro for the RSS key size
>   net/mlx5: add RSS flow action
>   net/mlx5: remove useless arguments in hrxq API
>   net/mlx5: support inner RSS computation
>   net/mlx5: add flow VXLAN item
>   net/mlx5: add flow VXLAN-GPE item
>   net/mlx5: add flow GRE item
>   net/mlx5: add flow MPLS item
>   net/mlx5: add count flow action
> 
>  drivers/net/mlx5/mlx5.c|   22 +-
>  drivers/net/mlx5/mlx5.h|   18 +-
>  drivers/net/mlx5/mlx5_ethdev.c |   14 +-
>  drivers/net/mlx5/mlx5_flow.c   | 4821 
>  drivers/net/mlx5/mlx5_prm.h|3 +
>  drivers/net/mlx5/mlx5_rss.c|7 +-
>  drivers/net/mlx5/mlx5_rxq.c|  281 +-
>  drivers/net/mlx5/mlx5_rxtx.h   |   21 +-
>  8 files changed, 2640 insertions(+), 2547 deletions(-)

Series applied to next-net-mlx, thanks. 

> 
> --
> 2.18.0



Re: [dpdk-dev] [PATCH] hash: validate hash bucket entries while compiling

2018-07-12 Thread Thomas Monjalon
12/07/2018 10:05, De Lara Guarch, Pablo:
> From: Thomas Monjalon [mailto:tho...@monjalon.net]
> > 
> > Review please?
> > 
> > 31/05/2018 17:30, Honnappa Nagarahalli:
> > > Validate RTE_HASH_BUCKET_ENTRIES during compilation instead of run
> > > time.
> > >
> > > Signed-off-by: Honnappa Nagarahalli 
> > > Reviewed-by: Gavin Hu 
> > > ---
> 
> Acked-by: Pablo de Lara 

Applied, thanks




Re: [dpdk-dev] [PATCH 2/6] net/mlx5: add framework for switch flow rules

2018-07-12 Thread Adrien Mazarguil
On Wed, Jul 11, 2018 at 05:59:18PM -0700, Yongseok Koh wrote:
> On Wed, Jun 27, 2018 at 08:08:12PM +0200, Adrien Mazarguil wrote:
> > Because mlx5 switch flow rules are configured through Netlink (TC
> > interface) and have little in common with Verbs, this patch adds a separate
> > parser function to handle them.
> > 
> > - mlx5_nl_flow_transpose() converts a rte_flow rule to its TC equivalent
> >   and stores the result in a buffer.
> > 
> > - mlx5_nl_flow_brand() gives a unique handle to a flow rule buffer.
> > 
> > - mlx5_nl_flow_create() instantiates a flow rule on the device based on
> >   such a buffer.
> > 
> > - mlx5_nl_flow_destroy() performs the reverse operation.
> > 
> > These functions are called by the existing implementation when encountering
> > flow rules which must be offloaded to the switch (currently relying on the
> > transfer attribute).
> > 
> > Signed-off-by: Adrien Mazarguil 
> > Signed-off-by: Nelio Laranjeiro 

> > diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
> > index 9241855be..93b245991 100644
> > --- a/drivers/net/mlx5/mlx5_flow.c
> > +++ b/drivers/net/mlx5/mlx5_flow.c
> > @@ -4,6 +4,7 @@
> >   */
> >  
> >  #include 
> > +#include 
> >  #include 
> >  #include 
> >  
> > @@ -271,6 +272,7 @@ struct rte_flow {
> > /**< Store tunnel packet type data to store in Rx queue. */
> > uint8_t key[40]; /**< RSS hash key. */
> > uint16_t (*queue)[]; /**< Destination queues to redirect traffic to. */
> > +   void *nl_flow; /**< Netlink flow buffer if relevant. */
> >  };
> >  
> >  static const struct rte_flow_ops mlx5_flow_ops = {
> > @@ -2403,6 +2405,106 @@ mlx5_flow_actions(struct rte_eth_dev *dev,
> >  }
> >  
> >  /**
> > + * Validate flow rule and fill flow structure accordingly.
> > + *
> > + * @param dev
> > + *   Pointer to Ethernet device.
> > + * @param[out] flow
> > + *   Pointer to flow structure.
> > + * @param flow_size
> > + *   Size of allocated space for @p flow.
> > + * @param[in] attr
> > + *   Flow rule attributes.
> > + * @param[in] pattern
> > + *   Pattern specification (list terminated by the END pattern item).
> > + * @param[in] actions
> > + *   Associated actions (list terminated by the END action).
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *
> > + * @return
> > + *   A positive value representing the size of the flow object in bytes
> > + *   regardless of @p flow_size on success, a negative errno value otherwise
> > + *   and rte_errno is set.
> > + */
> > +static int
> > +mlx5_flow_merge_switch(struct rte_eth_dev *dev,
> > +  struct rte_flow *flow,
> > +  size_t flow_size,
> > +  const struct rte_flow_attr *attr,
> > +  const struct rte_flow_item pattern[],
> > +  const struct rte_flow_action actions[],
> > +  struct rte_flow_error *error)
> > +{
> > +   struct priv *priv = dev->data->dev_private;
> > +   unsigned int n = mlx5_domain_to_port_id(priv->domain_id, NULL, 0);
> > +   uint16_t port_list[!n + n];
> > +   struct mlx5_nl_flow_ptoi ptoi[!n + n + 1];
> > +   size_t off = RTE_ALIGN_CEIL(sizeof(*flow), alignof(max_align_t));
> > +   unsigned int i;
> > +   unsigned int own = 0;
> > +   int ret;
> > +
> > +   /* At least one port is needed when no switch domain is present. */
> > +   if (!n) {
> > +   n = 1;
> > +   port_list[0] = dev->data->port_id;
> > +   } else {
> > +   n = mlx5_domain_to_port_id(priv->domain_id, port_list, n);
> > +   if (n > RTE_DIM(port_list))
> > +   n = RTE_DIM(port_list);
> > +   }
> > +   for (i = 0; i != n; ++i) {
> > +   struct rte_eth_dev_info dev_info;
> > +
> > +   rte_eth_dev_info_get(port_list[i], &dev_info);
> > +   if (port_list[i] == dev->data->port_id)
> > +   own = i;
> > +   ptoi[i].port_id = port_list[i];
> > +   ptoi[i].ifindex = dev_info.if_index;
> > +   }
> > +   /* Ensure first entry of ptoi[] is the current device. */
> > +   if (own) {
> > +   ptoi[n] = ptoi[0];
> > +   ptoi[0] = ptoi[own];
> > +   ptoi[own] = ptoi[n];
> > +   }
> > +   /* An entry with zero ifindex terminates ptoi[]. */
> > +   ptoi[n].port_id = 0;
> > +   ptoi[n].ifindex = 0;
> > +   if (flow_size < off)
> > +   flow_size = 0;
> > +   ret = mlx5_nl_flow_transpose((uint8_t *)flow + off,
> > +flow_size ? flow_size - off : 0,
> > +ptoi, attr, pattern, actions, error);
> > +   if (ret < 0)
> > +   return ret;
> 
> So, there's an assumption that the buffer allocated outside of this API is
> enough to include all the messages in mlx5_nl_flow_transpose(), right? If
> flow_size isn't enough, buf_tmp will be used and _transpose() doesn't return
> error but required size. Sounds confusing, may need to make a change or to 
> have
> clearer documentation.

Well

Re: [dpdk-dev] [PATCH 1/2] examples/ethtool: add to meson build

2018-07-12 Thread Bruce Richardson
On Thu, Jul 12, 2018 at 09:54:32AM +0200, Thomas Monjalon wrote:
> 29/03/2018 16:04, Bruce Richardson:
> > Add the ethtool example to the meson build. This example is more
> > complicated than the previously added ones as it has files in two
> > subdirectories. An ethtool "wrapper lib" in one, used by the actual
> > example "ethtool app" in the other.
> > 
> > Rather than using recursive operation, like is done with the makefiles,
> > we instead can just special-case the building of the library from the
> > single .c file, and then use that as a dependency when building the app
> > proper.
> > 
> > Signed-off-by: Bruce Richardson 
> 
> It does not compile because of experimental function:
> examples/ethtool/lib/rte_ethtool.c:186:2: error:
> ‘rte_eth_dev_get_module_info’ is deprecated: Symbol is not yet part of stable ABI
> 
Ok. This set is fairly old, and I think I've found other issues with it
since. I suggest we drop this set for 18.08 consideration.


Re: [dpdk-dev] [PATCH 1/6] net/mlx5: lay groundwork for switch offloads

2018-07-12 Thread Adrien Mazarguil
On Wed, Jul 11, 2018 at 05:17:09PM -0700, Yongseok Koh wrote:
> On Wed, Jun 27, 2018 at 08:08:10PM +0200, Adrien Mazarguil wrote:
> > With mlx5, unlike normal flow rules implemented through Verbs for traffic
> > emitted and received by the application, those targeting different logical
> > ports of the device (VF representors for instance) are offloaded at the
> > switch level and must be configured through Netlink (TC interface).
> > 
> > This patch adds preliminary support to manage such flow rules through the
> > flow API (rte_flow).
> > 
> > Instead of rewriting tons of Netlink helpers and as previously suggested by
> > Stephen [1], this patch introduces a new dependency to libmnl [2]
> > (LGPL-2.1) when compiling mlx5.
> > 
> > [1] https://mails.dpdk.org/archives/dev/2018-March/092676.html
> > [2] https://netfilter.org/projects/libmnl/
> > 
> > Signed-off-by: Adrien Mazarguil 

> > diff --git a/drivers/net/mlx5/mlx5_nl_flow.c b/drivers/net/mlx5/mlx5_nl_flow.c
> > new file mode 100644
> > index 0..7a8683b03
> > --- /dev/null
> > +++ b/drivers/net/mlx5/mlx5_nl_flow.c
> > @@ -0,0 +1,139 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright 2018 6WIND S.A.
> > + * Copyright 2018 Mellanox Technologies, Ltd
> > + */
> > +
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +
> > +#include 
> > +#include 
> > +
> > +#include "mlx5.h"
> > +
> > +/**
> > + * Send Netlink message with acknowledgment.
> > + *
> > + * @param nl
> > + *   Libmnl socket to use.
> > + * @param nlh
> > + *   Message to send. This function always raises the NLM_F_ACK flag before
> > + *   sending.
> > + *
> > + * @return
> > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +static int
> > +mlx5_nl_flow_nl_ack(struct mnl_socket *nl, struct nlmsghdr *nlh)
> > +{
> > +   alignas(struct nlmsghdr)
> > +   uint8_t ans[MNL_SOCKET_BUFFER_SIZE];
> 
> There are total 3 of this buffer. On a certain host having large pagesize, 
> this
> can be 8kB * 3 = 24kB. This is not a gigantic buffer but as all the functions
> here are sequentially accessed, how about having just one global buffer 
> instead?

All right it's not ideal, I opted for simplicity though. This is a generic
ack function. When NETLINK_CAP_ACK is not supported (note: this was made
optional for v2, some systems do not support it), an ack consumes a bit more
space than the original message, which may itself be huge, and failure to
receive acks is deemed fatal.

Its callers are mlx5_nl_flow_init(), called once per device during
initialization, and mlx5_nl_flow_create/destroy(), called for each
created/removed flow rule.

These last two are called often but do not put their own buffer on the
stack, they reuse previously generated messages from the heap.

So to improve stack consumption a bit, what I can do is size this buffer
according to nlh->nlmsg_len + extra room for ack header, yet still allocate
it locally since it would be a pain otherwise. Callers may not want their
own buffers to be overwritten with useless acks.

-- 
Adrien Mazarguil
6WIND


Re: [dpdk-dev] [PATCH 5/6] net/mlx5: add VLAN item and actions to switch flow rules

2018-07-12 Thread Adrien Mazarguil
On Wed, Jul 11, 2018 at 06:10:25PM -0700, Yongseok Koh wrote:
> On Wed, Jun 27, 2018 at 08:08:18PM +0200, Adrien Mazarguil wrote:
> > This enables flow rules to explicitly match VLAN traffic (VLAN pattern
> > item) and perform various operations on VLAN headers at the switch level
> > (OF_POP_VLAN, OF_PUSH_VLAN, OF_SET_VLAN_VID and OF_SET_VLAN_PCP actions).
> > 
> > Testpmd examples:
> > 
> > - Directing all VLAN traffic received on port ID 1 to port ID 0:
> > 
> >   flow create 1 ingress transfer pattern eth / vlan / end actions
> >  port_id id 0 / end
> > 
> > - Adding a VLAN header to IPv6 traffic received on port ID 1 and directing
> >   it to port ID 0:
> > 
> >   flow create 1 ingress transfer pattern eth / ipv6 / end actions
> >  of_push_vlan ethertype 0x8100 / of_set_vlan_vid / port_id id 0 / end
> > 
> > Signed-off-by: Adrien Mazarguil 

> > @@ -681,6 +772,84 @@ mlx5_nl_flow_transpose(void *buf,
> > mnl_attr_nest_end(buf, act_index);
> > ++action;
> > break;
> > +   case ACTION_OF_POP_VLAN:
> > +   if (action->type != RTE_FLOW_ACTION_TYPE_OF_POP_VLAN)
> > +   goto trans;
> > +   conf.of_push_vlan = NULL;
> > +   i = TCA_VLAN_ACT_POP;
> > +   goto action_of_vlan;
> > +   case ACTION_OF_PUSH_VLAN:
> > +   if (action->type != RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN)
> > +   goto trans;
> > +   conf.of_push_vlan = action->conf;
> > +   i = TCA_VLAN_ACT_PUSH;
> > +   goto action_of_vlan;
> > +   case ACTION_OF_SET_VLAN_VID:
> > +   if (action->type != RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID)
> > +   goto trans;
> > +   conf.of_set_vlan_vid = action->conf;
> > +   if (na_vlan_id)
> > +   goto override_na_vlan_id;
> > +   i = TCA_VLAN_ACT_MODIFY;
> > +   goto action_of_vlan;
> > +   case ACTION_OF_SET_VLAN_PCP:
> > +   if (action->type != RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP)
> > +   goto trans;
> > +   conf.of_set_vlan_pcp = action->conf;
> > +   if (na_vlan_priority)
> > +   goto override_na_vlan_priority;
> > +   i = TCA_VLAN_ACT_MODIFY;
> > +   goto action_of_vlan;
> > +action_of_vlan:
> > +   act_index =
> > +   mnl_attr_nest_start_check(buf, size, act_index_cur++);
> > +   if (!act_index ||
> > +   !mnl_attr_put_strz_check(buf, size, TCA_ACT_KIND, "vlan"))
> > +   goto error_nobufs;
> > +   act = mnl_attr_nest_start_check(buf, size, TCA_ACT_OPTIONS);
> > +   if (!act)
> > +   goto error_nobufs;
> > +   if (!mnl_attr_put_check(buf, size, TCA_VLAN_PARMS,
> > +   sizeof(struct tc_vlan),
> > +   &(struct tc_vlan){
> > +   .action = TC_ACT_PIPE,
> > +   .v_action = i,
> > +   }))
> > +   goto error_nobufs;
> > +   if (i == TCA_VLAN_ACT_POP) {
> > +   mnl_attr_nest_end(buf, act);
> > +   ++action;
> > +   break;
> > +   }
> > +   if (i == TCA_VLAN_ACT_PUSH &&
> > +   !mnl_attr_put_u16_check(buf, size,
> > +   TCA_VLAN_PUSH_VLAN_PROTOCOL,
> > +   conf.of_push_vlan->ethertype))
> > +   goto error_nobufs;
> > +   na_vlan_id = mnl_nlmsg_get_payload_tail(buf);
> > +   if (!mnl_attr_put_u16_check(buf, size, TCA_VLAN_PAD, 0))
> > +   goto error_nobufs;
> > +   na_vlan_priority = mnl_nlmsg_get_payload_tail(buf);
> > +   if (!mnl_attr_put_u8_check(buf, size, TCA_VLAN_PAD, 0))
> > +   goto error_nobufs;
> > +   mnl_attr_nest_end(buf, act);
> > +   mnl_attr_nest_end(buf, act_index);
> > +   if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) {
> > +override_na_vlan_id:
> > +   na_vlan_id->nla_type = TCA_VLAN_PUSH_VLAN_ID;
> > +   *(uint16_t *)mnl_attr_get_payload(na_vlan_id) =
> > +   rte_be_to_cpu_16
> > +   (conf.of_set_vlan_vid->vlan_vid);
> > +   } else if (action->type ==
> > +  RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP) {
> > +override_na_vlan_priority:
> > +   na_vlan_priority->nla_type =
> > +   TCA_VLAN_PUSH_VLAN_PRIORITY;
> > +   *(uint8_t *)mnl_attr_get_payload(na_vlan_priority) =
> > +   conf.of_set_vlan_pcp->vlan_pcp;
> > +   }
> > +   ++action;
> > +   break;
> 
> I'm wondering whether the existence of a VLAN item in the pattern should be
> checked when there are VLAN modification actions.

Re: [dpdk-dev] [PATCH v2] test: add unit tests for bitrate library

2018-07-12 Thread Remy Horton

Patch needs rebasing:

Checking patch test/test/Makefile...
error: while searching for:

SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_blockcipher.c
SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c

ifeq ($(CONFIG_RTE_COMPRESSDEV_TEST),y)
SRCS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += test_compressdev.c

error: patch failed: test/test/Makefile:179
error: test/test/Makefile: patch does not apply

Manually fixing the above, patchset seems fine.


On 12/07/2018 10:53, Jananee Parthasarathy wrote:

Unit Test Cases for BitRate library.
Dependency patch is
"add sample functions for packet forwarding"
Patch Link is http://patches.dpdk.org/patch/42946/

Signed-off-by: Chaitanya Babu Talluri 
Reviewed-by: Reshma Pattan 


Acked-by: Remy Horton 


[dpdk-dev] [PATCH v2] lib/bitratestats: add NULL sanity checks

2018-07-12 Thread Remy Horton
If rte_stats_bitrate_reg() or rte_stats_bitrate_calc() are
passed NULL as the parameter for the stats structure, the
result is a crash. Fixed by adding a sanity check that makes
sure the passed-in pointer is not NULL.

Fixes: 2ad7ba9a6567 ("bitrate: add bitrate statistics library")

Signed-off-by: Remy Horton 
---
 lib/librte_bitratestats/rte_bitrate.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/lib/librte_bitratestats/rte_bitrate.c b/lib/librte_bitratestats/rte_bitrate.c
index 964e3c3..c4b28f6 100644
--- a/lib/librte_bitratestats/rte_bitrate.c
+++ b/lib/librte_bitratestats/rte_bitrate.c
@@ -47,6 +47,9 @@ rte_stats_bitrate_reg(struct rte_stats_bitrates *bitrate_data)
};
int return_value;
 
+   if (bitrate_data == NULL)
+   return -EINVAL;
+
return_value = rte_metrics_reg_names(&names[0], ARRAY_SIZE(names));
if (return_value >= 0)
bitrate_data->id_stats_set = return_value;
@@ -65,6 +68,9 @@ rte_stats_bitrate_calc(struct rte_stats_bitrates *bitrate_data,
const int64_t alpha_percent = 20;
uint64_t values[6];
 
+   if (bitrate_data == NULL)
+   return -EINVAL;
+
ret_code = rte_eth_stats_get(port_id, &eth_stats);
if (ret_code != 0)
return ret_code;
-- 
2.9.5



Re: [dpdk-dev] [PATCH] net/mlx5: fix compilation for rdma-core v19

2018-07-12 Thread Ori Kam



> -Original Message-
> From: Shahaf Shuler [mailto:shah...@mellanox.com]
> Sent: Thursday, July 12, 2018 9:57 AM
> To: Yongseok Koh 
> Cc: dev@dpdk.org; ferruh.yi...@intel.com; step...@networkplumber.org;
> sta...@dpdk.org; Ori Kam 
> Subject: [PATCH] net/mlx5: fix compilation for rdma-core v19
> 
> The flow counter support introduced by
> commit 9a761de8ea14 ("net/mlx5: flow counter support") was intended to
> work only with MLNX_OFED_4.3, as the upstream rdma-core
> libraries lacked such support.
> 
> On rdma-core v19 the support for the flow counters was added but with
> different user APIs, hence causing compilation issues on the PMD.
> 
> This patch fixes the compilation errors by forcing the flow counters
> to be enabled only with the MLNX_OFED APIs.
> Once the MLNX_OFED and rdma-core APIs are aligned, a proper patch to
> support the new API will be submitted.
> 
> Fixes: 9a761de8ea14 ("net/mlx5: flow counter support")
> Cc: sta...@dpdk.org
> Cc: or...@mellanox.com
> 
> Reported-by: Stephen Hemminger 
> Reported-by: Ferruh Yigit 
> Signed-off-by: Shahaf Shuler 
> ---
>  drivers/net/mlx5/Makefile | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/net/mlx5/Makefile b/drivers/net/mlx5/Makefile
> index 9e274964b4..d86c6bbab9 100644
> --- a/drivers/net/mlx5/Makefile
> +++ b/drivers/net/mlx5/Makefile
> @@ -150,7 +150,7 @@ mlx5_autoconf.h.new: $(RTE_SDK)/buildtools/auto-config-h.sh
>   $Q sh -- '$<' '$@' \
>   HAVE_IBV_DEVICE_COUNTERS_SET_SUPPORT \
>   infiniband/verbs.h \
> - enum IBV_FLOW_SPEC_ACTION_COUNT \
> + type 'struct ibv_counter_set_init_attr' \
>   $(AUTOCONF_OUTPUT)
>   $Q sh -- '$<' '$@' \
>   HAVE_RDMA_NL_NLDEV \
> --
> 2.12.0


Acked-by: Ori Kam 


Re: [dpdk-dev] [PATCH] net/mlx5: fix compilation for rdma-core v19

2018-07-12 Thread Shahaf Shuler
Thursday, July 12, 2018 1:54 PM, Ori Kam:
> Subject: RE: [PATCH] net/mlx5: fix compilation for rdma-core v19
> >
> > The flow counter support introduced by commit 9a761de8ea14 ("net/mlx5:
> > flow counter support") was intended to work only with MLNX_OFED_4.3, as
> > the upstream rdma-core libraries lacked such support.
> >
> > On rdma-core v19 the support for the flow counters was added but with
> > different user APIs, hence causing compilation issues on the PMD.
> >
> > This patch fixes the compilation errors by forcing the flow counters to
> > be enabled only with the MLNX_OFED APIs.
> > Once the MLNX_OFED and rdma-core APIs are aligned, a proper patch to
> > support the new API will be submitted.
> >
> > Fixes: 9a761de8ea14 ("net/mlx5: flow counter support")
> > Cc: sta...@dpdk.org
> > Cc: or...@mellanox.com
> >
> > Reported-by: Stephen Hemminger 
> > Reported-by: Ferruh Yigit 
> > Signed-off-by: Shahaf Shuler 
> > ---
> >  drivers/net/mlx5/Makefile | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/mlx5/Makefile b/drivers/net/mlx5/Makefile
> > index 9e274964b4..d86c6bbab9 100644
> > --- a/drivers/net/mlx5/Makefile
> > +++ b/drivers/net/mlx5/Makefile
> > @@ -150,7 +150,7 @@ mlx5_autoconf.h.new: $(RTE_SDK)/buildtools/auto-config-h.sh
> > $Q sh -- '$<' '$@' \
> > HAVE_IBV_DEVICE_COUNTERS_SET_SUPPORT \
> > infiniband/verbs.h \
> > -   enum IBV_FLOW_SPEC_ACTION_COUNT \
> > +   type 'struct ibv_counter_set_init_attr' \
> > $(AUTOCONF_OUTPUT)
> > $Q sh -- '$<' '$@' \
> > HAVE_RDMA_NL_NLDEV \
> > --
> > 2.12.0
> 
> 
> Acked-by: Ori Kam 

Applied to next-net-mlx, thanks. 



Re: [dpdk-dev] [PATCH v11 11/25] eal/dev: implement device iteration

2018-07-12 Thread Shreyansh Jain

On Thursday 12 July 2018 03:15 AM, Gaetan Rivet wrote:

Use the iteration hooks in the abstraction layers to perform the
requested filtering on the internal device lists.

Signed-off-by: Gaetan Rivet 
---
  lib/librte_eal/common/eal_common_dev.c  | 168 
  lib/librte_eal/common/include/rte_dev.h |  26 
  lib/librte_eal/rte_eal_version.map  |   1 +
  3 files changed, 195 insertions(+)

diff --git a/lib/librte_eal/common/eal_common_dev.c b/lib/librte_eal/common/eal_common_dev.c
index 63e329bd8..b78845f02 100644
--- a/lib/librte_eal/common/eal_common_dev.c
+++ b/lib/librte_eal/common/eal_common_dev.c
@@ -45,6 +45,28 @@ static struct dev_event_cb_list dev_event_cbs;
  /* spinlock for device callbacks */
  static rte_spinlock_t dev_event_lock = RTE_SPINLOCK_INITIALIZER;
  
+struct dev_next_ctx {

+   struct rte_dev_iterator *it;
+   const char *bus_str;
+   const char *cls_str;
+};
+
+#define CTX(it, bus_str, cls_str) \
+   (&(const struct dev_next_ctx){ \
+   .it = it, \
+   .bus_str = bus_str, \
+   .cls_str = cls_str, \
+   })
+
+#define ITCTX(ptr) \
+   (((struct dev_next_ctx *)(intptr_t)ptr)->it)
+
+#define BUSCTX(ptr) \
+   (((struct dev_next_ctx *)(intptr_t)ptr)->bus_str)
+
+#define CLSCTX(ptr) \
+   (((struct dev_next_ctx *)(intptr_t)ptr)->cls_str)
+
  static int cmp_detached_dev_name(const struct rte_device *dev,
const void *_name)
  {
@@ -398,3 +420,149 @@ rte_dev_iterator_init(struct rte_dev_iterator *it,
  get_out:
return -rte_errno;
  }
+
+static char *
+dev_str_sane_copy(const char *str)
+{
+   size_t end;
+   char *copy;
+
+   end = strcspn(str, ",/");
+   if (str[end] == ',') {
+   copy = strdup(&str[end + 1]);
+   } else {
+   /* '/' or '\0' */
+   copy = strdup("");
+   }


Though it doesn't change anything functionally, separating if-else blocks with blank lines makes the code much easier to read.

Like here...


+   if (copy == NULL) {
+   rte_errno = ENOMEM;
+   } else {
+   char *slash;
+
+   slash = strchr(copy, '/');
+   if (slash != NULL)
+   slash[0] = '\0';
+   }
+   return copy;
+}
+
+static int
+class_next_dev_cmp(const struct rte_class *cls,
+  const void *ctx)
+{
+   struct rte_dev_iterator *it;
+   const char *cls_str = NULL;
+   void *dev;
+
+   if (cls->dev_iterate == NULL)
+   return 1;
+   it = ITCTX(ctx);
+   cls_str = CLSCTX(ctx);
+   dev = it->class_device;
+   /* it->cls_str != NULL means a class
+* was specified in the devstr.
+*/
+   if (it->cls_str != NULL && cls != it->cls)
+   return 1;
+   /* If an error occurred previously,
+* no need to test further.
+*/
+   if (rte_errno != 0)
+   return -1;


I am guessing that by '..error occurred previously..' you mean 
sane_copy. If so, why wait until this point to return? Anyway, the caller 
(rte_bus_find, probably) would only look for '0' or non-zero.



+   dev = cls->dev_iterate(dev, cls_str, it);
+   it->class_device = dev;
+   return dev == NULL;
+}
+
+static int
+bus_next_dev_cmp(const struct rte_bus *bus,
+const void *ctx)
+{
+   struct rte_device *dev = NULL;
+   struct rte_class *cls = NULL;
+   struct rte_dev_iterator *it;
+   const char *bus_str = NULL;
+
+   if (bus->dev_iterate == NULL)
+   return 1;
+   it = ITCTX(ctx);
+   bus_str = BUSCTX(ctx);
+   dev = it->device;
+   /* it->bus_str != NULL means a bus
+* was specified in the devstr.
+*/
+   if (it->bus_str != NULL && bus != it->bus)
+   return 1;
+   /* If an error occurred previously,
+* no need to test further.
+*/
+   if (rte_errno != 0)
+   return -1;
+   if (it->cls_str == NULL) {
+   dev = bus->dev_iterate(dev, bus_str, it);
+   goto end;


This is slightly confusing. If it->cls_str == NULL, you do 
bus->dev_iterate... but



+   }
+   /* cls_str != NULL */
+   if (dev == NULL) {
+next_dev_on_bus:
+   dev = bus->dev_iterate(dev, bus_str, it);


When cls_str!=NULL, you still do bus->dev_iterate...
So maybe they are an OR case, resulting in a check of dev == NULL and a 
return (as is being done right now by jumping to 'end')...?


And, how can bus iterate over a 'null' device?


+   it->device = dev;
+   }
+   if (dev == NULL)
+   return 1;


Maybe this check should move into the if (dev == NULL) above - that way, it 
can be in the path of 'next_dev_on_bus' yet perform the same function as in 
its current location.



+   if (it->cls != NULL)


In what case would (it->cls_str == NULL) but (it->cls != NULL)?


+   cls = TAILQ_PREV(it->cls, rte_class_li

Re: [dpdk-dev] [PATCH v2] mk: change TLS model for DPAA machine

2018-07-12 Thread Thomas Monjalon
04/07/2018 11:54, Hemant Agrawal:
> From: Sachin Saxena 
> 
> Random corruptions were observed on platforms using
> the DPDK library in shared mode with VPP software (plugin).
> 
> Using the traditional TLS scheme resolved the issue.
> 
> Tested with VPP with DPDK as a plugin.
> 
> Signed-off-by: Sachin Saxena 
> ---
> +# To avoid TLS corruption issue.
> +MACHINE_CFLAGS += -mtls-dialect=trad

Applied as a temporary fix.

What about generic issue for other CPU?
What about meson build?




Re: [dpdk-dev] [PATCH v13 02/19] bus/pci: fix PCI address compare

2018-07-12 Thread Zhang, Qi Z


> -Original Message-
> From: Burakov, Anatoly
> Sent: Thursday, July 12, 2018 5:25 PM
> To: Zhang, Qi Z ; tho...@monjalon.net
> Cc: Ananyev, Konstantin ; dev@dpdk.org;
> Richardson, Bruce ; Yigit, Ferruh
> ; Shelton, Benjamin H
> ; Vangati, Narender
> ; sta...@dpdk.org
> Subject: Re: [PATCH v13 02/19] bus/pci: fix PCI address compare
> 
> On 12-Jul-18 2:14 AM, Qi Zhang wrote:
> > > When using memcmp to compare two PCI addresses, sizeof(struct
> > > rte_pci_addr) is 4-byte aligned, so it is 8, while only 7 bytes of
> > > struct rte_pci_addr are valid. Comparing the 8th byte thus causes an
> > > unexpected result, which happens when repeatedly attaching/detaching a device.
> >
> > Fixes: c752998b5e2e ("pci: introduce library and driver")
> > Cc: sta...@dpdk.org
> >
> > Signed-off-by: Qi Zhang 
> > ---
> >   drivers/bus/pci/linux/pci_vfio.c | 13 -
> >   1 file changed, 12 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/bus/pci/linux/pci_vfio.c
> > b/drivers/bus/pci/linux/pci_vfio.c
> > index aeeaa9ed8..dd25c3542 100644
> > --- a/drivers/bus/pci/linux/pci_vfio.c
> > +++ b/drivers/bus/pci/linux/pci_vfio.c
> > @@ -43,6 +43,17 @@ static struct rte_tailq_elem rte_vfio_tailq = {
> >   };
> >   EAL_REGISTER_TAILQ(rte_vfio_tailq)
> >
> > +/* Compare two PCI addresses */
> > +static int pci_addr_cmp(struct rte_pci_addr *addr1, struct
> > +rte_pci_addr *addr2) {
> > +   if (addr1->domain == addr2->domain &&
> > +   addr1->bus == addr2->bus &&
> > +   addr1->devid == addr2->devid &&
> > +   addr1->function == addr2->function)
> > +   return 0;
> > +   return 1;
> > +}
> 
> Generally, change looks OK to me, but I think we already have this function in
> PCI library - rte_pci_addr_cmp(). Is there a specific reason to reimplement it
> here?

NO, and rte_pci_addr_cmp is what I want :), thanks!
> 
> > +
> >   int
> >   pci_vfio_read_config(const struct rte_intr_handle *intr_handle,
> > void *buf, size_t len, off_t offs) @@ -642,7 +653,7 @@
> > pci_vfio_unmap_resource(struct rte_pci_device *dev)
> > vfio_res_list = RTE_TAILQ_CAST(rte_vfio_tailq.head,
> mapped_pci_res_list);
> > /* Get vfio_res */
> > TAILQ_FOREACH(vfio_res, vfio_res_list, next) {
> > -   if (memcmp(&vfio_res->pci_addr, &dev->addr, sizeof(dev->addr)))
> > +   if (pci_addr_cmp(&vfio_res->pci_addr, &dev->addr))
> > continue;
> > break;
> > }
> >
> 
> 
> --
> Thanks,
> Anatoly


Re: [dpdk-dev] [PATCH v13 02/19] bus/pci: fix PCI address compare

2018-07-12 Thread Zhang, Qi Z



> -Original Message-
> From: Gaëtan Rivet [mailto:gaetan.ri...@6wind.com]
> Sent: Thursday, July 12, 2018 5:32 PM
> To: Burakov, Anatoly 
> Cc: Zhang, Qi Z ; tho...@monjalon.net; Ananyev,
> Konstantin ; dev@dpdk.org; Richardson,
> Bruce ; Yigit, Ferruh ;
> Shelton, Benjamin H ; Vangati, Narender
> ; sta...@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v13 02/19] bus/pci: fix PCI address compare
> 
> Hi,
> 
> On Thu, Jul 12, 2018 at 10:24:44AM +0100, Burakov, Anatoly wrote:
> > On 12-Jul-18 2:14 AM, Qi Zhang wrote:
> > > When using memcmp to compare two PCI addresses, sizeof(struct
> > > rte_pci_addr) is 4-byte aligned, so it is 8, while only 7 bytes of
> > > struct rte_pci_addr are valid. Comparing the 8th byte thus causes an
> > > unexpected result, which happens when repeatedly attaching/detaching a
> > > device.
> > >
> > > Fixes: c752998b5e2e ("pci: introduce library and driver")
> 
> Shouldn't be the original commit be
> 
> Fixes: 94c0776b1bad ("vfio: support hotplug")
> 
> instead?

You are right, this should be one that introduce the issue, thanks!

> 
> > > Cc: sta...@dpdk.org
> > >
> > > Signed-off-by: Qi Zhang 
> > > ---
> > >   drivers/bus/pci/linux/pci_vfio.c | 13 -
> > >   1 file changed, 12 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/bus/pci/linux/pci_vfio.c
> > > b/drivers/bus/pci/linux/pci_vfio.c
> > > index aeeaa9ed8..dd25c3542 100644
> > > --- a/drivers/bus/pci/linux/pci_vfio.c
> > > +++ b/drivers/bus/pci/linux/pci_vfio.c
> > > @@ -43,6 +43,17 @@ static struct rte_tailq_elem rte_vfio_tailq = {
> > >   };
> > >   EAL_REGISTER_TAILQ(rte_vfio_tailq)
> > > +/* Compare two PCI addresses */
> > > +static int pci_addr_cmp(struct rte_pci_addr *addr1, struct
> > > +rte_pci_addr *addr2) {
> > > + if (addr1->domain == addr2->domain &&
> > > + addr1->bus == addr2->bus &&
> > > + addr1->devid == addr2->devid &&
> > > + addr1->function == addr2->function)
> > > + return 0;
> > > + return 1;
> > > +}
> >
> > Generally, change looks OK to me, but I think we already have this
> > function in PCI library - rte_pci_addr_cmp(). Is there a specific
> > reason to reimplement it here?
> >
> 
> +1
> 
> > > +
> > >   int
> > >   pci_vfio_read_config(const struct rte_intr_handle *intr_handle,
> > >   void *buf, size_t len, off_t offs) @@ -642,7 +653,7 
> > > @@
> > > pci_vfio_unmap_resource(struct rte_pci_device *dev)
> > >   vfio_res_list = RTE_TAILQ_CAST(rte_vfio_tailq.head,
> mapped_pci_res_list);
> > >   /* Get vfio_res */
> > >   TAILQ_FOREACH(vfio_res, vfio_res_list, next) {
> > > - if (memcmp(&vfio_res->pci_addr, &dev->addr, sizeof(dev->addr)))
> > > + if (pci_addr_cmp(&vfio_res->pci_addr, &dev->addr))
> > >   continue;
> > >   break;
> > >   }
> > >
> >
> >
> > --
> > Thanks,
> > Anatoly
> 
> --
> Gaëtan Rivet
> 6WIND


[dpdk-dev] [PATCH v3] net/mlx5: add support for 32bit systems

2018-07-12 Thread Moti Haimovsky
This patch adds support for building and running the mlx5 PMD on
32-bit systems such as i686.

The main issue to tackle was handling the 32-bit access to the UAR,
as quoted from the mlx5 PRM:
QP and CQ DoorBells require 64-bit writes. For best performance, it
is recommended to execute the QP/CQ DoorBell as a single 64-bit write
operation. For platforms that do not support 64 bit writes, it is
possible to issue the 64 bits DoorBells through two consecutive
writes,
each write 32 bits, as described below:
* The order of writing each of the Dwords is from lower to upper
  addresses.
* No other DoorBell can be rung (or even start ringing) in the midst
 of an on-going write of a DoorBell over a given UAR page.
The last rule implies that in a multi-threaded environment, the access
to a UAR page (which can be accessible by all threads in the process)
must be synchronized (for example, using a semaphore) unless an atomic
write of 64 bits in a single bus operation is guaranteed. Such a
synchronization is not required for when ringing DoorBells on different
UAR pages.

Signed-off-by: Moti Haimovsky 
---
v3:
* Rebased upon latest changes in mlx5 PMD and rdma-core.

v2:
* Fixed coding style issues.
* Modified documentation according to review inputs.
* Fixed merge conflicts.
---
 doc/guides/nics/features/mlx5.ini |  1 +
 doc/guides/nics/mlx5.rst  |  6 +++-
 drivers/net/mlx5/mlx5.c   |  8 -
 drivers/net/mlx5/mlx5.h   |  5 +++
 drivers/net/mlx5/mlx5_defs.h  | 18 --
 drivers/net/mlx5/mlx5_rxq.c   |  6 +++-
 drivers/net/mlx5/mlx5_rxtx.c  | 22 +++--
 drivers/net/mlx5/mlx5_rxtx.h  | 69 ++-
 drivers/net/mlx5/mlx5_txq.c   | 13 +++-
 9 files changed, 131 insertions(+), 17 deletions(-)

diff --git a/doc/guides/nics/features/mlx5.ini 
b/doc/guides/nics/features/mlx5.ini
index e75b14b..b28b43e 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -43,5 +43,6 @@ Multiprocess aware   = Y
 Other kdrv   = Y
 ARMv8= Y
 Power8   = Y
+x86-32   = Y
 x86-64   = Y
 Usage doc= Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 0d0d217..ebf2336 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -49,7 +49,7 @@ libibverbs.
 Features
 
 
-- Multi arch support: x86_64, POWER8, ARMv8.
+- Multi arch support: x86_64, POWER8, ARMv8, i686.
 - Multiple TX and RX queues.
 - Support for scattered TX and RX frames.
 - IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6 RSS on any number of queues.
@@ -489,6 +489,10 @@ RMDA Core with Linux Kernel
 - Minimal kernel version : v4.14 or the most recent 4.14-rc (see `Linux 
installation documentation`_)
 - Minimal rdma-core version: v15+ commit 0c5f5765213a ("Merge pull request 
#227 from yishaih/tm")
   (see `RDMA Core installation documentation`_)
+- When building for i686 use:
+
+  - rdma-core version 18.0 or above built with 32bit support.
+  - Kernel version 4.14.41 or above.
 
 .. _`Linux installation documentation`: 
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/plain/Documentation/admin-guide/README.rst
 .. _`RDMA Core installation documentation`: 
https://raw.githubusercontent.com/linux-rdma/rdma-core/master/README.md
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index dda50b8..15f1a17 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -598,7 +598,7 @@
rte_memseg_walk(find_lower_va_bound, &addr);
 
/* keep distance to hugepages to minimize potential conflicts. */
-   addr = RTE_PTR_SUB(addr, MLX5_UAR_OFFSET + MLX5_UAR_SIZE);
+   addr = RTE_PTR_SUB(addr, (uintptr_t)(MLX5_UAR_OFFSET + MLX5_UAR_SIZE));
/* anonymous mmap, no real memory consumption. */
addr = mmap(addr, MLX5_UAR_SIZE,
PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
@@ -939,6 +939,12 @@
priv->device_attr = attr;
priv->pd = pd;
priv->mtu = ETHER_MTU;
+#ifndef RTE_ARCH_64
+   /* Initialize UAR access locks for 32bit implementations. */
+   rte_spinlock_init(&priv->uar_lock_cq);
+   for (i = 0; i < MLX5_UAR_PAGE_NUM_MAX; i++)
+   rte_spinlock_init(&priv->uar_lock[i]);
+#endif
/* Some internal functions rely on Netlink sockets, open them now. */
priv->nl_socket_rdma = mlx5_nl_init(0, NETLINK_RDMA);
priv->nl_socket_route = mlx5_nl_init(RTMGRP_LINK, NETLINK_ROUTE);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 131be33..896158a 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -215,6 +215,11 @@ struct priv {
int nl_socket_rdma; /* Netlink socket (NETLINK_RDMA). */
int nl_socket_route; /* Netlink socket (NETLINK_ROUTE). */
uint32_t nl_sn; /* Netlink message sequence number. */
+#ifndef RTE_ARCH_64
+   rte_spinlock_t uar_lock_cq; /* CQs share a com

Re: [dpdk-dev] [PATCH v4 13/23] net/softnic: add connection agent

2018-07-12 Thread Dumitrescu, Cristian


> -Original Message-
> From: Thomas Monjalon [mailto:tho...@monjalon.net]
> Sent: Wednesday, July 11, 2018 8:58 PM
> To: Singh, Jasvinder 
> Cc: dev@dpdk.org; Dumitrescu, Cristian 
> Subject: Re: [dpdk-dev] [PATCH v4 13/23] net/softnic: add connection agent
> 
> 05/07/2018 17:47, Jasvinder Singh:
> > Add a connection agent to enable connectivity with external agents (e.g.
> > telnet, netcat, Python script, etc).
> >
> > Signed-off-by: Cristian Dumitrescu 
> > Signed-off-by: Jasvinder Singh 
> > ---
> >  config/common_base |   2 +-
> >  config/common_linuxapp |   1 +
> >  drivers/net/softnic/Makefile   |  12 +-
> >  drivers/net/softnic/conn.c | 332 
> > +
> >  drivers/net/softnic/conn.h |  49 +++
> >  drivers/net/softnic/rte_eth_softnic.c  |  79 -
> >  drivers/net/softnic/rte_eth_softnic.h  |  16 +
> >  drivers/net/softnic/rte_eth_softnic_internals.h|   3 +
> >  ...nic_version.map => rte_eth_softnic_version.map} |   6 +
> >  9 files changed, 496 insertions(+), 4 deletions(-)  create mode
> > 100644 drivers/net/softnic/conn.c  create mode 100644
> > drivers/net/softnic/conn.h  rename
> > drivers/net/softnic/{rte_pmd_softnic_version.map =>
> > rte_eth_softnic_version.map} (52%)
> 
> Why are you renaming this file?
> 
> If you test the compilation with devtools/test-meson-builds.sh you will see 
> this
> error:
>   drivers/meson.build:111:3: ERROR:
>   File drivers/net/softnic/rte_pmd_softnic_version.map does not exist.
> 
> 

Fixed. Rebased on DPDK master latest as well.

> > +ifneq ($(CONFIG_RTE_EXEC_ENV),"linuxapp")
> > +$(info Softnic PMD can only operate in a linuxapp environment, \
> 
> I think it is a really wrong idea to limit softnic to Linux only.
> 

Main reasons are: use of the epoll API, use of TAP ports, etc., which are 
Linux-only. At some point we'll consider re-implementing some of the 
internals with alternatives that are portable to FreeBSD as well.



Re: [dpdk-dev] [PATCH v3 16/16] net/dpaa: implement scatter offload support

2018-07-12 Thread Thomas Monjalon
Title can be "net/dpaa: support scatter offload"

06/07/2018 10:10, Hemant Agrawal:
> + /* Max packet can fit in single buffer */
> + if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
> + ;

Why an empty statement?

> + } else if (dev->data->dev_conf.rxmode.enable_scatter) {

error: ‘struct rte_eth_rxmode’ has no member named ‘enable_scatter’





Re: [dpdk-dev] [PATCH v5 02/16] compress/qat: add makefiles for PMD

2018-07-12 Thread De Lara Guarch, Pablo



> -Original Message-
> From: Trahe, Fiona
> Sent: Wednesday, July 11, 2018 12:57 PM
> To: dev@dpdk.org
> Cc: De Lara Guarch, Pablo ; Trahe, Fiona
> ; Jozwiak, TomaszX 
> Subject: [PATCH v5 02/16] compress/qat: add makefiles for PMD
> 
> Add Makefiles, directory and empty source files for compression PMD.
> Handle cases for building either symmetric crypto PMD or compression PMD or
> both and the common files both depend on.
> 
> Signed-off-by: Fiona Trahe 
> Signed-off-by: Tomasz Jozwiak 
> ---
>  MAINTAINERS |  4 +++
>  config/common_base  |  3 +-
>  drivers/common/qat/Makefile | 60 
> +++--
>  drivers/compress/qat/qat_comp.c |  5 
>  drivers/compress/qat/qat_comp.h | 14 +
>  drivers/compress/qat/qat_comp_pmd.c |  5 
> drivers/compress/qat/qat_comp_pmd.h | 15 ++
>  test/test/test_cryptodev.c  |  6 ++--
>  8 files changed, 86 insertions(+), 26 deletions(-)  create mode 100644
> drivers/compress/qat/qat_comp.c  create mode 100644
> drivers/compress/qat/qat_comp.h  create mode 100644
> drivers/compress/qat/qat_comp_pmd.c
>  create mode 100644 drivers/compress/qat/qat_comp_pmd.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 8050b5d..50b2dff 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -852,6 +852,10 @@ F: drivers/compress/isal/
>  F: doc/guides/compressdevs/isal.rst
>  F: doc/guides/compressdevs/features/isal.ini
> 
> +Intel QuickAssist
> +M: Fiona Trahe 
> +F: drivers/compress/qat/
> +F: drivers/common/qat/
> 
>  Eventdev Drivers
>  
> diff --git a/config/common_base b/config/common_base index
> e4241db..1e340b4 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -480,7 +480,8 @@ CONFIG_RTE_LIBRTE_DPAA_MAX_CRYPTODEV=4
>  #
>  # Compile PMD for QuickAssist based devices  # -
> CONFIG_RTE_LIBRTE_PMD_QAT=n
> +CONFIG_RTE_LIBRTE_PMD_QAT=y
> +CONFIG_RTE_LIBRTE_PMD_QAT_SYM=n

Since now you are enabling QAT driver by default, mk/rte.app.mk needs to be 
changed.
QAT_SYM requires libcrypto, not QAT itself, so right now, by default libcrypto 
is needed.
A change like the following would solve the problem, but not sure if it is 
correct.

+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += -lrte_pmd_qat
 ifeq ($(CONFIG_RTE_LIBRTE_CRYPTODEV),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)+= -lrte_pmd_aesni_mb
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)+= -lIPSec_MB
@@ -190,7 +191,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM)   += -lIPSec_MB
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CCP) += -lrte_pmd_ccp -lcrypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_OPENSSL) += -lrte_pmd_openssl -lcrypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += -lrte_pmd_null_crypto
-_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += -lrte_pmd_qat -lcrypto
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT_SYM) += -lcrypto


[dpdk-dev] DPDK Release Status Meeting 12/07/2018

2018-07-12 Thread De Lara Guarch, Pablo
Minutes of 12 July 2018
---

Agenda:
* Merge Deadline for 18.08
* Subtrees


Participants:
* Intel
* Mellanox
* NXP
* Cavium

Merge Deadline for 18.08

* *RC1* date pushed to *Friday 13 July*.
* "Enable hotplug on multi-process" will not be included in 18.08, due to 
insufficient reviews
* Some patchsets require more attention:
  * Memory patches from Anatoly
  * Some fixes from Hotplug patchset, which will be extracted from the set and 
merged in 18.08.
* RC2 date targeting 24-25 July.


Subtrees


* main
  * More patches to be merged, more review required

* next-net
  * Subtree was pulled this week
  * Other patches have been applied to the other subtrees 
(next-net-intel/next-net-mlx).
  * Thomas might get more patches directly from these subtrees for RC1

* next-crypto
  * Subtree was pulled this week
  * All library changes have been applied
  * QAT Compression PMD still needs to be applied, so it will be done before 
RC1 is out
  * Other PMDs will be merged in RC2

* next-virtio
  * Subtree was pulled this week

* next-eventdev
  * Subtree was pulled this week

* next-pipeline
  * Some issues need to be resolved before the subtree can be pulled for RC1.

* next-qos
  * No update was given


DPDK Release Status Meetings


The DPDK Release Status Meeting is intended for DPDK Committers to discuss the 
status of the master tree and sub-trees, and for project managers to track 
progress or milestone dates.

The meeting occurs on Thursdays at 8:30 UTC. If you wish to attend just send me 
an email and I will send you the invite.



Re: [dpdk-dev] [PATCH v3 01/16] bus/dpaa: fix phandle support for kernel 4.16

2018-07-12 Thread Thomas Monjalon
06/07/2018 10:09, Hemant Agrawal:
> From: Alok Makhariya 
> 
> Fixes: 2183c6f69d7e ("bus/dpaa: add OF parser for device scanning")
> Cc: Shreyansh Jain 
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Alok Makhariya 
> Acked-by: Shreyansh Jain 

Series applied without last patch (because must be reworked).

This series has a lot of cleanups. Starting from now, I will consider
NXP drivers as mature enough. I won't accept such patches anymore without
(or with not enough) explanation.
For your users and readers, please start explaining
what was wrong and what needs to be changed.
Thanks





Re: [dpdk-dev] [PATCH] raw/dpaa2_qdma: fix the IOVA as VA flag for driver

2018-07-12 Thread Thomas Monjalon
21/06/2018 11:15, Hemant Agrawal:
> Fixes: b1ee472fed58 ("raw/dpaa2_qdma: introduce the DPAA2 QDMA driver")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Hemant Agrawal 

Applied




Re: [dpdk-dev] [PATCH] eal/rwlocks: Try read/write and relock write to read locks added.

2018-07-12 Thread Thomas Monjalon
Hi,

Unfortunately, after 2 months, nobody reviewed this patch.

You could motivate some reviews by providing some explanations
or context of use.


21/05/2018 11:08, Leonid Myravjev:
> From: Leonid Myravjev 
> 
> Signed-off-by: Leonid Myravjev 
> ---
>  lib/librte_eal/common/include/generic/rte_rwlock.h | 61 
> ++
>  1 file changed, 61 insertions(+)
> 
> diff --git a/lib/librte_eal/common/include/generic/rte_rwlock.h 
> b/lib/librte_eal/common/include/generic/rte_rwlock.h
> index 899e9bc43..11212e2b8 100644
> --- a/lib/librte_eal/common/include/generic/rte_rwlock.h
> +++ b/lib/librte_eal/common/include/generic/rte_rwlock.h
> @@ -76,6 +76,30 @@ rte_rwlock_read_lock(rte_rwlock_t *rwl)
>  }
>  
>  /**
> + * Try to take a read lock.
> + *
> + * @param rwl
> + *   A pointer to a rwlock structure.
> + * @return
> + *   1 if the lock is successfully taken; 0 otherwise.
> + */
> +static inline int
> +rte_rwlock_read_trylock(rte_rwlock_t *rwl)
> +{
> + int32_t x;
> + int success = 0;
> +
> + x = rwl->cnt;
> + /* write lock is held */
> + if (x < 0)
> + return 0;
> + success = rte_atomic32_cmpset((volatile uint32_t *)&rwl->cnt, x, x + 1);
> + if (success == 0)
> + return 0;
> + return 1;
> +}
> +
> +/**
>   * Release a read lock.
>   *
>   * @param rwl
> @@ -110,6 +134,29 @@ rte_rwlock_write_lock(rte_rwlock_t *rwl)
> 0, -1);
>   }
>  }
> +/**
> + * Try to take a write lock.
> + *
> + * @param rwl
> + *   A pointer to a rwlock structure.
> + * @return
> + *   1 if the lock is successfully taken; 0 otherwise.
> + */
> +static inline int
> +rte_rwlock_write_trylock(rte_rwlock_t *rwl)
> +{
> + int32_t x;
> + int success = 0;
> +
> + x = rwl->cnt;
> + /* a lock is held */
> + if (x != 0)
> + return 0;
> + success = rte_atomic32_cmpset((volatile uint32_t *)&rwl->cnt, 0, -1);
> + if (success == 0)
> + return 0;
> + return 1;
> +}
>  
>  /**
>   * Release a write lock.
> @@ -124,6 +171,20 @@ rte_rwlock_write_unlock(rte_rwlock_t *rwl)
>  }
>  
>  /**
> + * Relock a write lock to a read lock.
> + *
> + * @param rwl
> + *   A pointer to a rwlock structure.
> + */
> +static inline void
> +rte_rwlock_write_relock_read(rte_rwlock_t *rwl)
> +{
> + rte_atomic32_add((rte_atomic32_t *)(intptr_t)&rwl->cnt, 2);
> +}
> +
> +
> +
> +/**
>   * Try to execute critical section in a hardware memory transaction, if it
>   * fails or not available take a read lock
>   *
> 







[dpdk-dev] [PATCH 0/4] couple hotplug fix

2018-07-12 Thread Qi Zhang
The patchset fixes a couple of issues related to hotplug add and hotplug
remove.

Qi Zhang (4):
  eal: fix hotplug add and hotplug remove
  bus/pci: fix PCI address compare
  bus/pci: enable vfio unmap resource for secondary
  vfio: remove unnecessary IPC for group fd clear

 drivers/bus/pci/linux/pci_vfio.c   | 118 +++--
 lib/librte_eal/common/eal_common_dev.c |  26 +++---
 lib/librte_eal/linuxapp/eal/eal_vfio.c |  45 ++
 lib/librte_eal/linuxapp/eal/eal_vfio.h |   1 -
 lib/librte_eal/linuxapp/eal/eal_vfio_mp_sync.c |   8 --
 5 files changed, 110 insertions(+), 88 deletions(-)

-- 
2.13.6



[dpdk-dev] [PATCH 2/4] bus/pci: fix PCI address compare

2018-07-12 Thread Qi Zhang
When using memcmp to compare two PCI addresses, sizeof(struct rte_pci_addr)
is 4-byte aligned, so it is 8, while only 7 bytes of struct rte_pci_addr
are valid. Comparing the 8th byte thus causes an unexpected result, which
happens when repeatedly attaching/detaching a device.

Fixes: 94c0776b1bad ("vfio: support hotplug")
Cc: sta...@dpdk.org

Signed-off-by: Qi Zhang 
---
 drivers/bus/pci/linux/pci_vfio.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index aeeaa9ed8..933b95540 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -642,7 +642,7 @@ pci_vfio_unmap_resource(struct rte_pci_device *dev)
vfio_res_list = RTE_TAILQ_CAST(rte_vfio_tailq.head, 
mapped_pci_res_list);
/* Get vfio_res */
TAILQ_FOREACH(vfio_res, vfio_res_list, next) {
-   if (memcmp(&vfio_res->pci_addr, &dev->addr, sizeof(dev->addr)))
+   if (rte_pci_addr_cmp(&vfio_res->pci_addr, &dev->addr))
continue;
break;
}
-- 
2.13.6



[dpdk-dev] [PATCH 3/4] bus/pci: enable vfio unmap resource for secondary

2018-07-12 Thread Qi Zhang
The subroutine to unmap a VFIO resource is shared by the secondary and
primary processes, but it does not work in the secondary process, where
it is not necessary to close the interrupt handler, set PCI bus
mastering, or remove vfio_res from vfio_res_list. The patch therefore
adds a dedicated function to handle the situation when a device is
unmapped in a secondary process.

Signed-off-by: Qi Zhang 
Reviewed-by: Anatoly Burakov 
---
 drivers/bus/pci/linux/pci_vfio.c | 118 +--
 1 file changed, 90 insertions(+), 28 deletions(-)

diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index 933b95540..686386d6a 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -584,6 +584,9 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
dev->mem_resource[i].addr = maps[i].addr;
}
 
+   /* we need to save vfio_dev_fd so it can be used during release */
+   dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+
return 0;
 err_vfio_dev_fd:
close(vfio_dev_fd);
@@ -603,22 +606,58 @@ pci_vfio_map_resource(struct rte_pci_device *dev)
return pci_vfio_map_resource_secondary(dev);
 }
 
-int
-pci_vfio_unmap_resource(struct rte_pci_device *dev)
+static struct mapped_pci_resource *
+find_and_unmap_vfio_resource(struct mapped_pci_res_list *vfio_res_list,
+   struct rte_pci_device *dev,
+   const char *pci_addr)
+{
+   struct mapped_pci_resource *vfio_res = NULL;
+   struct pci_map *maps;
+   int i;
+
+   /* Get vfio_res */
+   TAILQ_FOREACH(vfio_res, vfio_res_list, next) {
+   if (rte_pci_addr_cmp(&vfio_res->pci_addr, &dev->addr))
+   continue;
+   break;
+   }
+
+   if  (vfio_res == NULL)
+   return vfio_res;
+
+   RTE_LOG(INFO, EAL, "Releasing pci mapped resource for %s\n",
+   pci_addr);
+
+   maps = vfio_res->maps;
+   for (i = 0; i < (int) vfio_res->nb_maps; i++) {
+
+   /*
+* We do not need to be aware of MSI-X table BAR mappings as
+* when mapping. Just using current maps array is enough
+*/
+   if (maps[i].addr) {
+   RTE_LOG(INFO, EAL, "Calling pci_unmap_resource for %s 
at %p\n",
+   pci_addr, maps[i].addr);
+   pci_unmap_resource(maps[i].addr, maps[i].size);
+   }
+   }
+
+   return vfio_res;
+}
+
+static int
+pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
 {
char pci_addr[PATH_MAX] = {0};
struct rte_pci_addr *loc = &dev->addr;
-   int i, ret;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
-
-   struct pci_map *maps;
+   int ret;
 
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
loc->domain, loc->bus, loc->devid, loc->function);
 
-
if (close(dev->intr_handle.fd) < 0) {
RTE_LOG(INFO, EAL, "Error when closing eventfd file descriptor 
for %s\n",
pci_addr);
@@ -639,13 +678,10 @@ pci_vfio_unmap_resource(struct rte_pci_device *dev)
return ret;
}
 
-   vfio_res_list = RTE_TAILQ_CAST(rte_vfio_tailq.head, 
mapped_pci_res_list);
-   /* Get vfio_res */
-   TAILQ_FOREACH(vfio_res, vfio_res_list, next) {
-   if (rte_pci_addr_cmp(&vfio_res->pci_addr, &dev->addr))
-   continue;
-   break;
-   }
+   vfio_res_list =
+   RTE_TAILQ_CAST(rte_vfio_tailq.head, mapped_pci_res_list);
+   vfio_res = find_and_unmap_vfio_resource(vfio_res_list, dev, pci_addr);
+
/* if we haven't found our tailq entry, something's wrong */
if (vfio_res == NULL) {
RTE_LOG(ERR, EAL, "  %s cannot find TAILQ entry for PCI 
device!\n",
@@ -653,30 +689,56 @@ pci_vfio_unmap_resource(struct rte_pci_device *dev)
return -1;
}
 
-   /* unmap BARs */
-   maps = vfio_res->maps;
+   TAILQ_REMOVE(vfio_res_list, vfio_res, next);
 
-   RTE_LOG(INFO, EAL, "Releasing pci mapped resource for %s\n",
-   pci_addr);
-   for (i = 0; i < (int) vfio_res->nb_maps; i++) {
+   return 0;
+}
 
-   /*
-* We do not need to be aware of MSI-X table BAR mappings as
-* when mapping. Just using current maps array is enough
-*/
-   if (maps[i].addr) {
-   RTE_LOG(INFO, EAL, "Calling pci_unmap_resource for %s 
at %p\n",
-   pci_addr, maps[i].addr);
-   pci_unmap_resource(maps[i].addr, maps[i].size);
-   }
+static int
+pci_vfio_unmap_resource_secondary(struct rte_pci_device *dev)
+{
+   char p

[dpdk-dev] [PATCH 1/4] eal: fix hotplug add and hotplug remove

2018-07-12 Thread Qi Zhang
Hotplug-adding an already plugged PCI device corrupts
rte_pci_device->device.name due to an unexpected rte_devargs_remove.
Likewise, trying to hotplug-remove an already unplugged device causes a
segmentation fault due to an unexpected bus->unplug on an rte_device
whose driver is NULL.
This patch fixes both issues.

Fixes: 7e8b26650146 ("eal: fix hotplug add / remove")
Cc: sta...@dpdk.org

Signed-off-by: Qi Zhang 
---
 lib/librte_eal/common/eal_common_dev.c | 26 --
 1 file changed, 12 insertions(+), 14 deletions(-)

diff --git a/lib/librte_eal/common/eal_common_dev.c 
b/lib/librte_eal/common/eal_common_dev.c
index 61cb3b162..0fa8c815d 100644
--- a/lib/librte_eal/common/eal_common_dev.c
+++ b/lib/librte_eal/common/eal_common_dev.c
@@ -42,18 +42,6 @@ static struct dev_event_cb_list dev_event_cbs;
 /* spinlock for device callbacks */
 static rte_spinlock_t dev_event_lock = RTE_SPINLOCK_INITIALIZER;
 
-static int cmp_detached_dev_name(const struct rte_device *dev,
-   const void *_name)
-{
-   const char *name = _name;
-
-   /* skip attached devices */
-   if (dev->driver != NULL)
-   return 1;
-
-   return strcmp(dev->name, name);
-}
-
 static int cmp_dev_name(const struct rte_device *dev, const void *_name)
 {
const char *name = _name;
@@ -151,14 +139,19 @@ int __rte_experimental rte_eal_hotplug_add(const char 
*busname, const char *devn
if (ret)
goto err_devarg;
 
-   dev = bus->find_device(NULL, cmp_detached_dev_name, devname);
+   dev = bus->find_device(NULL, cmp_dev_name, devname);
if (dev == NULL) {
-   RTE_LOG(ERR, EAL, "Cannot find unplugged device (%s)\n",
+   RTE_LOG(ERR, EAL, "Cannot find device (%s)\n",
devname);
ret = -ENODEV;
goto err_devarg;
}
 
+   if (dev->driver != NULL) {
+   RTE_LOG(ERR, EAL, "Device is already plugged\n");
+   return -EEXIST;
+   }
+
ret = bus->plug(dev);
if (ret) {
RTE_LOG(ERR, EAL, "Driver cannot attach the device (%s)\n",
@@ -200,6 +193,11 @@ rte_eal_hotplug_remove(const char *busname, const char 
*devname)
return -EINVAL;
}
 
+   if (dev->driver == NULL) {
+   RTE_LOG(ERR, EAL, "Device is already unplugged\n");
+   return -ENOENT;
+   }
+
ret = bus->unplug(dev);
if (ret)
RTE_LOG(ERR, EAL, "Driver cannot detach the device (%s)\n",
-- 
2.13.6



[dpdk-dev] [PATCH 4/4] vfio: remove unnecessary IPC for group fd clear

2018-07-12 Thread Qi Zhang
Clearing vfio_group_fd does not need to involve any IPC.
Also, the current IPC implementation for SOCKET_CLR_GROUP is not
correct: rte_vfio_clear_group on a secondary process will always fail,
which prevents a device from being detached correctly in a secondary
process. The patch simply removes all IPC-related code from
rte_vfio_clear_group.

Fixes: 83a73c5fef66 ("vfio: use generic multi-process channel")
Cc: sta...@dpdk.org

Signed-off-by: Qi Zhang 
Acked-by: Anatoly Burakov 
---
 lib/librte_eal/linuxapp/eal/eal_vfio.c | 45 +-
 lib/librte_eal/linuxapp/eal/eal_vfio.h |  1 -
 lib/librte_eal/linuxapp/eal/eal_vfio_mp_sync.c |  8 -
 3 files changed, 8 insertions(+), 46 deletions(-)

diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio.c 
b/lib/librte_eal/linuxapp/eal/eal_vfio.c
index a2bbdfbf4..c0eccddc3 100644
--- a/lib/librte_eal/linuxapp/eal/eal_vfio.c
+++ b/lib/librte_eal/linuxapp/eal/eal_vfio.c
@@ -575,10 +575,6 @@ int
 rte_vfio_clear_group(int vfio_group_fd)
 {
int i;
-   struct rte_mp_msg mp_req, *mp_rep;
-   struct rte_mp_reply mp_reply;
-   struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
-   struct vfio_mp_param *p = (struct vfio_mp_param *)mp_req.param;
struct vfio_config *vfio_cfg;
 
vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd);
@@ -587,40 +583,15 @@ rte_vfio_clear_group(int vfio_group_fd)
return -1;
}
 
-   if (internal_config.process_type == RTE_PROC_PRIMARY) {
-
-   i = get_vfio_group_idx(vfio_group_fd);
-   if (i < 0)
-   return -1;
-   vfio_cfg->vfio_groups[i].group_num = -1;
-   vfio_cfg->vfio_groups[i].fd = -1;
-   vfio_cfg->vfio_groups[i].devices = 0;
-   vfio_cfg->vfio_active_groups--;
-   return 0;
-   }
-
-   p->req = SOCKET_CLR_GROUP;
-   p->group_num = vfio_group_fd;
-   strcpy(mp_req.name, EAL_VFIO_MP);
-   mp_req.len_param = sizeof(*p);
-   mp_req.num_fds = 0;
-
-   if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
-   mp_reply.nb_received == 1) {
-   mp_rep = &mp_reply.msgs[0];
-   p = (struct vfio_mp_param *)mp_rep->param;
-   if (p->result == SOCKET_OK) {
-   free(mp_reply.msgs);
-   return 0;
-   } else if (p->result == SOCKET_NO_FD)
-   RTE_LOG(ERR, EAL, "  BAD VFIO group fd!\n");
-   else
-   RTE_LOG(ERR, EAL, "  no such VFIO group fd!\n");
-
-   free(mp_reply.msgs);
-   }
+   i = get_vfio_group_idx(vfio_group_fd);
+   if (i < 0)
+   return -1;
+   vfio_cfg->vfio_groups[i].group_num = -1;
+   vfio_cfg->vfio_groups[i].fd = -1;
+   vfio_cfg->vfio_groups[i].devices = 0;
+   vfio_cfg->vfio_active_groups--;
 
-   return -1;
+   return 0;
 }
 
 int
diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio.h 
b/lib/librte_eal/linuxapp/eal/eal_vfio.h
index e65b10374..68d4750a5 100644
--- a/lib/librte_eal/linuxapp/eal/eal_vfio.h
+++ b/lib/librte_eal/linuxapp/eal/eal_vfio.h
@@ -129,7 +129,6 @@ int vfio_mp_sync_setup(void);
 
 #define SOCKET_REQ_CONTAINER 0x100
 #define SOCKET_REQ_GROUP 0x200
-#define SOCKET_CLR_GROUP 0x300
 #define SOCKET_OK 0x0
 #define SOCKET_NO_FD 0x1
 #define SOCKET_ERR 0xFF
diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio_mp_sync.c 
b/lib/librte_eal/linuxapp/eal/eal_vfio_mp_sync.c
index 9c202bb08..680a24aae 100644
--- a/lib/librte_eal/linuxapp/eal/eal_vfio_mp_sync.c
+++ b/lib/librte_eal/linuxapp/eal/eal_vfio_mp_sync.c
@@ -55,14 +55,6 @@ vfio_mp_primary(const struct rte_mp_msg *msg, const void 
*peer)
reply.fds[0] = fd;
}
break;
-   case SOCKET_CLR_GROUP:
-   r->req = SOCKET_CLR_GROUP;
-   r->group_num = m->group_num;
-   if (rte_vfio_clear_group(m->group_num) < 0)
-   r->result = SOCKET_NO_FD;
-   else
-   r->result = SOCKET_OK;
-   break;
case SOCKET_REQ_CONTAINER:
r->req = SOCKET_REQ_CONTAINER;
fd = rte_vfio_get_container_fd();
-- 
2.13.6
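The simplified rte_vfio_clear_group() above replaces a synchronous IPC
round-trip with a direct update of the local group table; a minimal
model of that pattern (the `toy_*` names are illustrative, not the EAL
code):

```c
#include <assert.h>

/* Sketch: clearing a group fd is now just a local table update,
 * with no message to the primary process. */
#define TOY_MAX_GROUPS 4

struct toy_group {
	int group_num;
	int fd;
	int devices;
};

struct toy_cfg {
	struct toy_group groups[TOY_MAX_GROUPS];
	int active_groups;
};

static int toy_group_idx(struct toy_cfg *cfg, int fd)
{
	int i;

	for (i = 0; i < TOY_MAX_GROUPS; i++)
		if (cfg->groups[i].fd == fd)
			return i;
	return -1;
}

static int toy_clear_group(struct toy_cfg *cfg, int fd)
{
	int i = toy_group_idx(cfg, fd);

	if (i < 0)
		return -1;
	cfg->groups[i].group_num = -1;
	cfg->groups[i].fd = -1;
	cfg->groups[i].devices = 0;
	cfg->active_groups--;
	return 0;
}
```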



[dpdk-dev] [PATCH v4 0/3]crypto/openssl: support asymmetric crypto

2018-07-12 Thread Shally Verma
This patch series adds asymmetric crypto support to the openssl PMD.
Changes in v4:
- add openssl 1.1.0h support in the openssl PMD for asym operations.
- add a compat.h for PMD compatibility with both 1.0.2 and 1.1.0
- update the openssl document with asymmetric feature support

For further history, refer to https://patches.dpdk.org/patch/40079/

Sunila Sahu (3):
  crypto/openssl: add rsa and mod asym op
  crypto/openssl: add dh and dsa asym op
  doc: add asym feature list

 doc/guides/cryptodevs/features/openssl.ini   |  11 +
 doc/guides/cryptodevs/openssl.rst|   1 +
 drivers/crypto/openssl/compat.h  | 108 +
 drivers/crypto/openssl/rte_openssl_pmd.c | 466 +++-
 drivers/crypto/openssl/rte_openssl_pmd_ops.c | 528 ++-
 drivers/crypto/openssl/rte_openssl_pmd_private.h |  28 ++
 6 files changed, 1130 insertions(+), 12 deletions(-)
 create mode 100644 drivers/crypto/openssl/compat.h

-- 
2.9.5



[dpdk-dev] [PATCH v4 1/3] crypto/openssl: add rsa and mod asym op

2018-07-12 Thread Shally Verma
From: Sunila Sahu 

- Add compat.h to make the PMD compatible with openssl 1.1.0 and
  earlier versions
- Add RSA sign/verify/encrypt/decrypt and modular operation
  support

Signed-off-by: Sunila Sahu 
Signed-off-by: Shally Verma 
Signed-off-by: Ashish Gupta 
---
 drivers/crypto/openssl/compat.h  |  40 +++
 drivers/crypto/openssl/rte_openssl_pmd.c | 229 ++-
 drivers/crypto/openssl/rte_openssl_pmd_ops.c | 336 ++-
 drivers/crypto/openssl/rte_openssl_pmd_private.h |  19 ++
 4 files changed, 612 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/openssl/compat.h b/drivers/crypto/openssl/compat.h
new file mode 100644
index 000..8ece808
--- /dev/null
+++ b/drivers/crypto/openssl/compat.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Cavium Networks
+ */
+
+#ifndef __RTA_COMPAT_H__
+#define __RTA_COMPAT_H__
+
+#if (OPENSSL_VERSION_NUMBER < 0x10100000L)
+
+#define set_rsa_params(rsa, p, q, ret) \
+   do {rsa->p = p; rsa->q = q; ret = 0; } while (0)
+
+#define set_rsa_crt_params(rsa, dmp1, dmq1, iqmp, ret) \
+   do { \
+   rsa->dmp1 = dmp1; \
+   rsa->dmq1 = dmq1; \
+   rsa->iqmp = iqmp; \
+   ret = 0; \
+   } while (0)
+
+#define set_rsa_keys(rsa, n, e, d, ret) \
+   do { \
+   rsa->n = n; rsa->e = e; rsa->d = d; ret = 0; \
+   } while (0)
+
+#else
+
+#define set_rsa_params(rsa, p, q, ret) \
+   (ret = !RSA_set0_factors(rsa, p, q))
+
+#define set_rsa_crt_params(rsa, dmp1, dmq1, iqmp, ret) \
+   (ret = !RSA_set0_crt_params(rsa, dmp1, dmq1, iqmp))
+
+/* n, e must be non-null, d can be NULL */
+#define set_rsa_keys(rsa, n, e, d, ret) \
+   (ret = !RSA_set0_key(rsa, n, e, d))
+
+#endif /* version < 1010 */
+
+#endif /* __RTA_COMPAT_H__ */
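The pattern used in compat.h above — one macro name expanding either to
direct struct-member assignment (pre-1.1.0 transparent structs) or to a
call into the new setter API — can be sketched self-contained;
`toy_rsa`, `toy_set_key` and `TOY_VERSION` are hypothetical stand-ins
for RSA, RSA_set0_key and OPENSSL_VERSION_NUMBER:

```c
#include <assert.h>

/* TOY_VERSION plays the role of OPENSSL_VERSION_NUMBER; here it
 * selects the "new API" branch. */
#define TOY_VERSION 0x10100000L

struct toy_rsa {
	long n, e, d;
};

/* Stand-in for RSA_set0_key(): returns 1 on success, 0 on failure. */
static int toy_set_key(struct toy_rsa *r, long n, long e, long d)
{
	if (n == 0 || e == 0)
		return 0;
	r->n = n; r->e = e; r->d = d;
	return 1;
}

#if (TOY_VERSION < 0x10100000L)
/* old API: poke the members directly, always succeeds */
#define set_toy_keys(r, n, e, d, ret) \
	do { (r)->n = n; (r)->e = e; (r)->d = d; ret = 0; } while (0)
#else
/* new API: struct is opaque, go through the setter and invert
 * its 1/0 convention into a 0/nonzero error code */
#define set_toy_keys(r, n, e, d, ret) \
	(ret = !toy_set_key(r, n, e, d))
#endif
```

Callers use a single spelling regardless of the library version, which
is exactly what lets rte_openssl_pmd.c stay free of version ifdefs.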
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c 
b/drivers/crypto/openssl/rte_openssl_pmd.c
index 5228b92..e21a6a1 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -14,6 +14,7 @@
 #include 
 
 #include "rte_openssl_pmd_private.h"
+#include "compat.h"
 
 #define DES_BLOCK_SIZE 8
 
@@ -727,19 +728,36 @@ openssl_reset_session(struct openssl_session *sess)
 }
 
 /** Provide session for operation */
-static struct openssl_session *
+static void *
 get_session(struct openssl_qp *qp, struct rte_crypto_op *op)
 {
struct openssl_session *sess = NULL;
+   struct openssl_asym_session *asym_sess = NULL;
 
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
-   /* get existing session */
-   if (likely(op->sym->session != NULL))
-   sess = (struct openssl_session *)
-   get_sym_session_private_data(
-   op->sym->session,
-   cryptodev_driver_id);
+   if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+   /* get existing session */
+   if (likely(op->sym->session != NULL))
+   sess = (struct openssl_session *)
+   get_sym_session_private_data(
+   op->sym->session,
+   cryptodev_driver_id);
+   } else {
+   if (likely(op->asym->session != NULL))
+   asym_sess = (struct openssl_asym_session *)
+   get_asym_session_private_data(
+   op->asym->session,
+   cryptodev_driver_id);
+   if (asym_sess == NULL)
+   op->status =
+   RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
+   return asym_sess;
+   }
} else {
+   /* sessionless asymmetric not supported */
+   if (op->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC)
+   return NULL;
+
/* provide internal session */
void *_sess = NULL;
void *_sess_private_data = NULL;
@@ -1525,6 +1543,191 @@ process_openssl_auth_op(struct openssl_qp *qp, struct 
rte_crypto_op *op,
op->status = RTE_CRYPTO_OP_STATUS_ERROR;
 }
 
+/* process modinv operation */
+static int process_openssl_modinv_op(struct rte_crypto_op *cop,
+   struct openssl_asym_session *sess)
+{
+   struct rte_crypto_asym_op *op = cop->asym;
+   BIGNUM *base = BN_CTX_get(sess->u.m.ctx);
+   BIGNUM *res = BN_CTX_get(sess->u.m.ctx);
+
+   if (unlikely(base == NULL || res == NULL)) {
+   if (base)
+   BN_free(base);
+   if (res)
+   BN_free(res);
+   cop->status = RTE_CRYPTO_OP_ST

[dpdk-dev] [PATCH v4 2/3] crypto/openssl: add dh and dsa asym op

2018-07-12 Thread Shally Verma
From: Sunila Sahu 

- Add DH key generation and shared secret computation
- Add DSA sign and verify operations

Signed-off-by: Sunila Sahu 
Signed-off-by: Shally Verma 
Signed-off-by: Ashish Gupta 
---
 drivers/crypto/openssl/compat.h  |  68 +++
 drivers/crypto/openssl/rte_openssl_pmd.c | 237 +++
 drivers/crypto/openssl/rte_openssl_pmd_ops.c | 194 ++-
 drivers/crypto/openssl/rte_openssl_pmd_private.h |   9 +
 4 files changed, 507 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/openssl/compat.h b/drivers/crypto/openssl/compat.h
index 8ece808..45f9a33 100644
--- a/drivers/crypto/openssl/compat.h
+++ b/drivers/crypto/openssl/compat.h
@@ -23,6 +23,41 @@
rsa->n = n; rsa->e = e; rsa->d = d; ret = 0; \
} while (0)
 
+#define set_dh_params(dh, p, g, ret) \
+   do { \
+   dh->p = p; \
+   dh->q = NULL; \
+   dh->g = g; \
+   ret = 0; \
+   } while (0)
+
+#define set_dh_priv_key(dh, priv_key, ret) \
+   do { dh->priv_key = priv_key; ret = 0; } while (0)
+
+#define set_dsa_params(dsa, p, q, g, ret) \
+   do { dsa->p = p; dsa->q = q; dsa->g = g; ret = 0; } while (0)
+
+#define get_dh_pub_key(dh, pub_key) \
+   (pub_key = dh->pub_key)
+
+#define get_dh_priv_key(dh, priv_key) \
+   (priv_key = dh->priv_key)
+
+#define set_dsa_sign(sign, r, s) \
+   do { sign->r = r; sign->s = s; } while (0)
+
+#define get_dsa_sign(sign, r, s) \
+   do { r = sign->r; s = sign->s; } while (0)
+
+#define set_dsa_keys(dsa, pub, priv, ret) \
+   do { dsa->pub_key = pub; dsa->priv_key = priv; ret = 0; } while (0)
+
+#define set_dsa_pub_key(dsa, pub_key) \
+   (dsa->pub_key = pub_key)
+
+#define get_dsa_priv_key(dsa, priv_key) \
+   (priv_key = dsa->priv_key)
+
 #else
 
 #define set_rsa_params(rsa, p, q, ret) \
@@ -35,6 +70,39 @@
 #define set_rsa_keys(rsa, n, e, d, ret) \
(ret = !RSA_set0_key(rsa, n, e, d))
 
+#define set_dh_params(dh, p, g, ret) \
+   (ret = !DH_set0_pqg(dh, p, NULL, g))
+
+#define set_dh_priv_key(dh, priv_key, ret) \
+   (ret = !DH_set0_key(dh, NULL, priv_key))
+
+#define get_dh_pub_key(dh, pub_key) \
+   (DH_get0_key(dh, &pub_key, NULL))
+
+#define get_dh_priv_key(dh, priv_key) \
+   (DH_get0_key(dh, NULL, &priv_key))
+
+#define set_dsa_params(dsa, p, q, g, ret) \
+   (ret = !DSA_set0_pqg(dsa, p, q, g))
+
+#define set_dsa_priv_key(dsa, priv_key) \
+   (DSA_set0_key(dsa, NULL, priv_key))
+
+#define set_dsa_sign(sign, r, s) \
+   (DSA_SIG_set0(sign, r, s))
+
+#define get_dsa_sign(sign, r, s) \
+   (DSA_SIG_get0(sign, &r, &s))
+
+#define set_dsa_keys(dsa, pub, priv, ret) \
+   (ret = !DSA_set0_key(dsa, pub, priv))
+
+#define set_dsa_pub_key(dsa, pub_key) \
+   (DSA_set0_key(dsa, pub_key, NULL))
+
+#define get_dsa_priv_key(dsa, priv_key) \
+   (DSA_get0_key(dsa, NULL, &priv_key))
+
 #endif /* version < 1010 */
 
 #endif /* __RTA_COMPAT_H__ */
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c 
b/drivers/crypto/openssl/rte_openssl_pmd.c
index e21a6a1..3314802 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -1543,6 +1543,230 @@ process_openssl_auth_op(struct openssl_qp *qp, struct 
rte_crypto_op *op,
op->status = RTE_CRYPTO_OP_STATUS_ERROR;
 }
 
+/* process dsa sign operation */
+static int
+process_openssl_dsa_sign_op(struct rte_crypto_op *cop,
+   struct openssl_asym_session *sess)
+{
+   struct rte_crypto_dsa_op_param *op = &cop->asym->dsa;
+   DSA *dsa = sess->u.s.dsa;
+   DSA_SIG *sign = NULL;
+
+   sign = DSA_do_sign(op->message.data,
+   op->message.length,
+   dsa);
+
+   if (sign == NULL) {
+   OPENSSL_LOG(ERR, "%s:%d\n", __func__, __LINE__);
+   cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+   } else {
+   const BIGNUM *r = NULL, *s = NULL;
+   get_dsa_sign(sign, r, s);
+
+   op->r.length = BN_bn2bin(r, op->r.data);
+   op->s.length = BN_bn2bin(s, op->s.data);
+   cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+   }
+
+   DSA_SIG_free(sign);
+
+   return 0;
+}
+
+/* process dsa verify operation */
+static int
+process_openssl_dsa_verify_op(struct rte_crypto_op *cop,
+   struct openssl_asym_session *sess)
+{
+   struct rte_crypto_dsa_op_param *op = &cop->asym->dsa;
+   DSA *dsa = sess->u.s.dsa;
+   int ret;
+   DSA_SIG *sign = DSA_SIG_new();
+   BIGNUM *r = NULL, *s = NULL;
+   BIGNUM *pub_key = NULL;
+
+   if (sign == NULL) {
+   OPENSSL_LOG(ERR, " %s:%d\n", __func__, __LINE__);
+   cop->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+   return -1;
+   }
+
+   r = BN_bin2bn(op->r.data,
+   op->r.length,
+   r);
+   s =

[dpdk-dev] [PATCH v4 3/3] doc: add asym feature list

2018-07-12 Thread Shally Verma
From: Ashish Gupta 

Signed-off-by: Sunila Sahu 
Signed-off-by: Shally Verma 
Signed-off-by: Ashish Gupta 
---
 doc/guides/cryptodevs/features/openssl.ini | 11 +++
 doc/guides/cryptodevs/openssl.rst  |  1 +
 2 files changed, 12 insertions(+)

diff --git a/doc/guides/cryptodevs/features/openssl.ini 
b/doc/guides/cryptodevs/features/openssl.ini
index 626ec1b..b9c0bdc 100644
--- a/doc/guides/cryptodevs/features/openssl.ini
+++ b/doc/guides/cryptodevs/features/openssl.ini
@@ -8,6 +8,7 @@ Symmetric crypto   = Y
 Sym operation chaining = Y
 OOP SGL In LB  Out = Y
 OOP LB  In LB  Out = Y
+Asymmetric crypto  = Y
 
 ;
 ; Supported crypto algorithms of the 'openssl' crypto driver.
@@ -50,3 +51,13 @@ AES GCM (256) = Y
 AES CCM (128) = Y
 AES CCM (192) = Y
 AES CCM (256) = Y
+
+;
+; Supported Asymmetric algorithms of the 'openssl' crypto driver.
+;
+[Asymmetric]
+RSA = Y
+DSA = Y
+Modular Exponentiation = Y
+Modular Inversion = Y
+Diffie-hellman = Y
diff --git a/doc/guides/cryptodevs/openssl.rst 
b/doc/guides/cryptodevs/openssl.rst
index 427fc80..bdc30f6 100644
--- a/doc/guides/cryptodevs/openssl.rst
+++ b/doc/guides/cryptodevs/openssl.rst
@@ -80,6 +80,7 @@ crypto processing.
 
 Test name is cryptodev_openssl_autotest.
 For performance test cryptodev_openssl_perftest can be used.
+For asymmetric crypto operations testing, run cryptodev_openssl_asym_autotest.
 
 To verify real traffic l2fwd-crypto example can be used with this command:
 
-- 
2.9.5



Re: [dpdk-dev] [pull-request] next-pipeline 18.08 pre-rc1

2018-07-12 Thread Thomas Monjalon
27/06/2018 19:31, Cristian Dumitrescu:
>   http://dpdk.org/git/next/dpdk-next-pipeline 

Pulled, thanks




Re: [dpdk-dev] [PATCH v11 11/25] eal/dev: implement device iteration

2018-07-12 Thread Gaëtan Rivet
Hi Shreyansh,

On Thu, Jul 12, 2018 at 04:28:27PM +0530, Shreyansh Jain wrote:
> On Thursday 12 July 2018 03:15 AM, Gaetan Rivet wrote:
> > Use the iteration hooks in the abstraction layers to perform the
> > requested filtering on the internal device lists.
> > 
> > Signed-off-by: Gaetan Rivet 
> > ---
> >   lib/librte_eal/common/eal_common_dev.c  | 168 
> >   lib/librte_eal/common/include/rte_dev.h |  26 
> >   lib/librte_eal/rte_eal_version.map  |   1 +
> >   3 files changed, 195 insertions(+)
> > 
> > diff --git a/lib/librte_eal/common/eal_common_dev.c 
> > b/lib/librte_eal/common/eal_common_dev.c
> > index 63e329bd8..b78845f02 100644
> > --- a/lib/librte_eal/common/eal_common_dev.c
> > +++ b/lib/librte_eal/common/eal_common_dev.c
> > @@ -45,6 +45,28 @@ static struct dev_event_cb_list dev_event_cbs;
> >   /* spinlock for device callbacks */
> >   static rte_spinlock_t dev_event_lock = RTE_SPINLOCK_INITIALIZER;
> > +struct dev_next_ctx {
> > +   struct rte_dev_iterator *it;
> > +   const char *bus_str;
> > +   const char *cls_str;
> > +};
> > +
> > +#define CTX(it, bus_str, cls_str) \
> > +   (&(const struct dev_next_ctx){ \
> > +   .it = it, \
> > +   .bus_str = bus_str, \
> > +   .cls_str = cls_str, \
> > +   })
> > +
> > +#define ITCTX(ptr) \
> > +   (((struct dev_next_ctx *)(intptr_t)ptr)->it)
> > +
> > +#define BUSCTX(ptr) \
> > +   (((struct dev_next_ctx *)(intptr_t)ptr)->bus_str)
> > +
> > +#define CLSCTX(ptr) \
> > +   (((struct dev_next_ctx *)(intptr_t)ptr)->cls_str)
> > +
> >   static int cmp_detached_dev_name(const struct rte_device *dev,
> > const void *_name)
> >   {
> > @@ -398,3 +420,149 @@ rte_dev_iterator_init(struct rte_dev_iterator *it,
> >   get_out:
> > return -rte_errno;
> >   }
> > +
> > +static char *
> > +dev_str_sane_copy(const char *str)
> > +{
> > +   size_t end;
> > +   char *copy;
> > +
> > +   end = strcspn(str, ",/");
> > +   if (str[end] == ',') {
> > +   copy = strdup(&str[end + 1]);
> > +   } else {
> > +   /* '/' or '\0' */
> > +   copy = strdup("");
> > +   }
> 
> Though it doesn't change anything functionally, if you can separate blocks
> of if-else with new lines, it really makes it easier to read.
> Like here...
> 

sure,

> > +   if (copy == NULL) {
> > +   rte_errno = ENOMEM;
> > +   } else {
> > +   char *slash;
> > +
> > +   slash = strchr(copy, '/');
> > +   if (slash != NULL)
> > +   slash[0] = '\0';
> > +   }
> > +   return copy;
> > +}
> > +
> > +static int
> > +class_next_dev_cmp(const struct rte_class *cls,
> > +  const void *ctx)
> > +{
> > +   struct rte_dev_iterator *it;
> > +   const char *cls_str = NULL;
> > +   void *dev;
> > +
> > +   if (cls->dev_iterate == NULL)
> > +   return 1;
> > +   it = ITCTX(ctx);
> > +   cls_str = CLSCTX(ctx);
> > +   dev = it->class_device;
> > +   /* it->cls_str != NULL means a class
> > +* was specified in the devstr.
> > +*/
> > +   if (it->cls_str != NULL && cls != it->cls)
> > +   return 1;
> > +   /* If an error occurred previously,
> > +* no need to test further.
> > +*/
> > +   if (rte_errno != 0)
> > +   return -1;
> 
> I am guessing here by '..error occurred previously..' you mean sane_copy. If
> so, why wait until this point to return? Anyway the caller (rte_bus_find,
> probably) would only look for '0' or non-zero.
> 

No, rte_errno could be set by a bus / class implementation, for any
error occurring during a call to dev_iterate: maybe a device was lost
(hot-unplugged), etc. The return value of dev_iterate() cannot transmit an
error as not matching a filter is not an error. The only error channel
is rte_errno.

sane_copy was already checked before and should be cleared at this
point.

> > +   dev = cls->dev_iterate(dev, cls_str, it);
> > +   it->class_device = dev;
> > +   return dev == NULL;
> > +}
> > +
> > +static int
> > +bus_next_dev_cmp(const struct rte_bus *bus,
> > +const void *ctx)
> > +{
> > +   struct rte_device *dev = NULL;
> > +   struct rte_class *cls = NULL;
> > +   struct rte_dev_iterator *it;
> > +   const char *bus_str = NULL;
> > +
> > +   if (bus->dev_iterate == NULL)
> > +   return 1;
> > +   it = ITCTX(ctx);
> > +   bus_str = BUSCTX(ctx);
> > +   dev = it->device;
> > +   /* it->bus_str != NULL means a bus
> > +* was specified in the devstr.
> > +*/
> > +   if (it->bus_str != NULL && bus != it->bus)
> > +   return 1;
> > +   /* If an error occurred previously,
> > +* no need to test further.
> > +*/
> > +   if (rte_errno != 0)
> > +   return -1;
> > +   if (it->cls_str == NULL) {
> > +   dev = bus->dev_iterate(dev, bus_str, it);
> > +   goto end;
> 
> This is slightly confusing. If it->cls_str == NULL, you do
> bus->dev_iterate... but
> 
> > +   }
> > +   /* cls_str != NULL */
> > +   if (dev == NULL) {
> > +next_dev_on_bus:
> > +   

Re: [dpdk-dev] [PATCH v5 16/16] docs/qat: refactor docs adding compression guide

2018-07-12 Thread De Lara Guarch, Pablo



> -Original Message-
> From: Trahe, Fiona
> Sent: Wednesday, July 11, 2018 12:57 PM
> To: dev@dpdk.org
> Cc: De Lara Guarch, Pablo ; Trahe, Fiona
> ; Jozwiak, TomaszX 
> Subject: [PATCH v5 16/16] docs/qat: refactor docs adding compression guide
> 
> Extend QAT guide to cover crypto and compression and common information,
> particularly about kernel driver dependency.
> Update release note.
> Update compression feature list for qat.
> 
> 
> Signed-off-by: Fiona Trahe 
> ---
>  config/common_base   |   2 +-
>  doc/guides/compressdevs/features/qat.ini |  24 
>  doc/guides/compressdevs/index.rst|   1 +
>  doc/guides/compressdevs/qat_comp.rst |  49 +
>  doc/guides/cryptodevs/qat.rst| 183 
> +--
>  doc/guides/rel_notes/release_18_08.rst   |   5 +
>  6 files changed, 205 insertions(+), 59 deletions(-)  create mode 100644
> doc/guides/compressdevs/features/qat.ini
>  create mode 100644 doc/guides/compressdevs/qat_comp.rst
> 
> diff --git a/config/common_base b/config/common_base index
> 1e340b4..1380acf 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -478,7 +478,7 @@ CONFIG_RTE_LIBRTE_PMD_DPAA_SEC=n
>  CONFIG_RTE_LIBRTE_DPAA_MAX_CRYPTODEV=4
> 
>  #
> -# Compile PMD for QuickAssist based devices
> +# Compile PMD for QuickAssist based devices - see docs for details
>  #
>  CONFIG_RTE_LIBRTE_PMD_QAT=y
>  CONFIG_RTE_LIBRTE_PMD_QAT_SYM=n
> diff --git a/doc/guides/compressdevs/features/qat.ini
> b/doc/guides/compressdevs/features/qat.ini
> new file mode 100644
> index 000..0d0e21d
> --- /dev/null
> +++ b/doc/guides/compressdevs/features/qat.ini
> @@ -0,0 +1,24 @@
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +; Supported features of 'QAT' compression driver.
> +;
> +[Features]
> +HW Accelerated  = Y
> +CPU SSE =
> +CPU AVX =
> +CPU AVX2=
> +CPU AVX512  =
> +CPU NEON=
> +Stateful=
> +Pass-through=
> +OOP SGL In SGL Out  =
> +OOP SGL In LB  Out  =
> +OOP LB  In SGL Out  =
> +Deflate = Y
> +LZS =
> +Adler32 = Y
> +Crc32   = Y
> +Adler32&Crc32   = Y
> +Fixed   = Y
> +Dynamic =

No need to add the features that are not supported here (the ones that do not 
have "Y").



Re: [dpdk-dev] [PATCH v2] add sample functions for packet forwarding

2018-07-12 Thread Pattan, Reshma
Hi Jananee,

> -Original Message-
> From: Parthasarathy, JananeeX M
> Sent: Thursday, July 12, 2018 9:53 AM
> To: dev@dpdk.org
> Cc: Horton, Remy ; Pattan, Reshma
> ; Parthasarathy, JananeeX M
> ; Chaitanya Babu, TalluriX
> 
> Subject: [PATCH v2] add sample functions for packet forwarding
> 

I could apply the patch successfully, but there was a warning; please
check whether the blank line can be removed.
git apply v2-add-sample-functions-for-packet-forwarding
v2-add-sample-functions-for-packet-forwarding:160: new blank line at EOF.
+
warning: 1 line adds whitespace errors.

> + uint16_t socket_id = rte_socket_id();
> + struct rte_ring *rxtx[NUM_RINGS];
> + rxtx[0] = rte_ring_create("R0", RING_SIZE, socket_id,
> + RING_F_SP_ENQ|RING_F_SC_DEQ);
> + if (rxtx[0] == NULL) {
> + printf("%s() line %u: rte_ring_create R0 failed",
> + __func__, __LINE__);
> + return TEST_FAILED;
> + }
> + rxtx[1] = rte_ring_create("R1", RING_SIZE, socket_id,
> + RING_F_SP_ENQ|RING_F_SC_DEQ);
> + if (rxtx[1] == NULL) {
> + printf("%s() line %u: rte_ring_create R1 failed",
> + __func__, __LINE__);
> + return TEST_FAILED;
> + }
> + tx_portid = rte_eth_from_rings("net_ringa", rxtx, NUM_QUEUES, rxtx,
> + NUM_QUEUES, socket_id);
> + rx_portid = rte_eth_from_rings("net_ringb", rxtx, NUM_QUEUES, rxtx,
> + NUM_QUEUES, socket_id);

Also, can you see whether you can create only one ring-based port
instead of two and use it for both rx_portid and tx_portid? And would
that fit all the UTs that you are writing?

Thanks,
Reshma


[dpdk-dev] [PATCH v6 00/16] compress/qat: add compression PMD

2018-07-12 Thread Fiona Trahe
Create compression PMD for Intel QuickAssist devices
Currently only the C62x and c3xxx devices are supported.

The qat comp PMD supports:
 - stateless compression and
   decompression using the Deflate algorithm with fixed Huffman
   encoding. Dynamic Huffman encoding is not supported; it
   will be added in a later patch.
 - checksum generation: Adler32, CRC32 and combined.

The compression service is hosted on a QuickAssist VF PCI
device, which is managed by code in the
drivers/common/qat directory.
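For reference, the Adler32 checksum listed above is the simple rolling
sum defined in RFC 1950; a minimal software sketch (not the QAT
hardware implementation):

```c
#include <stdint.h>
#include <stddef.h>

/* Adler32 per RFC 1950: two 16-bit running sums modulo the largest
 * prime below 2^16, concatenated as (b << 16) | a. */
#define ADLER_MOD 65521u

static uint32_t adler32(const uint8_t *buf, size_t len)
{
	uint32_t a = 1, b = 0;
	size_t i;

	for (i = 0; i < len; i++) {
		a = (a + buf[i]) % ADLER_MOD;
		b = (b + a) % ADLER_MOD;
	}
	return (b << 16) | a;
}
```

The checksum of an empty buffer is 1, which is why combined
Adler32/CRC32 trailers in zlib streams never start from zero.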

v6 changes:
 - fixed makefile issue when cross compiling

v5 changes:
 - rebased against latest r/n and features/default.ini
 - fixed common/qat/Makefile so no build output files
   left hanging around in compress/qat src dir.

v4 changes:
 - corrected capabilities

v3 changes:
 - only commit message changes, i.e. removed ChangeId and fixed typos


v2 changes:
- Added check for correct firmware
- Split patchset
- Added documentation
- removed support for scatter-gather-lists and related config flag
- Removed support for Dynamic huffman encoding and related IM buffer config flag
- Removed support for DH895xcc device


Fiona Trahe (16):
  common/qat: updated firmware headers
  compress/qat: add makefiles for PMD
  compress/qat: add meson build
  compress/qat: add xform processing
  compress/qat: create fw request and process response
  compress/qat: check that correct firmware is in use
  compress/qat: add stats functions
  compress/qat: setup queue-pairs for compression service
  compress/qat: add fns to configure and clear device
  compress/qat: add fn to return device info
  compress/qat: add enqueue/dequeue functions
  compress/qat: add device start and stop fns
  compress/qat: create and populate the ops structure
  compress/qat: add fns to create and destroy the PMD
  compress/qat: prevent device usage if incorrect firmware
  docs/qat: refactor docs adding compression guide

 MAINTAINERS  |   4 +
 config/common_base   |   5 +-
 doc/guides/compressdevs/features/qat.ini |  24 ++
 doc/guides/compressdevs/index.rst|   1 +
 doc/guides/compressdevs/qat_comp.rst |  49 +++
 doc/guides/cryptodevs/qat.rst| 183 ++
 doc/guides/rel_notes/release_18_08.rst   |   5 +
 drivers/common/qat/Makefile  |  47 ++-
 drivers/common/qat/qat_adf/icp_qat_fw.h  |  69 +++-
 drivers/common/qat/qat_adf/icp_qat_fw_comp.h | 482 +++
 drivers/common/qat/qat_adf/icp_qat_hw.h  | 130 +++-
 drivers/common/qat/qat_device.h  |   4 +
 drivers/common/qat/qat_qp.c  |  11 +-
 drivers/common/qat/qat_qp.h  |   5 +
 drivers/compress/meson.build |   2 +-
 drivers/compress/qat/meson.build |  18 +
 drivers/compress/qat/qat_comp.c  | 359 
 drivers/compress/qat/qat_comp.h  |  56 
 drivers/compress/qat/qat_comp_pmd.c  | 407 ++
 drivers/compress/qat/qat_comp_pmd.h  |  39 +++
 drivers/compress/qat/rte_pmd_qat_version.map |   3 +
 drivers/crypto/qat/meson.build   |  10 +-
 drivers/crypto/qat/rte_pmd_qat_version.map   |   3 -
 mk/rte.app.mk|   5 +-
 test/test/test_cryptodev.c   |   6 +-
 25 files changed, 1805 insertions(+), 122 deletions(-)
 create mode 100644 doc/guides/compressdevs/features/qat.ini
 create mode 100644 doc/guides/compressdevs/qat_comp.rst
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_fw_comp.h
 create mode 100644 drivers/compress/qat/meson.build
 create mode 100644 drivers/compress/qat/qat_comp.c
 create mode 100644 drivers/compress/qat/qat_comp.h
 create mode 100644 drivers/compress/qat/qat_comp_pmd.c
 create mode 100644 drivers/compress/qat/qat_comp_pmd.h
 create mode 100644 drivers/compress/qat/rte_pmd_qat_version.map
 delete mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map

-- 
2.7.4



[dpdk-dev] [PATCH v6 02/16] compress/qat: add makefiles for PMD

2018-07-12 Thread Fiona Trahe
Add Makefiles, directory and empty source files for compression PMD.
Handle cases for building either symmetric crypto PMD
or compression PMD or both and the common files both depend on.

Signed-off-by: Fiona Trahe 
Signed-off-by: Tomasz Jozwiak 
---
 MAINTAINERS |  4 
 config/common_base  |  3 ++-
 drivers/common/qat/Makefile | 45 -
 drivers/compress/qat/qat_comp.c |  5 +
 drivers/compress/qat/qat_comp.h | 14 
 drivers/compress/qat/qat_comp_pmd.c |  5 +
 drivers/compress/qat/qat_comp_pmd.h | 15 +
 mk/rte.app.mk   |  5 -
 test/test/test_cryptodev.c  |  6 ++---
 9 files changed, 81 insertions(+), 21 deletions(-)
 create mode 100644 drivers/compress/qat/qat_comp.c
 create mode 100644 drivers/compress/qat/qat_comp.h
 create mode 100644 drivers/compress/qat/qat_comp_pmd.c
 create mode 100644 drivers/compress/qat/qat_comp_pmd.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 4d508de..412fd77 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -853,6 +853,10 @@ F: drivers/compress/isal/
 F: doc/guides/compressdevs/isal.rst
 F: doc/guides/compressdevs/features/isal.ini
 
+Intel QuickAssist
+M: Fiona Trahe 
+F: drivers/compress/qat/
+F: drivers/common/qat/
 
 Eventdev Drivers
 
diff --git a/config/common_base b/config/common_base
index c305a77..8b539af 100644
--- a/config/common_base
+++ b/config/common_base
@@ -480,7 +480,8 @@ CONFIG_RTE_LIBRTE_DPAA_MAX_CRYPTODEV=4
 #
 # Compile PMD for QuickAssist based devices
 #
-CONFIG_RTE_LIBRTE_PMD_QAT=n
+CONFIG_RTE_LIBRTE_PMD_QAT=y
+CONFIG_RTE_LIBRTE_PMD_QAT_SYM=n
 #
 # Max. number of QuickAssist devices, which can be detected and attached
 #
diff --git a/drivers/common/qat/Makefile b/drivers/common/qat/Makefile
index 02e83f9..6ec0bd3 100644
--- a/drivers/common/qat/Makefile
+++ b/drivers/common/qat/Makefile
@@ -3,33 +3,28 @@
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
-# library name
-LIB = librte_pmd_qat.a
-
-# library version
-LIBABIVER := 1
-
-# build flags
-CFLAGS += $(WERROR_FLAGS)
-CFLAGS += -O3
-
 # build directories
 QAT_CRYPTO_DIR := $(RTE_SDK)/drivers/crypto/qat
-VPATH=$(QAT_CRYPTO_DIR)
+QAT_COMPRESS_DIR := $(RTE_SDK)/drivers/compress/qat
+VPATH=$(QAT_CRYPTO_DIR):$(QAT_COMPRESS_DIR)
 
 # external library include paths
 CFLAGS += -I$(SRCDIR)/qat_adf
 CFLAGS += -I$(SRCDIR)
 CFLAGS += -I$(QAT_CRYPTO_DIR)
+CFLAGS += -I$(QAT_COMPRESS_DIR)
 
-# library common source files
-SRCS-y += qat_device.c
-SRCS-y += qat_common.c
-SRCS-y += qat_logs.c
-SRCS-y += qat_qp.c
+
+ifeq ($(CONFIG_RTE_LIBRTE_COMPRESSDEV),y)
+   CFLAGS += -DALLOW_EXPERIMENTAL_API
+   LDLIBS += -lrte_compressdev
+   SRCS-y += qat_comp.c
+   SRCS-y += qat_comp_pmd.c
+endif
 
 # library symmetric crypto source files
 ifeq ($(CONFIG_RTE_LIBRTE_CRYPTODEV),y)
+ifeq ($(CONFIG_RTE_LIBRTE_PMD_QAT_SYM),y)
LDLIBS += -lrte_cryptodev
LDLIBS += -lcrypto
CFLAGS += -DBUILD_QAT_SYM
@@ -37,6 +32,23 @@ ifeq ($(CONFIG_RTE_LIBRTE_CRYPTODEV),y)
SRCS-y += qat_sym_session.c
SRCS-y += qat_sym_pmd.c
 endif
+endif
+
+
+# library name
+LIB = librte_pmd_qat.a
+
+# library version
+LIBABIVER := 1
+# build flags
+CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -O3
+
+# library common source files
+SRCS-y += qat_device.c
+SRCS-y += qat_common.c
+SRCS-y += qat_logs.c
+SRCS-y += qat_qp.c
 
 LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool
 LDLIBS += -lrte_pci -lrte_bus_pci
@@ -47,4 +59,5 @@ SYMLINK-y-include +=
 # versioning export map
 EXPORT_MAP := ../../crypto/qat/rte_pmd_qat_version.map
 
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
new file mode 100644
index 000..caa1158
--- /dev/null
+++ b/drivers/compress/qat/qat_comp.c
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include "qat_comp.h"
diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h
new file mode 100644
index 000..89c475e
--- /dev/null
+++ b/drivers/compress/qat/qat_comp.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2018 Intel Corporation
+ */
+
+#ifndef _QAT_COMP_H_
+#define _QAT_COMP_H_
+
+#ifdef RTE_LIBRTE_COMPRESSDEV
+
+#include 
+#include 
+
+#endif
+#endif
diff --git a/drivers/compress/qat/qat_comp_pmd.c 
b/drivers/compress/qat/qat_comp_pmd.c
new file mode 100644
index 000..fb035d1
--- /dev/null
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2018 Intel Corporation
+ */
+
+#include "qat_comp_pmd.h"
diff --git a/drivers/compress/qat/qat_comp_pmd.h 
b/drivers/compress/qat/qat_comp_pmd.h
new file mode 100644
index 000..9b5b543
--- /dev/null
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *

[dpdk-dev] [PATCH v6 03/16] compress/qat: add meson build

2018-07-12 Thread Fiona Trahe
Add meson build files.

Signed-off-by: Tomasz Jozwiak 
Signed-off-by: Fiona Trahe 
---
 drivers/common/qat/Makefile  |  2 +-
 drivers/compress/meson.build |  2 +-
 drivers/compress/qat/meson.build | 18 ++
 drivers/compress/qat/rte_pmd_qat_version.map |  3 +++
 drivers/crypto/qat/meson.build   | 10 ++
 drivers/crypto/qat/rte_pmd_qat_version.map   |  3 ---
 6 files changed, 25 insertions(+), 13 deletions(-)
 create mode 100644 drivers/compress/qat/meson.build
 create mode 100644 drivers/compress/qat/rte_pmd_qat_version.map
 delete mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map

diff --git a/drivers/common/qat/Makefile b/drivers/common/qat/Makefile
index 6ec0bd3..e23f727 100644
--- a/drivers/common/qat/Makefile
+++ b/drivers/common/qat/Makefile
@@ -57,7 +57,7 @@ LDLIBS += -lrte_pci -lrte_bus_pci
 SYMLINK-y-include +=
 
 # versioning export map
-EXPORT_MAP := ../../crypto/qat/rte_pmd_qat_version.map
+EXPORT_MAP := ../../compress/qat/rte_pmd_qat_version.map
 
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/compress/meson.build b/drivers/compress/meson.build
index fb136e1..2352ad5 100644
--- a/drivers/compress/meson.build
+++ b/drivers/compress/meson.build
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Intel Corporation
 
-drivers = ['isal']
+drivers = ['isal', 'qat']
 
 std_deps = ['compressdev'] # compressdev pulls in all other needed deps
 config_flag_fmt = 'RTE_LIBRTE_@0@_PMD'
diff --git a/drivers/compress/qat/meson.build b/drivers/compress/qat/meson.build
new file mode 100644
index 000..9d15076
--- /dev/null
+++ b/drivers/compress/qat/meson.build
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2017-2018 Intel Corporation
+
+
+# Add our sources files to the list
+allow_experimental_apis = true
+qat_sources += files('qat_comp_pmd.c',
+'qat_comp.c')
+qat_includes += include_directories('.')
+qat_deps += 'compressdev'
+qat_ext_deps += dep
+
+# build the whole driver
+sources += qat_sources
+cflags += qat_cflags
+deps += qat_deps
+ext_deps += qat_ext_deps
+includes += qat_includes
diff --git a/drivers/compress/qat/rte_pmd_qat_version.map 
b/drivers/compress/qat/rte_pmd_qat_version.map
new file mode 100644
index 000..ad6e191
--- /dev/null
+++ b/drivers/compress/qat/rte_pmd_qat_version.map
@@ -0,0 +1,3 @@
+DPDK_18.08 {
+   local: *;
+};
diff --git a/drivers/crypto/qat/meson.build b/drivers/crypto/qat/meson.build
index 2873637..d7cff68 100644
--- a/drivers/crypto/qat/meson.build
+++ b/drivers/crypto/qat/meson.build
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017-2018 Intel Corporation
 
+# this does not build the QAT driver, instead that is done in the compression
+# driver which comes later. Here we just add our sources files to the list
 build = false
 dep = dependency('libcrypto', required: false)
 if dep.found()
@@ -13,12 +15,4 @@ if dep.found()
qat_ext_deps += dep
pkgconfig_extra_libs += '-lcrypto'
qat_cflags += '-DBUILD_QAT_SYM'
-
-   # build the whole driver
-   sources += qat_sources
-   cflags += qat_cflags
-   deps += qat_deps
-   ext_deps += qat_ext_deps
-   includes += qat_includes
-   build = true
 endif
diff --git a/drivers/crypto/qat/rte_pmd_qat_version.map 
b/drivers/crypto/qat/rte_pmd_qat_version.map
deleted file mode 100644
index bbaf1c8..000
--- a/drivers/crypto/qat/rte_pmd_qat_version.map
+++ /dev/null
@@ -1,3 +0,0 @@
-DPDK_2.2 {
-   local: *;
-};
\ No newline at end of file
-- 
2.7.4



[dpdk-dev] [PATCH v6 01/16] common/qat: updated firmware headers

2018-07-12 Thread Fiona Trahe
Updated to the latest firmware header files for QuickAssist devices.
Includes updates for symmetric crypto, PKE and Compression services.

Signed-off-by: Fiona Trahe 
---
 drivers/common/qat/qat_adf/icp_qat_fw.h  |  69 +++-
 drivers/common/qat/qat_adf/icp_qat_fw_comp.h | 482 +++
 drivers/common/qat/qat_adf/icp_qat_hw.h  | 130 +++-
 3 files changed, 654 insertions(+), 27 deletions(-)
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_fw_comp.h

diff --git a/drivers/common/qat/qat_adf/icp_qat_fw.h 
b/drivers/common/qat/qat_adf/icp_qat_fw.h
index ae39b7f..8f7cb37 100644
--- a/drivers/common/qat/qat_adf/icp_qat_fw.h
+++ b/drivers/common/qat/qat_adf/icp_qat_fw.h
@@ -117,6 +117,10 @@ struct icp_qat_fw_comn_resp {
 #define ICP_QAT_FW_COMN_VALID_FLAG_BITPOS 7
 #define ICP_QAT_FW_COMN_VALID_FLAG_MASK 0x1
 #define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK 0x7F
+#define ICP_QAT_FW_COMN_CNV_FLAG_BITPOS 6
+#define ICP_QAT_FW_COMN_CNV_FLAG_MASK 0x1
+#define ICP_QAT_FW_COMN_CNVNR_FLAG_BITPOS 5
+#define ICP_QAT_FW_COMN_CNVNR_FLAG_MASK 0x1
 
 #define ICP_QAT_FW_COMN_OV_SRV_TYPE_GET(icp_qat_fw_comn_req_hdr_t) \
icp_qat_fw_comn_req_hdr_t.service_type
@@ -133,6 +137,16 @@ struct icp_qat_fw_comn_resp {
 #define ICP_QAT_FW_COMN_HDR_VALID_FLAG_GET(hdr_t) \
ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_t.hdr_flags)
 
+#define ICP_QAT_FW_COMN_HDR_CNVNR_FLAG_GET(hdr_flags) \
+   QAT_FIELD_GET(hdr_flags, \
+   ICP_QAT_FW_COMN_CNVNR_FLAG_BITPOS, \
+   ICP_QAT_FW_COMN_CNVNR_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_CNV_FLAG_GET(hdr_flags) \
+   QAT_FIELD_GET(hdr_flags, \
+   ICP_QAT_FW_COMN_CNV_FLAG_BITPOS, \
+   ICP_QAT_FW_COMN_CNV_FLAG_MASK)
+
 #define ICP_QAT_FW_COMN_HDR_VALID_FLAG_SET(hdr_t, val) \
ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val)
 
@@ -204,29 +218,44 @@ struct icp_qat_fw_comn_resp {
& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)); }
 
+#define ICP_QAT_FW_COMN_NEXT_ID_SET_2(next_curr_id, val)   
\
+   do {   \
+   (next_curr_id) =   \
+   (((next_curr_id) & ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+(((val) << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) &  \
+ ICP_QAT_FW_COMN_NEXT_ID_MASK))   \
+   } while (0)
+
+#define ICP_QAT_FW_COMN_CURR_ID_SET_2(next_curr_id, val)   
\
+   do {   \
+   (next_curr_id) =   \
+   (((next_curr_id) & ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+((val) & ICP_QAT_FW_COMN_CURR_ID_MASK))   \
+   } while (0)
+
 #define QAT_COMN_RESP_CRYPTO_STATUS_BITPOS 7
 #define QAT_COMN_RESP_CRYPTO_STATUS_MASK 0x1
+#define QAT_COMN_RESP_PKE_STATUS_BITPOS 6
+#define QAT_COMN_RESP_PKE_STATUS_MASK 0x1
 #define QAT_COMN_RESP_CMP_STATUS_BITPOS 5
 #define QAT_COMN_RESP_CMP_STATUS_MASK 0x1
 #define QAT_COMN_RESP_XLAT_STATUS_BITPOS 4
 #define QAT_COMN_RESP_XLAT_STATUS_MASK 0x1
 #define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS 3
 #define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK 0x1
-
-#define ICP_QAT_FW_COMN_RESP_STATUS_BUILD(crypto, comp, xlat, eolb) \
-   crypto) & QAT_COMN_RESP_CRYPTO_STATUS_MASK) << \
-   QAT_COMN_RESP_CRYPTO_STATUS_BITPOS) | \
-   (((comp) & QAT_COMN_RESP_CMP_STATUS_MASK) << \
-   QAT_COMN_RESP_CMP_STATUS_BITPOS) | \
-   (((xlat) & QAT_COMN_RESP_XLAT_STATUS_MASK) << \
-   QAT_COMN_RESP_XLAT_STATUS_BITPOS) | \
-   (((eolb) & QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK) << \
-   QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS))
+#define QAT_COMN_RESP_UNSUPPORTED_REQUEST_BITPOS 2
+#define QAT_COMN_RESP_UNSUPPORTED_REQUEST_MASK 0x1
+#define QAT_COMN_RESP_XLT_WA_APPLIED_BITPOS 0
+#define QAT_COMN_RESP_XLT_WA_APPLIED_MASK 0x1
 
 #define ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(status) \
QAT_FIELD_GET(status, QAT_COMN_RESP_CRYPTO_STATUS_BITPOS, \
QAT_COMN_RESP_CRYPTO_STATUS_MASK)
 
+#define ICP_QAT_FW_COMN_RESP_PKE_STAT_GET(status) \
+   QAT_FIELD_GET(status, QAT_COMN_RESP_PKE_STATUS_BITPOS, \
+   QAT_COMN_RESP_PKE_STATUS_MASK)
+
 #define ICP_QAT_FW_COMN_RESP_CMP_STAT_GET(status) \
QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_STATUS_BITPOS, \
QAT_COMN_RESP_CMP_STATUS_MASK)
@@ -235,10 +264,18 @@ struct icp_qat_fw_comn_resp {
QAT_FIELD_GET(status, QAT_COMN_RESP_XLAT_STATUS_BITPOS, \
QAT_COMN_RESP_XLAT_STATUS_MASK)
 
+#define ICP_QAT_FW_COMN_RESP_XLT_WA_APPLIED_GET(status) \
+   QAT_FIELD_GET(status, QAT_COMN_RESP_XLT_WA_APPLIED_BITPOS, \
+   QAT_COMN_RESP_XLT_WA_APPLIED_MASK)
+
 #define ICP_QAT_FW_COMN_RESP_CMP_END_OF_LAST_BLK_FLAG_GET(status) \
QAT_FIELD_GET(

[dpdk-dev] [PATCH v6 04/16] compress/qat: add xform processing

2018-07-12 Thread Fiona Trahe
Add code to process compressdev rte_comp_xforms, creating
private qat_comp_xforms with prepared firmware message templates.

Signed-off-by: Fiona Trahe 
Signed-off-by: Tomasz Jozwiak 
---
 drivers/compress/qat/qat_comp.c | 239 
 drivers/compress/qat/qat_comp.h |  30 +
 drivers/compress/qat/qat_comp_pmd.h |  16 +++
 3 files changed, 285 insertions(+)

diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index caa1158..cb2005a 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -2,4 +2,243 @@
  * Copyright(c) 2018 Intel Corporation
  */
 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "qat_logs.h"
 #include "qat_comp.h"
+#include "qat_comp_pmd.h"
+
+unsigned int
+qat_comp_xform_size(void)
+{
+   return RTE_ALIGN_CEIL(sizeof(struct qat_comp_xform), 8);
+}
+
+static void qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
+   enum qat_comp_request_type request)
+{
+   if (request == QAT_COMP_REQUEST_FIXED_COMP_STATELESS)
+   header->service_cmd_id = ICP_QAT_FW_COMP_CMD_STATIC;
+   else if (request == QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS)
+   header->service_cmd_id = ICP_QAT_FW_COMP_CMD_DYNAMIC;
+   else if (request == QAT_COMP_REQUEST_DECOMPRESS)
+   header->service_cmd_id = ICP_QAT_FW_COMP_CMD_DECOMPRESS;
+
+   header->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_COMP;
+   header->hdr_flags =
+   ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET);
+
+   header->comn_req_flags = ICP_QAT_FW_COMN_FLAGS_BUILD(
+   QAT_COMN_CD_FLD_TYPE_16BYTE_DATA, QAT_COMN_PTR_TYPE_FLAT);
+}
+
+static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
+   const struct rte_memzone *interm_buff_mz __rte_unused,
+   const struct rte_comp_xform *xform)
+{
+   struct icp_qat_fw_comp_req *comp_req;
+   int comp_level, algo;
+   uint32_t req_par_flags;
+   int direction = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS;
+
+   if (unlikely(qat_xform == NULL)) {
+   QAT_LOG(ERR, "Session was not created for this device");
+   return -EINVAL;
+   }
+
+   if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS) {
+   direction = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS;
+   comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
+   req_par_flags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD(
+   ICP_QAT_FW_COMP_SOP, ICP_QAT_FW_COMP_EOP,
+   ICP_QAT_FW_COMP_BFINAL, ICP_QAT_FW_COMP_NO_CNV,
+   ICP_QAT_FW_COMP_NO_CNV_RECOVERY);
+
+   } else {
+   if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT)
+   comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+   else if (xform->compress.level == 1)
+   comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
+   else if (xform->compress.level == 2)
+   comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_4;
+   else if (xform->compress.level == 3)
+   comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+   else if (xform->compress.level >= 4 &&
+xform->compress.level <= 9)
+   comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_16;
+   else {
+   QAT_LOG(ERR, "compression level not supported");
+   return -EINVAL;
+   }
+   req_par_flags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD(
+   ICP_QAT_FW_COMP_SOP, ICP_QAT_FW_COMP_EOP,
+   ICP_QAT_FW_COMP_BFINAL, ICP_QAT_FW_COMP_CNV,
+   ICP_QAT_FW_COMP_CNV_RECOVERY);
+   }
+
+   switch (xform->compress.algo) {
+   case RTE_COMP_ALGO_DEFLATE:
+   algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE;
+   break;
+   case RTE_COMP_ALGO_LZS:
+   default:
+   /* RTE_COMP_NULL */
+   QAT_LOG(ERR, "compression algorithm not supported");
+   return -EINVAL;
+   }
+
+   comp_req = &qat_xform->qat_comp_req_tmpl;
+
+   /* Initialize header */
+   qat_comp_create_req_hdr(&comp_req->comn_hdr,
+   qat_xform->qat_comp_request_type);
+
+   comp_req->comn_hdr.serv_specif_flags = ICP_QAT_FW_COMP_FLAGS_BUILD(
+   ICP_QAT_FW_COMP_STATELESS_SESSION,
+   ICP_QAT_FW_COMP_NOT_AUTO_SELECT_BEST,
+   ICP_QAT_FW_COMP_NOT_ENH_AUTO_SELECT_BEST,
+   ICP_QAT_FW_COMP_NOT_DISABLE_TYPE0_ENH_AUTO_SELECT_BEST,
+   ICP_QAT_FW_COMP_DISABLE_SECURE_RAM_USED_AS_INTMD_BUF);
+
+   comp_req->cd_pars.sl.comp_slice

[dpdk-dev] [PATCH v6 05/16] compress/qat: create fw request and process response

2018-07-12 Thread Fiona Trahe
Add functions to create the request message to send to
firmware and to process the firmware response.

Signed-off-by: Fiona Trahe 
Signed-off-by: Tomasz Jozwiak 
---
 drivers/compress/qat/qat_comp.c | 101 
 drivers/compress/qat/qat_comp.h |   8 +++
 drivers/compress/qat/qat_comp_pmd.h |   1 +
 3 files changed, 110 insertions(+)

diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index cb2005a..a32d6ef 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -19,6 +19,107 @@
 #include "qat_comp.h"
 #include "qat_comp_pmd.h"
 
+
+int
+qat_comp_build_request(void *in_op, uint8_t *out_msg,
+  void *op_cookie __rte_unused,
+  enum qat_device_gen qat_dev_gen __rte_unused)
+{
+   struct rte_comp_op *op = in_op;
+   struct qat_comp_xform *qat_xform = op->private_xform;
+   const uint8_t *tmpl = (uint8_t *)&qat_xform->qat_comp_req_tmpl;
+   struct icp_qat_fw_comp_req *comp_req =
+   (struct icp_qat_fw_comp_req *)out_msg;
+
+   if (unlikely(op->op_type != RTE_COMP_OP_STATELESS)) {
+   QAT_DP_LOG(ERR, "QAT PMD only supports stateless compression "
+   "operation requests, op (%p) is not a "
+   "stateless operation.", op);
+   return -EINVAL;
+   }
+
+   rte_mov128(out_msg, tmpl);
+   comp_req->comn_mid.opaque_data = (uint64_t)(uintptr_t)op;
+
+   /* common for sgl and flat buffers */
+   comp_req->comp_pars.comp_len = op->src.length;
+   comp_req->comp_pars.out_buffer_sz = rte_pktmbuf_pkt_len(op->m_dst);
+
+   /* sgl */
+   if (op->m_src->next != NULL || op->m_dst->next != NULL) {
+   QAT_DP_LOG(ERR, "QAT PMD doesn't support scatter gather");
+   return -EINVAL;
+
+   } else {
+   ICP_QAT_FW_COMN_PTR_TYPE_SET(comp_req->comn_hdr.comn_req_flags,
+   QAT_COMN_PTR_TYPE_FLAT);
+   comp_req->comn_mid.src_length = rte_pktmbuf_data_len(op->m_src);
+   comp_req->comn_mid.dst_length = rte_pktmbuf_data_len(op->m_dst);
+
+   comp_req->comn_mid.src_data_addr =
+   rte_pktmbuf_mtophys_offset(op->m_src, op->src.offset);
+   comp_req->comn_mid.dest_data_addr =
+   rte_pktmbuf_mtophys_offset(op->m_dst, op->dst.offset);
+   }
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+   QAT_DP_LOG(DEBUG, "Direction: %s",
+   qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS ?
+   "decompression" : "compression");
+   QAT_DP_HEXDUMP_LOG(DEBUG, "qat compression message:", comp_req,
+   sizeof(struct icp_qat_fw_comp_req));
+#endif
+   return 0;
+}
+
+int
+qat_comp_process_response(void **op, uint8_t *resp)
+{
+   struct icp_qat_fw_comp_resp *resp_msg =
+   (struct icp_qat_fw_comp_resp *)resp;
+   struct rte_comp_op *rx_op = (struct rte_comp_op *)(uintptr_t)
+   (resp_msg->opaque_data);
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+   QAT_DP_LOG(DEBUG, "Direction: %s",
+   qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS ?
+   "decompression" : "compression");
+   QAT_DP_HEXDUMP_LOG(DEBUG,  "qat_response:", (uint8_t *)resp_msg,
+   sizeof(struct icp_qat_fw_comp_resp));
+#endif
+
+   if ((ICP_QAT_FW_COMN_RESP_CMP_STAT_GET(resp_msg->comn_resp.comn_status)
+   | ICP_QAT_FW_COMN_RESP_XLAT_STAT_GET(
+   resp_msg->comn_resp.comn_status)) !=
+   ICP_QAT_FW_COMN_STATUS_FLAG_OK) {
+
+   rx_op->status = RTE_COMP_OP_STATUS_ERROR;
+   rx_op->debug_status =
+   *((uint16_t *)(&resp_msg->comn_resp.comn_error));
+   } else {
+   struct qat_comp_xform *qat_xform = rx_op->private_xform;
+   struct icp_qat_fw_resp_comp_pars *comp_resp =
+ (struct icp_qat_fw_resp_comp_pars *)&resp_msg->comp_resp_pars;
+
+   rx_op->status = RTE_COMP_OP_STATUS_SUCCESS;
+   rx_op->consumed = comp_resp->input_byte_counter;
+   rx_op->produced = comp_resp->output_byte_counter;
+
+   if (qat_xform->checksum_type != RTE_COMP_CHECKSUM_NONE) {
+   if (qat_xform->checksum_type == RTE_COMP_CHECKSUM_CRC32)
+   rx_op->output_chksum = comp_resp->curr_crc32;
+   else if (qat_xform->checksum_type ==
+   RTE_COMP_CHECKSUM_ADLER32)
+   rx_op->output_chksum = comp_resp->curr_adler_32;
+   else
+   rx_op->output_chksum = comp_resp->curr_chksum;
+   }
+   }
+   *op = (void *)rx_op;
+
+   return 0;
+}
+
 unsi

[dpdk-dev] [PATCH v6 06/16] compress/qat: check that correct firmware is in use

2018-07-12 Thread Fiona Trahe
Check a bit in the response message to verify that the correct firmware
is in use for compression. If not, return an error.

Signed-off-by: Fiona Trahe 
Signed-off-by: Tomasz Jozwiak 
---
 drivers/compress/qat/qat_comp.c | 16 +++-
 drivers/compress/qat/qat_comp.h |  2 ++
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index a32d6ef..e8019eb 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -2,7 +2,6 @@
  * Copyright(c) 2018 Intel Corporation
  */
 
-
 #include 
 #include 
 #include 
@@ -79,6 +78,8 @@ qat_comp_process_response(void **op, uint8_t *resp)
(struct icp_qat_fw_comp_resp *)resp;
struct rte_comp_op *rx_op = (struct rte_comp_op *)(uintptr_t)
(resp_msg->opaque_data);
+   struct qat_comp_xform *qat_xform = (struct qat_comp_xform *)
+   (rx_op->private_xform);
 
 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
QAT_DP_LOG(DEBUG, "Direction: %s",
@@ -88,6 +89,19 @@ qat_comp_process_response(void **op, uint8_t *resp)
sizeof(struct icp_qat_fw_comp_resp));
 #endif
 
+   if (likely(qat_xform->qat_comp_request_type
+   != QAT_COMP_REQUEST_DECOMPRESS)) {
+   if (unlikely(ICP_QAT_FW_COMN_HDR_CNV_FLAG_GET(
+   resp_msg->comn_resp.hdr_flags)
+   == ICP_QAT_FW_COMP_NO_CNV)) {
+   rx_op->status = RTE_COMP_OP_STATUS_ERROR;
+   rx_op->debug_status = ERR_CODE_QAT_COMP_WRONG_FW;
+   *op = (void *)rx_op;
+   QAT_DP_LOG(ERR, "QAT has wrong firmware");
+   return 0;
+   }
+   }
+
if ((ICP_QAT_FW_COMN_RESP_CMP_STAT_GET(resp_msg->comn_resp.comn_status)
| ICP_QAT_FW_COMN_RESP_XLAT_STAT_GET(
resp_msg->comn_resp.comn_status)) !=
diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h
index 46105b4..937f3c8 100644
--- a/drivers/compress/qat/qat_comp.h
+++ b/drivers/compress/qat/qat_comp.h
@@ -15,6 +15,8 @@
 #include "icp_qat_fw_comp.h"
 #include "icp_qat_fw_la.h"
 
+#define ERR_CODE_QAT_COMP_WRONG_FW -99
+
 enum qat_comp_request_type {
QAT_COMP_REQUEST_FIXED_COMP_STATELESS,
QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS,
-- 
2.7.4



[dpdk-dev] [PATCH v6 11/16] compress/qat: add enqueue/dequeue functions

2018-07-12 Thread Fiona Trahe
Wrap the generic QAT enqueue/dequeue functions with
compressdev enqueue and dequeue functions.

Signed-off-by: Fiona Trahe 
Signed-off-by: Tomasz Jozwiak 
---
 drivers/compress/qat/qat_comp_pmd.c | 14 ++
 drivers/compress/qat/qat_comp_pmd.h |  8 
 2 files changed, 22 insertions(+)

diff --git a/drivers/compress/qat/qat_comp_pmd.c 
b/drivers/compress/qat/qat_comp_pmd.c
index 482ebd1..086b6cf 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -213,3 +213,17 @@ qat_comp_dev_info_get(struct rte_compressdev *dev,
info->capabilities = comp_dev->qat_dev_capabilities;
}
 }
+
+uint16_t
+qat_comp_pmd_enqueue_op_burst(void *qp, struct rte_comp_op **ops,
+   uint16_t nb_ops)
+{
+   return qat_enqueue_op_burst(qp, (void **)ops, nb_ops);
+}
+
+uint16_t
+qat_comp_pmd_dequeue_op_burst(void *qp, struct rte_comp_op **ops,
+ uint16_t nb_ops)
+{
+   return qat_dequeue_op_burst(qp, (void **)ops, nb_ops);
+}
diff --git a/drivers/compress/qat/qat_comp_pmd.h 
b/drivers/compress/qat/qat_comp_pmd.h
index 22576f4..f360c29 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -54,5 +54,13 @@ void
 qat_comp_dev_info_get(struct rte_compressdev *dev,
struct rte_compressdev_info *info);
 
+uint16_t
+qat_comp_pmd_enqueue_op_burst(void *qp, struct rte_comp_op **ops,
+   uint16_t nb_ops);
+
+uint16_t
+qat_comp_pmd_dequeue_op_burst(void *qp, struct rte_comp_op **ops,
+   uint16_t nb_ops);
+
 #endif
 #endif /* _QAT_COMP_PMD_H_ */
-- 
2.7.4



[dpdk-dev] [PATCH v6 15/16] compress/qat: prevent device usage if incorrect firmware

2018-07-12 Thread Fiona Trahe
The previous check only causes an op to fail on dequeue.
This extends it so that, once the first failure is detected, the
application can no longer enqueue ops to the device and also gets an
appropriate error when trying to reconfigure or set up the device.

Signed-off-by: Tomasz Jozwiak 
Signed-off-by: Fiona Trahe 
---
 drivers/compress/qat/qat_comp_pmd.c | 57 -
 1 file changed, 56 insertions(+), 1 deletion(-)

diff --git a/drivers/compress/qat/qat_comp_pmd.c 
b/drivers/compress/qat/qat_comp_pmd.c
index 9bb9897..0a571b3 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -252,6 +252,61 @@ qat_comp_pmd_dequeue_op_burst(void *qp, struct rte_comp_op 
**ops,
 }
 
 
+static uint16_t
+qat_comp_pmd_enq_deq_dummy_op_burst(void *qp __rte_unused,
+   struct rte_comp_op **ops __rte_unused,
+   uint16_t nb_ops __rte_unused)
+{
+   QAT_DP_LOG(ERR, "QAT PMD detected wrong FW version !");
+   return 0;
+}
+
+static struct rte_compressdev_ops compress_qat_dummy_ops = {
+
+   /* Device related operations */
+   .dev_configure  = NULL,
+   .dev_start  = NULL,
+   .dev_stop   = qat_comp_dev_stop,
+   .dev_close  = qat_comp_dev_close,
+   .dev_infos_get  = NULL,
+
+   .stats_get  = NULL,
+   .stats_reset= qat_comp_stats_reset,
+   .queue_pair_setup   = NULL,
+   .queue_pair_release = qat_comp_qp_release,
+
+   /* Compression related operations */
+   .private_xform_create   = NULL,
+   .private_xform_free = qat_comp_private_xform_free
+};
+
+static uint16_t
+qat_comp_pmd_dequeue_frst_op_burst(void *qp, struct rte_comp_op **ops,
+  uint16_t nb_ops)
+{
+   uint16_t ret = qat_dequeue_op_burst(qp, (void **)ops, nb_ops);
+   struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+
+   if (ret) {
+   if ((*ops)->debug_status ==
+   (uint64_t)ERR_CODE_QAT_COMP_WRONG_FW) {
+   tmp_qp->qat_dev->comp_dev->compressdev->enqueue_burst =
+   qat_comp_pmd_enq_deq_dummy_op_burst;
+   tmp_qp->qat_dev->comp_dev->compressdev->dequeue_burst =
+   qat_comp_pmd_enq_deq_dummy_op_burst;
+
+   tmp_qp->qat_dev->comp_dev->compressdev->dev_ops =
+   &compress_qat_dummy_ops;
+   QAT_LOG(ERR, "QAT PMD detected wrong FW version !");
+
+   } else {
+   tmp_qp->qat_dev->comp_dev->compressdev->dequeue_burst =
+   qat_comp_pmd_dequeue_op_burst;
+   }
+   }
+   return ret;
+}
+
 static struct rte_compressdev_ops compress_qat_ops = {
 
/* Device related operations */
@@ -302,7 +357,7 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev)
compressdev->dev_ops = &compress_qat_ops;
 
compressdev->enqueue_burst = qat_comp_pmd_enqueue_op_burst;
-   compressdev->dequeue_burst = qat_comp_pmd_dequeue_op_burst;
+   compressdev->dequeue_burst = qat_comp_pmd_dequeue_frst_op_burst;
 
compressdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED;
 
-- 
2.7.4



[dpdk-dev] [PATCH v6 09/16] compress/qat: add fns to configure and clear device

2018-07-12 Thread Fiona Trahe
Add functions to configure and clear the QAT comp device,
including the creation and freeing of the xform pool
and the freeing of queue pairs.

Signed-off-by: Fiona Trahe 
Signed-off-by: Tomasz Jozwiak 
---
 drivers/compress/qat/qat_comp_pmd.c | 95 +
 drivers/compress/qat/qat_comp_pmd.h |  7 +++
 2 files changed, 102 insertions(+)

diff --git a/drivers/compress/qat/qat_comp_pmd.c 
b/drivers/compress/qat/qat_comp_pmd.c
index 5ae6caf..beab6e3 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -99,3 +99,98 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t 
qp_id,
 
return ret;
 }
+
+static struct rte_mempool *
+qat_comp_create_xform_pool(struct qat_comp_dev_private *comp_dev,
+ uint32_t num_elements)
+{
+   char xform_pool_name[RTE_MEMPOOL_NAMESIZE];
+   struct rte_mempool *mp;
+
+   snprintf(xform_pool_name, RTE_MEMPOOL_NAMESIZE,
+   "%s_xforms", comp_dev->qat_dev->name);
+
+   QAT_LOG(DEBUG, "xformpool: %s", xform_pool_name);
+   mp = rte_mempool_lookup(xform_pool_name);
+
+   if (mp != NULL) {
+   QAT_LOG(DEBUG, "xformpool already created");
+   if (mp->size != num_elements) {
+   QAT_LOG(DEBUG, "xformpool wrong size - delete it");
+   rte_mempool_free(mp);
+   mp = NULL;
+   comp_dev->xformpool = NULL;
+   }
+   }
+
+   if (mp == NULL)
+   mp = rte_mempool_create(xform_pool_name,
+   num_elements,
+   qat_comp_xform_size(), 0, 0,
+   NULL, NULL, NULL, NULL, rte_socket_id(),
+   0);
+   if (mp == NULL) {
+   QAT_LOG(ERR, "Err creating mempool %s w %d elements of size %d",
+   xform_pool_name, num_elements, qat_comp_xform_size());
+   return NULL;
+   }
+
+   return mp;
+}
+
+static void
+_qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev)
+{
+   /* Free private_xform pool */
+   if (comp_dev->xformpool) {
+   /* Free internal mempool for private xforms */
+   rte_mempool_free(comp_dev->xformpool);
+   comp_dev->xformpool = NULL;
+   }
+}
+
+int
+qat_comp_dev_config(struct rte_compressdev *dev,
+   struct rte_compressdev_config *config)
+{
+   struct qat_comp_dev_private *comp_dev = dev->data->dev_private;
+   int ret = 0;
+
+   if (config->max_nb_streams != 0) {
+   QAT_LOG(ERR,
+   "QAT device does not support STATEFUL so max_nb_streams must be 0");
+   return -EINVAL;
+   }
+
+   comp_dev->xformpool = qat_comp_create_xform_pool(comp_dev,
+   config->max_nb_priv_xforms);
+   if (comp_dev->xformpool == NULL) {
+
+   ret = -ENOMEM;
+   goto error_out;
+   }
+   return 0;
+
+error_out:
+   _qat_comp_dev_config_clear(comp_dev);
+   return ret;
+}
+
+
+int
+qat_comp_dev_close(struct rte_compressdev *dev)
+{
+   int i;
+   int ret = 0;
+   struct qat_comp_dev_private *comp_dev = dev->data->dev_private;
+
+   for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+   ret = qat_comp_qp_release(dev, i);
+   if (ret < 0)
+   return ret;
+   }
+
+   _qat_comp_dev_config_clear(comp_dev);
+
+   return ret;
+}
diff --git a/drivers/compress/qat/qat_comp_pmd.h 
b/drivers/compress/qat/qat_comp_pmd.h
index 5a4bc31..b10a66f 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -41,5 +41,12 @@ int
 qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
  uint32_t max_inflight_ops, int socket_id);
 
+int
+qat_comp_dev_config(struct rte_compressdev *dev,
+   struct rte_compressdev_config *config);
+
+int
+qat_comp_dev_close(struct rte_compressdev *dev);
+
 #endif
 #endif /* _QAT_COMP_PMD_H_ */
-- 
2.7.4



[dpdk-dev] [PATCH v6 14/16] compress/qat: add fns to create and destroy the PMD

2018-07-12 Thread Fiona Trahe
Now that all the device operations are available,
add the functions to create and destroy the PMD.
Called on probe and remove of the QAT PCI device, these
register the device with the compressdev API
and plug in all the device functionality.

Signed-off-by: Fiona Trahe 
Signed-off-by: Tomasz Jozwiak 
---
 drivers/common/qat/qat_device.h |  4 ++
 drivers/common/qat/qat_qp.c | 11 -
 drivers/common/qat/qat_qp.h |  5 ++
 drivers/compress/qat/qat_comp_pmd.c | 98 +++--
 drivers/compress/qat/qat_comp_pmd.h | 11 ++---
 5 files changed, 117 insertions(+), 12 deletions(-)

diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 0cb370c..9599fc5 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -25,6 +25,8 @@
  *  - runtime data
  */
 struct qat_sym_dev_private;
+struct qat_comp_dev_private;
+
 struct qat_pci_device {
 
/* Data used by all services */
@@ -55,6 +57,8 @@ struct qat_pci_device {
 */
 
/* Data relating to compression service */
+   struct qat_comp_dev_private *comp_dev;
+   /**< link back to compressdev private data */
 
/* Data relating to asymmetric crypto service */
 
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 32c1759..7ca7a45 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -15,6 +15,7 @@
 #include "qat_device.h"
 #include "qat_qp.h"
 #include "qat_sym.h"
+#include "qat_comp.h"
 #include "adf_transport_access_macros.h"
 
 
@@ -606,8 +607,8 @@ qat_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops)
 
if (tmp_qp->service_type == QAT_SERVICE_SYMMETRIC)
qat_sym_process_response(ops, resp_msg);
-   /* add qat_asym_process_response here */
-   /* add qat_comp_process_response here */
+   else if (tmp_qp->service_type == QAT_SERVICE_COMPRESSION)
+   qat_comp_process_response(ops, resp_msg);
 
head = adf_modulo(head + rx_queue->msg_size,
  rx_queue->modulo_mask);
@@ -633,3 +634,9 @@ qat_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops)
}
return resp_counter;
 }
+
+__attribute__((weak)) int
+qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused)
+{
+   return  0;
+}
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index 59db945..69f8a61 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -103,4 +103,9 @@ qat_qp_setup(struct qat_pci_device *qat_dev,
 int
 qat_qps_per_service(const struct qat_qp_hw_data *qp_hw_data,
enum qat_service_type service);
+
+/* Needed for weak function*/
+int
+qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused);
+
 #endif /* _QAT_QP_H_ */
diff --git a/drivers/compress/qat/qat_comp_pmd.c 
b/drivers/compress/qat/qat_comp_pmd.c
index 013ff6e..9bb9897 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -5,6 +5,18 @@
 #include "qat_comp.h"
 #include "qat_comp_pmd.h"
 
+static const struct rte_compressdev_capabilities qat_comp_gen_capabilities[] = 
{
+   {/* COMPRESSION - deflate */
+.algo = RTE_COMP_ALGO_DEFLATE,
+.comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
+   RTE_COMP_FF_CRC32_CHECKSUM |
+   RTE_COMP_FF_ADLER32_CHECKSUM |
+   RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
+   RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+   RTE_COMP_FF_HUFFMAN_FIXED,
+.window_size = {.min = 15, .max = 15, .increment = 0} },
+   {RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+
 static void
 qat_comp_stats_get(struct rte_compressdev *dev,
struct rte_compressdev_stats *stats)
@@ -225,14 +237,14 @@ qat_comp_dev_info_get(struct rte_compressdev *dev,
}
 }
 
-uint16_t
+static uint16_t
 qat_comp_pmd_enqueue_op_burst(void *qp, struct rte_comp_op **ops,
uint16_t nb_ops)
 {
return qat_enqueue_op_burst(qp, (void **)ops, nb_ops);
 }
 
-uint16_t
+static uint16_t
 qat_comp_pmd_dequeue_op_burst(void *qp, struct rte_comp_op **ops,
  uint16_t nb_ops)
 {
@@ -240,7 +252,7 @@ qat_comp_pmd_dequeue_op_burst(void *qp, struct rte_comp_op 
**ops,
 }
 
 
-struct rte_compressdev_ops compress_qat_ops = {
+static struct rte_compressdev_ops compress_qat_ops = {
 
/* Device related operations */
.dev_configure  = qat_comp_dev_config,
@@ -258,3 +270,83 @@ struct rte_compressdev_ops compress_qat_ops = {
.private_xform_create   = qat_comp_private_xform_create,
.private_xform_free = qat_comp_private_xform_free
 };
+
+int
+qat_comp_dev_create(struct qat_pci_device *qat_pci_dev)
+{
+   if (qat_pci_dev->qat_dev_gen 

[dpdk-dev] [PATCH v6 12/16] compress/qat: add device start and stop fns

2018-07-12 Thread Fiona Trahe
There are no specific actions needed to start/stop a QAT comp device,
so these are just trivial functions to satisfy the PMD API.

Signed-off-by: Fiona Trahe 
---
 drivers/compress/qat/qat_comp_pmd.c | 11 +++
 drivers/compress/qat/qat_comp_pmd.h |  6 ++
 2 files changed, 17 insertions(+)

diff --git a/drivers/compress/qat/qat_comp_pmd.c 
b/drivers/compress/qat/qat_comp_pmd.c
index 086b6cf..1ab5cf7 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -176,6 +176,17 @@ qat_comp_dev_config(struct rte_compressdev *dev,
return ret;
 }
 
+int
+qat_comp_dev_start(struct rte_compressdev *dev __rte_unused)
+{
+   return 0;
+}
+
+void
+qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused)
+{
+
+}
 
 int
 qat_comp_dev_close(struct rte_compressdev *dev)
diff --git a/drivers/compress/qat/qat_comp_pmd.h 
b/drivers/compress/qat/qat_comp_pmd.h
index f360c29..22cbefb 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -62,5 +62,11 @@ uint16_t
 qat_comp_pmd_dequeue_op_burst(void *qp, struct rte_comp_op **ops,
uint16_t nb_ops);
 
+int
+qat_comp_dev_start(struct rte_compressdev *dev __rte_unused);
+
+void
+qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused);
+
 #endif
 #endif /* _QAT_COMP_PMD_H_ */
-- 
2.7.4



[dpdk-dev] [PATCH v6 13/16] compress/qat: create and populate the ops structure

2018-07-12 Thread Fiona Trahe
Create an ops structure and populate it with the
QAT-specific functions.

Signed-off-by: Fiona Trahe 
Signed-off-by: Tomasz Jozwiak 
---
 drivers/compress/qat/qat_comp_pmd.c | 38 -
 drivers/compress/qat/qat_comp_pmd.h | 30 -
 2 files changed, 29 insertions(+), 39 deletions(-)

diff --git a/drivers/compress/qat/qat_comp_pmd.c 
b/drivers/compress/qat/qat_comp_pmd.c
index 1ab5cf7..013ff6e 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -5,7 +5,7 @@
 #include "qat_comp.h"
 #include "qat_comp_pmd.h"
 
-void
+static void
 qat_comp_stats_get(struct rte_compressdev *dev,
struct rte_compressdev_stats *stats)
 {
@@ -25,7 +25,7 @@ qat_comp_stats_get(struct rte_compressdev *dev,
stats->dequeue_err_count = qat_stats.dequeue_err_count;
 }
 
-void
+static void
 qat_comp_stats_reset(struct rte_compressdev *dev)
 {
struct qat_comp_dev_private *qat_priv;
@@ -40,7 +40,7 @@ qat_comp_stats_reset(struct rte_compressdev *dev)
 
 }
 
-int
+static int
 qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 {
struct qat_comp_dev_private *qat_private = dev->data->dev_private;
@@ -55,7 +55,7 @@ qat_comp_qp_release(struct rte_compressdev *dev, uint16_t 
queue_pair_id)
&(dev->data->queue_pairs[queue_pair_id]));
 }
 
-int
+static int
 qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
  uint32_t max_inflight_ops, int socket_id)
 {
@@ -149,7 +149,7 @@ _qat_comp_dev_config_clear(struct qat_comp_dev_private 
*comp_dev)
}
 }
 
-int
+static int
 qat_comp_dev_config(struct rte_compressdev *dev,
struct rte_compressdev_config *config)
 {
@@ -176,19 +176,19 @@ qat_comp_dev_config(struct rte_compressdev *dev,
return ret;
 }
 
-int
+static int
 qat_comp_dev_start(struct rte_compressdev *dev __rte_unused)
 {
return 0;
 }
 
-void
+static void
 qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused)
 {
 
 }
 
-int
+static int
 qat_comp_dev_close(struct rte_compressdev *dev)
 {
int i;
@@ -207,7 +207,7 @@ qat_comp_dev_close(struct rte_compressdev *dev)
 }
 
 
-void
+static void
 qat_comp_dev_info_get(struct rte_compressdev *dev,
struct rte_compressdev_info *info)
 {
@@ -238,3 +238,23 @@ qat_comp_pmd_dequeue_op_burst(void *qp, struct rte_comp_op 
**ops,
 {
return qat_dequeue_op_burst(qp, (void **)ops, nb_ops);
 }
+
+
+struct rte_compressdev_ops compress_qat_ops = {
+
+   /* Device related operations */
+   .dev_configure  = qat_comp_dev_config,
+   .dev_start  = qat_comp_dev_start,
+   .dev_stop   = qat_comp_dev_stop,
+   .dev_close  = qat_comp_dev_close,
+   .dev_infos_get  = qat_comp_dev_info_get,
+
+   .stats_get  = qat_comp_stats_get,
+   .stats_reset= qat_comp_stats_reset,
+   .queue_pair_setup   = qat_comp_qp_setup,
+   .queue_pair_release = qat_comp_qp_release,
+
+   /* Compression related operations */
+   .private_xform_create   = qat_comp_private_xform_create,
+   .private_xform_free = qat_comp_private_xform_free
+};
diff --git a/drivers/compress/qat/qat_comp_pmd.h 
b/drivers/compress/qat/qat_comp_pmd.h
index 22cbefb..7ba1b8d 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -30,30 +30,6 @@ struct qat_comp_dev_private {
 
 };
 
-void
-qat_comp_stats_reset(struct rte_compressdev *dev);
-
-void
-qat_comp_stats_get(struct rte_compressdev *dev,
-   struct rte_compressdev_stats *stats);
-int
-qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id);
-
-int
-qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
- uint32_t max_inflight_ops, int socket_id);
-
-int
-qat_comp_dev_config(struct rte_compressdev *dev,
-   struct rte_compressdev_config *config);
-
-int
-qat_comp_dev_close(struct rte_compressdev *dev);
-
-void
-qat_comp_dev_info_get(struct rte_compressdev *dev,
-   struct rte_compressdev_info *info);
-
 uint16_t
 qat_comp_pmd_enqueue_op_burst(void *qp, struct rte_comp_op **ops,
uint16_t nb_ops);
@@ -62,11 +38,5 @@ uint16_t
 qat_comp_pmd_dequeue_op_burst(void *qp, struct rte_comp_op **ops,
uint16_t nb_ops);
 
-int
-qat_comp_dev_start(struct rte_compressdev *dev __rte_unused);
-
-void
-qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused);
-
 #endif
 #endif /* _QAT_COMP_PMD_H_ */
-- 
2.7.4
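
[Editor's note] For readers new to the pattern: compress_qat_ops is the usual driver-ops table, a file-scope static struct of function pointers that the framework dispatches through. A minimal self-contained sketch of the idea follows; all names and types below are illustrative stand-ins, not DPDK API:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative mini ops table; only the pattern mirrors compress_qat_ops. */
struct demo_dev_ops {
	int  (*dev_start)(void *dev);
	void (*dev_stop)(void *dev);
	int  (*dev_close)(void *dev);
};

/* Trivial start/stop, like qat_comp_dev_start/stop in this series. */
static int demo_start(void *dev) { (void)dev; return 0; }
static void demo_stop(void *dev) { (void)dev; }
static int demo_close(void *dev) { (void)dev; return 0; }

/* static keeps the table and helpers out of the driver's exported
 * symbol surface; only a registration entry point needs external
 * linkage. */
static struct demo_dev_ops demo_ops = {
	.dev_start = demo_start,
	.dev_stop  = demo_stop,
	.dev_close = demo_close,
};

/* The framework calls through the table rather than calling the
 * driver functions directly. */
int demo_dispatch_start(void *dev)
{
	return demo_ops.dev_start != NULL ? demo_ops.dev_start(dev) : -1;
}
```

This is also why the patch flips the individual qat_comp_* functions to static: once every entry point is reached via the table, nothing else needs external linkage.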



[dpdk-dev] [PATCH v6 07/16] compress/qat: add stats functions

2018-07-12 Thread Fiona Trahe
Add functions to get and clear compression queue-pair statistics.

Signed-off-by: Fiona Trahe 
Signed-off-by: Tomasz Jozwiak 
---
 drivers/compress/qat/qat_comp_pmd.c | 35 +++
 drivers/compress/qat/qat_comp_pmd.h |  7 +++
 2 files changed, 42 insertions(+)

diff --git a/drivers/compress/qat/qat_comp_pmd.c 
b/drivers/compress/qat/qat_comp_pmd.c
index fb035d1..6feffb7 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -3,3 +3,38 @@
  */
 
 #include "qat_comp_pmd.h"
+
+void
+qat_comp_stats_get(struct rte_compressdev *dev,
+   struct rte_compressdev_stats *stats)
+{
+   struct qat_common_stats qat_stats = {0};
+   struct qat_comp_dev_private *qat_priv;
+
+   if (stats == NULL || dev == NULL) {
+   QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
+   return;
+   }
+   qat_priv = dev->data->dev_private;
+
+   qat_stats_get(qat_priv->qat_dev, &qat_stats, QAT_SERVICE_COMPRESSION);
+   stats->enqueued_count = qat_stats.enqueued_count;
+   stats->dequeued_count = qat_stats.dequeued_count;
+   stats->enqueue_err_count = qat_stats.enqueue_err_count;
+   stats->dequeue_err_count = qat_stats.dequeue_err_count;
+}
+
+void
+qat_comp_stats_reset(struct rte_compressdev *dev)
+{
+   struct qat_comp_dev_private *qat_priv;
+
+   if (dev == NULL) {
+   QAT_LOG(ERR, "invalid compressdev ptr %p", dev);
+   return;
+   }
+   qat_priv = dev->data->dev_private;
+
+   qat_stats_reset(qat_priv->qat_dev, QAT_SERVICE_COMPRESSION);
+
+}
diff --git a/drivers/compress/qat/qat_comp_pmd.h 
b/drivers/compress/qat/qat_comp_pmd.h
index cd04f11..27d84c8 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -28,5 +28,12 @@ struct qat_comp_dev_private {
 
 };
 
+void
+qat_comp_stats_reset(struct rte_compressdev *dev);
+
+void
+qat_comp_stats_get(struct rte_compressdev *dev,
+   struct rte_compressdev_stats *stats);
+
 #endif
 #endif /* _QAT_COMP_PMD_H_ */
-- 
2.7.4
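
[Editor's note] The get/reset pair above follows a common snapshot-and-copy shape: validate the pointers, read the shared counters once, then copy into the caller's structure. A hedged, self-contained sketch; the types, counter values, and function names here are stand-ins, not the QAT structures:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct demo_stats {
	uint64_t enqueued_count;
	uint64_t dequeued_count;
	uint64_t enqueue_err_count;
	uint64_t dequeue_err_count;
};

/* Stand-in for the device-internal counters that qat_stats_get()
 * would snapshot. */
static struct demo_stats demo_internal = { 10, 9, 1, 0 };

int demo_stats_get(const void *dev, struct demo_stats *stats)
{
	/* Guard both pointers before dereferencing, as the patch does. */
	if (dev == NULL || stats == NULL)
		return -1;
	/* Whole-struct copy here; the patch copies field by field into
	 * the caller-owned rte_compressdev_stats. */
	*stats = demo_internal;
	return 0;
}
```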



[dpdk-dev] [PATCH v6 10/16] compress/qat: add fn to return device info

2018-07-12 Thread Fiona Trahe
Add a capabilities pointer to the internal QAT comp device structure
and a function to return this and other info.

Signed-off-by: Fiona Trahe 
Signed-off-by: Tomasz Jozwiak 
---
 drivers/compress/qat/qat_comp_pmd.c | 19 +++
 drivers/compress/qat/qat_comp_pmd.h |  6 ++
 2 files changed, 25 insertions(+)

diff --git a/drivers/compress/qat/qat_comp_pmd.c 
b/drivers/compress/qat/qat_comp_pmd.c
index beab6e3..482ebd1 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -194,3 +194,22 @@ qat_comp_dev_close(struct rte_compressdev *dev)
 
return ret;
 }
+
+
+void
+qat_comp_dev_info_get(struct rte_compressdev *dev,
+   struct rte_compressdev_info *info)
+{
+   struct qat_comp_dev_private *comp_dev = dev->data->dev_private;
+   const struct qat_qp_hw_data *comp_hw_qps =
+   qat_gen_config[comp_dev->qat_dev->qat_dev_gen]
+ .qp_hw_data[QAT_SERVICE_COMPRESSION];
+
+   if (info != NULL) {
+   info->max_nb_queue_pairs =
+   qat_qps_per_service(comp_hw_qps,
+   QAT_SERVICE_COMPRESSION);
+   info->feature_flags = dev->feature_flags;
+   info->capabilities = comp_dev->qat_dev_capabilities;
+   }
+}
diff --git a/drivers/compress/qat/qat_comp_pmd.h 
b/drivers/compress/qat/qat_comp_pmd.h
index b10a66f..22576f4 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -21,6 +21,8 @@ struct qat_comp_dev_private {
/**< The qat pci device hosting the service */
struct rte_compressdev *compressdev;
/**< The pointer to this compression device structure */
+   const struct rte_compressdev_capabilities *qat_dev_capabilities;
+   /* QAT device compression capabilities */
const struct rte_memzone *interm_buff_mz;
/**< The device's memory for intermediate buffers */
struct rte_mempool *xformpool;
@@ -48,5 +50,9 @@ qat_comp_dev_config(struct rte_compressdev *dev,
 int
 qat_comp_dev_close(struct rte_compressdev *dev);
 
+void
+qat_comp_dev_info_get(struct rte_compressdev *dev,
+   struct rte_compressdev_info *info);
+
 #endif
 #endif /* _QAT_COMP_PMD_H_ */
-- 
2.7.4
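
[Editor's note] The info function's only non-trivial step is indexing a per-generation config table by the device's generation before filling the caller's struct, and only when the pointer is non-NULL. A sketch of that lookup shape, with illustrative names and numbers rather than the real DPDK/QAT tables:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

enum demo_gen { DEMO_GEN1, DEMO_GEN2, DEMO_NUM_GENS };

struct demo_gen_config {
	uint16_t max_qps;	/* stand-in for the per-service qp data */
};

/* Stand-in for qat_gen_config[], indexed by device generation. */
static const struct demo_gen_config demo_gen_config[DEMO_NUM_GENS] = {
	[DEMO_GEN1] = { .max_qps = 2 },
	[DEMO_GEN2] = { .max_qps = 6 },
};

struct demo_info {
	uint16_t max_nb_queue_pairs;
};

void demo_info_get(enum demo_gen gen, struct demo_info *info)
{
	/* The patch guards info the same way and leaves it untouched
	 * when NULL. */
	if (info != NULL)
		info->max_nb_queue_pairs = demo_gen_config[gen].max_qps;
}
```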



[dpdk-dev] [PATCH v6 08/16] compress/qat: setup queue-pairs for compression service

2018-07-12 Thread Fiona Trahe
Setup and clear queue-pairs for handling compression
requests and responses.

Signed-off-by: Fiona Trahe 
Signed-off-by: Tomasz Jozwiak 
---
 drivers/compress/qat/qat_comp.h |  2 ++
 drivers/compress/qat/qat_comp_pmd.c | 61 +
 drivers/compress/qat/qat_comp_pmd.h |  6 
 3 files changed, 69 insertions(+)

diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h
index 937f3c8..9e6861b 100644
--- a/drivers/compress/qat/qat_comp.h
+++ b/drivers/compress/qat/qat_comp.h
@@ -24,6 +24,8 @@ enum qat_comp_request_type {
REQ_COMP_END
 };
 
+struct qat_comp_op_cookie {
+};
 
 struct qat_comp_xform {
struct icp_qat_fw_comp_req qat_comp_req_tmpl;
diff --git a/drivers/compress/qat/qat_comp_pmd.c 
b/drivers/compress/qat/qat_comp_pmd.c
index 6feffb7..5ae6caf 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -2,6 +2,7 @@
  * Copyright(c) 2015-2018 Intel Corporation
  */
 
+#include "qat_comp.h"
 #include "qat_comp_pmd.h"
 
 void
@@ -38,3 +39,63 @@ qat_comp_stats_reset(struct rte_compressdev *dev)
qat_stats_reset(qat_priv->qat_dev, QAT_SERVICE_COMPRESSION);
 
 }
+
+int
+qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
+{
+   struct qat_comp_dev_private *qat_private = dev->data->dev_private;
+
+   QAT_LOG(DEBUG, "Release comp qp %u on device %d",
+   queue_pair_id, dev->data->dev_id);
+
+   qat_private->qat_dev->qps_in_use[QAT_SERVICE_COMPRESSION][queue_pair_id]
+   = NULL;
+
+   return qat_qp_release((struct qat_qp **)
+   &(dev->data->queue_pairs[queue_pair_id]));
+}
+
+int
+qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
+ uint32_t max_inflight_ops, int socket_id)
+{
+   int ret = 0;
+   struct qat_qp_config qat_qp_conf;
+
+   struct qat_qp **qp_addr =
+   (struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
+   struct qat_comp_dev_private *qat_private = dev->data->dev_private;
+   const struct qat_qp_hw_data *comp_hw_qps =
+   qat_gen_config[qat_private->qat_dev->qat_dev_gen]
+ .qp_hw_data[QAT_SERVICE_COMPRESSION];
+   const struct qat_qp_hw_data *qp_hw_data = comp_hw_qps + qp_id;
+
+   /* If qp is already in use free ring memory and qp metadata. */
+   if (*qp_addr != NULL) {
+   ret = qat_comp_qp_release(dev, qp_id);
+   if (ret < 0)
+   return ret;
+   }
+   if (qp_id >= qat_qps_per_service(comp_hw_qps,
+QAT_SERVICE_COMPRESSION)) {
+   QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+   return -EINVAL;
+   }
+
+   qat_qp_conf.hw = qp_hw_data;
+   qat_qp_conf.build_request = qat_comp_build_request;
+   qat_qp_conf.cookie_size = sizeof(struct qat_comp_op_cookie);
+   qat_qp_conf.nb_descriptors = max_inflight_ops;
+   qat_qp_conf.socket_id = socket_id;
+   qat_qp_conf.service_str = "comp";
+
+   ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
+   if (ret != 0)
+   return ret;
+
+   /* store a link to the qp in the qat_pci_device */
+   qat_private->qat_dev->qps_in_use[QAT_SERVICE_COMPRESSION][qp_id]
+   = *qp_addr;
+
+   return ret;
+}
diff --git a/drivers/compress/qat/qat_comp_pmd.h 
b/drivers/compress/qat/qat_comp_pmd.h
index 27d84c8..5a4bc31 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -34,6 +34,12 @@ qat_comp_stats_reset(struct rte_compressdev *dev);
 void
 qat_comp_stats_get(struct rte_compressdev *dev,
struct rte_compressdev_stats *stats);
+int
+qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id);
+
+int
+qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
+ uint32_t max_inflight_ops, int socket_id);
 
 #endif
 #endif /* _QAT_COMP_PMD_H_ */
-- 
2.7.4
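
[Editor's note] The qp setup flow above is: if the slot is already occupied, release the old ring memory and qp metadata first; reject out-of-range ids; configure the qp; then record a link to it in the shared qps_in_use[] array. A compressed sketch of that flow (bounds check hoisted to the top; the names, table size, and error value are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define DEMO_MAX_QPS 4

/* Stand-in for qps_in_use[QAT_SERVICE_COMPRESSION][] */
static void *demo_qps[DEMO_MAX_QPS];

static int demo_qp_release(uint16_t id)
{
	/* Real code would free ring memory and qp metadata here. */
	demo_qps[id] = NULL;
	return 0;
}

int demo_qp_setup(uint16_t id, void *qp)
{
	if (id >= DEMO_MAX_QPS)	/* like the qat_qps_per_service() check */
		return -22;	/* -EINVAL */
	/* If the slot is in use, free the old qp first, as the patch does. */
	if (demo_qps[id] != NULL) {
		int ret = demo_qp_release(id);
		if (ret < 0)
			return ret;
	}
	demo_qps[id] = qp;	/* store a link to the qp for the device */
	return 0;
}
```

Recording the qp in the shared per-device array is what lets release (and dev_close) find and tear down every live qp later.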


