[dpdk-dev] Compilation failure when disabling Cavium OCTEONTX network PMD driver

2019-07-07 Thread Daniel Pharos
Hi,

If I disable the Cavium OCTEONTX network PMD driver in the config file (so: 
"CONFIG_RTE_LIBRTE_OCTEONTX_PMD=n"), compilation of DPDK 19.05 fails with:
/home/ubuntu/Downloads/dpdk-19.05/build/lib/librte_pmd_octeontx_ssovf.a(ssovf_worker.o): In function `ssows_flush_events':
ssovf_worker.c:(.text+0x102): undefined reference to `rte_octeontx_pchan_map'
/home/ubuntu/Downloads/dpdk-19.05/build/lib/librte_pmd_octeontx_ssovf.a(ssovf_worker.o): In function `ssows_deq':
ssovf_worker.c:(.text.hot+0x138): undefined reference to `rte_octeontx_pchan_map'
/home/ubuntu/Downloads/dpdk-19.05/build/lib/librte_pmd_octeontx_ssovf.a(ssovf_worker.o): In function `ssows_deq_timeout':
ssovf_worker.c:(.text.hot+0x301): undefined reference to `rte_octeontx_pchan_map'
/home/ubuntu/Downloads/dpdk-19.05/build/lib/librte_pmd_octeontx_ssovf.a(ssovf_worker.o): In function `ssows_deq_burst':
ssovf_worker.c:(.text.hot+0x458): undefined reference to `rte_octeontx_pchan_map'
/home/ubuntu/Downloads/dpdk-19.05/build/lib/librte_pmd_octeontx_ssovf.a(ssovf_worker.o): In function `ssows_deq_timeout_burst':
ssovf_worker.c:(.text.hot+0x61c): undefined reference to `rte_octeontx_pchan_map'
collect2: error: ld returned 1 exit status
/home/ubuntu/Downloads/dpdk-19.05/mk/rte.app.mk:404: recipe for target 'test' failed
make[3]: *** [test] Error 1
/home/ubuntu/Downloads/dpdk-19.05/mk/rte.subdir.mk:35: recipe for target 'test' failed
make[2]: *** [test] Error 2
/home/ubuntu/Downloads/dpdk-19.05/mk/rte.sdkbuild.mk:46: recipe for target 'app' failed
make[1]: *** [app] Error 2
/home/ubuntu/Downloads/dpdk-19.05/mk/rte.sdkroot.mk:98: recipe for target 'all' failed
make: *** [all] Error 2

This is on x86_64, Ubuntu 18.04 (up-to-date), using the DPDK 19.05 tarball, 
with only the mentioned config-line changed.


Kind regards,
DanielPharos


Re: [dpdk-dev] [PATCH v6 1/7] bbdev: renaming non-generic LTE specific structure

2019-07-07 Thread Thomas Monjalon
03/07/2019 17:24, Nicolas Chautru:
> Renaming of the enums and structure which were LTE specific to
> allow for extension and support for 5GNR operations.
> 
> Signed-off-by: Nicolas Chautru 
> Acked-by: Amr Mokhtar 
> ---
> - struct rte_bbdev_op_dec_cb_params *cb = NULL;
> - struct rte_bbdev_op_dec_tb_params *tb = NULL;
> + struct rte_bbdev_op_dec_turbo_cb_params *cb = NULL;
> + struct rte_bbdev_op_dec_turbo_tb_params *tb = NULL;

These structs are renamed only in the next patch.
I will fix.




Re: [dpdk-dev] [PATCH v7 2/3] docs/guides: updating turbo_sw building steps

2019-07-07 Thread Thomas Monjalon
19/06/2019 19:48, Nicolas Chautru:
> The documentation is clarified to point to steps on building the
> SDK libraries which are now publicly available:
> https://software.intel.com/en-us/articles/flexran-lte-and-5g-nr-fec-software-development-kit-modules
> 
> Signed-off-by: Nicolas Chautru 
> Acked-by: Kamil Chalupnik 
> ---
> -*Intel FlexRAN Software Release Package -18-09* to download or directly 
> through
> -this `link `_.
> +These libraries are available through this link `link 
> `_.
[...]
> + ICC is available with a free community license `link 
> `_.

These links are wrongly formatted. I will fix.




[dpdk-dev] [Bug 307] ACL (librte_acl) field of type RANGE and size U32 is not working properly

2019-07-07 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=307

Bug ID: 307
   Summary: ACL (librte_acl) field of type RANGE and size U32 is
not working properly
   Product: DPDK
   Version: 18.11
  Hardware: All
OS: All
Status: CONFIRMED
  Severity: normal
  Priority: Normal
 Component: other
  Assignee: dev@dpdk.org
  Reporter: i...@cgstowernetworks.com
  Target Milestone: ---

Created attachment 47
  --> https://bugs.dpdk.org/attachment.cgi?id=47&action=edit
acl-test for single field match - can be compiled like any dpdk/example

Hi,

Such an ACL field doesn't seem to work properly:
~~~
{
.type = RTE_ACL_FIELD_TYPE_RANGE,
.size = sizeof(uint32_t),
...
}
~~~

I found the same complaint here:
https://mails.dpdk.org/archives/users/2017-June/002096.html (by Doohwan Lee),
with no resolution.

Also, all RTE_ACL_FIELD_TYPE_RANGE fields in the DPDK ACL tests/examples use
uint16_t and never uint32_t.

Attached is a small acl-test app.
It sets a single ACL field (though there are a few fields in the acl_fields
array, all the rest are don't-care) and matches a data argument (no need for
traffic/packets).

usage:
~~~
./acl-test [EAL options] -- [app options]
app options:
--size=16|32
[--type=RANGE --min= --max=]
[--type=BITMASK --value= --bitmask=]
--data=
example:
./acl-test --no-huge -c 1 -- --size=32 --type=RANGE --min=100 --max=200
--data=150
~~~

Output example for RANGE U16
~~~
cgs@ubuntu:~/workspace/KAZ/output/build/arpeggio/host/dpdk/app$ ./acl-test
--no-huge -c 1 -- --size=16 --type=RANGE --min=1 --max=500 --data=250
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /run/user/1000/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: Started without hugepages support, physical addresses not available
EAL: PCI device :02:01.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device :02:06.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
RULE 1: | 00/00 | 0001/01F4   /   | /   |
/   | 0x1-0x1-0x1 
acl context @0x7f9dede13b40
  socket_id=0
  alg=3
  max_rules=256
  rule_size=96
  num_rules=1
  num_categories=1
  num_tries=1
250 is in range 1-500, should MATCH
ACL MATCH test PASSED
~~~

Bad output example for RANGE U32 (same test just size change)
~~~
./acl-test --no-huge -c 1 -- --size=32 --type=RANGE --min=1 --max=500
--data=250
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /run/user/1000/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: Started without hugepages support, physical addresses not available
EAL: PCI device :02:01.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device :02:06.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
RULE 1: | 00/00 | /   /   | 0001/01F4   |
/   | 0x1-0x1-0x1 
acl context @0x7ff3d08a3b40
  socket_id=0
  alg=3
  max_rules=256
  rule_size=96
  num_rules=1
  num_categories=1
  num_tries=1
250 is in range 1-500, should MATCH
ACL NO-MATCH test FAILED!
~~~

Thanx,
- Ido

-- 
You are receiving this mail because:
You are the assignee for the bug.

Re: [dpdk-dev] [PATCH] net/mvneta: remove resources when port is closed

2019-07-07 Thread Jerin Jacob Kollanukkaran
> -Original Message-
> From: lir...@marvell.com 
> Sent: Wednesday, July 3, 2019 1:28 PM
> To: Jerin Jacob Kollanukkaran 
> Cc: dev@dpdk.org; Liron Himi ; Yuri Chipchev
> 
> Subject: [PATCH] net/mvneta: remove resources when port is closed
> 
> From: yuric 
> 
> Since 18.11, it is suggested that the driver should release all its private
> resources in the dev_close routine. So all resources previously released in
> the remove routine are now released in the dev_close routine, and the
> dev_close routine will be called in the driver remove routine in order to
> support removing a device without closing its ports.
> 
> Above behavior changes are supported by setting
> RTE_ETH_DEV_CLOSE_REMOVE flag during probe stage.
> 
> Signed-off-by: yuric 
> Reviewed-by: Liron Himi 

Applied to dpdk-next-net-mrvl/master. Thanks
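A minimal sketch of the close/remove pattern described in the commit message
above, assuming a generic vdev-based PMD and the 19.x driver headers; the
driver and function names are illustrative, not the mvneta code:

~~~
#include <errno.h>
#include <rte_common.h>
#include <rte_bus_vdev.h>
#include <rte_ethdev_driver.h>

/* Probe: opt in to "close releases everything" behaviour. */
static int
my_pmd_probe(struct rte_vdev_device *vdev)
{
    struct rte_eth_dev *dev = rte_eth_dev_allocate(rte_vdev_device_name(vdev));

    if (dev == NULL)
        return -ENOMEM;
    dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;
    /* ... queue setup, dev_ops assignment, etc. ... */
    return 0;
}

/* dev_close: release all private resources here. */
static void
my_pmd_close(struct rte_eth_dev *dev)
{
    /* free queues, unregister interrupt handlers, free private data ... */
    RTE_SET_USED(dev);
}

/* Remove: call dev_close so a device can be removed even if the
 * application never closed its ports. */
static int
my_pmd_remove(struct rte_vdev_device *vdev)
{
    struct rte_eth_dev *dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));

    if (dev == NULL)
        return 0;
    my_pmd_close(dev);
    rte_eth_dev_release_port(dev);
    return 0;
}
~~~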


Re: [dpdk-dev] [PATCH 1/3] eventdev: fix to set positive rte_errno

2019-07-07 Thread Jerin Jacob Kollanukkaran
> -Original Message-
> From: Andrew Rybchenko 
> Sent: Thursday, July 4, 2019 3:34 PM
> To: Jerin Jacob Kollanukkaran ; Nikhil Rao
> ; Erik Gabriel Carrillo 
> Cc: dev@dpdk.org; Dilshod Urazov ;
> sta...@dpdk.org
> Subject: [EXT] [PATCH 1/3] eventdev: fix to set positive rte_errno
> From: Dilshod Urazov 
> 
> Fixes: c9bf83947e2e ("eventdev: add eth Tx adapter APIs")
> Fixes: 47d05b292820 ("eventdev: add timer adapter common code")
> Fixes: 6750b21bd6af ("eventdev: add default software timer adapter")
> Fixes: c75f7897ea35 ("eventdev: set error code in port link/unlink functions")
> Fixes: 7d1acc9dde93 ("eventdev: introduce helper function for enqueue burst")
> Fixes: 406aed4e0dd9 ("eventdev: add errno-style return values")
> Fixes: c64e1b7b20d2 ("eventdev: add new software event timer adapter")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Dilshod Urazov 
> Signed-off-by: Andrew Rybchenko 

Acked-by: Jerin Jacob 

Series applied to dpdk-next-eventdev/master. Thanks.
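For context, a hedged illustration of the convention this series enforces (not
the patch itself): fast-path eventdev calls report failure through a positive
rte_errno while returning the number of objects actually processed.

~~~
#include <errno.h>
#include <stdint.h>
#include <rte_errno.h>

/* Illustrative enqueue-like helper that fails its argument check.
 * The value stored in rte_errno must be the positive errno (EINVAL),
 * not the negated form (-EINVAL). */
static uint16_t
enqueue_checked(void *events, uint16_t nb_events)
{
    if (events == NULL) {
        rte_errno = EINVAL;   /* positive errno, per DPDK convention */
        return 0;             /* nothing enqueued */
    }
    /* ... the real enqueue would happen here ... */
    return nb_events;
}
~~~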


Re: [dpdk-dev] [PATCH] app/test-eventdev: optimize producer routine

2019-07-07 Thread Jerin Jacob Kollanukkaran
> -Original Message-
> From: pbhagavat...@marvell.com 
> Sent: Wednesday, July 3, 2019 11:22 AM
> To: Jerin Jacob Kollanukkaran 
> Cc: dev@dpdk.org; Pavan Nikhilesh Bhagavatula 
> Subject: [dpdk-dev][PATCH] app/test-eventdev: optimize producer routine
> 
> From: Pavan Nikhilesh 
> 
> When using synthetic and timer event producer reduce the calls made to
> mempool library by using get_bulk() instead of get().
> 
> Signed-off-by: Pavan Nikhilesh 

Acked-by: Jerin Jacob 

Applied to dpdk-next-eventdev/master. Thanks.
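A hedged sketch of the optimization described above (illustrative, not the
actual test-eventdev code): one bulk mempool dequeue replaces per-event gets.

~~~
#include <rte_mempool.h>

#define BURST_SIZE 32

/* Fetch a whole burst of event objects with a single mempool operation
 * instead of BURST_SIZE individual rte_mempool_get() calls. */
static inline int
producer_get_burst(struct rte_mempool *mp, void *objs[BURST_SIZE])
{
    /* Before: for (i = 0; i < BURST_SIZE; i++) rte_mempool_get(mp, &objs[i]); */
    return rte_mempool_get_bulk(mp, objs, BURST_SIZE); /* 0 on success */
}
~~~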


Re: [dpdk-dev] Compilation failure when disabling Cavium OCTEONTX network PMD driver

2019-07-07 Thread Jerin Jacob Kollanukkaran
> -Original Message-
> From: dev  On Behalf Of Daniel Pharos 
> Sent: Saturday, July 6, 2019 8:41 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] Compilation failure when disabling Cavium OCTEONTX
> network PMD driver
> 
> Hi,
> 
> If I disable the Cavium OCTEONTX network PMD driver in the config file (so:
> "CONFIG_RTE_LIBRTE_OCTEONTX_PMD=n"), compilation of DPDK 19.05 fails
> with:

CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF depends on
CONFIG_RTE_LIBRTE_OCTEONTX_PMD.

Please disable CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF as well.
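For reference, a sketch of the relevant build-config lines with both options
disabled together (assuming the legacy make-based build of 19.05, e.g. in
config/common_base or the build directory's .config):

~~~
CONFIG_RTE_LIBRTE_OCTEONTX_PMD=n
CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF=n
~~~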


Re: [dpdk-dev] [PATCH v4 5/5] event/octeontx2: add Tx adadpter support

2019-07-07 Thread Jerin Jacob Kollanukkaran



> -Original Message-
> From: pbhagavat...@marvell.com 
> Sent: Thursday, July 4, 2019 7:50 AM
> To: Jerin Jacob Kollanukkaran ; Pavan Nikhilesh
> Bhagavatula 
> Cc: dev@dpdk.org; Nithin Kumar Dabilpuram 
> Subject: [dpdk-dev] [PATCH v4 5/5] event/octeontx2: add Tx adadpter support


Fixed the "adadpter" typo in the title.

Series applied to dpdk-next-eventdev/master. Thanks.


> 
> From: Pavan Nikhilesh 
> 
> Add event eth Tx adapter support to octeontx2 SSO.
> 


Re: [dpdk-dev] [PATCH 1/2] common/cpt: remove redundant bit swaps

2019-07-07 Thread Anoob Joseph
Hi Akhil, Pablo

This patch is good to go if you don't have any comments.

Thanks,
Anoob

> -Original Message-
> From: Anoob Joseph 
> Sent: Saturday, July 6, 2019 6:54 PM
> To: Akhil Goyal ; Pablo de Lara
> 
> Cc: Anoob Joseph ; Jerin Jacob Kollanukkaran
> ; Narayana Prasad Raju Athreya
> ; dev@dpdk.org
> Subject: [PATCH 1/2] common/cpt: remove redundant bit swaps
> 
> The bit swaps can be removed by re-arranging the structure.
> 
> Signed-off-by: Anoob Joseph 
> ---
>  drivers/common/cpt/cpt_hw_types.h |   7 +++
>  drivers/common/cpt/cpt_ucode.h| 116 
> --
>  2 files changed, 44 insertions(+), 79 deletions(-)
> 
> diff --git a/drivers/common/cpt/cpt_hw_types.h
> b/drivers/common/cpt/cpt_hw_types.h
> index 7be1d12..e2b127d 100644
> --- a/drivers/common/cpt/cpt_hw_types.h
> +++ b/drivers/common/cpt/cpt_hw_types.h
> @@ -30,10 +30,17 @@
>  typedef union {
>   uint64_t u64;
>   struct {
> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
>   uint16_t opcode;
>   uint16_t param1;
>   uint16_t param2;
>   uint16_t dlen;
> +#else
> + uint16_t dlen;
> + uint16_t param2;
> + uint16_t param1;
> + uint16_t opcode;
> +#endif
>   } s;
>  } vq_cmd_word0_t;
> 
> diff --git a/drivers/common/cpt/cpt_ucode.h
> b/drivers/common/cpt/cpt_ucode.h index e02b34a..c589b58 100644
> --- a/drivers/common/cpt/cpt_ucode.h
> +++ b/drivers/common/cpt/cpt_ucode.h
> @@ -520,16 +520,15 @@ cpt_digest_gen_prep(uint32_t flags,
> 
>   /*GP op header */
>   vq_cmd_w0.u64 = 0;
> - vq_cmd_w0.s.param2 = rte_cpu_to_be_16(((uint16_t)hash_type << 8));
> + vq_cmd_w0.s.param2 = ((uint16_t)hash_type << 8);
>   if (ctx->hmac) {
>   opcode.s.major = CPT_MAJOR_OP_HMAC | CPT_DMA_MODE;
> - vq_cmd_w0.s.param1 = rte_cpu_to_be_16(key_len);
> - vq_cmd_w0.s.dlen =
> - rte_cpu_to_be_16((data_len + ROUNDUP8(key_len)));
> + vq_cmd_w0.s.param1 = key_len;
> + vq_cmd_w0.s.dlen = data_len + ROUNDUP8(key_len);
>   } else {
>   opcode.s.major = CPT_MAJOR_OP_HASH | CPT_DMA_MODE;
>   vq_cmd_w0.s.param1 = 0;
> - vq_cmd_w0.s.dlen = rte_cpu_to_be_16(data_len);
> + vq_cmd_w0.s.dlen = data_len;
>   }
> 
>   opcode.s.minor = 0;
> @@ -540,10 +539,10 @@ cpt_digest_gen_prep(uint32_t flags,
>   /* Minor op is passthrough */
>   opcode.s.minor = 0x03;
>   /* Send out completion code only */
> - vq_cmd_w0.s.param2 = rte_cpu_to_be_16(0x1);
> + vq_cmd_w0.s.param2 = 0x1;
>   }
> 
> - vq_cmd_w0.s.opcode = rte_cpu_to_be_16(opcode.flags);
> + vq_cmd_w0.s.opcode = opcode.flags;
> 
>   /* DPTR has SG list */
>   in_buffer = m_vaddr;
> @@ -622,7 +621,7 @@ cpt_digest_gen_prep(uint32_t flags,
>   size = g_size_bytes + s_size_bytes + SG_LIST_HDR_SIZE;
> 
>   /* This is DPTR len incase of SG mode */
> - vq_cmd_w0.s.dlen = rte_cpu_to_be_16(size);
> + vq_cmd_w0.s.dlen = size;
> 
>   m_vaddr = (uint8_t *)m_vaddr + size;
>   m_dma += size;
> @@ -635,11 +634,6 @@ cpt_digest_gen_prep(uint32_t flags,
> 
>   req->ist.ei1 = dptr_dma;
>   req->ist.ei2 = rptr_dma;
> - /* First 16-bit swap then 64-bit swap */
> - /* TODO: HACK: Reverse the vq_cmd and cpt_req bit field definitions
> -  * to eliminate all the swapping
> -  */
> - vq_cmd_w0.u64 = rte_cpu_to_be_64(vq_cmd_w0.u64);
> 
>   /* vq command w3 */
>   vq_cmd_w3.u64 = 0;
> @@ -798,8 +792,8 @@ cpt_enc_hmac_prep(uint32_t flags,
> 
>   /* GP op header */
>   vq_cmd_w0.u64 = 0;
> - vq_cmd_w0.s.param1 = rte_cpu_to_be_16(encr_data_len);
> - vq_cmd_w0.s.param2 = rte_cpu_to_be_16(auth_data_len);
> + vq_cmd_w0.s.param1 = encr_data_len;
> + vq_cmd_w0.s.param2 = auth_data_len;
>   /*
>* In 83XX since we have a limitation of
>* IV & Offset control word not part of instruction @@ -826,9 +820,9
> @@ cpt_enc_hmac_prep(uint32_t flags,
>   req->alternate_caddr = (uint64_t *)((uint8_t *)dm_vaddr
>   + outputlen - iv_len);
> 
> - vq_cmd_w0.s.dlen = rte_cpu_to_be_16(inputlen +
> OFF_CTRL_LEN);
> + vq_cmd_w0.s.dlen = inputlen + OFF_CTRL_LEN;
> 
> - vq_cmd_w0.s.opcode = rte_cpu_to_be_16(opcode.flags);
> + vq_cmd_w0.s.opcode = opcode.flags;
> 
>   if (likely(iv_len)) {
>   uint64_t *dest = (uint64_t *)((uint8_t *)offset_vaddr
> @@ -861,7 +855,7 @@ cpt_enc_hmac_prep(uint32_t flags,
> 
>   opcode.s.major |= CPT_DMA_MODE;
> 
> - vq_cmd_w0.s.opcode = rte_cpu_to_be_16(opcode.flags);
> + vq_cmd_w0.s.opcode = opcode.flags;
> 
>   if (likely(iv_len)) {
>   uint64_t *dest = (uint64_t *)((uint8_t *)offset_

Re: [dpdk-dev] [PATCH 2/2] common/cpt: remove redundant code in datapath

2019-07-07 Thread Anoob Joseph
Hi Akhil, Pablo

This patch is good to go if you don't have any comments.

Thanks,
Anoob

> -Original Message-
> From: Anoob Joseph 
> Sent: Saturday, July 6, 2019 6:54 PM
> To: Akhil Goyal ; Pablo de Lara
> 
> Cc: Anoob Joseph ; Jerin Jacob Kollanukkaran
> ; Narayana Prasad Raju Athreya
> ; dev@dpdk.org
> Subject: [PATCH 2/2] common/cpt: remove redundant code in datapath
> 
> Removing redundant checks and unused local variables from datapath.
> 
> Signed-off-by: Anoob Joseph 
> ---
>  drivers/common/cpt/cpt_ucode.h | 133 
> ++---
>  1 file changed, 33 insertions(+), 100 deletions(-)
> 
> diff --git a/drivers/common/cpt/cpt_ucode.h
> b/drivers/common/cpt/cpt_ucode.h index c589b58..e197e4e 100644
> --- a/drivers/common/cpt/cpt_ucode.h
> +++ b/drivers/common/cpt/cpt_ucode.h
> @@ -89,8 +89,7 @@ cpt_fc_ciph_validate_key_aes(uint16_t key_len)  }
> 
>  static __rte_always_inline int
> -cpt_fc_ciph_validate_key(cipher_type_t type, struct cpt_ctx *cpt_ctx,
> - uint16_t key_len)
> +cpt_fc_ciph_set_type(cipher_type_t type, struct cpt_ctx *ctx, uint16_t
> +key_len)
>  {
>   int fc_type = 0;
>   switch (type) {
> @@ -125,7 +124,7 @@ cpt_fc_ciph_validate_key(cipher_type_t type, struct
> cpt_ctx *cpt_ctx,
>   if (unlikely(key_len != 16))
>   return -1;
>   /* No support for AEAD yet */
> - if (unlikely(cpt_ctx->hash_type))
> + if (unlikely(ctx->hash_type))
>   return -1;
>   fc_type = ZUC_SNOW3G;
>   break;
> @@ -134,14 +133,16 @@ cpt_fc_ciph_validate_key(cipher_type_t type, struct
> cpt_ctx *cpt_ctx,
>   if (unlikely(key_len != 16))
>   return -1;
>   /* No support for AEAD yet */
> - if (unlikely(cpt_ctx->hash_type))
> + if (unlikely(ctx->hash_type))
>   return -1;
>   fc_type = KASUMI;
>   break;
>   default:
>   return -1;
>   }
> - return fc_type;
> +
> + ctx->fc_type = fc_type;
> + return 0;
>  }
> 
>  static __rte_always_inline void
> @@ -181,7 +182,6 @@ cpt_fc_ciph_set_key_snow3g_uea2(struct cpt_ctx
> *cpt_ctx, uint8_t *key,
>   cpt_ctx->snow3g = 1;
>   gen_key_snow3g(key, keyx);
>   memcpy(cpt_ctx->zs_ctx.ci_key, keyx, key_len);
> - cpt_ctx->fc_type = ZUC_SNOW3G;
>   cpt_ctx->zsk_flags = 0;
>  }
> 
> @@ -192,7 +192,6 @@ cpt_fc_ciph_set_key_zuc_eea3(struct cpt_ctx *cpt_ctx,
> uint8_t *key,
>   cpt_ctx->snow3g = 0;
>   memcpy(cpt_ctx->zs_ctx.ci_key, key, key_len);
>   memcpy(cpt_ctx->zs_ctx.zuc_const, zuc_d, 32);
> - cpt_ctx->fc_type = ZUC_SNOW3G;
>   cpt_ctx->zsk_flags = 0;
>  }
> 
> @@ -203,7 +202,6 @@ cpt_fc_ciph_set_key_kasumi_f8_ecb(struct cpt_ctx
> *cpt_ctx, uint8_t *key,
>   cpt_ctx->k_ecb = 1;
>   memcpy(cpt_ctx->k_ctx.ci_key, key, key_len);
>   cpt_ctx->zsk_flags = 0;
> - cpt_ctx->fc_type = KASUMI;
>  }
> 
>  static __rte_always_inline void
> @@ -212,7 +210,6 @@ cpt_fc_ciph_set_key_kasumi_f8_cbc(struct cpt_ctx
> *cpt_ctx, uint8_t *key,  {
>   memcpy(cpt_ctx->k_ctx.ci_key, key, key_len);
>   cpt_ctx->zsk_flags = 0;
> - cpt_ctx->fc_type = KASUMI;
>  }
> 
>  static __rte_always_inline int
> @@ -222,15 +219,13 @@ cpt_fc_ciph_set_key(void *ctx, cipher_type_t type,
> uint8_t *key,
>   struct cpt_ctx *cpt_ctx = ctx;
>   mc_fc_context_t *fctx = &cpt_ctx->fctx;
>   uint64_t *ctrl_flags = NULL;
> - int fc_type;
> + int ret;
> 
> - /* Validate key before proceeding */
> - fc_type = cpt_fc_ciph_validate_key(type, cpt_ctx, key_len);
> - if (unlikely(fc_type == -1))
> + ret = cpt_fc_ciph_set_type(type, cpt_ctx, key_len);
> + if (unlikely(ret))
>   return -1;
> 
> - if (fc_type == FC_GEN) {
> - cpt_ctx->fc_type = FC_GEN;
> + if (cpt_ctx->fc_type == FC_GEN) {
>   ctrl_flags = (uint64_t *)&(fctx->enc.enc_ctrl.flags);
>   *ctrl_flags = rte_be_to_cpu_64(*ctrl_flags);
>   /*
> @@ -467,7 +462,6 @@ cpt_digest_gen_prep(uint32_t flags,  {
>   struct cpt_request_info *req;
>   uint32_t size, i;
> - int32_t m_size;
>   uint16_t data_len, mac_len, key_len;
>   auth_type_t hash_type;
>   buf_ptr_t *meta_p;
> @@ -488,7 +482,6 @@ cpt_digest_gen_prep(uint32_t flags,
> 
>   m_vaddr = meta_p->vaddr;
>   m_dma = meta_p->dma_addr;
> - m_size = meta_p->size;
> 
>   /*
>* Save initial space that followed app data for completion code & @@ -
> 504,14 +497,12 @@ cpt_digest_gen_prep(uint32_t flags,
> 
>   m_vaddr = (uint8_t *)m_vaddr + size;
>   m_dma += size;
> - m_size -= size;
> 
>   req = m_vaddr;
> 
>   size = sizeof(struct cpt_request_info);
>   m_vaddr = (uint8_t *)m_vaddr + size;
>   m_dma += size;
> - m_size -= size;
> 
>   hash_type = ctx->hash_type;
>   mac_len

Re: [dpdk-dev] [PATCH v1 1/1] mempool/octeontx2: fix npa pool range errors

2019-07-07 Thread Jerin Jacob Kollanukkaran



> -Original Message-
> From: vattun...@marvell.com 
> Sent: Friday, July 5, 2019 4:04 PM
> To: dev@dpdk.org
> Cc: tho...@monjalon.net; Jerin Jacob Kollanukkaran ;
> Vamsi Krishna Attunuru 
> Subject: [PATCH v1 1/1] mempool/octeontx2: fix npa pool range errors
> 
> From: Vamsi Attunuru 
> 
> Patch fixes npa pool range errors observed while creating mempool.
> During mempool creation, octeontx2 mempool driver populates pool range
> fields before enqueueing the buffers. If any enqueue or dequeue operation
> reaches npa hardware prior to the range field's HW context update, those ops
> result in npa range errors. Patch adds a routine to read back HW context and
> verify if range fields are updated or not.
> 
> Signed-off-by: Vamsi Attunuru 


1) Please fix check-git-log.sh

$ ./devtools/check-git-log.sh
Missing 'Fixes' tag:
mempool/octeontx2: fix npa pool range errors

2) Please mention in the git commit log that this issue happens when the
mempool objects are from different mempools.




Re: [dpdk-dev] [PATCH v2] examples/client_server_mp: check port ownership

2019-07-07 Thread Stephen Hemminger
On Sun, 7 Jul 2019 05:44:55 +
Matan Azrad  wrote:

> > +   for (count = 0; pm != 0; pm >>= 1, ++count) {
> > +   struct rte_eth_dev_owner owner;
> > +
> > +   if ((pm & 0x1) == 0)
> > +   continue;
> > +
> > +   if (count >= max_ports) {
> > +   printf("WARNING: requested port %u not present -
> > ignoring\n",
> > +   count);
> > +   continue;
> > +   }
> > +   if (rte_eth_dev_owner_get(count, &owner) < 0) {
> > +   printf("ERROR: can not find port %u owner\n",
> > count);  
> 
> What if some entity takes ownership later?
> If you want the app to be ownership aware:
>   if you are sure that you want this port to be owned by this application,
> you need to take ownership of it.
> else:
> the port is hidden by RTE_ETH_FOREACH_DEV if it is owned by some entity;
> see how it was done in the testpmd function port_id_is_invalid().

There are no mysterious entities in DPDK.
The only thing that can happen later is hotplug, and that will not change state
of existing port.

This model is used for all applications.  The application does not
take ownership, only device drivers do.

The whole portmask as command-line parameter is a bad user experience
now, but that is a different problem.


Re: [dpdk-dev] [PATCH v1 1/1] mempool/octeontx2: fix npa pool range errors

2019-07-07 Thread Thomas Monjalon
07/07/2019 16:21, Jerin Jacob Kollanukkaran:
> 
> > -Original Message-
> > From: vattun...@marvell.com 
> > Sent: Friday, July 5, 2019 4:04 PM
> > To: dev@dpdk.org
> > Cc: tho...@monjalon.net; Jerin Jacob Kollanukkaran ;
> > Vamsi Krishna Attunuru 
> > Subject: [PATCH v1 1/1] mempool/octeontx2: fix npa pool range errors
> > 
> > From: Vamsi Attunuru 
> > 
> > Patch fixes npa pool range errors observed while creating mempool.
> > During mempool creation, octeontx2 mempool driver populates pool range
> > fields before enqueueing the buffers. If any enqueue or dequeue operation
> > reaches npa hardware prior to the range field's HW context update, those ops
> > result in npa range errors. Patch adds a routine to read back HW context and
> > verify if range fields are updated or not.
> > 
> > Signed-off-by: Vamsi Attunuru 
> 
> 
> 1) Please fix check-git-log.sh
> 
> $ ./devtools/check-git-log.sh
> Missing 'Fixes' tag:
> mempool/octeontx2: fix npa pool range errors
> 
> 2) Please mention in the git commit log that this issue happens when the
> mempool objects are from different mempools.

One more comment, the title is supposed to say which behaviour
it is fixing, not the root cause.




Re: [dpdk-dev] [PATCH v2 5/5] mempool/dpaa2: vfio dmamap for user allocated memory

2019-07-07 Thread Thomas Monjalon
Hi, please see several comments about formatting below.

The title should start with a verb.
VFIO and DMA should be uppercase.

27/06/2019 11:33, Hemant Agrawal:
> From: Sachin Saxena 
> 
> Signed-off-by: Sachin Saxena 

There is no description in this patch.

> --- a/drivers/bus/fslmc/fslmc_vfio.c
> +++ b/drivers/bus/fslmc/fslmc_vfio.c
> +__rte_experimental
> +int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size)

The new policy forbids __rte_experimental tag in .c file.
Per coding style, the return type should be on a separate line.





Re: [dpdk-dev] [PATCH 1/3] bus/dpaa: add plug support and rework parse

2019-07-07 Thread Thomas Monjalon
25/06/2019 12:40, Hemant Agrawal:
> From: Shreyansh Jain 
> 
> Parse and find_device have specific function - former is for parsing a
> string passed as argument, whereas the later is for iterating over all
> the devices in the bus and calling a callback/handler. They have been
> corrected with their right operations to support hotplugging/devargs
> plug/unplug calls.
> 
> Support for plug/unplug too has been added.
> 
> Signed-off-by: Shreyansh Jain 
> Acked-by: Hemant Agrawal 

Series applied, thanks





Re: [dpdk-dev] [PATCH] vfio: retry creating sPAPR DMA window

2019-07-07 Thread Thomas Monjalon
05/07/2019 10:15, Burakov, Anatoly:
> On 07-Jun-19 3:28 AM, Takeshi Yoshimura wrote:
> > sPAPR allows only page_shift from VFIO_IOMMU_SPAPR_TCE_GET_INFO ioctl.
> > However, Linux 4.17 or before returns incorrect page_shift for Power9.
> > I added the code for retrying creation of sPAPR DMA window.
> > 
> > Signed-off-by: Takeshi Yoshimura 
> > ---
> 
> This doesn't affect any code outside of sPAPR and looks sane, so
> 
> Acked-by: Anatoly Burakov 

Applied, thanks




Re: [dpdk-dev] [PATCH v3 0/3] MCS queued lock implementation

2019-07-07 Thread Thomas Monjalon
05/07/2019 12:27, Phil Yang:
> Phil Yang (3):
>   eal/mcslock: add mcs queued lock implementation
>   eal/mcslock: use generic msc queued lock on all arch
>   test/mcslock: add mcs queued lock unit test

Applied, thanks





Re: [dpdk-dev] [PATCH] eal: fix spelling

2019-07-07 Thread Thomas Monjalon
04/06/2019 11:21, kka...@marvell.com:
> From: Krzysztof Kanas 
> 
> Fixes: a753e53d517b ("eal: add device event monitor framework")
> Fixes: af75078fece3 ("first public release")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Krzysztof Kanas 

Applied, thanks




Re: [dpdk-dev] [PATCH v5 1/4] net/ipn3ke: add new register address

2019-07-07 Thread Zhang, Qi Z



> -Original Message-
> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Xu, Rosen
> Sent: Tuesday, July 2, 2019 6:00 PM
> To: Pei, Andy ; dev@dpdk.org; Yigit, Ferruh
> ; Zhang, Tianfei 
> Subject: Re: [dpdk-dev] [PATCH v5 1/4] net/ipn3ke: add new register address
> 
> 
> 
> > -Original Message-
> > From: Pei, Andy
> > Sent: Monday, July 01, 2019 18:36
> > To: dev@dpdk.org
> > Cc: Pei, Andy ; Xu, Rosen 
> > Subject: [PATCH v5 1/4] net/ipn3ke: add new register address
> >
> > ipn3ke can work in 10G mode and 25G mode.
> > The two modes have different MAC register addresses for statistics.
> > This patch implements the statistics registers for both 10G mode and 25G mode.
> >
> > Fixes: c01c748e4ae6 ("net/ipn3ke: add new driver")
> > Cc: rosen...@intel.com
> >
> > Signed-off-by: Andy Pei 
> > ---
> 
> Acked-by: Rosen Xu 


Applied to dpdk-next-net-intel.

Thanks
Qi


Re: [dpdk-dev] [PATCH v5 2/4] net/ipn3ke: delete MAC register address mask

2019-07-07 Thread Zhang, Qi Z



> -Original Message-
> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Xu, Rosen
> Sent: Tuesday, July 2, 2019 6:00 PM
> To: Pei, Andy ; dev@dpdk.org; Yigit, Ferruh
> ; Zhang, Tianfei 
> Subject: Re: [dpdk-dev] [PATCH v5 2/4] net/ipn3ke: delete MAC register
> address mask
> 
> 
> 
> > -Original Message-
> > From: Pei, Andy
> > Sent: Monday, July 01, 2019 18:36
> > To: dev@dpdk.org
> > Cc: Pei, Andy ; Xu, Rosen 
> > Subject: [PATCH v5 2/4] net/ipn3ke: delete MAC register address mask
> >
> > The original code is compatible with older devices, whose MAC register
> > addresses are no more than 10 bits. Now we have MAC register addresses
> > longer than 10 bits, so we just delete the mask here.
> >
> > Fixes: c01c748e4ae6 ("net/ipn3ke: add new driver")
> > Cc: rosen...@intel.com
> >
> > Signed-off-by: Andy Pei 
> > ---
> Acked-by: Rosen Xu 

Applied to dpdk-next-net-intel.

Thanks
Qi


Re: [dpdk-dev] [PATCH v5 3/4] net/ipn3ke: clear statistics when init and start dev

2019-07-07 Thread Zhang, Qi Z



> -Original Message-
> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Xu, Rosen
> Sent: Tuesday, July 2, 2019 6:01 PM
> To: Pei, Andy ; dev@dpdk.org; Yigit, Ferruh
> ; Zhang, Tianfei 
> Subject: Re: [dpdk-dev] [PATCH v5 3/4] net/ipn3ke: clear statistics when init
> and start dev
> 
> 
> 
> > -Original Message-
> > From: Pei, Andy
> > Sent: Monday, July 01, 2019 18:36
> > To: dev@dpdk.org
> > Cc: Pei, Andy ; Xu, Rosen 
> > Subject: [PATCH v5 3/4] net/ipn3ke: clear statistics when init and
> > start dev
> >
> > Clear the line-side and NIC-side statistics registers at HW init and
> > uninit, and when the dev starts.
> >
> > Fixes: c01c748e4ae6 ("net/ipn3ke: add new driver")
> > Cc: rosen...@intel.com
> >
> > Signed-off-by: Andy Pei 
> > ---
> Acked-by: Rosen Xu 

Applied to dpdk-next-net-intel.

Thanks
Qi


Re: [dpdk-dev] [PATCH v5 4/4] net/ipn3ke: implementation of statistics

2019-07-07 Thread Zhang, Qi Z



> -Original Message-
> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Xu, Rosen
> Sent: Tuesday, July 2, 2019 6:02 PM
> To: Pei, Andy ; dev@dpdk.org; Yigit, Ferruh
> ; Zhang, Tianfei 
> Subject: Re: [dpdk-dev] [PATCH v5 4/4] net/ipn3ke: implementation of
> statistics
> 
> 
> 
> > -Original Message-
> > From: Pei, Andy
> > Sent: Monday, July 01, 2019 18:36
> > To: dev@dpdk.org
> > Cc: Pei, Andy ; Xu, Rosen 
> > Subject: [PATCH v5 4/4] net/ipn3ke: implementation of statistics
> >
> > This patch implements the statistics read and reset functions for ipn3ke.
> >
> > Signed-off-by: Andy Pei 
> 
> Acked-by: Rosen Xu 

Applied to dpdk-next-net-intel.

Thanks
Qi


Re: [dpdk-dev] [PATCH] net/fm10k: fix descriptor filling in vector Tx

2019-07-07 Thread Zhang, Qi Z



> -Original Message-
> From: Wang, Xiao W
> Sent: Wednesday, July 3, 2019 10:54 AM
> To: Zhang, Qi Z 
> Cc: dev@dpdk.org; Kiejdo, Marek ; Wang, Xiao W
> ; sta...@dpdk.org
> Subject: [PATCH] net/fm10k: fix descriptor filling in vector Tx
> 
> The left shift operation "pkt->vlan_tci << 16" gets vlan_tci extended to a
> signed type and may produce an invalid descriptor. The "data_len" field has
> the same issue. This patch fixes it by casting them to uint64_t.
> 
> Fixes: 21f13c541eb0 ("fm10k: add vector Tx")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Xiao Wang 

Acked-by: Qi Zhang 

Applied to dpdk-next-net-intel.

Thanks
Qi
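A small hedged illustration of the promotion issue described in the commit
message above (generic C, not the fm10k vector code; names are illustrative):

~~~
#include <stdint.h>

/* Build a 64-bit descriptor word from 16-bit packet fields. */
static uint64_t
build_desc_word(uint16_t vlan_tci, uint16_t data_len, uint64_t flags)
{
    /* Buggy form: (vlan_tci << 16) is evaluated as a signed int, so if
     * bit 15 of vlan_tci is set the intermediate value is negative and
     * sign-extends when widened to 64 bits, corrupting the upper bits:
     *
     *     return flags | (vlan_tci << 16) | data_len;
     */

    /* Fixed form: widen to uint64_t before shifting, as the patch does. */
    return flags | ((uint64_t)vlan_tci << 16) | (uint64_t)data_len;
}
~~~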


[dpdk-dev] [v2] net/e1000: i219 unit hang issue fix on reset/close

2019-07-07 Thread Xiao Zhang
A unit hang may occur if multiple descriptors are available in the rings
during reset or close. This state can be detected by checking bit 8 of the
descriptor ring status (read from PCI config space, offset 0xE4 in this
patch). If the bit is set and there are pending descriptors in one of the
rings, we must flush them before reset or close.

Signed-off-by: Xiao Zhang 
---
 drivers/net/e1000/base/e1000_ich8lan.h |  1 +
 drivers/net/e1000/e1000_ethdev.h   |  1 +
 drivers/net/e1000/igb_ethdev.c |  4 ++
 drivers/net/e1000/igb_rxtx.c   | 96 ++
 4 files changed, 102 insertions(+)

diff --git a/drivers/net/e1000/base/e1000_ich8lan.h 
b/drivers/net/e1000/base/e1000_ich8lan.h
index bc4ed1d..1f2a3f8 100644
--- a/drivers/net/e1000/base/e1000_ich8lan.h
+++ b/drivers/net/e1000/base/e1000_ich8lan.h
@@ -120,6 +120,7 @@ POSSIBILITY OF SUCH DAMAGE.
 #define E1000_FEXTNVM7_SIDE_CLK_UNGATE 0x0004
 #if !defined(EXTERNAL_RELEASE) || defined(ULP_SUPPORT)
 #define E1000_FEXTNVM7_DISABLE_SMB_PERST   0x0020
+#define E1000_FEXTNVM7_NEED_DESCRING_FLUSH 0x0100
 #endif /* !EXTERNAL_RELEASE || ULP_SUPPORT */
 #define E1000_FEXTNVM9_IOSFSB_CLKGATE_DIS  0x0800
 #define E1000_FEXTNVM9_IOSFSB_CLKREQ_DIS   0x1000
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 67acb73..3451979 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -522,5 +522,6 @@ int igb_action_rss_same(const struct rte_flow_action_rss 
*comp,
 int igb_config_rss_filter(struct rte_eth_dev *dev,
struct igb_rte_flow_rss_conf *conf,
bool add);
+void igb_flush_desc_rings(struct rte_eth_dev *dev);
 
 #endif /* _E1000_ETHDEV_H_ */
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 3ee28cf..845101b 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -1589,6 +1589,10 @@ eth_igb_close(struct rte_eth_dev *dev)
eth_igb_stop(dev);
adapter->stopped = 1;
 
+   /* Flush desc rings for i219 */
+   if (hw->mac.type >= e1000_pch_spt)
+   igb_flush_desc_rings(dev);
+
e1000_phy_hw_reset(hw);
igb_release_manageability(hw);
igb_hw_control_release(hw);
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index c5606de..33eeb4e 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -18,6 +18,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -63,6 +64,9 @@
 #define IGB_TX_OFFLOAD_NOTSUP_MASK \
(PKT_TX_OFFLOAD_MASK ^ IGB_TX_OFFLOAD_MASK)
 
+/* PCI offset for querying descriptor ring status*/
+#define PCICFG_DESC_RING_STATUS   0xE4
+
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
  */
@@ -2962,3 +2966,95 @@ igb_config_rss_filter(struct rte_eth_dev *dev,
 
return 0;
 }
+
+static void e1000_flush_tx_ring(struct rte_eth_dev *dev)
+{
+   struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+   volatile union e1000_adv_tx_desc *tx_desc;
+   uint32_t tdt, tctl, txd_lower = E1000_TXD_CMD_IFCS;
+   uint16_t size = 512;
+   struct igb_tx_queue *txq;
+
+   if (dev->data->tx_queues == NULL)
+   return;
+   txq = dev->data->tx_queues[0];
+
+   tctl = E1000_READ_REG(hw, E1000_TCTL);
+   E1000_WRITE_REG(hw, E1000_TCTL, tctl | E1000_TCTL_EN);
+   tdt = E1000_READ_REG(hw, E1000_TDT(0));
+   if (tdt != txq->tx_tail)
+   return;
+   tx_desc = txq->tx_ring;
+   tx_desc->read.buffer_addr = txq->tx_ring_phys_addr;
+   tx_desc->read.cmd_type_len = rte_cpu_to_le_32(txd_lower | size);
+   tx_desc->read.olinfo_status = 0;
+
+   rte_wmb();
+   txq->tx_tail++;
+   if (txq->tx_tail == txq->nb_tx_desc)
+   txq->tx_tail = 0;
+   rte_io_wmb();
+   E1000_WRITE_REG(hw, E1000_TDT(0), txq->tx_tail);
+   usec_delay(250);
+}
+
+static void e1000_flush_rx_ring(struct rte_eth_dev *dev)
+{
+   uint32_t rctl, rxdctl;
+   struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+   rctl = E1000_READ_REG(hw, E1000_RCTL);
+   E1000_WRITE_REG(hw, E1000_TCTL, rctl & ~E1000_RCTL_EN);
+   E1000_WRITE_FLUSH(hw);
+   usec_delay(150);
+
+   rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(0));
+   /* zero the lower 14 bits (prefetch and host thresholds) */
+   rxdctl &= 0xc000;
+
+   /* update thresholds: prefetch threshold to 31, host threshold to 1
+* and make sure the granularity is "descriptors" and not "cache lines"
+*/
+   rxdctl |= (0x1F | (1UL << 8) | E1000_RXDCTL_THRESH_UNIT_DESC);
+
+   E1000_WRITE_REG(hw, E1000_RXDCTL(0), rxdctl);
+   /* momentarily enable the RX ring for the changes to take effect */
+   E1000_WRITE_REG(hw, E1000_RCTL, rctl | E1000_RCTL_EN);
+   E1000_WRITE_FLUSH(hw);
+   usec_delay(150);
+

[dpdk-dev] [PATCH v4] net/i40e: i40e get link status update from ipn3ke

2019-07-07 Thread Andy Pei
Add a switch_mode argument for the i40e PF to specify the FPGA that the
i40e PF is connected to. The i40e PF gets link status updates via the
connected FPGA.
Add switch_ethdev to rte_eth_dev_data to track the bound switch device.
Try to bind the i40e PF to the switch device when the i40e device is
probed. If it fails to find the correct switch device, the bind will occur
again when the i40e device link status is updated.

Signed-off-by: Andy Pei 
---
Cc: qi.z.zh...@intel.com
Cc: jingjing...@intel.com
Cc: beilei.x...@intel.com
Cc: ferruh.yi...@intel.com
Cc: rosen...@intel.com
Cc: xiaolong...@intel.com
Cc: roy.fan.zh...@intel.com
Cc: sta...@dpdk.org

v4:
* use an array instead of a pointer to store switch device string to
* avoid memory free error.

v3:
* Add switch_ethdev to rte_eth_dev_data to track the bind switch device
* Try to bind i40e pf when it is probed.

v2:
* use a more specific subject for this patch.
* delete modifications that are not relevant.
* free memory allocted by strdup.
* delete unnecessary initializations.
* name function more precisely.
* wrap relevant code to a function to avoid too many levels of block
* nesting.

 drivers/net/i40e/i40e_ethdev.c  | 139 +++-
 lib/librte_ethdev/rte_ethdev_core.h |   4 ++
 2 files changed, 141 insertions(+), 2 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 2b9fc45..85d7b18 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -44,6 +44,7 @@
 #define ETH_I40E_SUPPORT_MULTI_DRIVER  "support-multi-driver"
 #define ETH_I40E_QUEUE_NUM_PER_VF_ARG  "queue-num-per-vf"
 #define ETH_I40E_USE_LATEST_VEC"use-latest-supported-vec"
+#define ETH_I40E_SWITCH_MODE_ARG   "switch_mode"
 
 #define I40E_CLEAR_PXE_WAIT_MS 200
 
@@ -406,6 +407,7 @@ static int i40e_sw_tunnel_filter_insert(struct i40e_pf *pf,
ETH_I40E_SUPPORT_MULTI_DRIVER,
ETH_I40E_QUEUE_NUM_PER_VF_ARG,
ETH_I40E_USE_LATEST_VEC,
+   ETH_I40E_SWITCH_MODE_ARG,
NULL};
 
 static const struct rte_pci_id pci_id_i40e_map[] = {
@@ -625,14 +627,118 @@ struct rte_i40e_xstats_name_off {
sizeof(rte_i40e_txq_prio_strings[0]))
 
 static int
+i40e_pf_parse_switch_mode(const char *key __rte_unused,
+   const char *value, void *extra_args)
+{
+   if (!value || !extra_args)
+   return -EINVAL;
+
+   if (RTE_ETH_NAME_MAX_LEN > strlen(value)) {
+   rte_memcpy(extra_args, value, strlen(value) + 1);
+   return 0;
+   } else {
+   PMD_DRV_LOG(ERR,
+   "switch_mode args should be less than %d characters",
+   RTE_ETH_NAME_MAX_LEN);
+   return -EINVAL;
+   }
+}
+
+static struct rte_eth_dev *
+i40e_eth_dev_get_by_switch_mode_name(const char *cfg_str)
+{
+   char switch_name[RTE_ETH_NAME_MAX_LEN];
+   char port_name[RTE_ETH_NAME_MAX_LEN];
+   char switch_ethdev_name[RTE_ETH_NAME_MAX_LEN];
+   uint16_t port_id;
+   const char *p_src;
+   char *p_dst;
+   int ret;
+
+   /* An example of cfg_str is "IPN3KE_0@b3:00.0_0" */
+   if (!strncmp(cfg_str, "IPN3KE", strlen("IPN3KE"))) {
+   p_src = cfg_str;
+   PMD_DRV_LOG(DEBUG, "cfg_str is %s", cfg_str);
+
+   /* move over "IPN3KE" */
+   while ((*p_src != '_') && (*p_src))
+   p_src++;
+
+   /* move over the first underline */
+   p_src++;
+
+   p_dst = switch_name;
+   while ((*p_src != '_') && (*p_src)) {
+   if (*p_src == '@') {
+   *p_dst++ = '|';
+   p_src++;
+   } else {
+   *p_dst++ = *p_src++;
+   }
+   }
+   *p_dst = 0;
+   PMD_DRV_LOG(DEBUG, "switch_name is %s", switch_name);
+
+   /* move over the second underline */
+   p_src++;
+
+   p_dst = port_name;
+   while (*p_src)
+   *p_dst++ = *p_src++;
+   *p_dst = 0;
+   PMD_DRV_LOG(DEBUG, "port_name is %s", port_name);
+
+   snprintf(switch_ethdev_name, sizeof(switch_ethdev_name),
+   "net_%s_representor_%s", switch_name, port_name);
+   PMD_DRV_LOG(DEBUG, "switch_ethdev_name is %s",
+   switch_ethdev_name);
+
+   ret = rte_eth_dev_get_port_by_name(switch_ethdev_name,
+   &port_id);
+   if (ret)
+   return NULL;
+   else
+   return &rte_eth_devices[port_id];
+   } else {
+   return NULL;
+   }
+}
+
+static struct rte_eth_dev *
+i40e_get_switch_ethdev_from_devargs(struct rte_devargs *devargs)
+{
+   struct rte_kvargs *kvlist = NULL;
+   struct rte_eth_dev *switch_ethdev = NULL

[dpdk-dev] [PATCH] net/octeontx2: add PF and VF action support

2019-07-07 Thread kirankumark
From: Kiran Kumar K 

Adding PF and VF action support for octeontx2 Flow.
If RTE_FLOW_ACTION_TYPE_PF action is set from VF, then the packet
will be sent to the parent PF.
If RTE_FLOW_ACTION_TYPE_VF action is set and original is specified,
then the packet will be sent to the original VF, otherwise the packet
will be sent to the VF specified in the vf_id.

Signed-off-by: Kiran Kumar K 
---
 doc/guides/nics/octeontx2.rst   |  4 
 drivers/net/octeontx2/otx2_flow.h   |  2 ++
 drivers/net/octeontx2/otx2_flow_parse.c | 32 ++---
 3 files changed, 35 insertions(+), 3 deletions(-)

diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index a8ed3838f..fbf4c4726 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -292,6 +292,10 @@ Actions:
+++
| 8  | RTE_FLOW_ACTION_TYPE_SECURITY  |
+++
+   | 9  | RTE_FLOW_ACTION_TYPE_PF|
+   +++
+   | 10 | RTE_FLOW_ACTION_TYPE_VF|
+   +++
 
 .. _table_octeontx2_supported_egress_action_types:
 
diff --git a/drivers/net/octeontx2/otx2_flow.h 
b/drivers/net/octeontx2/otx2_flow.h
index f5cc3b983..a27ceeb1a 100644
--- a/drivers/net/octeontx2/otx2_flow.h
+++ b/drivers/net/octeontx2/otx2_flow.h
@@ -52,6 +52,8 @@ enum {
 #define OTX2_FLOW_ACT_DUP (1 << 5)
 #define OTX2_FLOW_ACT_SEC (1 << 6)
 #define OTX2_FLOW_ACT_COUNT   (1 << 7)
+#define OTX2_FLOW_ACT_PF  (1 << 8)
+#define OTX2_FLOW_ACT_VF  (1 << 9)
 
 /* terminating actions */
 #define OTX2_FLOW_ACT_TERM(OTX2_FLOW_ACT_DROP  | \
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c 
b/drivers/net/octeontx2/otx2_flow_parse.c
index 1940cc636..3e6f5b8df 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -751,15 +751,17 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev,
const struct rte_flow_action_count *act_count;
const struct rte_flow_action_mark *act_mark;
const struct rte_flow_action_queue *act_q;
+   const struct rte_flow_action_vf *vf_act;
const char *errmsg = NULL;
int sel_act, req_act = 0;
-   uint16_t pf_func;
+   uint16_t pf_func, vf_id;
int errcode = 0;
int mark = 0;
int rq = 0;
 
/* Initialize actions */
flow->ctr_id = NPC_COUNTER_NONE;
+   pf_func = otx2_pfvf_func(hw->pf, hw->vf);
 
for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
otx2_npc_dbg("Action type = %d", actions->type);
@@ -807,6 +809,27 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev,
req_act |= OTX2_FLOW_ACT_DROP;
break;
 
+   case RTE_FLOW_ACTION_TYPE_PF:
+   req_act |= OTX2_FLOW_ACT_PF;
+   pf_func &= (0xfc00);
+   break;
+
+   case RTE_FLOW_ACTION_TYPE_VF:
+   vf_act = (const struct rte_flow_action_vf *)
+   actions->conf;
+   req_act |= OTX2_FLOW_ACT_VF;
+   if (vf_act->original == 0) {
+   vf_id = (vf_act->id & RVU_PFVF_FUNC_MASK) + 1;
+   if (vf_id  >= hw->maxvf) {
+   errmsg = "invalid vf specified";
+   errcode = EINVAL;
+   goto err_exit;
+   }
+   pf_func &= (0xfc00);
+   pf_func = (pf_func | vf_id);
+   }
+   break;
+
case RTE_FLOW_ACTION_TYPE_QUEUE:
/* Applicable only to ingress flow */
act_q = (const struct rte_flow_action_queue *)
@@ -902,7 +925,11 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev,
}
 
/* Set NIX_RX_ACTIONOP */
-   if (req_act & OTX2_FLOW_ACT_DROP) {
+   if (req_act & (OTX2_FLOW_ACT_PF | OTX2_FLOW_ACT_VF)) {
+   flow->npc_action = NIX_RX_ACTIONOP_UCAST;
+   if (req_act & OTX2_FLOW_ACT_QUEUE)
+   flow->npc_action |= (uint64_t)rq << 20;
+   } else if (req_act & OTX2_FLOW_ACT_DROP) {
flow->npc_action = NIX_RX_ACTIONOP_DROP;
} else if (req_act & OTX2_FLOW_ACT_QUEUE) {
flow->npc_action = NIX_RX_ACTIONOP_UCAST;
@@ -946,7 +973,6 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev,
 
 set_pf_func:
/* Ideally AF must ensure that correct pf_func is set */
-   pf_func = otx2_pfvf_func(hw->pf, hw->vf);
flow->npc_action |= (uint64_t)pf_func << 4;
 
return 0;
-- 
2.17.1
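For context, a hedged sketch of how an application would request the VF action
described above through the generic rte_flow API (application-side usage, not
driver code; the VF id is illustrative):

~~~
#include <rte_flow.h>

/* Redirect matched packets to VF 2; original = 0 means "use the id field",
 * as described in the commit message. */
static const struct rte_flow_action_vf vf_conf = {
    .original = 0,
    .id = 2,
};

static const struct rte_flow_action actions[] = {
    { .type = RTE_FLOW_ACTION_TYPE_VF, .conf = &vf_conf },
    { .type = RTE_FLOW_ACTION_TYPE_END },
};
~~~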



[dpdk-dev] [PATCH v4 3/3] examples/ip_reassembly: enable IP checksum offload

2019-07-07 Thread jerinj
From: Sunil Kumar Kori 

As per the documentation to use any IP offload features, application
must set required offload flags into mbuf->ol_flags.

Signed-off-by: Sunil Kumar Kori 
Acked-by: Konstantin Ananyev 
---
v4:
- Rebased to top of tree
- Fix check-gitlog.sh issues
---
 examples/ip_reassembly/main.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index fbd09341f..38b39be6b 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -353,6 +353,9 @@ reassemble(struct rte_mbuf *m, uint16_t portid, uint32_t 
queue,
struct rte_ether_hdr *);
ip_hdr = (struct rte_ipv4_hdr *)(eth_hdr + 1);
}
+
+   /* update offloading flags */
+   m->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
}
ip_dst = rte_be_to_cpu_32(ip_hdr->dst_addr);
 
-- 
2.22.0



[dpdk-dev] [PATCH v4 2/3] examples/ip_fragmentation: enable IP checksum offload

2019-07-07 Thread jerinj
From: Sunil Kumar Kori 

As per the documentation, to use any IP offload feature the application
must set the required offload flags in mbuf->ol_flags.

Signed-off-by: Sunil Kumar Kori 
Acked-by: Konstantin Ananyev 
---
v4:
- Rebased to top of tree
- Fix check-gitlog.sh issues
---
 examples/ip_fragmentation/main.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 85c0100f7..ccaf23ff0 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -357,12 +357,14 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct 
lcore_queue_conf *qconf,
/* src addr */
rte_ether_addr_copy(&ports_eth_addr[port_out],
ð_hdr->s_addr);
-   if (ipv6)
+   if (ipv6) {
eth_hdr->ether_type =
rte_be_to_cpu_16(RTE_ETHER_TYPE_IPV6);
-   else
+   } else {
eth_hdr->ether_type =
rte_be_to_cpu_16(RTE_ETHER_TYPE_IPV4);
+   m->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+   }
}
 
len += len2;
-- 
2.22.0



[dpdk-dev] [PATCH v4 1/3] lib/librte_ip_frag: remove IP checkum offload flag

2019-07-07 Thread jerinj
From: Sunil Kumar Kori 

Currently PKT_TX_IP_CKSUM is set in mbuf->ol_flags implicitly during
the fragmentation and reassembly operations. Because of this, the
application is forced to use checksum offload whether or not it is
supported by the platform.

Also, the documentation does not specify the expected value of ol_flags
in the returned mbuf (reassembled or fragmented), so the application
never knows which offloads are enabled. Transmission may therefore fail
on platforms which do not support checksum offload.

Also, IPv6 has no checksum field in its header, so setting
mbuf->ol_flags with PKT_TX_IP_CKSUM is itself invalid.

So remove the mentioned flag from the library.

Signed-off-by: Sunil Kumar Kori 
Acked-by: Konstantin Ananyev 
---
v4:
- Update release notes

---
 doc/guides/rel_notes/release_19_08.rst   | 15 +++
 lib/librte_ip_frag/rte_ipv4_reassembly.c |  3 ---
 lib/librte_ip_frag/rte_ipv6_reassembly.c |  3 ---
 3 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/doc/guides/rel_notes/release_19_08.rst 
b/doc/guides/rel_notes/release_19_08.rst
index defbc5e27..ad9cbf4a2 100644
--- a/doc/guides/rel_notes/release_19_08.rst
+++ b/doc/guides/rel_notes/release_19_08.rst
@@ -281,6 +281,21 @@ ABI Changes
 * bbdev: New operations and parameters added to support new 5GNR operations.
   The bbdev ABI is still kept experimental.
 
+* ip_fragmentation: IP fragmentation library converts input mbuf into fragments
+  using input MTU size via ``rte_ipv4_fragment_packet`` interface.
+  Once fragmentation is done, each ``mbuf->ol_flags`` are set to enable IP
+  checksum H/W offload irrespective of the platform capability.
+  Cleared IP checksum H/W offload flag from the library. The application must
+  set this flag if it is supported by the platform and application wishes to
+  use it.
+
+* ip_reassembly: IP reassembly library converts the list of fragments into a
+  reassembled packet via ``rte_ipv4_frag_reassemble_packet`` interface.
+  Once reassembly is done, ``mbuf->ol_flags`` are set to enable IP checksum H/W
+  offload irrespective of the platform capability. Cleared IP checksum H/W
+  offload flag from the library. The application must set this flag if it is
+  supported by the platform and application wishes to use it.
+
 
 Shared Library Versions
 ---
diff --git a/lib/librte_ip_frag/rte_ipv4_reassembly.c 
b/lib/librte_ip_frag/rte_ipv4_reassembly.c
index b7b92ed28..1dda8aca0 100644
--- a/lib/librte_ip_frag/rte_ipv4_reassembly.c
+++ b/lib/librte_ip_frag/rte_ipv4_reassembly.c
@@ -66,9 +66,6 @@ ipv4_frag_reassemble(struct ip_frag_pkt *fp)
m = fp->frags[IP_FIRST_FRAG_IDX].mb;
fp->frags[IP_FIRST_FRAG_IDX].mb = NULL;
 
-   /* update mbuf fields for reassembled packet. */
-   m->ol_flags |= PKT_TX_IP_CKSUM;
-
/* update ipv4 header for the reassembled packet */
ip_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *, m->l2_len);
 
diff --git a/lib/librte_ip_frag/rte_ipv6_reassembly.c 
b/lib/librte_ip_frag/rte_ipv6_reassembly.c
index 169b01a5d..ad0105518 100644
--- a/lib/librte_ip_frag/rte_ipv6_reassembly.c
+++ b/lib/librte_ip_frag/rte_ipv6_reassembly.c
@@ -89,9 +89,6 @@ ipv6_frag_reassemble(struct ip_frag_pkt *fp)
m = fp->frags[IP_FIRST_FRAG_IDX].mb;
fp->frags[IP_FIRST_FRAG_IDX].mb = NULL;
 
-   /* update mbuf fields for reassembled packet. */
-   m->ol_flags |= PKT_TX_IP_CKSUM;
-
/* update ipv6 header for the reassembled datagram */
ip_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *, m->l2_len);
 
-- 
2.22.0
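A hedged sketch of the application-side check that the release note above asks
for (generic ethdev usage with 19.08-era flag names; not part of the patch):

~~~
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Request IPv4 header checksum offload for a packet only if the port
 * advertises the capability; otherwise leave ol_flags untouched (or
 * compute the checksum in software). */
static void
set_ip_cksum_offload(uint16_t port_id, struct rte_mbuf *m)
{
    struct rte_eth_dev_info dev_info;

    rte_eth_dev_info_get(port_id, &dev_info);
    if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM)
        m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM;
}
~~~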



[dpdk-dev] [DPDK] net/e1000: fix buffer overrun while i219 processing DMA transactions

2019-07-07 Thread Xiao Zhang
Intel® 100/200 Series Chipset platforms reduced the round-trip
latency for LAN Controller DMA accesses, causing, in some high
performance cases, a buffer overrun while the I219 LAN Connected
Device is processing the DMA transactions. I219LM and I219V devices
can fall into an unrecoverable Tx hang under very stressful UDP traffic
and multiple reconnections of the Ethernet cable. This Tx hang of the LAN
Controller is only recovered if the system is rebooted. Slightly slow
down DMA access by reducing the number of outstanding requests.
This workaround could have an impact on TCP traffic performance
on the platform. Disabling TSO eliminates the performance loss for TCP
traffic without a noticeable impact on CPU performance.

Please refer to the I218/I219 specification update:
https://www.intel.com/content/www/us/en/embedded/products/networking/ethernet-connection-i218-family-documentation.html

Signed-off-by: Xiao Zhang 
---
 drivers/net/e1000/base/e1000_ich8lan.h |  1 +
 drivers/net/e1000/igb_rxtx.c   | 16 
 2 files changed, 17 insertions(+)

diff --git a/drivers/net/e1000/base/e1000_ich8lan.h 
b/drivers/net/e1000/base/e1000_ich8lan.h
index 1f2a3f8..084eb9c 100644
--- a/drivers/net/e1000/base/e1000_ich8lan.h
+++ b/drivers/net/e1000/base/e1000_ich8lan.h
@@ -134,6 +134,7 @@ POSSIBILITY OF SUCH DAMAGE.
 #define E1000_FLASH_BASE_ADDR 0xE000 /*offset of NVM access regs*/
 #define E1000_CTRL_EXT_NVMVS 0x3 /*NVM valid sector */
 #define E1000_TARC0_CB_MULTIQ_3_REQ(1 << 28 | 1 << 29)
+#define E1000_TARC0_CB_MULTIQ_2_REQ(1 << 29)
 #define PCIE_ICH8_SNOOP_ALLPCIE_NO_SNOOP_ALL
 
 #define E1000_ICH_RAR_ENTRIES  7
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 33eeb4e..5d45e62 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -2627,6 +2627,22 @@ eth_igb_tx_init(struct rte_eth_dev *dev)
 
e1000_config_collision_dist(hw);
 
+   /* SPT and CNP Si errata workaround to avoid data corruption */
+   if (hw->mac.type == e1000_pch_spt) {
+   uint32_t reg_val;
+   reg_val = E1000_READ_REG(hw, E1000_IOSFPC);
+   reg_val |= E1000_RCTL_RDMTS_HEX;
+   E1000_WRITE_REG(hw, E1000_IOSFPC, reg_val);
+
+   /* Dropping the number of outstanding requests from
+* 3 to 2 in order to avoid a buffer overrun.
+*/
+   reg_val = E1000_READ_REG(hw, E1000_TARC(0));
+   reg_val &= ~E1000_TARC0_CB_MULTIQ_3_REQ;
+   reg_val |= E1000_TARC0_CB_MULTIQ_2_REQ;
+   E1000_WRITE_REG(hw, E1000_TARC(0), reg_val);
+   }
+
/* This write will effectively turn on the transmit unit. */
E1000_WRITE_REG(hw, E1000_TCTL, tctl);
 }
-- 
2.7.4



[dpdk-dev] [PATCH v1 1/1] mempool/octeontx2: fix mempool creation failure

2019-07-07 Thread vattunuru
From: Vamsi Attunuru 

Fix npa pool range errors observed while creating a mempool; this issue
happens when mempool objects are from different mem segments.

During mempool creation, octeontx2 mempool driver populates pool range
fields before enqueuing the buffers. If any enqueue or dequeue operation
reaches npa hardware prior to the range field's HW context update,
those ops result in npa range errors. Patch adds a routine to read back
HW context and verify if range fields are updated or not.

Fixes: e5271c507aeb ("mempool/octeontx2: add remaining slow path ops")

Signed-off-by: Vamsi Attunuru 
---
 drivers/mempool/octeontx2/otx2_mempool_ops.c | 37 
 1 file changed, 37 insertions(+)

diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c 
b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index e1764b0..a60a77a 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -600,6 +600,40 @@ npa_lf_aura_pool_pair_free(struct otx2_npa_lf *lf, 
uint64_t aura_handle)
 }
 
 static int
+npa_lf_aura_range_update_check(uint64_t aura_handle)
+{
+   uint64_t aura_id = npa_lf_aura_handle_to_aura(aura_handle);
+   struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
+   struct npa_aura_lim *lim = lf->aura_lim;
+   struct npa_aq_enq_req *req;
+   struct npa_aq_enq_rsp *rsp;
+   struct npa_pool_s *pool;
+   int rc;
+
+   req  = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
+
+   req->aura_id = aura_id;
+   req->ctype = NPA_AQ_CTYPE_POOL;
+   req->op = NPA_AQ_INSTOP_READ;
+
+   rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
+   if (rc) {
+   otx2_err("Failed to get pool(0x%"PRIx64") context", aura_id);
+   return rc;
+   }
+
+   pool = &rsp->pool;
+
+   if (lim[aura_id].ptr_start != pool->ptr_start ||
+   lim[aura_id].ptr_end != pool->ptr_end) {
+   otx2_err("Range update failed on pool(0x%"PRIx64")", aura_id);
+   return -ERANGE;
+   }
+
+   return 0;
+}
+
+static int
 otx2_npa_alloc(struct rte_mempool *mp)
 {
uint32_t block_size, block_count;
@@ -724,6 +758,9 @@ otx2_npa_populate(struct rte_mempool *mp, unsigned int 
max_objs, void *vaddr,
 
npa_lf_aura_op_range_set(mp->pool_id, iova, iova + len);
 
+   if (npa_lf_aura_range_update_check(mp->pool_id) < 0)
+   return -EBUSY;
+
return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
   obj_cb, obj_cb_arg);
 }
-- 
2.8.4



Re: [dpdk-dev] [PATCH] net/octeontx2: add PF and VF action support

2019-07-07 Thread Jerin Jacob Kollanukkaran
> -Original Message-
> From: dev  On Behalf Of
> kirankum...@marvell.com
> Sent: Monday, July 8, 2019 9:06 AM
> To: dev@dpdk.org
> Cc: Kiran Kumar Kokkilagadda 
> Subject: [dpdk-dev] [PATCH] net/octeontx2: add PF and VF action support
> 
> From: Kiran Kumar K 
> 
> Adding PF and VF action support for octeontx2 Flow.
> If RTE_FLOW_ACTION_TYPE_PF action is set from VF, then the packet will be
> sent to the parent PF.
> If RTE_FLOW_ACTION_TYPE_VF action is set and original is specified, then
> the packet will be sent to the original VF, otherwise the packet will be sent 
> to
> the VF specified in the vf_id.
> 
> Signed-off-by: Kiran Kumar K 

Acked-by: Jerin Jacob 

> ---
>  doc/guides/nics/octeontx2.rst   |  4 
>  drivers/net/octeontx2/otx2_flow.h   |  2 ++
>  drivers/net/octeontx2/otx2_flow_parse.c | 32
> ++---
>  3 files changed, 35 insertions(+), 3 deletions(-)
> 


Re: [dpdk-dev] [PATCH v4 3/3] lib/lpm: use atomic store to avoid partial update

2019-07-07 Thread Honnappa Nagarahalli
> 
> Compiler could generate non-atomic stores for whole table entry updating.
> This may cause incorrect nexthop to be returned, if the byte with valid flag 
> is
> updated prior to the byte with next hot is updated.
   ^^^
Should be nexthop

> 
> Changed to use atomic store to update whole table entry.
> 
> Suggested-by: Medvedkin Vladimir 
> Signed-off-by: Ruifeng Wang 
> Reviewed-by: Gavin Hu 
> ---
> v4: initial version
> 
>  lib/librte_lpm/rte_lpm.c | 34 --
>  1 file changed, 24 insertions(+), 10 deletions(-)
> 
> diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c index
> baa6e7460..5d1dbd7e6 100644
> --- a/lib/librte_lpm/rte_lpm.c
> +++ b/lib/librte_lpm/rte_lpm.c
> @@ -767,7 +767,9 @@ add_depth_small_v20(struct rte_lpm_v20 *lpm,
> uint32_t ip, uint8_t depth,
>* Setting tbl8 entry in one go to
> avoid
>* race conditions
>*/
> - lpm->tbl8[j] = new_tbl8_entry;
> + __atomic_store(&lpm->tbl8[j],
> + &new_tbl8_entry,
> + __ATOMIC_RELAXED);
> 
>   continue;
>   }
> @@ -837,7 +839,9 @@ add_depth_small_v1604(struct rte_lpm *lpm,
> uint32_t ip, uint8_t depth,
>* Setting tbl8 entry in one go to
> avoid
>* race conditions
>*/
> - lpm->tbl8[j] = new_tbl8_entry;
> + __atomic_store(&lpm->tbl8[j],
> + &new_tbl8_entry,
> + __ATOMIC_RELAXED);
> 
>   continue;
>   }
> @@ -965,7 +969,8 @@ add_depth_big_v20(struct rte_lpm_v20 *lpm,
> uint32_t ip_masked, uint8_t depth,
>* Setting tbl8 entry in one go to avoid race
>* condition
>*/
> - lpm->tbl8[i] = new_tbl8_entry;
> + __atomic_store(&lpm->tbl8[i],
> &new_tbl8_entry,
> + __ATOMIC_RELAXED);
> 
>   continue;
>   }
> @@ -1100,7 +1105,8 @@ add_depth_big_v1604(struct rte_lpm *lpm,
> uint32_t ip_masked, uint8_t depth,
>* Setting tbl8 entry in one go to avoid race
>* condition
>*/
> - lpm->tbl8[i] = new_tbl8_entry;
> + __atomic_store(&lpm->tbl8[i],
> &new_tbl8_entry,
> + __ATOMIC_RELAXED);
> 
>   continue;
>   }
> @@ -1393,7 +1399,9 @@ delete_depth_small_v20(struct rte_lpm_v20 *lpm,
> uint32_t ip_masked,
> 
>   RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
> 
>   if (lpm->tbl8[j].depth <= depth)
> - lpm->tbl8[j] =
> new_tbl8_entry;
> + __atomic_store(&lpm->tbl8[j],
> + &new_tbl8_entry,
> + __ATOMIC_RELAXED);
>   }
>   }
>   }
> @@ -1490,7 +1498,9 @@ delete_depth_small_v1604(struct rte_lpm *lpm,
> uint32_t ip_masked,
> 
>   RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
> 
>   if (lpm->tbl8[j].depth <= depth)
> - lpm->tbl8[j] =
> new_tbl8_entry;
> + __atomic_store(&lpm->tbl8[j],
> + &new_tbl8_entry,
> + __ATOMIC_RELAXED);
>   }
>   }
>   }
> @@ -1646,7 +1656,8 @@ delete_depth_big_v20(struct rte_lpm_v20 *lpm,
> uint32_t ip_masked,
>*/
>   for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
>   if (lpm->tbl8[i].depth <= depth)
> - lpm->tbl8[i] = new_tbl8_entry;
> + __atomic_store(&lpm->tbl8[i],
> &new_tbl8_entry,
> + __ATOMIC_RELAXED);
>   }
>   }
> 
> @@ -1677,7 +1688,8 @@ delete_depth_big_v20(struct rte_lpm_v20 *lpm,
> uint32_t ip_masked,
>   /* Set tbl24 before freeing tbl8 to avoi
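To make the race being fixed above concrete (a standalone sketch, not code from the patch): an LPM table entry packs the next hop and the valid flag into a single 32-bit word, and a plain struct assignment lets the compiler split the update into several narrower stores, so a concurrent lookup can observe valid == 1 while next_hop is still stale. The struct below is a simplified stand-in for rte_lpm_tbl_entry.

#include <stdint.h>

/* Simplified stand-in for an LPM table entry: all fields share one
 * 32-bit word, so the whole entry must be published in a single store. */
struct tbl_entry {
	uint32_t next_hop    : 24;
	uint32_t valid       : 1;
	uint32_t valid_group : 1;
	uint32_t depth       : 6;
};

static void
update_entry(struct tbl_entry *slot, uint32_t next_hop, uint8_t depth)
{
	struct tbl_entry new_entry = {
		.next_hop = next_hop,
		.valid = 1,
		.valid_group = 1,
		.depth = depth,
	};

	/* A plain "*slot = new_entry;" may be compiled into several
	 * byte/halfword stores, allowing a reader to see the valid bit
	 * before the new next_hop. The generic __atomic_store builtin
	 * issues one atomic 32-bit store instead. */
	__atomic_store(slot, &new_entry, __ATOMIC_RELAXED);
}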

Re: [dpdk-dev] [PATCH v1 1/1] mempool/octeontx2: fix mempool creation failure

2019-07-07 Thread Jerin Jacob Kollanukkaran
> -Original Message-
> From: vattun...@marvell.com 
> Sent: Monday, July 8, 2019 10:18 AM
> To: dev@dpdk.org
> Cc: tho...@monjalon.net; Jerin Jacob Kollanukkaran ;
> Vamsi Krishna Attunuru 
> Subject: [PATCH v1 1/1] mempool/octeontx2: fix mempool creation failure

Actually it is v2.

v1..v2:
# Fixed git-check-log.sh issues
# Updated git comments for "when this issue happens?"
# Change the name of the patch
# Add Fixes tag
 
> From: Vamsi Attunuru 
> 
> Fix npa pool range errors observed while creating a mempool; this issue
> happens when the mempool objects come from different mem segments.
> 
> During mempool creation, the octeontx2 mempool driver populates the pool range
> fields before enqueuing the buffers. If any enqueue or dequeue operation
> reaches the npa hardware prior to the range fields' HW context update, those
> ops result in npa range errors. The patch adds a routine to read back the HW
> context and verify whether the range fields have been updated.
> 
> Fixes: e5271c507aeb ("mempool/octeontx2: add remaining slow path ops")
> 
> Signed-off-by: Vamsi Attunuru 

Acked-by: Jerin Jacob 
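
For context, the shape of such a read-back check is roughly the following. This is only an illustration of the idea; hw_read_aura_context() and struct aura_ctx are hypothetical placeholders, not the driver's real mailbox interface.

#include <errno.h>
#include <stdint.h>

/* Hypothetical view of the aura/pool context as read back from HW. */
struct aura_ctx {
	uint64_t ptr_start;
	uint64_t ptr_end;
};

/* Hypothetical context-read primitive standing in for the driver's
 * mailbox-based read of the NPA context. */
extern int hw_read_aura_context(uint64_t pool_id, struct aura_ctx *ctx);

/* Return 0 once the range programmed earlier is visible in the HW
 * context, -EBUSY otherwise, so the populate step can fail cleanly
 * instead of letting later enqueue/dequeue ops hit range errors. */
static int
aura_range_update_check(uint64_t pool_id, uint64_t start, uint64_t end)
{
	struct aura_ctx ctx;

	if (hw_read_aura_context(pool_id, &ctx) < 0)
		return -EBUSY;

	if (ctx.ptr_start != start || ctx.ptr_end != end)
		return -EBUSY;

	return 0;
}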


Re: [dpdk-dev] [PATCH v4 3/3] lib/lpm: use atomic store to avoid partial update

2019-07-07 Thread Ruifeng Wang (Arm Technology China)
Hi Vladimir,

> -Original Message-
> From: Medvedkin, Vladimir 
> Sent: Saturday, July 6, 2019 00:53
> To: Ruifeng Wang (Arm Technology China) ;
> bruce.richard...@intel.com
> Cc: dev@dpdk.org; Honnappa Nagarahalli
> ; Gavin Hu (Arm Technology China)
> ; nd 
> Subject: Re: [PATCH v4 3/3] lib/lpm: use atomic store to avoid partial update
> 
> Hi Wang,
> 
> On 03/07/2019 06:44, Ruifeng Wang wrote:
> > Compiler could generate non-atomic stores for whole table entry
> > updating. This may cause incorrect nexthop to be returned, if the byte
> > with valid flag is updated prior to the byte with next hot is updated.
> >
> > Changed to use atomic store to update whole table entry.
> >
> > Suggested-by: Medvedkin Vladimir 
> > Signed-off-by: Ruifeng Wang 
> > Reviewed-by: Gavin Hu 
> > ---
> > v4: initial version
> >
> >   lib/librte_lpm/rte_lpm.c | 34 --
> >   1 file changed, 24 insertions(+), 10 deletions(-)
> >
> > diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c index
> > baa6e7460..5d1dbd7e6 100644
> > --- a/lib/librte_lpm/rte_lpm.c
> > +++ b/lib/librte_lpm/rte_lpm.c
> > @@ -767,7 +767,9 @@ add_depth_small_v20(struct rte_lpm_v20 *lpm,
> uint32_t ip, uint8_t depth,
> >  * Setting tbl8 entry in one go to avoid
> >  * race conditions
> >  */
> > -   lpm->tbl8[j] = new_tbl8_entry;
> > +   __atomic_store(&lpm->tbl8[j],
> > +   &new_tbl8_entry,
> > +   __ATOMIC_RELAXED);
> >
> > continue;
> > }
> > @@ -837,7 +839,9 @@ add_depth_small_v1604(struct rte_lpm *lpm,
> uint32_t ip, uint8_t depth,
> >  * Setting tbl8 entry in one go to avoid
> >  * race conditions
> >  */
> > -   lpm->tbl8[j] = new_tbl8_entry;
> > +   __atomic_store(&lpm->tbl8[j],
> > +   &new_tbl8_entry,
> > +   __ATOMIC_RELAXED);
> >
> > continue;
> > }
> > @@ -965,7 +969,8 @@ add_depth_big_v20(struct rte_lpm_v20 *lpm,
> uint32_t ip_masked, uint8_t depth,
> >  * Setting tbl8 entry in one go to avoid race
> >  * condition
> >  */
> > -   lpm->tbl8[i] = new_tbl8_entry;
> > +   __atomic_store(&lpm->tbl8[i],
> &new_tbl8_entry,
> > +   __ATOMIC_RELAXED);
> >
> > continue;
> > }
> > @@ -1100,7 +1105,8 @@ add_depth_big_v1604(struct rte_lpm *lpm,
> uint32_t ip_masked, uint8_t depth,
> >  * Setting tbl8 entry in one go to avoid race
> >  * condition
> >  */
> > -   lpm->tbl8[i] = new_tbl8_entry;
> > +   __atomic_store(&lpm->tbl8[i],
> &new_tbl8_entry,
> > +   __ATOMIC_RELAXED);
> >
> > continue;
> > }
> > @@ -1393,7 +1399,9 @@ delete_depth_small_v20(struct rte_lpm_v20
> *lpm, uint32_t ip_masked,
> >
>   RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
> >
> > if (lpm->tbl8[j].depth <= depth)
> > -   lpm->tbl8[j] =
> new_tbl8_entry;
> > +   __atomic_store(&lpm-
> >tbl8[j],
> > +   &new_tbl8_entry,
> > +   __ATOMIC_RELAXED);
> > }
> > }
> > }
> > @@ -1490,7 +1498,9 @@ delete_depth_small_v1604(struct rte_lpm *lpm,
> uint32_t ip_masked,
> >
>   RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
> >
> > if (lpm->tbl8[j].depth <= depth)
> > -   lpm->tbl8[j] =
> new_tbl8_entry;
> > +   __atomic_store(&lpm-
> >tbl8[j],
> > +   &new_tbl8_entry,
> > +   __ATOMIC_RELAXED);
> > }
> > }
> > }
> > @@ -1646,7 +1656,8 @@ delete_depth_big_v20(struct rte_lpm_v20 *lpm,
> uint32_t ip_masked,
> >  */
> > for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
> > if (lpm->tbl8[i].depth <= depth)
> > -  

Re: [dpdk-dev] [PATCH v3] net/i40e: i40e get link status update from ipn3ke

2019-07-07 Thread Zhang, Qi Z
Hi Andy:

> -Original Message-
> From: Pei, Andy
> Sent: Thursday, July 4, 2019 2:56 PM
> To: dev@dpdk.org
> Cc: Pei, Andy ; Zhang, Qi Z ; Wu,
> Jingjing ; Xing, Beilei ; Yigit,
> Ferruh ; Xu, Rosen ; Ye,
> Xiaolong ; Zhang, Roy Fan
> ; sta...@dpdk.org
> Subject: [PATCH v3] net/i40e: i40e get link status update from ipn3ke
> 
> Add a switch_mode argument for the i40e PF to specify which FPGA the i40e
> PF is connected to. The i40e PF gets link status updates via the connected FPGA.
> Add switch_ethdev to rte_eth_dev_data to track the bound switch device.
> Try to bind the i40e PF to the switch device when the i40e device is probed. If it
> fails to find the correct switch device, the bind is retried when the i40e device
> link status is updated.
>

..
 
>  int
>  i40e_dev_link_update(struct rte_eth_dev *dev,
>int wait_to_complete)
> @@ -2786,6 +2905,8 @@ void i40e_flex_payload_reg_set_default(struct
> i40e_hw *hw)
>   struct i40e_hw *hw =
> I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>   struct rte_eth_link link;
>   bool enable_lse = dev->data->dev_conf.intr_conf.lsc ? true : false;
> + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
> + struct rte_devargs *devargs;
>   int ret;
> 
>   memset(&link, 0, sizeof(link));
> @@ -2800,6 +2921,16 @@ void i40e_flex_payload_reg_set_default(struct
> i40e_hw *hw)
>   else
>   update_link_aq(hw, &link, enable_lse, wait_to_complete);
> 
> + if (!dev->data->switch_ethdev) {
> + devargs = pci_dev->device.devargs;
> + if (devargs)
> + dev->data->switch_ethdev =
> + i40e_get_switch_ethdev_from_devargs(
> + pci_dev->device.devargs);
> + }

For regular mode, switch_ethdev is always NULL, so the code above does
unnecessary devargs parsing on every link_update call.
Can we add an intermediate flag (a field in i40e_pf indicating whether switch
mode is required, like the other devargs), so that it is initialized during
probe and reused later?

All switch-mode-related code could then be braced with that flag:
if (xxx_flag) {
	...
}


> + i40e_pf_linkstatus_get_from_switch_ethdev(dev->data->switch_ethdev,
> + &link);


Why is i40e_pf_linkstatus_get_from_switch_ethdev always called? Should we do:

if (dev->data->switch_ethdev)
	i40e_pf_linkstatus_get_from_switch_ethdev(dev->data->switch_ethdev,
			&link);
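
Putting both suggestions together, a sketch of the flag-gated approach (field and helper names here are hypothetical placeholders, not the real i40e definitions):

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for the real i40e structures. */
struct sketch_pf {
	bool switch_mode_enabled;	/* parsed from devargs once, at probe time */
	void *switch_ethdev;		/* bound switch port, NULL until bound */
};

extern void *sketch_bind_switch_port(void);	/* hypothetical bind helper */

/* Probe time: parse devargs exactly once and cache the result. */
static void
sketch_probe(struct sketch_pf *pf, bool devarg_switch_mode)
{
	pf->switch_mode_enabled = devarg_switch_mode;
	pf->switch_ethdev = NULL;
}

/* Link update: no devargs parsing here, only the cached flag. */
static void
sketch_link_update(struct sketch_pf *pf)
{
	if (!pf->switch_mode_enabled)
		return;		/* regular mode: skip all switch handling */

	if (pf->switch_ethdev == NULL)
		pf->switch_ethdev = sketch_bind_switch_port();

	if (pf->switch_ethdev != NULL) {
		/* query link status from the bound switch port here */
	}
}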



Re: [dpdk-dev] [PATCH v4 3/3] lib/lpm: use atomic store to avoid partial update

2019-07-07 Thread Ruifeng Wang (Arm Technology China)
Hi Honnappa,

> -Original Message-
> From: Honnappa Nagarahalli 
> Sent: Monday, July 8, 2019 12:57
> To: Ruifeng Wang (Arm Technology China) ;
> vladimir.medved...@intel.com; bruce.richard...@intel.com
> Cc: dev@dpdk.org; Gavin Hu (Arm Technology China) ;
> nd ; Ruifeng Wang (Arm Technology China)
> ; Honnappa Nagarahalli
> ; nd 
> Subject: RE: [PATCH v4 3/3] lib/lpm: use atomic store to avoid partial update
> 
> >
> > Compiler could generate non-atomic stores for whole table entry updating.
> > This may cause incorrect nexthop to be returned, if the byte with
> > valid flag is updated prior to the byte with next hot is updated.
>^^^ Should be 
> nexthop
> 
> >
> > Changed to use atomic store to update whole table entry.
> >
> > Suggested-by: Medvedkin Vladimir 
> > Signed-off-by: Ruifeng Wang 
> > Reviewed-by: Gavin Hu 
> > ---
> > v4: initial version
> >
> >  lib/librte_lpm/rte_lpm.c | 34 --
> >  1 file changed, 24 insertions(+), 10 deletions(-)
> >
> > diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c index
> > baa6e7460..5d1dbd7e6 100644
> > --- a/lib/librte_lpm/rte_lpm.c
> > +++ b/lib/librte_lpm/rte_lpm.c
> > @@ -767,7 +767,9 @@ add_depth_small_v20(struct rte_lpm_v20 *lpm,
> > uint32_t ip, uint8_t depth,
> >  * Setting tbl8 entry in one go to avoid
> >  * race conditions
> >  */
> > -   lpm->tbl8[j] = new_tbl8_entry;
> > +   __atomic_store(&lpm->tbl8[j],
> > +   &new_tbl8_entry,
> > +   __ATOMIC_RELAXED);
> >
> > continue;
> > }
> > @@ -837,7 +839,9 @@ add_depth_small_v1604(struct rte_lpm *lpm,
> > uint32_t ip, uint8_t depth,
> >  * Setting tbl8 entry in one go to avoid
> >  * race conditions
> >  */
> > -   lpm->tbl8[j] = new_tbl8_entry;
> > +   __atomic_store(&lpm->tbl8[j],
> > +   &new_tbl8_entry,
> > +   __ATOMIC_RELAXED);
> >
> > continue;
> > }
> > @@ -965,7 +969,8 @@ add_depth_big_v20(struct rte_lpm_v20 *lpm,
> > uint32_t ip_masked, uint8_t depth,
> >  * Setting tbl8 entry in one go to avoid race
> >  * condition
> >  */
> > -   lpm->tbl8[i] = new_tbl8_entry;
> > +   __atomic_store(&lpm->tbl8[i],
> > &new_tbl8_entry,
> > +   __ATOMIC_RELAXED);
> >
> > continue;
> > }
> > @@ -1100,7 +1105,8 @@ add_depth_big_v1604(struct rte_lpm *lpm,
> > uint32_t ip_masked, uint8_t depth,
> >  * Setting tbl8 entry in one go to avoid race
> >  * condition
> >  */
> > -   lpm->tbl8[i] = new_tbl8_entry;
> > +   __atomic_store(&lpm->tbl8[i],
> > &new_tbl8_entry,
> > +   __ATOMIC_RELAXED);
> >
> > continue;
> > }
> > @@ -1393,7 +1399,9 @@ delete_depth_small_v20(struct rte_lpm_v20
> *lpm,
> > uint32_t ip_masked,
> >
> > RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
> >
> > if (lpm->tbl8[j].depth <= depth)
> > -   lpm->tbl8[j] =
> > new_tbl8_entry;
> > +   __atomic_store(&lpm-
> >tbl8[j],
> > +   &new_tbl8_entry,
> > +   __ATOMIC_RELAXED);
> > }
> > }
> > }
> > @@ -1490,7 +1498,9 @@ delete_depth_small_v1604(struct rte_lpm *lpm,
> > uint32_t ip_masked,
> >
> > RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
> >
> > if (lpm->tbl8[j].depth <= depth)
> > -   lpm->tbl8[j] =
> > new_tbl8_entry;
> > +   __atomic_store(&lpm-
> >tbl8[j],
> > +   &new_tbl8_entry,
> > +   __ATOMIC_RELAXED);
> > }
> > }
> > }
> > @@ -1646,7 +1656,8 @@ delete_depth_big_v20(struct rte_lpm_v20 *lpm,
> > uint32_t ip_masked,
> >  */
> >

Re: [dpdk-dev] [PATCH v2] examples/client_server_mp: check port ownership

2019-07-07 Thread Matan Azrad
Hi Stephen

From: Stephen Hemminger 
> Sent: Sunday, July 7, 2019 7:47 PM
> To: Matan Azrad 
> Cc: anatoly.bura...@intel.com; dev@dpdk.org; Stephen Hemminger
> 
> Subject: Re: [dpdk-dev] [PATCH v2] examples/client_server_mp: check port
> ownership
> 
> On Sun, 7 Jul 2019 05:44:55 +
> Matan Azrad  wrote:
> 
> > > + for (count = 0; pm != 0; pm >>= 1, ++count) {
> > > + struct rte_eth_dev_owner owner;
> > > +
> > > + if ((pm & 0x1) == 0)
> > > + continue;
> > > +
> > > + if (count >= max_ports) {
> > > + printf("WARNING: requested port %u not present -
> > > ignoring\n",
> > > + count);
> > > + continue;
> > > + }
> > > + if (rte_eth_dev_owner_get(count, &owner) < 0) {
> > > + printf("ERROR: can not find port %u owner\n",
> > > count);
> >
> > What if some entity takes ownership later?
> > If you want the app to be ownership aware:
> > if you are sure that you want this port to be owned by this application,
> > you need to take ownership of it.
> > else:
> > the port is hidden by RTE_ETH_FOREACH_DEV if it is owned by some entity.
> > See how it was done in testpmd, in the function port_id_is_invalid().
> 
> There are no mysterious entities in DPDK.
> The only thing that can happen later is hotplug, and that will not change
> the state of an existing port.
> This model is used for all applications.  The application does not take
> ownership, only device drivers do.

Long discussions were held on this.
There is an application model to take ownership, as I wrote above.
You chose the second option - not to be ownership aware.

From the docs:
"10.4.2. Port Ownership
The Ethernet devices ports can be owned by a single DPDK entity (application, 
library, PMD, process, etc). The ownership mechanism is controlled by ethdev 
APIs and allows to set/remove/get a port owner by DPDK entities. Allowing this 
should prevent any multiple management of Ethernet port by different entities.

Note

It is the DPDK entity responsibility to set the port owner before using it and 
to manage the port usage synchronization between different threads or 
processes."

> The whole portmask as command-line parameter is a bad user experience
> now, but that is a different problem.

I think this is the problem you should solve here.


Matan
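
For reference, the ownership-aware model described above boils down to something like the sketch below, using the public ethdev ownership API (the owner name and the lack of error handling are illustrative):

#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>

/* Claim a port for this application so it is no longer picked up by
 * other DPDK entities iterating with RTE_ETH_FOREACH_DEV.
 * Returns 0 on success or a negative value on failure. */
static int
claim_port(uint16_t port_id, uint64_t *owner_id)
{
	struct rte_eth_dev_owner owner;
	int ret;

	ret = rte_eth_dev_owner_new(owner_id);
	if (ret < 0)
		return ret;

	memset(&owner, 0, sizeof(owner));
	owner.id = *owner_id;
	snprintf(owner.name, sizeof(owner.name), "mp_client_server");

	/* Fails if another entity already owns the port. */
	return rte_eth_dev_owner_set(port_id, &owner);
}

An application that takes the other path simply iterates with RTE_ETH_FOREACH_DEV and never sees ports owned by another entity.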