Re: [dpdk-dev] [EXT] Re: [PATCH] mk: add support for UBSAN
15/11/2019 15:46, Harman Kalra:
> On Mon, Nov 11, 2019 at 08:07:00AM +0100, Thomas Monjalon wrote:
> > External Email
> >
> > --
> > Hi,
> >
> > Sorry for the very late review.
> > I hope someone else would try it.
> >
> > I tried this:
> >     devtools/test-build.sh -v x86_64-native-linux-clang+shared+UBSAN+SANITIZE_ALL
> > and it triggers some link errors:
> >     /usr/bin/ld: rte_kvargs.c:(.text+0xc65): undefined reference to `__ubsan_handle_pointer_overflow'
>
> Hi,
>
> Thanks for trying it out. I came across these errors when the compiler
> version does not support UBSAN.

If the support is not available, we should print a clear error message,
not random link errors.

> Can you please retry with the latest clang version to check whether the
> issue still persists.

I am using clang 8.
[dpdk-dev] [PATCH] net/ice: fix FDIR flow type conflict
Flow type "IPv4 + UDP" or "IPv4 + TCP" conflicts with the "IPv4 + any" flow
type. If a rule for IPv4 + any is created, any rule for IPv4 + UDP should be
rejected, otherwise the first rule may be impacted; the same decision should
be made in the reverse order. The IPv6 and IPv4 GTPU inner cases have the
same limitation.

Fixes: 109e8e06249e ("net/ice: configure HW flow director rule")

Signed-off-by: Qi Zhang
---
 drivers/net/ice/ice_fdir_filter.c | 172 ++
 1 file changed, 156 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 7876f4bbc..334b84430 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -646,6 +646,153 @@ ice_fdir_teardown(struct ice_pf *pf)
 	}
 }
 
+static int ice_fdir_cur_prof_conflict(struct ice_pf *pf,
+				      enum ice_fltr_ptype ptype,
+				      struct ice_flow_seg_info *seg,
+				      bool is_tunnel)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	struct ice_flow_seg_info *ori_seg;
+	struct ice_fd_hw_prof *hw_prof;
+
+	hw_prof = hw->fdir_prof[ptype];
+	ori_seg = hw_prof->fdir_seg[is_tunnel];
+
+	/* profile does not exist */
+	if (!ori_seg)
+		return 0;
+
+	/* if no input set conflict, return -EAGAIN */
+	if ((!is_tunnel && !memcmp(ori_seg, seg, sizeof(*seg))) ||
+	    (is_tunnel && !memcmp(&ori_seg[1], &seg[1], sizeof(*seg)))) {
+		PMD_DRV_LOG(DEBUG, "Profile already exist for flow type %d.",
+			    ptype);
+		return -EAGAIN;
+	}
+
+	/* a rule with input set conflict already exist, so give up */
+	if (pf->fdir_fltr_cnt[ptype][is_tunnel]) {
+		PMD_DRV_LOG(DEBUG, "Failed to create profile for flow type %d due to conflict with exist rule.",
+			    ptype);
+		return -EINVAL;
+	}
+
+	/* it's safe to delete an empty profile */
+	ice_fdir_prof_rm(pf, ptype, is_tunnel);
+	return 0;
+}
+
+static bool ice_fdir_prof_resolve_conflict(struct ice_pf *pf,
+					   enum ice_fltr_ptype ptype,
+					   bool is_tunnel)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	struct ice_fd_hw_prof *hw_prof;
+	struct ice_flow_seg_info *seg;
+
+	hw_prof = hw->fdir_prof[ptype];
+	seg = hw_prof->fdir_seg[is_tunnel];
+
+	/* profile does not exist */
+	if (!seg)
+		return true;
+
+	/* profile exist and rule exist, fail to resolve the conflict */
+	if (pf->fdir_fltr_cnt[ptype][is_tunnel] != 0)
+		return false;
+
+	/* it's safe to delete an empty profile */
+	ice_fdir_prof_rm(pf, ptype, is_tunnel);
+
+	return true;
+}
+
+static int ice_fdir_cross_prof_conflict(struct ice_pf *pf,
+					enum ice_fltr_ptype ptype,
+					bool is_tunnel)
+{
+	enum ice_fltr_ptype cflct_ptype;
+
+	switch (ptype) {
+	/* IPv4 */
+	case ICE_FLTR_PTYPE_NONF_IPV4_UDP:
+	case ICE_FLTR_PTYPE_NONF_IPV4_TCP:
+	case ICE_FLTR_PTYPE_NONF_IPV4_SCTP:
+		cflct_ptype = ICE_FLTR_PTYPE_NONF_IPV4_OTHER;
+		if (!ice_fdir_prof_resolve_conflict(
+			pf, cflct_ptype, is_tunnel))
+			goto err;
+		break;
+	case ICE_FLTR_PTYPE_NONF_IPV4_OTHER:
+		cflct_ptype = ICE_FLTR_PTYPE_NONF_IPV4_UDP;
+		if (!ice_fdir_prof_resolve_conflict(
+			pf, cflct_ptype, is_tunnel))
+			goto err;
+		cflct_ptype = ICE_FLTR_PTYPE_NONF_IPV4_TCP;
+		if (!ice_fdir_prof_resolve_conflict(
+			pf, cflct_ptype, is_tunnel))
+			goto err;
+		cflct_ptype = ICE_FLTR_PTYPE_NONF_IPV4_SCTP;
+		if (!ice_fdir_prof_resolve_conflict(
+			pf, cflct_ptype, is_tunnel))
+			goto err;
+		break;
+	/* IPv4 GTPU */
+	case ICE_FLTR_PTYPE_NONF_IPV4_GTPU_IPV4_UDP:
+	case ICE_FLTR_PTYPE_NONF_IPV4_GTPU_IPV4_TCP:
+	case ICE_FLTR_PTYPE_NONF_IPV4_GTPU_IPV4_ICMP:
+		cflct_ptype = ICE_FLTR_PTYPE_NONF_IPV4_GTPU_IPV4_OTHER;
+		if (!ice_fdir_prof_resolve_conflict(
+			pf, cflct_ptype, is_tunnel))
+			goto err;
+		break;
+	case ICE_FLTR_PTYPE_NONF_IPV4_GTPU_IPV4_OTHER:
+		cflct_ptype = ICE_FLTR_PTYPE_NONF_IPV4_GTPU_IPV4_OTHER;
+		if (!ice_fdir_prof_resolve_conflict(
+			pf, cflct_ptype, is_tunnel))
+			goto err;
+		cflct_ptype = ICE_FLTR_PTYPE_NONF_IPV4_GTPU_IPV4_OTHER;
+
[dpdk-dev] [PATCH v2] net/ice: fix FDIR flow type conflict
Flow type "IPv4 + UDP" or "IPv4 + TCP" conflicts with the "IPv4 + any" flow
type. If a rule for IPv4 + any is created, any rule for IPv4 + UDP should be
rejected, otherwise the first rule may be impacted; the same decision should
be made in the reverse order. The IPv6 and IPv4 GTPU inner cases have the
same limitation.

Fixes: 109e8e06249e ("net/ice: configure HW flow director rule")

Signed-off-by: Qi Zhang
---
v2:
- fix checkpatch issue.

 drivers/net/ice/ice_fdir_filter.c | 172 ++
 1 file changed, 156 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 7876f4bbc..cc1d6d010 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -646,6 +646,153 @@ ice_fdir_teardown(struct ice_pf *pf)
 	}
 }
 
+static int ice_fdir_cur_prof_conflict(struct ice_pf *pf,
+				      enum ice_fltr_ptype ptype,
+				      struct ice_flow_seg_info *seg,
+				      bool is_tunnel)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	struct ice_flow_seg_info *ori_seg;
+	struct ice_fd_hw_prof *hw_prof;
+
+	hw_prof = hw->fdir_prof[ptype];
+	ori_seg = hw_prof->fdir_seg[is_tunnel];
+
+	/* profile does not exist */
+	if (!ori_seg)
+		return 0;
+
+	/* if no input set conflict, return -EAGAIN */
+	if ((!is_tunnel && !memcmp(ori_seg, seg, sizeof(*seg))) ||
+	    (is_tunnel && !memcmp(&ori_seg[1], &seg[1], sizeof(*seg)))) {
+		PMD_DRV_LOG(DEBUG, "Profile already exist for flow type %d.",
+			    ptype);
+		return -EAGAIN;
+	}
+
+	/* a rule with input set conflict already exist, so give up */
+	if (pf->fdir_fltr_cnt[ptype][is_tunnel]) {
+		PMD_DRV_LOG(DEBUG, "Failed to create profile for flow type %d due to conflict with exist rule.",
+			    ptype);
+		return -EINVAL;
+	}
+
+	/* it's safe to delete an empty profile */
+	ice_fdir_prof_rm(pf, ptype, is_tunnel);
+	return 0;
+}
+
+static bool ice_fdir_prof_resolve_conflict(struct ice_pf *pf,
+					   enum ice_fltr_ptype ptype,
+					   bool is_tunnel)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	struct ice_fd_hw_prof *hw_prof;
+	struct ice_flow_seg_info *seg;
+
+	hw_prof = hw->fdir_prof[ptype];
+	seg = hw_prof->fdir_seg[is_tunnel];
+
+	/* profile does not exist */
+	if (!seg)
+		return true;
+
+	/* profile exist and rule exist, fail to resolve the conflict */
+	if (pf->fdir_fltr_cnt[ptype][is_tunnel] != 0)
+		return false;
+
+	/* it's safe to delete an empty profile */
+	ice_fdir_prof_rm(pf, ptype, is_tunnel);
+
+	return true;
+}
+
+static int ice_fdir_cross_prof_conflict(struct ice_pf *pf,
+					enum ice_fltr_ptype ptype,
+					bool is_tunnel)
+{
+	enum ice_fltr_ptype cflct_ptype;
+
+	switch (ptype) {
+	/* IPv4 */
+	case ICE_FLTR_PTYPE_NONF_IPV4_UDP:
+	case ICE_FLTR_PTYPE_NONF_IPV4_TCP:
+	case ICE_FLTR_PTYPE_NONF_IPV4_SCTP:
+		cflct_ptype = ICE_FLTR_PTYPE_NONF_IPV4_OTHER;
+		if (!ice_fdir_prof_resolve_conflict
+			(pf, cflct_ptype, is_tunnel))
+			goto err;
+		break;
+	case ICE_FLTR_PTYPE_NONF_IPV4_OTHER:
+		cflct_ptype = ICE_FLTR_PTYPE_NONF_IPV4_UDP;
+		if (!ice_fdir_prof_resolve_conflict
+			(pf, cflct_ptype, is_tunnel))
+			goto err;
+		cflct_ptype = ICE_FLTR_PTYPE_NONF_IPV4_TCP;
+		if (!ice_fdir_prof_resolve_conflict
+			(pf, cflct_ptype, is_tunnel))
+			goto err;
+		cflct_ptype = ICE_FLTR_PTYPE_NONF_IPV4_SCTP;
+		if (!ice_fdir_prof_resolve_conflict
+			(pf, cflct_ptype, is_tunnel))
+			goto err;
+		break;
+	/* IPv4 GTPU */
+	case ICE_FLTR_PTYPE_NONF_IPV4_GTPU_IPV4_UDP:
+	case ICE_FLTR_PTYPE_NONF_IPV4_GTPU_IPV4_TCP:
+	case ICE_FLTR_PTYPE_NONF_IPV4_GTPU_IPV4_ICMP:
+		cflct_ptype = ICE_FLTR_PTYPE_NONF_IPV4_GTPU_IPV4_OTHER;
+		if (!ice_fdir_prof_resolve_conflict
+			(pf, cflct_ptype, is_tunnel))
+			goto err;
+		break;
+	case ICE_FLTR_PTYPE_NONF_IPV4_GTPU_IPV4_OTHER:
+		cflct_ptype = ICE_FLTR_PTYPE_NONF_IPV4_GTPU_IPV4_OTHER;
+		if (!ice_fdir_prof_resolve_conflict
+			(pf, cflct_ptype, is_tunnel))
+			goto err;
+		cflct_ptype = ICE_FLTR_PTY
[dpdk-dev] [PATCH 4/7] mempool/octeontx: add application domain validation
From: Pavan Nikhilesh

Add application domain validation for OcteonTx FPA vfs.

Signed-off-by: Pavan Nikhilesh
---
 drivers/mempool/octeontx/octeontx_fpavf.c | 86 +++
 1 file changed, 58 insertions(+), 28 deletions(-)

diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c
index ec84a5cff..c97267db3 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.c
+++ b/drivers/mempool/octeontx/octeontx_fpavf.c
@@ -119,20 +119,22 @@ RTE_INIT(otx_pool_init_log)
 static int
 octeontx_fpa_gpool_alloc(unsigned int object_size)
 {
+	uint16_t global_domain = octeontx_get_global_domain();
 	struct fpavf_res *res = NULL;
-	uint16_t gpool;
 	unsigned int sz128;
+	int i;
 
 	sz128 = FPA_OBJSZ_2_CACHE_LINE(object_size);
 
-	for (gpool = 0; gpool < FPA_VF_MAX; gpool++) {
+	for (i = 0; i < FPA_VF_MAX; i++) {
 
 		/* Skip VF that is not mapped Or _inuse */
-		if ((fpadev.pool[gpool].bar0 == NULL) ||
-		    (fpadev.pool[gpool].is_inuse == true))
+		if ((fpadev.pool[i].bar0 == NULL) ||
+		    (fpadev.pool[i].is_inuse == true) ||
+		    (fpadev.pool[i].domain_id != global_domain))
 			continue;
 
-		res = &fpadev.pool[gpool];
+		res = &fpadev.pool[i];
 
 		RTE_ASSERT(res->domain_id != (uint16_t)~0);
 		RTE_ASSERT(res->vf_id != (uint16_t)~0);
@@ -140,15 +142,34 @@ octeontx_fpa_gpool_alloc(unsigned int object_size)
 		if (res->sz128 == 0) {
 			res->sz128 = sz128;
+			fpavf_log_dbg("gpool %d blk_sz %d\n", res->vf_id,
+				      sz128);
 
-			fpavf_log_dbg("gpool %d blk_sz %d\n", gpool, sz128);
-			return gpool;
+			return res->vf_id;
 		}
 	}
 
 	return -ENOSPC;
 }
 
+static __rte_always_inline struct fpavf_res *
+octeontx_get_fpavf(uint16_t gpool)
+{
+	uint16_t global_domain = octeontx_get_global_domain();
+	int i;
+
+	for (i = 0; i < FPA_VF_MAX; i++) {
+		if (fpadev.pool[i].domain_id != global_domain)
+			continue;
+		if (fpadev.pool[i].vf_id != gpool)
+			continue;
+
+		return &fpadev.pool[i];
+	}
+
+	return NULL;
+}
+
 /* lock is taken by caller */
 static __rte_always_inline uintptr_t
 octeontx_fpa_gpool2handle(uint16_t gpool)
@@ -156,8 +177,10 @@ octeontx_fpa_gpool2handle(uint16_t gpool)
 	struct fpavf_res *res = NULL;
 
 	RTE_ASSERT(gpool < FPA_VF_MAX);
+	res = octeontx_get_fpavf(gpool);
+	if (res == NULL)
+		return 0;
 
-	res = &fpadev.pool[gpool];
 	return (uintptr_t)res->bar0 | gpool;
 }
 
@@ -182,7 +205,7 @@ octeontx_fpa_handle_valid(uintptr_t handle)
 			continue;
 
 		/* validate gpool */
-		if (gpool != i)
+		if (gpool != fpadev.pool[i].vf_id)
 			return false;
 
 		res = &fpadev.pool[i];
@@ -212,7 +235,10 @@ octeontx_fpapf_pool_setup(unsigned int gpool, unsigned int buf_size,
 	struct octeontx_mbox_fpa_cfg cfg;
 	int ret = -1;
 
-	fpa = &fpadev.pool[gpool];
+	fpa = octeontx_get_fpavf(gpool);
+	if (fpa == NULL)
+		return -EINVAL;
+
 	memsz = FPA_ROUND_UP(max_buf_count / fpa->stack_ln_ptr, FPA_LN_SIZE) *
 			FPA_LN_SIZE;
@@ -278,7 +304,11 @@ octeontx_fpapf_pool_destroy(unsigned int gpool_index)
 	struct fpavf_res *fpa = NULL;
 	int ret = -1;
 
-	fpa = &fpadev.pool[gpool_index];
+	fpa = octeontx_get_fpavf(gpool_index);
+	if (fpa == NULL) {
+		ret = -EINVAL;
+		goto err;
+	}
 
 	hdr.coproc = FPA_COPROC;
 	hdr.msg = FPA_CONFIGSET;
@@ -422,6 +452,7 @@ octeontx_fpapf_start_count(uint16_t gpool_index)
 static __rte_always_inline int
 octeontx_fpavf_free(unsigned int gpool)
 {
+	struct fpavf_res *res = octeontx_get_fpavf(gpool);
 	int ret = 0;
 
 	if (gpool >= FPA_MAX_POOL) {
@@ -430,7 +461,8 @@ octeontx_fpavf_free(unsigned int gpool)
 	}
 
 	/* Pool is free */
-	fpadev.pool[gpool].is_inuse = false;
+	if (res != NULL)
+		res->is_inuse = false;
 
 err:
	return ret;
@@ -439,8 +471,10 @@ octeontx_fpavf_free(unsigned int gpool)
 static __rte_always_inline int
 octeontx_gpool_free(uint16_t gpool)
 {
-	if (fpadev.pool[gpool].sz128 != 0) {
-		fpadev.pool[gpool].sz128 = 0;
+	struct fpavf_res *res = octeontx_get_fpavf(gpool);
+
+	if (res && res->sz128 != 0) {
+		res->sz128 = 0;
 		return 0;
 	}
 	return -EINVAL;
@@ -460,8 +494,8 @@ octeontx_fpa_bufpool_block_size(uintptr_t handle)
 	/* get
[dpdk-dev] [PATCH 1/7] octeontx: update mbox definition to version 1.1.3
From: Pavan Nikhilesh

Sync mailbox data structures to version 1.1.3.
Add mailbox version verification and defer initializing octeontx
devices if the mailbox version mismatches.

Signed-off-by: Pavan Nikhilesh
Reviewed-by: Jerin Jacob Kollanukkaran
---
 drivers/common/octeontx/octeontx_mbox.c      | 97 +++
 drivers/common/octeontx/octeontx_mbox.h      |  7 ++
 .../octeontx/rte_common_octeontx_version.map |  6 ++
 drivers/event/octeontx/ssovf_evdev.c         |  5 +-
 drivers/event/octeontx/ssovf_probe.c         |  2 -
 drivers/mempool/octeontx/octeontx_fpavf.c    |  1 +
 drivers/net/octeontx/base/octeontx_bgx.h     |  3 +
 drivers/net/octeontx/base/octeontx_pkivf.h   | 15 ++-
 8 files changed, 129 insertions(+), 7 deletions(-)

diff --git a/drivers/common/octeontx/octeontx_mbox.c b/drivers/common/octeontx/octeontx_mbox.c
index 880f8a40f..68cb0351f 100644
--- a/drivers/common/octeontx/octeontx_mbox.c
+++ b/drivers/common/octeontx/octeontx_mbox.c
@@ -31,6 +31,7 @@ enum {
 struct mbox {
 	int init_once;
+	uint8_t ready;
 	uint8_t *ram_mbox_base; /* Base address of mbox message stored in ram */
 	uint8_t *reg; /* Store to this register triggers PF mbox interrupt */
 	uint16_t tag_own; /* Last tag which was written to own channel */
@@ -59,6 +60,13 @@ struct mbox_ram_hdr {
 	};
 };
 
+/* MBOX interface version message */
+struct mbox_intf_ver {
+	uint32_t platform:12;
+	uint32_t major:10;
+	uint32_t minor:10;
+};
+
 int octeontx_logtype_mbox;
 
 RTE_INIT(otx_init_log)
@@ -247,3 +255,92 @@ octeontx_mbox_send(struct octeontx_mbox_hdr *hdr, void *txdata,
 
 	return mbox_send(m, hdr, txdata, txlen, rxdata, rxlen);
 }
+
+static int
+octeontx_start_domain(void)
+{
+	struct octeontx_mbox_hdr hdr = {0};
+	int result = -EINVAL;
+
+	hdr.coproc = NO_COPROC;
+	hdr.msg = RM_START_APP;
+
+	result = octeontx_mbox_send(&hdr, NULL, 0, NULL, 0);
+	if (result != 0) {
+		mbox_log_err("Could not start domain. Err=%d. FuncErr=%d\n",
+			     result, hdr.res_code);
+		result = -EINVAL;
+	}
+
+	return result;
+}
+
+static int
+octeontx_check_mbox_version(struct mbox_intf_ver app_intf_ver,
+			    struct mbox_intf_ver *intf_ver)
+{
+	struct mbox_intf_ver kernel_intf_ver = {0};
+	struct octeontx_mbox_hdr hdr = {0};
+	int result = 0;
+
+
+	hdr.coproc = NO_COPROC;
+	hdr.msg = RM_INTERFACE_VERSION;
+
+	result = octeontx_mbox_send(&hdr, &app_intf_ver, sizeof(app_intf_ver),
+				    &kernel_intf_ver, sizeof(kernel_intf_ver));
+	if (result != sizeof(kernel_intf_ver)) {
+		mbox_log_err("Could not send interface version. Err=%d. FuncErr=%d\n",
+			     result, hdr.res_code);
+		result = -EINVAL;
+	}
+
+	if (intf_ver)
+		*intf_ver = kernel_intf_ver;
+
+	if (app_intf_ver.platform != kernel_intf_ver.platform ||
+	    app_intf_ver.major != kernel_intf_ver.major ||
+	    app_intf_ver.minor != kernel_intf_ver.minor)
+		result = -EINVAL;
+
+	return result;
+}
+
+int
+octeontx_mbox_init(void)
+{
+	const struct mbox_intf_ver MBOX_INTERFACE_VERSION = {
+		.platform = 0x01,
+		.major = 0x01,
+		.minor = 0x03
+	};
+	struct mbox_intf_ver rm_intf_ver = {0};
+	struct mbox *m = &octeontx_mbox;
+	int ret;
+
+	if (m->ready)
+		return 0;
+
+	ret = octeontx_start_domain();
+	if (ret < 0) {
+		m->init_once = 0;
+		return ret;
+	}
+
+	ret = octeontx_check_mbox_version(MBOX_INTERFACE_VERSION,
+					  &rm_intf_ver);
+	if (ret < 0) {
+		mbox_log_err("MBOX version: Kernel(%d.%d.%d) != DPDK(%d.%d.%d)",
+			     rm_intf_ver.platform, rm_intf_ver.major,
+			     rm_intf_ver.minor, MBOX_INTERFACE_VERSION.platform,
+			     MBOX_INTERFACE_VERSION.major,
+			     MBOX_INTERFACE_VERSION.minor);
+		m->init_once = 0;
+		return -EINVAL;
+	}
+
+	m->ready = 1;
+	rte_mb();
+
+	return 0;
+}
diff --git a/drivers/common/octeontx/octeontx_mbox.h b/drivers/common/octeontx/octeontx_mbox.h
index 43fbda282..1f794c7f7 100644
--- a/drivers/common/octeontx/octeontx_mbox.h
+++ b/drivers/common/octeontx/octeontx_mbox.h
@@ -11,6 +11,11 @@
 #define SSOW_BAR4_LEN (64 * 1024)
 #define SSO_VHGRP_PF_MBOX(x) (0x200ULL | ((x) << 3))
 
+#define NO_COPROC            0x0
+#define RM_START_APP         0x1
+#define RM_INTERFACE_VERSION 0x2
+
+
 #define MBOX_LOG(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, octeontx_logtype_mbox,\
 		"%s()
[dpdk-dev] [PATCH 2/7] net/octeontx: add application domain validation
From: Pavan Nikhilesh

Add application domain validation for PKI and PKO vfs.

Signed-off-by: Pavan Nikhilesh
---
 drivers/common/octeontx/octeontx_mbox.c      | 15 -
 drivers/common/octeontx/octeontx_mbox.h      |  6 +-
 .../octeontx/rte_common_octeontx_version.map |  1 +
 drivers/event/octeontx/ssovf_probe.c         |  5 +-
 drivers/net/octeontx/base/octeontx_pkivf.c   | 66 +-
 drivers/net/octeontx/base/octeontx_pkivf.h   |  8 +-
 drivers/net/octeontx/base/octeontx_pkovf.c   | 37 +-
 drivers/net/octeontx/base/octeontx_pkovf.h   |  3 +
 drivers/net/octeontx/octeontx_ethdev.c       | 13 +++-
 drivers/net/octeontx/octeontx_ethdev.h       |  1 +
 10 files changed, 135 insertions(+), 20 deletions(-)

diff --git a/drivers/common/octeontx/octeontx_mbox.c b/drivers/common/octeontx/octeontx_mbox.c
index 68cb0351f..2fd253107 100644
--- a/drivers/common/octeontx/octeontx_mbox.c
+++ b/drivers/common/octeontx/octeontx_mbox.c
@@ -35,6 +35,7 @@ struct mbox {
 	uint8_t *ram_mbox_base; /* Base address of mbox message stored in ram */
 	uint8_t *reg; /* Store to this register triggers PF mbox interrupt */
 	uint16_t tag_own; /* Last tag which was written to own channel */
+	uint16_t domain; /* Domain */
 	rte_spinlock_t lock;
 };
@@ -198,7 +199,7 @@ mbox_send(struct mbox *m, struct octeontx_mbox_hdr *hdr, const void *txmsg,
 }
 
 int
-octeontx_mbox_set_ram_mbox_base(uint8_t *ram_mbox_base)
+octeontx_mbox_set_ram_mbox_base(uint8_t *ram_mbox_base, uint16_t domain)
 {
 	struct mbox *m = &octeontx_mbox;
@@ -215,13 +216,14 @@ octeontx_mbox_set_ram_mbox_base(uint8_t *ram_mbox_base)
 	if (m->reg != NULL) {
 		rte_spinlock_init(&m->lock);
 		m->init_once = 1;
+		m->domain = domain;
 	}
 
 	return 0;
 }
 
 int
-octeontx_mbox_set_reg(uint8_t *reg)
+octeontx_mbox_set_reg(uint8_t *reg, uint16_t domain)
 {
 	struct mbox *m = &octeontx_mbox;
@@ -238,6 +240,7 @@ octeontx_mbox_set_reg(uint8_t *reg)
 	if (m->ram_mbox_base != NULL) {
 		rte_spinlock_init(&m->lock);
 		m->init_once = 1;
+		m->domain = domain;
 	}
 
 	return 0;
@@ -344,3 +347,11 @@ octeontx_mbox_init(void)
 
 	return 0;
 }
+
+uint16_t
+octeontx_get_global_domain(void)
+{
+	struct mbox *m = &octeontx_mbox;
+
+	return m->domain;
+}
diff --git a/drivers/common/octeontx/octeontx_mbox.h b/drivers/common/octeontx/octeontx_mbox.h
index 1f794c7f7..e56719cb8 100644
--- a/drivers/common/octeontx/octeontx_mbox.h
+++ b/drivers/common/octeontx/octeontx_mbox.h
@@ -36,8 +36,10 @@ struct octeontx_mbox_hdr {
 };
 
 int octeontx_mbox_init(void);
-int octeontx_mbox_set_ram_mbox_base(uint8_t *ram_mbox_base);
-int octeontx_mbox_set_reg(uint8_t *reg);
+void octeontx_set_global_domain(uint16_t global_domain);
+uint16_t octeontx_get_global_domain(void);
+int octeontx_mbox_set_ram_mbox_base(uint8_t *ram_mbox_base, uint16_t domain);
+int octeontx_mbox_set_reg(uint8_t *reg, uint16_t domain);
 int octeontx_mbox_send(struct octeontx_mbox_hdr *hdr, void *txdata,
 		       uint16_t txlen, void *rxdata, uint16_t rxlen);
diff --git a/drivers/common/octeontx/rte_common_octeontx_version.map b/drivers/common/octeontx/rte_common_octeontx_version.map
index fdc036a62..2bba7cc93 100644
--- a/drivers/common/octeontx/rte_common_octeontx_version.map
+++ b/drivers/common/octeontx/rte_common_octeontx_version.map
@@ -10,4 +10,5 @@ DPDK_19.08 {
 	global:
 
 	octeontx_mbox_init;
+	octeontx_get_global_domain;
 };
diff --git a/drivers/event/octeontx/ssovf_probe.c b/drivers/event/octeontx/ssovf_probe.c
index 9252998c1..4da7d1ae4 100644
--- a/drivers/event/octeontx/ssovf_probe.c
+++ b/drivers/event/octeontx/ssovf_probe.c
@@ -181,7 +181,8 @@ ssowvf_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 	sdev.total_ssowvfs++;
 	if (vfid == 0) {
 		ram_mbox_base = ssovf_bar(OCTEONTX_SSO_HWS, 0, 4);
-		if (octeontx_mbox_set_ram_mbox_base(ram_mbox_base)) {
+		if (octeontx_mbox_set_ram_mbox_base(ram_mbox_base,
+						    res->domain)) {
 			mbox_log_err("Invalid Failed to set ram mbox base");
 			return -EINVAL;
 		}
@@ -257,7 +258,7 @@ ssovf_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 	if (vfid == 0) {
 		reg = ssovf_bar(OCTEONTX_SSO_GROUP, 0, 0);
 		reg += SSO_VHGRP_PF_MBOX(1);
-		if (octeontx_mbox_set_reg(reg)) {
+		if (octeontx_mbox_set_reg(reg, res->domain)) {
 			mbox_log_err("Invalid Failed to set mbox_reg");
 			return -EINVAL;
 		}
diff --git a/drivers/net/octeontx/base/octeontx_pkivf.c b/drivers/net/octeontx
[dpdk-dev] [PATCH 3/7] net/octeontx: cleanup redundant mbox structs
From: Pavan Nikhilesh

Clean up redundant mailbox structures.

Signed-off-by: Pavan Nikhilesh
---
 drivers/net/octeontx/base/octeontx_pkivf.c |  25 +--
 drivers/net/octeontx/base/octeontx_pkivf.h | 242 +++--
 2 files changed, 43 insertions(+), 224 deletions(-)

diff --git a/drivers/net/octeontx/base/octeontx_pkivf.c b/drivers/net/octeontx/base/octeontx_pkivf.c
index 783b2a2e5..8ce041955 100644
--- a/drivers/net/octeontx/base/octeontx_pkivf.c
+++ b/drivers/net/octeontx/base/octeontx_pkivf.c
@@ -30,9 +30,7 @@ octeontx_pki_port_open(int port)
 {
 	uint16_t global_domain = octeontx_get_global_domain();
 	struct octeontx_mbox_hdr hdr;
-	mbox_pki_port_t port_type = {
-		.port_type = OCTTX_PORT_TYPE_NET,
-	};
+	pki_port_type_t port_type;
 	int i, res;
 
 	/* Check if atleast one PKI vf is in application domain. */
@@ -45,11 +43,12 @@ octeontx_pki_port_open(int port)
 	if (i == PKI_VF_MAX)
 		return -ENODEV;
 
+	port_type.port_type = OCTTX_PORT_TYPE_NET;
 	hdr.coproc = OCTEONTX_PKI_COPROC;
 	hdr.msg = MBOX_PKI_PORT_OPEN;
 	hdr.vfid = port;
 
-	res = octeontx_mbox_send(&hdr, &port_type, sizeof(mbox_pki_port_t),
+	res = octeontx_mbox_send(&hdr, &port_type, sizeof(pki_port_type_t),
 				 NULL, 0);
 	if (res < 0)
 		return -EACCES;
@@ -62,8 +61,8 @@ octeontx_pki_port_hash_config(int port, pki_hash_cfg_t *hash_cfg)
 	struct octeontx_mbox_hdr hdr;
 	int res;
 
-	mbox_pki_hash_cfg_t h_cfg = *(mbox_pki_hash_cfg_t *)hash_cfg;
-	int len = sizeof(mbox_pki_hash_cfg_t);
+	pki_hash_cfg_t h_cfg = *(pki_hash_cfg_t *)hash_cfg;
+	int len = sizeof(pki_hash_cfg_t);
 
 	hdr.coproc = OCTEONTX_PKI_COPROC;
 	hdr.msg = MBOX_PKI_PORT_HASH_CONFIG;
@@ -82,8 +81,8 @@ octeontx_pki_port_pktbuf_config(int port, pki_pktbuf_cfg_t *buf_cfg)
 	struct octeontx_mbox_hdr hdr;
 	int res;
 
-	mbox_pki_pktbuf_cfg_t b_cfg = *(mbox_pki_pktbuf_cfg_t *)buf_cfg;
-	int len = sizeof(mbox_pki_pktbuf_cfg_t);
+	pki_pktbuf_cfg_t b_cfg = *(pki_pktbuf_cfg_t *)buf_cfg;
+	int len = sizeof(pki_pktbuf_cfg_t);
 
 	hdr.coproc = OCTEONTX_PKI_COPROC;
 	hdr.msg = MBOX_PKI_PORT_PKTBUF_CONFIG;
@@ -101,8 +100,8 @@ octeontx_pki_port_create_qos(int port, pki_qos_cfg_t *qos_cfg)
 	struct octeontx_mbox_hdr hdr;
 	int res;
 
-	mbox_pki_qos_cfg_t q_cfg = *(mbox_pki_qos_cfg_t *)qos_cfg;
-	int len = sizeof(mbox_pki_qos_cfg_t);
+	pki_qos_cfg_t q_cfg = *(pki_qos_cfg_t *)qos_cfg;
+	int len = sizeof(pki_qos_cfg_t);
 
 	hdr.coproc = OCTEONTX_PKI_COPROC;
 	hdr.msg = MBOX_PKI_PORT_CREATE_QOS;
@@ -122,9 +121,9 @@ octeontx_pki_port_errchk_config(int port, pki_errchk_cfg_t *cfg)
 	struct octeontx_mbox_hdr hdr;
 	int res;
 
-	mbox_pki_errcheck_cfg_t e_cfg;
-	e_cfg = *((mbox_pki_errcheck_cfg_t *)(cfg));
-	int len = sizeof(mbox_pki_errcheck_cfg_t);
+	pki_errchk_cfg_t e_cfg;
+	e_cfg = *((pki_errchk_cfg_t *)(cfg));
+	int len = sizeof(pki_errchk_cfg_t);
 
 	hdr.coproc = OCTEONTX_PKI_COPROC;
 	hdr.msg = MBOX_PKI_PORT_ERRCHK_CONFIG;
diff --git a/drivers/net/octeontx/base/octeontx_pkivf.h b/drivers/net/octeontx/base/octeontx_pkivf.h
index c2a944404..d541dc3bd 100644
--- a/drivers/net/octeontx/base/octeontx_pkivf.h
+++ b/drivers/net/octeontx/base/octeontx_pkivf.h
@@ -39,15 +39,6 @@
 
 #define MBOX_PKI_MAX_QOS_ENTRY 64
 
-/* pki pkind parse mode */
-enum {
-	MBOX_PKI_PARSE_LA_TO_LG = 0,
-	MBOX_PKI_PARSE_LB_TO_LG = 1,
-	MBOX_PKI_PARSE_LC_TO_LG = 3,
-	MBOX_PKI_PARSE_LG = 0x3f,
-	MBOX_PKI_PARSE_NOTHING = 0x7f
-};
-
 /* PKI maximum constants */
 #define PKI_VF_MAX (32)
 #define PKI_MAX_PKTLEN (32768)
@@ -60,189 +51,37 @@ enum {
 	OCTTX_PORT_TYPE_MAX
 };
 
-/* pki port config */
-typedef struct mbox_pki_port_type {
-	uint8_t port_type;
-} mbox_pki_port_t;
-
-/* pki port config */
-typedef struct mbox_pki_port_cfg {
-	uint8_t port_type;
-	struct {
-		uint8_t fcs_pres:1;
-		uint8_t fcs_skip:1;
-		uint8_t inst_skip:1;
-		uint8_t parse_mode:1;
-		uint8_t mpls_parse:1;
-		uint8_t inst_hdr_parse:1;
-		uint8_t fulc_parse:1;
-		uint8_t dsa_parse:1;
-		uint8_t hg2_parse:1;
-		uint8_t hg_parse:1;
-	} mmask;
-	uint8_t fcs_pres;
-	uint8_t fcs_skip;
-	uint8_t inst_skip;
-	uint8_t parse_mode;
-	uint8_t mpls_parse;
-	uint8_t inst_hdr_parse;
-	uint8_t fulc_parse;
-	uint8_t dsa_parse;
-	uint8_t hg2_parse;
-	uint8_t hg_parse;
-} mbox_pki_prt_cfg_t;
-
-/* pki Flow/style packet buffer config */
-typedef struct mbox_pki_port_pktbuf_cfg {
-	uint8_t port_type;
-	stru
[dpdk-dev] [PATCH 6/7] net/octeontx: make Rx queue offloads same as dev offloads
From: Pavan Nikhilesh

Make Rx queue-specific offloads the same as device Rx offloads.

Signed-off-by: Pavan Nikhilesh
---
 drivers/net/octeontx/octeontx_ethdev.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index c2258d136..679803dd4 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -604,6 +604,8 @@ octeontx_dev_info(struct rte_eth_dev *dev,
 
 	dev_info->rx_offload_capa = OCTEONTX_RX_OFFLOADS;
 	dev_info->tx_offload_capa = OCTEONTX_TX_OFFLOADS;
+	dev_info->rx_queue_offload_capa = OCTEONTX_RX_OFFLOADS;
+	dev_info->tx_queue_offload_capa = OCTEONTX_TX_OFFLOADS;
 
 	return 0;
 }
--
2.24.0
[dpdk-dev] [PATCH 5/7] event/octeontx: add application domain validation
From: Pavan Nikhilesh

Add application domain validation for OcteonTx TIM vfs, a.k.a. the event
timer.

Signed-off-by: Pavan Nikhilesh
---
 drivers/event/octeontx/timvf_evdev.c | 12 ++---
 drivers/event/octeontx/timvf_evdev.h |  8 +---
 drivers/event/octeontx/timvf_probe.c | 65 ++--
 3 files changed, 49 insertions(+), 36 deletions(-)

diff --git a/drivers/event/octeontx/timvf_evdev.c b/drivers/event/octeontx/timvf_evdev.c
index abbc9a775..caa129087 100644
--- a/drivers/event/octeontx/timvf_evdev.c
+++ b/drivers/event/octeontx/timvf_evdev.c
@@ -231,17 +231,15 @@ timvf_ring_create(struct rte_event_timer_adapter *adptr)
 {
 	char pool_name[25];
 	int ret;
+	uint8_t tim_ring_id;
 	uint64_t nb_timers;
 	struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
 	struct timvf_ring *timr;
-	struct timvf_info tinfo;
 	const char *mempool_ops;
 	unsigned int mp_flags = 0;
 
-	if (timvf_info(&tinfo) < 0)
-		return -ENODEV;
-
-	if (adptr->data->id >= tinfo.total_timvfs)
+	tim_ring_id = timvf_get_ring();
+	if (tim_ring_id == UINT8_MAX)
 		return -ENODEV;
 
 	timr = rte_zmalloc("octeontx_timvf_priv",
@@ -259,7 +257,7 @@ timvf_ring_create(struct rte_event_timer_adapter *adptr)
 	}
 
 	timr->clk_src = (int) rcfg->clk_src;
-	timr->tim_ring_id = adptr->data->id;
+	timr->tim_ring_id = tim_ring_id;
 	timr->tck_nsec = RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10);
 	timr->max_tout = rcfg->max_tmo_ns;
 	timr->nb_bkts = (timr->max_tout / timr->tck_nsec);
@@ -337,8 +335,10 @@ static int
 timvf_ring_free(struct rte_event_timer_adapter *adptr)
 {
 	struct timvf_ring *timr = adptr->data->adapter_priv;
+
 	rte_mempool_free(timr->chunk_pool);
 	rte_free(timr->bkt);
+	timvf_release_ring(timr->tim_ring_id);
 	rte_free(adptr->data->adapter_priv);
 	return 0;
 }
diff --git a/drivers/event/octeontx/timvf_evdev.h b/drivers/event/octeontx/timvf_evdev.h
index 0185593f1..d0e5921db 100644
--- a/drivers/event/octeontx/timvf_evdev.h
+++ b/drivers/event/octeontx/timvf_evdev.h
@@ -115,11 +115,6 @@ extern int otx_logtype_timvf;
 
 static const uint16_t nb_chunk_slots = (TIM_CHUNK_SIZE / 16) - 1;
 
-struct timvf_info {
-	uint16_t domain; /* Domain id */
-	uint8_t total_timvfs; /* Total timvf available in domain */
-};
-
 enum timvf_clk_src {
 	TIM_CLK_SRC_SCLK = RTE_EVENT_TIMER_ADAPTER_CPU_CLK,
 	TIM_CLK_SRC_GPIO = RTE_EVENT_TIMER_ADAPTER_EXT_CLK0,
@@ -196,7 +191,8 @@ bkt_and(uint32_t rel_bkt, uint32_t nb_bkts)
 	return rel_bkt & (nb_bkts - 1);
 }
 
-int timvf_info(struct timvf_info *tinfo);
+uint8_t timvf_get_ring(void);
+void timvf_release_ring(uint8_t vfid);
 void *timvf_bar(uint8_t id, uint8_t bar);
 int timvf_timer_adapter_caps_get(const struct rte_eventdev *dev, uint64_t flags,
 		uint32_t *caps, const struct rte_event_timer_adapter_ops **ops,
diff --git a/drivers/event/octeontx/timvf_probe.c b/drivers/event/octeontx/timvf_probe.c
index af87625fd..59bba31e8 100644
--- a/drivers/event/octeontx/timvf_probe.c
+++ b/drivers/event/octeontx/timvf_probe.c
@@ -20,6 +20,7 @@
 #define TIM_MAX_RINGS (64)
 
 struct timvf_res {
+	uint8_t in_use;
 	uint16_t domain;
 	uint16_t vfid;
 	void *bar0;
@@ -34,50 +35,65 @@ struct timdev {
 
 static struct timdev tdev;
 
-int
-timvf_info(struct timvf_info *tinfo)
+uint8_t
+timvf_get_ring(void)
 {
+	uint16_t global_domain = octeontx_get_global_domain();
 	int i;
-	struct ssovf_info info;
 
-	if (tinfo == NULL)
-		return -EINVAL;
+	for (i = 0; i < tdev.total_timvfs; i++) {
+		if (tdev.rings[i].domain != global_domain)
+			continue;
+		if (tdev.rings[i].in_use)
+			continue;
 
-	if (!tdev.total_timvfs)
-		return -ENODEV;
+		tdev.rings[i].in_use = true;
+		return tdev.rings[i].vfid;
+	}
 
-	if (ssovf_info(&info) < 0)
-		return -EINVAL;
+	return UINT8_MAX;
+}
+
+void
+timvf_release_ring(uint8_t tim_ring_id)
+{
+	uint16_t global_domain = octeontx_get_global_domain();
+	int i;
 
 	for (i = 0; i < tdev.total_timvfs; i++) {
-		if (info.domain != tdev.rings[i].domain) {
-			timvf_log_err("GRP error, vfid=%d/%d domain=%d/%d %p",
-				i, tdev.rings[i].vfid,
-				info.domain, tdev.rings[i].domain,
-				tdev.rings[i].bar0);
-			return -EINVAL;
-		}
+		if (tdev.rings[i].domain != global_domain)
+			continue;
+		if (tdev.rings[i].vfid == tim_ring_id)
+			tdev.rings[i].in_use = false;
 	}
-
-	tinfo->total_timvfs =
[dpdk-dev] [PATCH 0/7] octeontx: sync with latest SDK
From: Pavan Nikhilesh

Sync the octeontx mailbox with the latest available SDK version (10.1.2.x).

Pavan Nikhilesh (7):
  octeontx: upgrade mbox definition to version 1.1.3
  net/octeontx: add application domain validation
  net/octeontx: cleanup redundant mbox structs
  mempool/octeontx: add application domain validation
  event/octeontx: add application domain validation
  net/octeontx: make rx queue offloads same as dev offloads
  doc: update OcteonTx limitations

 doc/guides/eventdevs/octeontx.rst             |   7 +
 doc/guides/nics/octeontx.rst                  |   7 +
 drivers/common/octeontx/octeontx_mbox.c       | 112 +++-
 drivers/common/octeontx/octeontx_mbox.h       |  13 +-
 .../octeontx/rte_common_octeontx_version.map  |   7 +
 drivers/event/octeontx/ssovf_evdev.c          |   5 +-
 drivers/event/octeontx/ssovf_probe.c          |   7 +-
 drivers/event/octeontx/timvf_evdev.c          |  12 +-
 drivers/event/octeontx/timvf_evdev.h          |   8 +-
 drivers/event/octeontx/timvf_probe.c          |  65 +++--
 drivers/mempool/octeontx/octeontx_fpavf.c     |  87 --
 drivers/net/octeontx/base/octeontx_bgx.h      |   3 +
 drivers/net/octeontx/base/octeontx_pkivf.c    |  83 +-
 drivers/net/octeontx/base/octeontx_pkivf.h    | 249 +++---
 drivers/net/octeontx/base/octeontx_pkovf.c    |  37 ++-
 drivers/net/octeontx/base/octeontx_pkovf.h    |   3 +
 drivers/net/octeontx/octeontx_ethdev.c        |  15 +-
 drivers/net/octeontx/octeontx_ethdev.h        |   1 +
 18 files changed, 418 insertions(+), 303 deletions(-)

--
2.24.0
[dpdk-dev] [PATCH 7/7] doc: update OcteonTx limitations
From: Pavan Nikhilesh

Update the OcteonTx limitations with the maximum mempool size that can
be used.

Signed-off-by: Pavan Nikhilesh
---
 doc/guides/eventdevs/octeontx.rst | 7 +++
 doc/guides/nics/octeontx.rst      | 7 +++
 2 files changed, 14 insertions(+)

diff --git a/doc/guides/eventdevs/octeontx.rst b/doc/guides/eventdevs/octeontx.rst
index ab36a36e0..587b7a427 100644
--- a/doc/guides/eventdevs/octeontx.rst
+++ b/doc/guides/eventdevs/octeontx.rst
@@ -139,3 +139,10 @@ follows:
 
 When timvf is used as Event timer adapter event schedule type
 ``RTE_SCHED_TYPE_PARALLEL`` is not supported.
+
+Max mempool size
+~~~~~~~~~~~~~~~~
+
+Max mempool size when using OcteonTx Eventdev (SSO) should be limited to 128K.
+When running dpdk-test-eventdev on OcteonTx the application can limit the
+number of mbufs by using the option ``--pool_sz 131072``.
diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
index 3c19c912d..00098a3b2 100644
--- a/doc/guides/nics/octeontx.rst
+++ b/doc/guides/nics/octeontx.rst
@@ -174,3 +174,10 @@ The OCTEON TX SoC family NICs support a maximum of a 32K jumbo frame. The value
 is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
 member of ``struct rte_eth_conf`` is set to a value lower than 32k, frames
 up to 32k bytes can still reach the host interface.
+
+Maximum mempool size
+~~~~~~~~~~~~~~~~~~~~
+
+The maximum mempool size supplied to Rx queue setup should be less than 128K.
+When running testpmd on OcteonTx the application can limit the number of mbufs
+by using the option ``--total-num-mbufs=131072``.
--
2.24.0