RE: [PATCH 1/2] examples/ipsec-secgw: fix width of variables
> 'rte_eth_rx_burst' returns uint16_t. The same value need to be passed > to 'process_packets' functions which performs further processing. Having > this function use 'uint8_t' can result in issues when MAX_PKT_BURST is > larger. > > The route functions (route4_pkts & route6_pkts) take uint8_t as the > argument. The caller can pass larger values as the field that is passed > is of type uint32_t. And the function can work with uint32_t as it loops > through the packets and sends it out. Using uint8_t can result in silent > packet drops. > > Fixes: 4fbfa6c7c921 ("examples/ipsec-secgw: update eth header during route > lookup") > > Signed-off-by: Anoob Joseph > --- > examples/ipsec-secgw/ipsec-secgw.c | 5 ++--- > examples/ipsec-secgw/ipsec_worker.h | 4 ++-- > 2 files changed, 4 insertions(+), 5 deletions(-) > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c > b/examples/ipsec-secgw/ipsec-secgw.c > index bf98d2618b..a61bea400a 100644 > --- a/examples/ipsec-secgw/ipsec-secgw.c > +++ b/examples/ipsec-secgw/ipsec-secgw.c > @@ -568,7 +568,7 @@ process_pkts_outbound_nosp(struct ipsec_ctx *ipsec_ctx, > > static inline void > process_pkts(struct lcore_conf *qconf, struct rte_mbuf **pkts, > - uint8_t nb_pkts, uint16_t portid, void *ctx) > + uint16_t nb_pkts, uint16_t portid, void *ctx) > { > struct ipsec_traffic traffic; > > @@ -695,8 +695,7 @@ ipsec_poll_mode_worker(void) > struct rte_mbuf *pkts[MAX_PKT_BURST]; > uint32_t lcore_id; > uint64_t prev_tsc, diff_tsc, cur_tsc; > - int32_t i, nb_rx; > - uint16_t portid; > + uint16_t i, nb_rx, portid; > uint8_t queueid; > struct lcore_conf *qconf; > int32_t rc, socket_id; > diff --git a/examples/ipsec-secgw/ipsec_worker.h > b/examples/ipsec-secgw/ipsec_worker.h > index ac980b8bcf..8e937fda3e 100644 > --- a/examples/ipsec-secgw/ipsec_worker.h > +++ b/examples/ipsec-secgw/ipsec_worker.h > @@ -469,7 +469,7 @@ get_hop_for_offload_pkt(struct rte_mbuf *pkt, int is_ipv6) > > static __rte_always_inline void > route4_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], > - uint8_t nb_pkts, uint64_t tx_offloads, bool ip_cksum) > + uint32_t nb_pkts, uint64_t tx_offloads, bool ip_cksum) > { > uint32_t hop[MAX_PKT_BURST * 2]; > uint32_t dst_ip[MAX_PKT_BURST * 2]; > @@ -557,7 +557,7 @@ route4_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf > *pkts[], > } > > static __rte_always_inline void > -route6_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts) > +route6_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint32_t nb_pkts) > { > int32_t hop[MAX_PKT_BURST * 2]; > uint8_t dst_ip[MAX_PKT_BURST * 2][16]; > -- Acked-by: Konstantin Ananyev > 2.25.1
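For illustration, a minimal sketch of the silent narrowing the patch removes (the numbers are hypothetical, not taken from the patch): when MAX_PKT_BURST is configured larger than 255, a 16-bit burst count passed through a uint8_t parameter is truncated and part of the burst is never processed.

#include <stdint.h>
#include <stdio.h>

/* old-style prototype: burst count narrowed to 8 bits */
static void process_burst(uint8_t nb_pkts)
{
    printf("processing %u packets\n", nb_pkts);
}

int main(void)
{
    uint16_t nb_rx = 320;   /* possible once MAX_PKT_BURST > 255 */

    process_burst(nb_rx);   /* prints 64 (320 & 0xff); the other 256 packets are never handled */
    return 0;
}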
[PATCH v2] lib/dmadev: get DMA device using device ID
The DMA library has a function to get a DMA device based on the device name,
but there is no function to get a DMA device using the device id. Add a
function that looks up the DMA device by device id and returns a pointer to it.

Signed-off-by: Amit Prakash Shukla
Acked-by: Chengwen Feng
---
v2:
- Arranged api in alphabetical order in version.map

 lib/dmadev/rte_dmadev.c     |  9 +
 lib/dmadev/rte_dmadev_pmd.h | 14 ++
 lib/dmadev/version.map      |  1 +
 3 files changed, 24 insertions(+)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index 4e5e420c82..83f49e77f2 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -397,6 +397,15 @@ rte_dma_is_valid(int16_t dev_id)
 		rte_dma_devices[dev_id].state != RTE_DMA_DEV_UNUSED;
 }

+struct rte_dma_dev *
+rte_dma_pmd_get_dev_by_id(const int dev_id)
+{
+	if (!rte_dma_is_valid(dev_id))
+		return NULL;
+
+	return &rte_dma_devices[dev_id];
+}
+
 uint16_t
 rte_dma_count_avail(void)
 {
diff --git a/lib/dmadev/rte_dmadev_pmd.h b/lib/dmadev/rte_dmadev_pmd.h
index c61cedfb23..f68c3ac6aa 100644
--- a/lib/dmadev/rte_dmadev_pmd.h
+++ b/lib/dmadev/rte_dmadev_pmd.h
@@ -167,6 +167,20 @@ struct rte_dma_dev *rte_dma_pmd_allocate(const char *name, int numa_node,
 __rte_internal
 int rte_dma_pmd_release(const char *name);

+/**
+ * @internal
+ * Get the rte_dma_dev structure device pointer for the device id.
+ *
+ * @param dev_id
+ *   Device ID value to select the device structure.
+ *
+ * @return
+ *   - rte_dma_dev structure pointer for the given device ID on success, NULL
+ *     otherwise.
+ */
+__rte_internal
+struct rte_dma_dev *rte_dma_pmd_get_dev_by_id(const int dev_id);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index 2a3736514c..046dbfa988 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -25,6 +25,7 @@ INTERNAL {
 	rte_dma_fp_objs;
 	rte_dma_pmd_allocate;
+	rte_dma_pmd_get_dev_by_id;
 	rte_dma_pmd_release;

 	local: *;
--
2.17.1
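A minimal usage sketch of the new helper as seen from a driver (the function name and error handling below are illustrative only, not part of the patch):

#include <errno.h>
#include <rte_dmadev_pmd.h>

static int
my_pmd_inspect(int16_t dev_id)
{
    struct rte_dma_dev *dev = rte_dma_pmd_get_dev_by_id(dev_id);

    if (dev == NULL)
        return -EINVAL; /* invalid id, or device in unused state */

    /* the PMD can now work directly on the internal device structure */
    return 0;
}

Since the symbol is exported in the INTERNAL section of version.map and tagged __rte_internal, it is meant for drivers and other internal DPDK code, not for applications.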
[Bug 1336] Statistic counter rx_missed_errors always shows zero no matter how large the traffic was generated on Mellanox NICs
https://bugs.dpdk.org/show_bug.cgi?id=1336

            Bug ID: 1336
           Summary: Statistic counter rx_missed_errors always shows zero no
                    matter how large the traffic was generated on Mellanox
                    NICs
           Product: DPDK
           Version: 22.11
          Hardware: x86
                OS: Linux
            Status: UNCONFIRMED
          Severity: critical
          Priority: Normal
         Component: ethdev
          Assignee: dev@dpdk.org
          Reporter: pony...@ericsson.com
  Target Milestone: ---

Created attachment 266
  --> https://bugs.dpdk.org/attachment.cgi?id=266&action=edit
dpdk imitation log in data-plane pod

Our product uses DPDK as a library for the data-plane pod on the vcloud
platform. We observed that if the data-plane pod does not mount the host path
/sys/device into the container, the "rx_missed_errors" counter always shows
zero, no matter how much traffic is generated on the Mellanox NIC.

Our customers do not want the application to access any host paths, so we had
to remove the /sys/device mount. As a result the data-plane pod misses the
"rx_missed_errors" metric, which degrades debuggability for packet loss.

I went through the DPDK code around the function "mlx5_os_read_dev_stat" and
attached gdb to the DPDK process. It shows that the pointer "priv->q_counters"
is NULL and that neither of the two paths below contains an "out_of_buffer"
file:

(gdb) p priv->q_counters
$4 = (struct mlx5_devx_obj *) 0x0
(gdb) info local
mkstr_size_path1 = 54
path1 = "/sys/class/infiniband/mlx5_2/hw_counters/out_of_buffer"
mkstr_size_path = 62
path = "/sys/class/infiniband/mlx5_2/ports/1/hw_counters/out_of_buffer"

Note: our NIC is Mellanox and SR-IOV VFs are used for the pod.

We expect the statistic counter "rx_missed_errors" to work without mounting
the host path /sys/device on Mellanox NICs.
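For context, a rough sketch (not the mlx5 driver code) of the sysfs fallback described above; when neither path is visible inside the container, the read fails and the counter stays at zero:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static int
read_out_of_buffer(const char *ibdev, uint64_t *val)
{
    char path[128];
    FILE *f;

    snprintf(path, sizeof(path),
        "/sys/class/infiniband/%s/ports/1/hw_counters/out_of_buffer", ibdev);
    f = fopen(path, "r");
    if (f == NULL)
        return -1;  /* file not present in the container */
    if (fscanf(f, "%" SCNu64, val) != 1) {
        fclose(f);
        return -1;
    }
    fclose(f);
    return 0;
}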
[PATCH] kernel/freebsd: fix module build on FreeBSD 14
When building the nic_uio module on FreeBSD 14, a build error is given in the
DRIVER_MODULE macro:

.../nic_uio.c:84:81: error: too many arguments provided to function-like macro invocation
DRIVER_MODULE(nic_uio, pci, nic_uio_driver, nic_uio_devclass, nic_uio_modevent, 0);
                                                                                ^

On FreeBSD 14, the devclass parameter is dropped from the macro, so we
conditionally compile a different invocation for BSD versions before/after
v14.

Bugzilla ID: 1335

Signed-off-by: Bruce Richardson
---
 kernel/freebsd/nic_uio/nic_uio.c | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/kernel/freebsd/nic_uio/nic_uio.c b/kernel/freebsd/nic_uio/nic_uio.c
index 7a81694c92..0043892870 100644
--- a/kernel/freebsd/nic_uio/nic_uio.c
+++ b/kernel/freebsd/nic_uio/nic_uio.c
@@ -78,10 +78,14 @@ struct pci_bdf {
 	uint32_t function;
 };

-static devclass_t nic_uio_devclass;
-
 DEFINE_CLASS_0(nic_uio, nic_uio_driver, nic_uio_methods, sizeof(struct nic_uio_softc));
+
+#if __FreeBSD_version < 140
+static devclass_t nic_uio_devclass;
 DRIVER_MODULE(nic_uio, pci, nic_uio_driver, nic_uio_devclass, nic_uio_modevent, 0);
+#else
+DRIVER_MODULE(nic_uio, pci, nic_uio_driver, nic_uio_modevent, 0);
+#endif

 static int
 nic_uio_mmap(struct cdev *cdev, vm_ooffset_t offset, vm_paddr_t *paddr,
--
2.42.0
RE: [PATCH v2 4/6] examples/ipsec-secgw: fix lcore ID restriction
> Currently the config option allows lcore IDs up to 255, > irrespective of RTE_MAX_LCORES and needs to be fixed. > > The patch allows config options based on DPDK config. > > Fixes: d299106e8e31 ("examples/ipsec-secgw: add IPsec sample application") > Cc: sergio.gonzalez.mon...@intel.com > Cc: sta...@dpdk.org > > Signed-off-by: Sivaprasad Tummala > --- > examples/ipsec-secgw/event_helper.h | 2 +- > examples/ipsec-secgw/ipsec-secgw.c | 16 +--- > examples/ipsec-secgw/ipsec.c| 2 +- > 3 files changed, 11 insertions(+), 9 deletions(-) > > diff --git a/examples/ipsec-secgw/event_helper.h > b/examples/ipsec-secgw/event_helper.h > index dfb81bfcf1..9923700f03 100644 > --- a/examples/ipsec-secgw/event_helper.h > +++ b/examples/ipsec-secgw/event_helper.h > @@ -102,7 +102,7 @@ struct eh_event_link_info { > /**< Event port ID */ > uint8_t eventq_id; > /**< Event queue to be linked to the port */ > - uint8_t lcore_id; > + uint16_t lcore_id; > /**< Lcore to be polling on this port */ > }; > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c > b/examples/ipsec-secgw/ipsec-secgw.c > index bf98d2618b..6f550db05c 100644 > --- a/examples/ipsec-secgw/ipsec-secgw.c > +++ b/examples/ipsec-secgw/ipsec-secgw.c > @@ -221,7 +221,7 @@ static const char *cfgfile; > struct lcore_params { > uint16_t port_id; > uint8_t queue_id; > - uint8_t lcore_id; > + uint16_t lcore_id; > } __rte_cache_aligned; > > static struct lcore_params lcore_params_array[MAX_LCORE_PARAMS]; > @@ -810,7 +810,7 @@ check_flow_params(uint16_t fdir_portid, uint8_t fdir_qid) > static int32_t > check_poll_mode_params(struct eh_conf *eh_conf) > { > - uint8_t lcore; > + uint16_t lcore; > uint16_t portid; > uint16_t i; > int32_t socket_id; > @@ -829,13 +829,13 @@ check_poll_mode_params(struct eh_conf *eh_conf) > for (i = 0; i < nb_lcore_params; ++i) { > lcore = lcore_params[i].lcore_id; > if (!rte_lcore_is_enabled(lcore)) { > - printf("error: lcore %hhu is not enabled in " > + printf("error: lcore %hu is not enabled in " > "lcore mask\n", lcore); > return -1; > } > socket_id = rte_lcore_to_socket_id(lcore); > if (socket_id != 0 && numa_on == 0) { > - printf("warning: lcore %hhu is on socket %d " > + printf("warning: lcore %hu is on socket %d " > "with numa off\n", > lcore, socket_id); > } > @@ -870,7 +870,7 @@ static int32_t > init_lcore_rx_queues(void) > { > uint16_t i, nb_rx_queue; > - uint8_t lcore; > + uint16_t lcore; > > for (i = 0; i < nb_lcore_params; ++i) { > lcore = lcore_params[i].lcore_id; > @@ -1051,6 +1051,8 @@ parse_config(const char *q_arg) > char *str_fld[_NUM_FLD]; > int32_t i; > uint32_t size; > + unsigned int max_fld[_NUM_FLD] = {RTE_MAX_ETHPORTS, > + 255, RTE_MAX_LCORE}; > > nb_lcore_params = 0; > > @@ -1071,7 +1073,7 @@ parse_config(const char *q_arg) > for (i = 0; i < _NUM_FLD; i++) { > errno = 0; > int_fld[i] = strtoul(str_fld[i], &end, 0); > - if (errno != 0 || end == str_fld[i] || int_fld[i] > 255) > + if (errno != 0 || end == str_fld[i] || int_fld[i] > > max_fld[i]) > return -1; > } > if (nb_lcore_params >= MAX_LCORE_PARAMS) { > @@ -1084,7 +1086,7 @@ parse_config(const char *q_arg) > lcore_params_array[nb_lcore_params].queue_id = > (uint8_t)int_fld[FLD_QUEUE]; > lcore_params_array[nb_lcore_params].lcore_id = > - (uint8_t)int_fld[FLD_LCORE]; > + (uint16_t)int_fld[FLD_LCORE]; > ++nb_lcore_params; > } > lcore_params = lcore_params_array; > diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c > index f5cec4a928..5ebb71bb9a 100644 > --- a/examples/ipsec-secgw/ipsec.c > +++ b/examples/ipsec-secgw/ipsec.c > @@ -259,7 
+259,7 @@ create_lookaside_session(struct ipsec_ctx > *ipsec_ctx_lcore[], > continue; > > /* Looking for cryptodev, which can handle this SA */ > - key.lcore_id = (uint8_t)lcore_id; > + key.lcore_id = (uint16_t)lcore_id; > key.cipher_algo = (uint8_t)sa->cipher_algo; > key.auth_algo = (uint8_t)sa->auth_algo; > key.aead_algo = (uint8_t)sa->aead_algo; > -- Acked-by: Konstantin Ananyev > 2.25.1
RE: [PATCH v2 1/6] examples/l3fwd: fix lcore ID restriction
> Currently the config option allows lcore IDs up to 255, > irrespective of RTE_MAX_LCORES and needs to be fixed. > > The patch allows config options based on DPDK config. > > Fixes: af75078fece3 ("first public release") > Cc: sta...@dpdk.org > > Signed-off-by: Sivaprasad Tummala > --- > examples/l3fwd/main.c | 19 +++ > 1 file changed, 11 insertions(+), 8 deletions(-) > > diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c > index 3bf28aec0c..ed116da09c 100644 > --- a/examples/l3fwd/main.c > +++ b/examples/l3fwd/main.c > @@ -99,7 +99,7 @@ struct parm_cfg parm_config; > struct lcore_params { > uint16_t port_id; > uint8_t queue_id; > - uint8_t lcore_id; > + uint16_t lcore_id; > } __rte_cache_aligned; > > static struct lcore_params lcore_params_array[MAX_LCORE_PARAMS]; > @@ -292,8 +292,8 @@ setup_l3fwd_lookup_tables(void) > static int > check_lcore_params(void) > { > - uint8_t queue, lcore; > - uint16_t i; > + uint8_t queue; > + uint16_t i, lcore; > int socketid; > > for (i = 0; i < nb_lcore_params; ++i) { > @@ -304,12 +304,12 @@ check_lcore_params(void) > } > lcore = lcore_params[i].lcore_id; > if (!rte_lcore_is_enabled(lcore)) { > - printf("error: lcore %hhu is not enabled in lcore > mask\n", lcore); > + printf("error: lcore %hu is not enabled in lcore > mask\n", lcore); > return -1; > } > if ((socketid = rte_lcore_to_socket_id(lcore) != 0) && > (numa_on == 0)) { > - printf("warning: lcore %hhu is on socket %d with numa > off \n", > + printf("warning: lcore %hu is on socket %d with numa > off\n", > lcore, socketid); > } > } > @@ -359,7 +359,7 @@ static int > init_lcore_rx_queues(void) > { > uint16_t i, nb_rx_queue; > - uint8_t lcore; > + uint16_t lcore; > > for (i = 0; i < nb_lcore_params; ++i) { > lcore = lcore_params[i].lcore_id; > @@ -500,6 +500,8 @@ parse_config(const char *q_arg) > char *str_fld[_NUM_FLD]; > int i; > unsigned size; > + unsigned int max_fld[_NUM_FLD] = {RTE_MAX_ETHPORTS, > + 255, RTE_MAX_LCORE}; > > nb_lcore_params = 0; > > @@ -518,7 +520,8 @@ parse_config(const char *q_arg) > for (i = 0; i < _NUM_FLD; i++){ > errno = 0; > int_fld[i] = strtoul(str_fld[i], &end, 0); > - if (errno != 0 || end == str_fld[i] || int_fld[i] > 255) > + if (errno != 0 || end == str_fld[i] || int_fld[i] > > + > max_fld[i]) > return -1; > } > if (nb_lcore_params >= MAX_LCORE_PARAMS) { > @@ -531,7 +534,7 @@ parse_config(const char *q_arg) > lcore_params_array[nb_lcore_params].queue_id = > (uint8_t)int_fld[FLD_QUEUE]; > lcore_params_array[nb_lcore_params].lcore_id = > - (uint8_t)int_fld[FLD_LCORE]; > + (uint16_t)int_fld[FLD_LCORE]; > ++nb_lcore_params; > } > lcore_params = lcore_params_array; > -- Acked-by: Konstantin Ananyev > 2.25.1
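For reference, the option being relaxed is the --config=(port,queue,lcore) triple. A hypothetical invocation that the old 255 limit would reject, but which is valid on a build with RTE_MAX_LCORE large enough:

./dpdk-l3fwd -l 0,300,301 -n 4 -- -p 0x3 --config="(0,0,300),(1,0,301)"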
RE: [PATCH v2 1/6] examples/l3fwd: fix lcore ID restriction
> > > Currently the config option allows lcore IDs up to 255, > > irrespective of RTE_MAX_LCORES and needs to be fixed. > > > > The patch allows config options based on DPDK config. > > > > Fixes: af75078fece3 ("first public release") > > Cc: sta...@dpdk.org > > > > Signed-off-by: Sivaprasad Tummala > > --- > > examples/l3fwd/main.c | 19 +++ > > 1 file changed, 11 insertions(+), 8 deletions(-) > > > > diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c > > index 3bf28aec0c..ed116da09c 100644 > > --- a/examples/l3fwd/main.c > > +++ b/examples/l3fwd/main.c > > @@ -99,7 +99,7 @@ struct parm_cfg parm_config; > > struct lcore_params { > > uint16_t port_id; > > uint8_t queue_id; Actually one comment: As lcore_id becomes uint16_t it might be worth to do the same queue_id, they usually are very much related. > > - uint8_t lcore_id; > > + uint16_t lcore_id; > > } __rte_cache_aligned; > > > > static struct lcore_params lcore_params_array[MAX_LCORE_PARAMS]; > > @@ -292,8 +292,8 @@ setup_l3fwd_lookup_tables(void) > > static int > > check_lcore_params(void) > > { > > - uint8_t queue, lcore; > > - uint16_t i; > > + uint8_t queue; > > + uint16_t i, lcore; > > int socketid; > > > > for (i = 0; i < nb_lcore_params; ++i) { > > @@ -304,12 +304,12 @@ check_lcore_params(void) > > } > > lcore = lcore_params[i].lcore_id; > > if (!rte_lcore_is_enabled(lcore)) { > > - printf("error: lcore %hhu is not enabled in lcore > > mask\n", lcore); > > + printf("error: lcore %hu is not enabled in lcore > > mask\n", lcore); > > return -1; > > } > > if ((socketid = rte_lcore_to_socket_id(lcore) != 0) && > > (numa_on == 0)) { > > - printf("warning: lcore %hhu is on socket %d with numa > > off \n", > > + printf("warning: lcore %hu is on socket %d with numa > > off\n", > > lcore, socketid); > > } > > } > > @@ -359,7 +359,7 @@ static int > > init_lcore_rx_queues(void) > > { > > uint16_t i, nb_rx_queue; > > - uint8_t lcore; > > + uint16_t lcore; > > > > for (i = 0; i < nb_lcore_params; ++i) { > > lcore = lcore_params[i].lcore_id; > > @@ -500,6 +500,8 @@ parse_config(const char *q_arg) > > char *str_fld[_NUM_FLD]; > > int i; > > unsigned size; > > + unsigned int max_fld[_NUM_FLD] = {RTE_MAX_ETHPORTS, > > + 255, RTE_MAX_LCORE}; > > > > nb_lcore_params = 0; > > > > @@ -518,7 +520,8 @@ parse_config(const char *q_arg) > > for (i = 0; i < _NUM_FLD; i++){ > > errno = 0; > > int_fld[i] = strtoul(str_fld[i], &end, 0); > > - if (errno != 0 || end == str_fld[i] || int_fld[i] > 255) > > + if (errno != 0 || end == str_fld[i] || int_fld[i] > > > + > > max_fld[i]) > > return -1; > > } > > if (nb_lcore_params >= MAX_LCORE_PARAMS) { > > @@ -531,7 +534,7 @@ parse_config(const char *q_arg) > > lcore_params_array[nb_lcore_params].queue_id = > > (uint8_t)int_fld[FLD_QUEUE]; > > lcore_params_array[nb_lcore_params].lcore_id = > > - (uint8_t)int_fld[FLD_LCORE]; > > + (uint16_t)int_fld[FLD_LCORE]; > > ++nb_lcore_params; > > } > > lcore_params = lcore_params_array; > > -- > > Acked-by: Konstantin Ananyev > > > > 2.25.1
RE: [PATCH v2 1/6] examples/l3fwd: fix lcore ID restriction
[AMD Official Use Only - General] Hi Konstantin, > -Original Message- > From: Konstantin Ananyev > Sent: Tuesday, December 19, 2023 6:00 PM > To: Konstantin Ananyev ; Tummala, Sivaprasad > ; david.h...@intel.com; > anatoly.bura...@intel.com; jer...@marvell.com; radu.nico...@intel.com; > gak...@marvell.com; cristian.dumitre...@intel.com; Yigit, Ferruh > > Cc: dev@dpdk.org; sta...@dpdk.org > Subject: RE: [PATCH v2 1/6] examples/l3fwd: fix lcore ID restriction > > Caution: This message originated from an External Source. Use proper caution > when opening attachments, clicking links, or responding. > > > > > > > Currently the config option allows lcore IDs up to 255, irrespective > > > of RTE_MAX_LCORES and needs to be fixed. > > > > > > The patch allows config options based on DPDK config. > > > > > > Fixes: af75078fece3 ("first public release") > > > Cc: sta...@dpdk.org > > > > > > Signed-off-by: Sivaprasad Tummala > > > --- > > > examples/l3fwd/main.c | 19 +++ > > > 1 file changed, 11 insertions(+), 8 deletions(-) > > > > > > diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c index > > > 3bf28aec0c..ed116da09c 100644 > > > --- a/examples/l3fwd/main.c > > > +++ b/examples/l3fwd/main.c > > > @@ -99,7 +99,7 @@ struct parm_cfg parm_config; struct lcore_params > > > { > > > uint16_t port_id; > > > uint8_t queue_id; > > Actually one comment: > As lcore_id becomes uint16_t it might be worth to do the same queue_id, they > usually are very much related. Yes, that's a valid statement for one network interface. With multiple interfaces, it's a combination of port/queue that maps to a specific lcore. If there a NICs that support more than 256 queues, then it makes sense to change the queue_id type as well. Please let me know your thoughts. > > > > - uint8_t lcore_id; > > > + uint16_t lcore_id; > > > } __rte_cache_aligned; > > > > > > static struct lcore_params lcore_params_array[MAX_LCORE_PARAMS]; > > > @@ -292,8 +292,8 @@ setup_l3fwd_lookup_tables(void) static int > > > check_lcore_params(void) > > > { > > > - uint8_t queue, lcore; > > > - uint16_t i; > > > + uint8_t queue; > > > + uint16_t i, lcore; > > > int socketid; > > > > > > for (i = 0; i < nb_lcore_params; ++i) { @@ -304,12 +304,12 @@ > > > check_lcore_params(void) > > > } > > > lcore = lcore_params[i].lcore_id; > > > if (!rte_lcore_is_enabled(lcore)) { > > > - printf("error: lcore %hhu is not enabled in lcore > > > mask\n", lcore); > > > + printf("error: lcore %hu is not enabled in lcore > > > + mask\n", lcore); > > > return -1; > > > } > > > if ((socketid = rte_lcore_to_socket_id(lcore) != 0) && > > > (numa_on == 0)) { > > > - printf("warning: lcore %hhu is on socket %d with numa > > > off \n", > > > + printf("warning: lcore %hu is on socket %d with > > > + numa off\n", > > > lcore, socketid); > > > } > > > } > > > @@ -359,7 +359,7 @@ static int > > > init_lcore_rx_queues(void) > > > { > > > uint16_t i, nb_rx_queue; > > > - uint8_t lcore; > > > + uint16_t lcore; > > > > > > for (i = 0; i < nb_lcore_params; ++i) { > > > lcore = lcore_params[i].lcore_id; @@ -500,6 +500,8 @@ > > > parse_config(const char *q_arg) > > > char *str_fld[_NUM_FLD]; > > > int i; > > > unsigned size; > > > + unsigned int max_fld[_NUM_FLD] = {RTE_MAX_ETHPORTS, > > > + 255, RTE_MAX_LCORE}; > > > > > > nb_lcore_params = 0; > > > > > > @@ -518,7 +520,8 @@ parse_config(const char *q_arg) > > > for (i = 0; i < _NUM_FLD; i++){ > > > errno = 0; > > > int_fld[i] = strtoul(str_fld[i], &end, 0); > > > - if (errno != 0 || end == str_fld[i] || int_fld[i] > > > > 255) > > > 
+ if (errno != 0 || end == str_fld[i] || int_fld[i] > > > > + > > > + max_fld[i]) > > > return -1; > > > } > > > if (nb_lcore_params >= MAX_LCORE_PARAMS) { @@ -531,7 > > > +534,7 @@ parse_config(const char *q_arg) > > > lcore_params_array[nb_lcore_params].queue_id = > > > (uint8_t)int_fld[FLD_QUEUE]; > > > lcore_params_array[nb_lcore_params].lcore_id = > > > - (uint8_t)int_fld[FLD_LCORE]; > > > + (uint16_t)int_fld[FLD_LCORE]; > > > ++nb_lcore_params; > > > } > > > lcore_params = lcore_params_array; > > > -- > > > > Acked-by: Konstantin Ananyev > > > > > > > 2.25.1
RE: [PATCH v2 1/6] examples/l3fwd: fix lcore ID restriction
Hi Sivaprasad, > > Hi Konstantin, > > > -Original Message- > > From: Konstantin Ananyev > > Sent: Tuesday, December 19, 2023 6:00 PM > > To: Konstantin Ananyev ; Tummala, Sivaprasad > > ; david.h...@intel.com; > > anatoly.bura...@intel.com; jer...@marvell.com; radu.nico...@intel.com; > > gak...@marvell.com; cristian.dumitre...@intel.com; Yigit, Ferruh > > > > Cc: dev@dpdk.org; sta...@dpdk.org > > Subject: RE: [PATCH v2 1/6] examples/l3fwd: fix lcore ID restriction > > > > Caution: This message originated from an External Source. Use proper caution > > when opening attachments, clicking links, or responding. > > > > > > > > > > > Currently the config option allows lcore IDs up to 255, irrespective > > > > of RTE_MAX_LCORES and needs to be fixed. > > > > > > > > The patch allows config options based on DPDK config. > > > > > > > > Fixes: af75078fece3 ("first public release") > > > > Cc: sta...@dpdk.org > > > > > > > > Signed-off-by: Sivaprasad Tummala > > > > --- > > > > examples/l3fwd/main.c | 19 +++ > > > > 1 file changed, 11 insertions(+), 8 deletions(-) > > > > > > > > diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c index > > > > 3bf28aec0c..ed116da09c 100644 > > > > --- a/examples/l3fwd/main.c > > > > +++ b/examples/l3fwd/main.c > > > > @@ -99,7 +99,7 @@ struct parm_cfg parm_config; struct lcore_params > > > > { > > > > uint16_t port_id; > > > > uint8_t queue_id; > > > > Actually one comment: > > As lcore_id becomes uint16_t it might be worth to do the same queue_id, they > > usually are very much related. > Yes, that's a valid statement for one network interface. > With multiple interfaces, it's a combination of port/queue that maps to a > specific lcore. > If there a NICs that support more than 256 queues, then it makes sense to > change the > queue_id type as well. AFAIK, majority of modern NICs do support more than 256 queues. That's why in rte_ethev API queue_id is uint16_t. > > Please let me know your thoughts. 
> > > > > > - uint8_t lcore_id; > > > > + uint16_t lcore_id; > > > > } __rte_cache_aligned; > > > > > > > > static struct lcore_params lcore_params_array[MAX_LCORE_PARAMS]; > > > > @@ -292,8 +292,8 @@ setup_l3fwd_lookup_tables(void) static int > > > > check_lcore_params(void) > > > > { > > > > - uint8_t queue, lcore; > > > > - uint16_t i; > > > > + uint8_t queue; > > > > + uint16_t i, lcore; > > > > int socketid; > > > > > > > > for (i = 0; i < nb_lcore_params; ++i) { @@ -304,12 +304,12 @@ > > > > check_lcore_params(void) > > > > } > > > > lcore = lcore_params[i].lcore_id; > > > > if (!rte_lcore_is_enabled(lcore)) { > > > > - printf("error: lcore %hhu is not enabled in lcore > > > > mask\n", lcore); > > > > + printf("error: lcore %hu is not enabled in lcore > > > > + mask\n", lcore); > > > > return -1; > > > > } > > > > if ((socketid = rte_lcore_to_socket_id(lcore) != 0) && > > > > (numa_on == 0)) { > > > > - printf("warning: lcore %hhu is on socket %d with > > > > numa off \n", > > > > + printf("warning: lcore %hu is on socket %d with > > > > + numa off\n", > > > > lcore, socketid); > > > > } > > > > } > > > > @@ -359,7 +359,7 @@ static int > > > > init_lcore_rx_queues(void) > > > > { > > > > uint16_t i, nb_rx_queue; > > > > - uint8_t lcore; > > > > + uint16_t lcore; > > > > > > > > for (i = 0; i < nb_lcore_params; ++i) { > > > > lcore = lcore_params[i].lcore_id; @@ -500,6 +500,8 @@ > > > > parse_config(const char *q_arg) > > > > char *str_fld[_NUM_FLD]; > > > > int i; > > > > unsigned size; > > > > + unsigned int max_fld[_NUM_FLD] = {RTE_MAX_ETHPORTS, > > > > + 255, RTE_MAX_LCORE}; > > > > > > > > nb_lcore_params = 0; > > > > > > > > @@ -518,7 +520,8 @@ parse_config(const char *q_arg) > > > > for (i = 0; i < _NUM_FLD; i++){ > > > > errno = 0; > > > > int_fld[i] = strtoul(str_fld[i], &end, 0); > > > > - if (errno != 0 || end == str_fld[i] || int_fld[i] > > > > > 255) > > > > + if (errno != 0 || end == str_fld[i] || int_fld[i] > > > > > + > > > > + max_fld[i]) > > > > return -1; > > > > } > > > > if (nb_lcore_params >= MAX_LCORE_PARAMS) { @@ -531,7 > > > > +534,7 @@ parse_config(const char *q_arg) > > > > lcore_params_array[nb_lcore_params].queue_id = > > > > (uint8_t)int_fld[FLD_QUEUE]; > > > > lcore_params_array[nb_lcore_params].lcore_id = > > > > - (uint8_t)int_fld[FLD_LCORE]; > > > > + (uint16_t)int_fld[FLD_LCORE]; > > > > ++nb_lcore_params; > > > > } > > > > lcore_params = lco
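For reference, the ethdev API already uses a 16-bit queue index; the Rx queue setup prototype in rte_ethdev.h is:

int rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
        uint16_t nb_rx_desc, unsigned int socket_id,
        const struct rte_eth_rxconf *rx_conf,
        struct rte_mempool *mb_pool);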
[PATCH 1/2] doc: updated incorrect value for IP frag max fragments
Docs for IP Fragment said RTE_LIBRTE_IP_FRAG_MAX_FRAGS was 4 by default;
however, this was changed to 8. Update the documentation to account for this,
and include a snippet of the code where RTE_LIBRTE_IP_FRAG_MAX_FRAGS is
defined so the documentation stays up to date.

Signed-off-by: Euan Bourke
---
 .mailmap                                              | 1 +
 doc/guides/prog_guide/ip_fragment_reassembly_lib.rst  | 7 ++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/.mailmap b/.mailmap
index ab0742a382..528bc68a30 100644
--- a/.mailmap
+++ b/.mailmap
@@ -379,6 +379,7 @@ Eric Zhang
 Erik Gabriel Carrillo
 Erik Ziegenbalg
 Erlu Chen
+Euan Bourke
 Eugenio Pérez
 Eugeny Parshutin
 Evan Swanson
diff --git a/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst b/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst
index 314d4adbb8..458d7c6776 100644
--- a/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst
+++ b/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst
@@ -43,7 +43,12 @@ Note that all update/lookup operations on Fragment Table are not thread safe.
 So if different execution contexts (threads/processes) will access the same table simultaneously,
 then some external syncing mechanism have to be provided.

-Each table entry can hold information about packets consisting of up to RTE_LIBRTE_IP_FRAG_MAX (by default: 4) fragments.
+Each table entry can hold information about packets of up to ``RTE_LIBRTE_IP_FRAG_MAX_FRAGS`` fragments,
+where ``RTE_LIBRTE_IP_FRAG_MAX_FRAGS`` defaults to:
+
+.. literalinclude:: ../../../config/rte_config.h
+    :start-after: /* ip_fragmentation defines */
+    :lines: 1

 Code example, that demonstrates creation of a new Fragment table:
--
2.34.1
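For reference, the region the new literalinclude directive renders is the line that follows the marker comment in config/rte_config.h, which at the time of this patch reads:

/* ip_fragmentation defines */
#define RTE_LIBRTE_IP_FRAG_MAX_FRAG 8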
[PATCH 2/2] ip_frag: updated name for IP frag define
Removed LIBRTE from name as its an old prefix. Signed-off-by: Euan Bourke --- app/test/test_reassembly_perf.c | 2 +- config/rte_config.h | 2 +- doc/guides/prog_guide/ip_fragment_reassembly_lib.rst | 4 ++-- doc/guides/sample_app_ug/ip_reassembly.rst | 4 ++-- examples/ip_fragmentation/main.c | 2 +- examples/ip_reassembly/main.c| 2 +- examples/ipsec-secgw/ipsec_worker.h | 2 +- lib/ip_frag/ip_reassembly.h | 2 +- lib/ip_frag/rte_ip_frag.h| 2 +- 9 files changed, 11 insertions(+), 11 deletions(-) diff --git a/app/test/test_reassembly_perf.c b/app/test/test_reassembly_perf.c index 3912179022..805ae2fe9d 100644 --- a/app/test/test_reassembly_perf.c +++ b/app/test/test_reassembly_perf.c @@ -20,7 +20,7 @@ #define MAX_FLOWS (1024 * 32) #define MAX_BKTS MAX_FLOWS #define MAX_ENTRIES_PER_BKT 16 -#define MAX_FRAGMENTS RTE_LIBRTE_IP_FRAG_MAX_FRAG +#define MAX_FRAGMENTS RTE_IP_FRAG_MAX_FRAG #define MIN_FRAGMENTS 2 #define MAX_PKTS (MAX_FLOWS * MAX_FRAGMENTS) diff --git a/config/rte_config.h b/config/rte_config.h index da265d7dd2..e2fa2a58fa 100644 --- a/config/rte_config.h +++ b/config/rte_config.h @@ -85,7 +85,7 @@ #define RTE_RAWDEV_MAX_DEVS 64 /* ip_fragmentation defines */ -#define RTE_LIBRTE_IP_FRAG_MAX_FRAG 8 +#define RTE_IP_FRAG_MAX_FRAG 8 // RTE_LIBRTE_IP_FRAG_TBL_STAT is not set /* rte_power defines */ diff --git a/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst b/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst index 458d7c6776..230baeaa19 100644 --- a/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst +++ b/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst @@ -43,8 +43,8 @@ Note that all update/lookup operations on Fragment Table are not thread safe. So if different execution contexts (threads/processes) will access the same table simultaneously, then some external syncing mechanism have to be provided. -Each table entry can hold information about packets of up to ``RTE_LIBRTE_IP_FRAG_MAX_FRAGS`` fragments, -where ``RTE_LIBRTE_IP_FRAG_MAX_FRAGS`` defaults to: +Each table entry can hold information about packets of up to ``RTE_IP_FRAG_MAX_FRAGS`` fragments, +where ``RTE_IP_FRAG_MAX_FRAGS`` defaults to: .. literalinclude:: ../../../config/rte_config.h :start-after: /* ip_fragmentation defines */ diff --git a/doc/guides/sample_app_ug/ip_reassembly.rst b/doc/guides/sample_app_ug/ip_reassembly.rst index 5280bf4ea0..9cf4fb3f7d 100644 --- a/doc/guides/sample_app_ug/ip_reassembly.rst +++ b/doc/guides/sample_app_ug/ip_reassembly.rst @@ -135,7 +135,7 @@ Fragment table maintains information about already received fragments of the pac Each IP packet is uniquely identified by triple , , . To avoid lock contention, each RX queue has its own Fragment Table, e.g. the application can't handle the situation when different fragments of the same packet arrive through different RX queues. -Each table entry can hold information about packet consisting of up to RTE_LIBRTE_IP_FRAG_MAX_FRAGS fragments. +Each table entry can hold information about packet consisting of up to RTE_IP_FRAG_MAX_FRAGS fragments. .. literalinclude:: ../../../examples/ip_reassembly/main.c :language: c @@ -147,7 +147,7 @@ Mempools Initialization ~~~ The reassembly application demands a lot of mbuf's to be allocated. -At any given time up to (2 \* max_flow_num \* RTE_LIBRTE_IP_FRAG_MAX_FRAGS \* ) +At any given time up to (2 \* max_flow_num \* RTE_IP_FRAG_MAX_FRAGS \* ) can be stored inside Fragment Table waiting for remaining fragments. 
To keep mempool size under reasonable limits and to avoid situation when one RX queue can starve other queues, each RX queue uses its own mempool. diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c index 744a1aa9b4..1e4471891b 100644 --- a/examples/ip_fragmentation/main.c +++ b/examples/ip_fragmentation/main.c @@ -71,7 +71,7 @@ /* * Max number of fragments per packet expected - defined by config file. */ -#defineMAX_PACKET_FRAG RTE_LIBRTE_IP_FRAG_MAX_FRAG +#defineMAX_PACKET_FRAG RTE_IP_FRAG_MAX_FRAG #define NB_MBUF 8192 diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c index bd0b1d31de..16607d99f3 100644 --- a/examples/ip_reassembly/main.c +++ b/examples/ip_reassembly/main.c @@ -69,7 +69,7 @@ #defineMIN_FLOW_TTL1 #defineDEF_FLOW_TTLMS_PER_S -#define MAX_FRAG_NUM RTE_LIBRTE_IP_FRAG_MAX_FRAG +#define MAX_FRAG_NUM RTE_IP_FRAG_MAX_FRAG /* Should be power of two. */ #defineIP_FRAG_TBL_BUCKET_ENTRIES 16 diff --git a/examples/ipsec-secgw/ipsec_worker.h b/examples/ipsec-secgw/ipsec_worker.h index ac980b8bcf..918e6b5200 100644 --- a/examples/ipsec-secgw/ipsec_worker.h +++ b/examples/ipsec-sec
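Out-of-tree code that still references the old name could, if needed, keep building with a small compatibility shim such as the following (purely illustrative, not part of this patch):

#ifndef RTE_LIBRTE_IP_FRAG_MAX_FRAG
#define RTE_LIBRTE_IP_FRAG_MAX_FRAG RTE_IP_FRAG_MAX_FRAG
#endif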
Re: [PATCH v4 11/14] log: add a per line log helper
18/12/2023 15:38, David Marchand:
> +#ifdef RTE_TOOLCHAIN_GCC
> +#define RTE_LOG_CHECK_NO_NEWLINE(fmt) \
> +	static_assert(!__builtin_strchr(fmt, '\n'), \
> +		"This log format string contains a \\n")
> +#else
> +#define RTE_LOG_CHECK_NO_NEWLINE(...)
> +#endif

No support in clang?

> +#define RTE_LOG_LINE(l, t, ...) do { \
> +	RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__ ,)); \
> +	RTE_LOG(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \
> +		RTE_FMT_TAIL(__VA_ARGS__ ,))); \
> +} while (0)
> +
> +#define RTE_LOG_DP_LINE(l, t, ...) do { \
> +	RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__ ,)); \
> +	RTE_LOG_DP(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \
> +		RTE_FMT_TAIL(__VA_ARGS__ ,))); \
> +} while (0)

I don't think we need a space between __VA_ARGS__ and the comma.
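A self-contained sketch of the GCC-only check being discussed (simplified: the real macro goes through RTE_FMT_HEAD/RTE_FMT_TAIL and RTE_LOG):

#include <assert.h>
#include <stdio.h>

/*
 * On GCC, __builtin_strchr() on a string literal folds to a constant,
 * so a stray '\n' in the format can be rejected at compile time.
 */
#define LOG_LINE(fmt, ...) do { \
    static_assert(!__builtin_strchr(fmt, '\n'), \
        "log format must not contain a newline"); \
    printf(fmt "\n", ##__VA_ARGS__); \
} while (0)

int main(void)
{
    LOG_LINE("hello %d", 42);   /* OK: the macro appends the newline */
    /* LOG_LINE("bad\n"); */    /* would fail to build with GCC */
    return 0;
}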
[PATCH 0/4] add new QAT gen3 device
This patchset adds support for a new gen3 QuickAssist device. There are some
changes for this device in comparison to the existing gen3 implementation:

- DES and Kasumi removed from capabilities.
- ZUC256 added to capabilities.
- New device ID.
- New CMAC macros included.
- Some algorithms moved to wireless slice (SNOW3G, ZUC, AES-CMAC).

This patchset covers symmetric crypto, so a check has been added so that the
asymmetric and compression PMDs skip this gen3 device only.

Documentation will be updated in a subsequent version of the patchset.

Ciara Power (4):
  crypto/qat: add new gen3 device
  crypto/qat: add zuc256 wireless slice for gen3
  crypto/qat: add new gen3 CMAC macros
  crypto/qat: disable asym and compression for new gen3 device

 drivers/common/qat/qat_adf/icp_qat_fw.h      |   3 +-
 drivers/common/qat/qat_adf/icp_qat_fw_la.h   |  24 +++
 drivers/common/qat/qat_adf/icp_qat_hw.h      |  23 ++-
 drivers/common/qat/qat_device.c              |  13 ++
 drivers/common/qat/qat_device.h              |   2 +
 drivers/compress/qat/qat_comp_pmd.c          |   3 +-
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c |   1 +
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c |  57 ++-
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c |   2 +-
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h |  44 -
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c    |  15 ++
 drivers/crypto/qat/qat_asym.c                |   3 +-
 drivers/crypto/qat/qat_sym_session.c         | 164 +--
 drivers/crypto/qat/qat_sym_session.h         |   2 +
 14 files changed, 332 insertions(+), 24 deletions(-)

--
2.25.1
[PATCH 1/4] crypto/qat: add new gen3 device
Add new gen3 QAT device ID. This device has a wireless slice, but other gen3 devices do not, so we must set a flag to indicate this wireless enabled device. Capabilities for the device are slightly different from base gen3 capabilities, some are removed from the list for this device. Signed-off-by: Ciara Power --- drivers/common/qat/qat_device.c | 13 + drivers/common/qat/qat_device.h | 2 ++ drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 11 +++ 3 files changed, 26 insertions(+) diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c index f55dc3c6f0..0e7d387d78 100644 --- a/drivers/common/qat/qat_device.c +++ b/drivers/common/qat/qat_device.c @@ -53,6 +53,9 @@ static const struct rte_pci_id pci_id_qat_map[] = { { RTE_PCI_DEVICE(0x8086, 0x18a1), }, + { + RTE_PCI_DEVICE(0x8086, 0x578b), + }, { RTE_PCI_DEVICE(0x8086, 0x4941), }, @@ -194,6 +197,7 @@ pick_gen(const struct rte_pci_device *pci_dev) case 0x18ef: return QAT_GEN2; case 0x18a1: + case 0x578b: return QAT_GEN3; case 0x4941: case 0x4943: @@ -205,6 +209,12 @@ pick_gen(const struct rte_pci_device *pci_dev) } } +static int +wireless_slice_support(uint16_t pci_dev_id) +{ + return pci_dev_id == 0x578b; +} + struct qat_pci_device * qat_pci_device_allocate(struct rte_pci_device *pci_dev, struct qat_dev_cmd_param *qat_dev_cmd_param) @@ -282,6 +292,9 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev, qat_dev->qat_dev_id = qat_dev_id; qat_dev->qat_dev_gen = qat_dev_gen; + if (wireless_slice_support(pci_dev->id.device_id)) + qat_dev->has_wireless_slice = 1; + ops_hw = qat_dev_hw_spec[qat_dev->qat_dev_gen]; NOT_NULL(ops_hw->qat_dev_get_misc_bar, goto error, "QAT internal error! qat_dev_get_misc_bar function not set"); diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h index aa7988bb74..43e4752812 100644 --- a/drivers/common/qat/qat_device.h +++ b/drivers/common/qat/qat_device.h @@ -135,6 +135,8 @@ struct qat_pci_device { /**< Per generation specific information */ uint32_t slice_map; /**< Map of the crypto and compression slices */ + uint16_t has_wireless_slice; + /**< Wireless Slices supported */ }; struct qat_gen_hw_data { diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c index 02bcdb06b1..bc53e2e0f1 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c @@ -255,6 +255,17 @@ qat_sym_crypto_cap_get_gen3(struct qat_cryptodev_private *internals, RTE_CRYPTO_AUTH_SM3_HMAC))) { continue; } + if (internals->qat_dev->has_wireless_slice && ( + check_auth_capa(&capabilities[iter], + RTE_CRYPTO_AUTH_KASUMI_F9) || + check_cipher_capa(&capabilities[iter], + RTE_CRYPTO_CIPHER_KASUMI_F8) || + check_cipher_capa(&capabilities[iter], + RTE_CRYPTO_CIPHER_DES_CBC) || + check_cipher_capa(&capabilities[iter], + RTE_CRYPTO_CIPHER_DES_DOCSISBPI))) + continue; + memcpy(addr + curr_capa, capabilities + iter, sizeof(struct rte_cryptodev_capabilities)); curr_capa++; -- 2.25.1
[PATCH 2/4] crypto/qat: add zuc256 wireless slice for gen3
The new gen3 device handles wireless algorithms on wireless slices, based on the device wireless slice support, set the required flags for these algorithms to move slice. One of the algorithms supported for the wireless slices is ZUC 256, support is added for this, along with modifying the capability for the device. The device supports 24 bytes iv for ZUC 256, with iv[20] being ignored in register. For 25 byte iv, compress this into 23 bytes. Signed-off-by: Ciara Power --- drivers/common/qat/qat_adf/icp_qat_fw.h | 3 +- drivers/common/qat/qat_adf/icp_qat_fw_la.h | 24 drivers/common/qat/qat_adf/icp_qat_hw.h | 21 ++- drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 1 + drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 46 +- drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 2 +- drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 44 +- drivers/crypto/qat/dev/qat_sym_pmd_gen1.c| 15 ++ drivers/crypto/qat/qat_sym_session.c | 142 +-- drivers/crypto/qat/qat_sym_session.h | 2 + 10 files changed, 279 insertions(+), 21 deletions(-) diff --git a/drivers/common/qat/qat_adf/icp_qat_fw.h b/drivers/common/qat/qat_adf/icp_qat_fw.h index 3aa17ae041..76584d48f0 100644 --- a/drivers/common/qat/qat_adf/icp_qat_fw.h +++ b/drivers/common/qat/qat_adf/icp_qat_fw.h @@ -75,7 +75,8 @@ struct icp_qat_fw_comn_req_hdr { uint8_t service_type; uint8_t hdr_flags; uint16_t serv_specif_flags; - uint16_t comn_req_flags; + uint8_t comn_req_flags; + uint8_t ext_flags; }; struct icp_qat_fw_comn_req_rqpars { diff --git a/drivers/common/qat/qat_adf/icp_qat_fw_la.h b/drivers/common/qat/qat_adf/icp_qat_fw_la.h index 70f0effa62..134c309355 100644 --- a/drivers/common/qat/qat_adf/icp_qat_fw_la.h +++ b/drivers/common/qat/qat_adf/icp_qat_fw_la.h @@ -81,6 +81,15 @@ struct icp_qat_fw_la_bulk_req { #define ICP_QAT_FW_LA_PARTIAL_END 2 #define QAT_LA_PARTIAL_BITPOS 0 #define QAT_LA_PARTIAL_MASK 0x3 +#define QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS_BITPOS 0 +#define QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS 1 +#define QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS_MASK 0x1 +#define QAT_LA_USE_WCP_SLICE 1 +#define QAT_LA_USE_WCP_SLICE_BITPOS 2 +#define QAT_LA_USE_WCP_SLICE_MASK 0x1 +#define QAT_LA_USE_WAT_SLICE_BITPOS 3 +#define QAT_LA_USE_WAT_SLICE 1 +#define QAT_LA_USE_WAT_SLICE_MASK 0x1 #define ICP_QAT_FW_LA_FLAGS_BUILD(zuc_proto, gcm_iv_len, auth_rslt, proto, \ cmp_auth, ret_auth, update_state, \ ciph_iv, ciphcfg, partial) \ @@ -188,6 +197,21 @@ struct icp_qat_fw_la_bulk_req { QAT_FIELD_SET(flags, val, QAT_LA_PARTIAL_BITPOS, \ QAT_LA_PARTIAL_MASK) +#define ICP_QAT_FW_USE_EXTENDED_PROTOCOL_FLAGS_SET(flags, val) \ + QAT_FIELD_SET(flags, val, \ + QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS_BITPOS, \ + QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS_MASK) + +#define ICP_QAT_FW_USE_WCP_SLICE_SET(flags, val) \ + QAT_FIELD_SET(flags, val, \ + QAT_LA_USE_WCP_SLICE_BITPOS, \ + QAT_LA_USE_WCP_SLICE_MASK) + +#define ICP_QAT_FW_USE_WAT_SLICE_SET(flags, val) \ + QAT_FIELD_SET(flags, val, \ + QAT_LA_USE_WAT_SLICE_BITPOS, \ + QAT_LA_USE_WAT_SLICE_MASK) + #define QAT_FW_LA_MODE2 1 #define QAT_FW_LA_NO_MODE2 0 #define QAT_FW_LA_MODE2_MASK 0x1 diff --git a/drivers/common/qat/qat_adf/icp_qat_hw.h b/drivers/common/qat/qat_adf/icp_qat_hw.h index 8b864e1630..dfd0ea133c 100644 --- a/drivers/common/qat/qat_adf/icp_qat_hw.h +++ b/drivers/common/qat/qat_adf/icp_qat_hw.h @@ -71,7 +71,16 @@ enum icp_qat_hw_auth_algo { ICP_QAT_HW_AUTH_ALGO_SHA3_256 = 17, ICP_QAT_HW_AUTH_ALGO_SHA3_384 = 18, ICP_QAT_HW_AUTH_ALGO_SHA3_512 = 19, - ICP_QAT_HW_AUTH_ALGO_DELIMITER = 20 + ICP_QAT_HW_AUTH_ALGO_RESERVED = 20, + 
ICP_QAT_HW_AUTH_ALGO_RESERVED1 = 21, + ICP_QAT_HW_AUTH_ALGO_RESERVED2 = 22, + ICP_QAT_HW_AUTH_ALGO_RESERVED3 = 22, + ICP_QAT_HW_AUTH_ALGO_RESERVED4 = 23, + ICP_QAT_HW_AUTH_ALGO_RESERVED5 = 24, + ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32 = 25, + ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64 = 26, + ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128 = 27, + ICP_QAT_HW_AUTH_ALGO_DELIMITER = 28 }; enum icp_qat_hw_auth_mode { @@ -167,6 +176,9 @@ struct icp_qat_hw_auth_setup { #define ICP_QAT_HW_GALOIS_128_STATE1_SZ 16 #define ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ 8 #define ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ 8 +#define ICP_QAT_HW_ZUC_256_MAC_32_STATE1_SZ 8 +#define ICP_QAT_HW_ZUC_256_MAC_64_STATE1_SZ 8 +#define ICP_QAT_HW_ZUC_256_MAC_128_STATE1_SZ 16 #define ICP_QAT_HW_NULL_STATE2_SZ 32 #define ICP_QAT_HW_MD5_STATE2_SZ 16 @@ -191,6 +203,7 @@ struct icp_qat_hw_auth_setup { #define ICP_QAT_HW_AES_F9_STATE2_SZ ICP_QAT_HW_KASUMI_F9_STATE2_SZ #define ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ 24 #define ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ 32 +#define ICP_QAT_HW_ZUC_256_STATE2_SZ 56 #define ICP_QAT_HW_GALOIS_H_SZ 16 #define ICP_QAT
[PATCH 3/4] crypto/qat: add new gen3 CMAC macros
The new QAT GEN3 device uses new macros for CMAC values, rather than using XCBC_MAC ones. The wireless slice handles CMAC in the new gen3 device, and no key precomputes are required by SW. Signed-off-by: Ciara Power --- drivers/common/qat/qat_adf/icp_qat_hw.h | 4 +++- drivers/crypto/qat/qat_sym_session.c| 28 + 2 files changed, 27 insertions(+), 5 deletions(-) diff --git a/drivers/common/qat/qat_adf/icp_qat_hw.h b/drivers/common/qat/qat_adf/icp_qat_hw.h index dfd0ea133c..b0a6126271 100644 --- a/drivers/common/qat/qat_adf/icp_qat_hw.h +++ b/drivers/common/qat/qat_adf/icp_qat_hw.h @@ -74,7 +74,7 @@ enum icp_qat_hw_auth_algo { ICP_QAT_HW_AUTH_ALGO_RESERVED = 20, ICP_QAT_HW_AUTH_ALGO_RESERVED1 = 21, ICP_QAT_HW_AUTH_ALGO_RESERVED2 = 22, - ICP_QAT_HW_AUTH_ALGO_RESERVED3 = 22, + ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC = 22, ICP_QAT_HW_AUTH_ALGO_RESERVED4 = 23, ICP_QAT_HW_AUTH_ALGO_RESERVED5 = 24, ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32 = 25, @@ -179,6 +179,7 @@ struct icp_qat_hw_auth_setup { #define ICP_QAT_HW_ZUC_256_MAC_32_STATE1_SZ 8 #define ICP_QAT_HW_ZUC_256_MAC_64_STATE1_SZ 8 #define ICP_QAT_HW_ZUC_256_MAC_128_STATE1_SZ 16 +#define ICP_QAT_HW_AES_CMAC_STATE1_SZ 16 #define ICP_QAT_HW_NULL_STATE2_SZ 32 #define ICP_QAT_HW_MD5_STATE2_SZ 16 @@ -207,6 +208,7 @@ struct icp_qat_hw_auth_setup { #define ICP_QAT_HW_GALOIS_H_SZ 16 #define ICP_QAT_HW_GALOIS_LEN_A_SZ 8 #define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16 +#define ICP_QAT_HW_AES_128_CMAC_STATE2_SZ 16 struct icp_qat_hw_auth_sha512 { struct icp_qat_hw_auth_setup inner_setup; diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c index ebdad0bd67..b1649b8d18 100644 --- a/drivers/crypto/qat/qat_sym_session.c +++ b/drivers/crypto/qat/qat_sym_session.c @@ -922,11 +922,20 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev, session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC; break; case RTE_CRYPTO_AUTH_AES_CMAC: - session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC; session->aes_cmac = 1; - if (internals->qat_dev->has_wireless_slice) { - is_wireless = 1; - session->is_wireless = 1; + if (!internals->qat_dev->has_wireless_slice) { + session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC; + break; + } + is_wireless = 1; + session->is_wireless = 1; + switch (key_length) { + case ICP_QAT_HW_AES_128_KEY_SZ: + session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC; + break; + default: + QAT_LOG(ERR, "Invalid key length: %d", key_length); + return -ENOTSUP; } break; case RTE_CRYPTO_AUTH_AES_GMAC: @@ -1309,6 +1318,9 @@ static int qat_hash_get_state1_size(enum icp_qat_hw_auth_algo qat_hash_alg) case ICP_QAT_HW_AUTH_ALGO_NULL: return QAT_HW_ROUND_UP(ICP_QAT_HW_NULL_STATE1_SZ, QAT_HW_DEFAULT_ALIGNMENT); + case ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC: + return QAT_HW_ROUND_UP(ICP_QAT_HW_AES_CMAC_STATE1_SZ, + QAT_HW_DEFAULT_ALIGNMENT); case ICP_QAT_HW_AUTH_ALGO_DELIMITER: /* return maximum state1 size in this case */ return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ, @@ -1345,6 +1357,7 @@ static int qat_hash_get_digest_size(enum icp_qat_hw_auth_algo qat_hash_alg) case ICP_QAT_HW_AUTH_ALGO_MD5: return ICP_QAT_HW_MD5_STATE1_SZ; case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC: + case ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC: return ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ; case ICP_QAT_HW_AUTH_ALGO_DELIMITER: /* return maximum digest size in this case */ @@ -2353,6 +2366,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc, || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64 || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128 
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC + || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SM3 @@ -2593,6 +2607,12 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc, return -EFAULT; } break; + case ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC: + state1_size = ICP_QAT_HW_AES_CMAC_STATE1_SZ; +
[PATCH 4/4] crypto/qat: disable asym and compression for new gen3 device
Currently only symmetric crypto has been added for the new gen3 device, adding a check to disable asym and comp PMDs for this device. Signed-off-by: Ciara Power --- drivers/compress/qat/qat_comp_pmd.c | 3 ++- drivers/crypto/qat/qat_asym.c | 3 ++- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c index 6fb8cf69be..bdc35b5949 100644 --- a/drivers/compress/qat/qat_comp_pmd.c +++ b/drivers/compress/qat/qat_comp_pmd.c @@ -687,7 +687,8 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev, qat_pci_dev->name, "comp"); QAT_LOG(DEBUG, "Creating QAT COMP device %s", name); - if (qat_comp_gen_ops->compressdev_ops == NULL) { + if (qat_comp_gen_ops->compressdev_ops == NULL || + qat_dev_instance->pci_dev->id.device_id == 0x578b) { QAT_LOG(DEBUG, "Device %s does not support compression", name); return -ENOTSUP; } diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c index 2bf3060278..036813e977 100644 --- a/drivers/crypto/qat/qat_asym.c +++ b/drivers/crypto/qat/qat_asym.c @@ -1522,7 +1522,8 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev, qat_pci_dev->name, "asym"); QAT_LOG(DEBUG, "Creating QAT ASYM device %s\n", name); - if (gen_dev_ops->cryptodev_ops == NULL) { + if (gen_dev_ops->cryptodev_ops == NULL || + qat_dev_instance->pci_dev->id.device_id == 0x578b) { QAT_LOG(ERR, "Device %s does not support asymmetric crypto", name); return -(EFAULT); -- 2.25.1
[PATCH] app/dma-perf: replace pktmbuf with mempool objects
Replace pktmbuf pool with mempool, this allows increase in MOPS especially in lower buffer size. Using Mempool, allows to reduce the extra CPU cycles. Changes made are 1. pktmbuf pool create with mempool create. 2. create src & dst pointer array from the appropaite numa. 3. use get pool and put for mempool objects. 4. remove pktmbuf_mtod for dma and cpu memcpy. Signed-off-by: Vipin Varghese --- app/test-dma-perf/benchmark.c | 74 +-- 1 file changed, 44 insertions(+), 30 deletions(-) diff --git a/app/test-dma-perf/benchmark.c b/app/test-dma-perf/benchmark.c index 9b1f58c78c..dc6f16cc01 100644 --- a/app/test-dma-perf/benchmark.c +++ b/app/test-dma-perf/benchmark.c @@ -43,8 +43,8 @@ struct lcore_params { uint16_t kick_batch; uint32_t buf_size; uint16_t test_secs; - struct rte_mbuf **srcs; - struct rte_mbuf **dsts; + void **srcs; + void **dsts; volatile struct worker_info worker_info; }; @@ -110,17 +110,17 @@ output_result(uint8_t scenario_id, uint32_t lcore_id, char *dma_name, uint16_t r } static inline void -cache_flush_buf(__rte_unused struct rte_mbuf **array, +cache_flush_buf(__rte_unused void **array, __rte_unused uint32_t buf_size, __rte_unused uint32_t nr_buf) { #ifdef RTE_ARCH_X86_64 char *data; - struct rte_mbuf **srcs = array; + void **srcs = array; uint32_t i, offset; for (i = 0; i < nr_buf; i++) { - data = rte_pktmbuf_mtod(srcs[i], char *); + data = (char *) srcs[i]; for (offset = 0; offset < buf_size; offset += 64) __builtin_ia32_clflush(data + offset); } @@ -224,8 +224,8 @@ do_dma_mem_copy(void *p) const uint32_t nr_buf = para->nr_buf; const uint16_t kick_batch = para->kick_batch; const uint32_t buf_size = para->buf_size; - struct rte_mbuf **srcs = para->srcs; - struct rte_mbuf **dsts = para->dsts; + void **srcs = para->srcs; + void **dsts = para->dsts; uint16_t nr_cpl; uint64_t async_cnt = 0; uint32_t i; @@ -241,8 +241,12 @@ do_dma_mem_copy(void *p) while (1) { for (i = 0; i < nr_buf; i++) { dma_copy: - ret = rte_dma_copy(dev_id, 0, rte_mbuf_data_iova(srcs[i]), - rte_mbuf_data_iova(dsts[i]), buf_size, 0); + ret = rte_dma_copy(dev_id, + 0, + (rte_iova_t) srcs[i], + (rte_iova_t) dsts[i], + buf_size, + 0); if (unlikely(ret < 0)) { if (ret == -ENOSPC) { do_dma_submit_and_poll(dev_id, &async_cnt, worker_info); @@ -276,8 +280,8 @@ do_cpu_mem_copy(void *p) volatile struct worker_info *worker_info = &(para->worker_info); const uint32_t nr_buf = para->nr_buf; const uint32_t buf_size = para->buf_size; - struct rte_mbuf **srcs = para->srcs; - struct rte_mbuf **dsts = para->dsts; + void **srcs = para->srcs; + void **dsts = para->dsts; uint32_t i; worker_info->stop_flag = false; @@ -288,8 +292,8 @@ do_cpu_mem_copy(void *p) while (1) { for (i = 0; i < nr_buf; i++) { - const void *src = rte_pktmbuf_mtod(dsts[i], void *); - void *dst = rte_pktmbuf_mtod(srcs[i], void *); + const void *src = (void *) dsts[i]; + void *dst = (void *) srcs[i]; /* copy buffer form src to dst */ rte_memcpy(dst, src, (size_t)buf_size); @@ -303,8 +307,8 @@ do_cpu_mem_copy(void *p) } static int -setup_memory_env(struct test_configure *cfg, struct rte_mbuf ***srcs, - struct rte_mbuf ***dsts) +setup_memory_env(struct test_configure *cfg, void ***srcs, + void ***dsts) { unsigned int buf_size = cfg->buf_size.cur; unsigned int nr_sockets; @@ -317,47 +321,57 @@ setup_memory_env(struct test_configure *cfg, struct rte_mbuf ***srcs, return -1; } - src_pool = rte_pktmbuf_pool_create("Benchmark_DMA_SRC", + src_pool = rte_mempool_create("Benchmark_DMA_SRC", nr_buf, + buf_size, 0, 0, - buf_size + RTE_PKTMBUF_HEADROOM, - 
cfg->src_numa_node); + NULL, + NULL, + NULL, + NULL, + cfg->src_numa_node, + RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET); if (src_pool ==
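A hedged sketch of the bulk get/put pattern the commit message refers to (the hunk containing those calls is truncated above; names and error handling are illustrative):

#include <rte_mempool.h>

#define NR_BUF 64

static int
grab_buffers(struct rte_mempool *pool)
{
    void *objs[NR_BUF];

    if (rte_mempool_get_bulk(pool, objs, NR_BUF) != 0)
        return -1;  /* pool exhausted */

    /* ... the benchmark then uses objs[i] directly as buffer addresses ... */

    rte_mempool_put_bulk(pool, objs, NR_BUF);
    return 0;
}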
[PATCH] app/dma-perf: add average latency per worker
Modify the displayed results to also report the average latency, as cycles per
operation averaged over all workers.

Signed-off-by: Vipin Varghese
---
 app/test-dma-perf/benchmark.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/app/test-dma-perf/benchmark.c b/app/test-dma-perf/benchmark.c
index 9b1f58c78c..8b6886af62 100644
--- a/app/test-dma-perf/benchmark.c
+++ b/app/test-dma-perf/benchmark.c
@@ -470,7 +470,8 @@ mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
 		bandwidth_total += bandwidth;
 		avg_cycles_total += avg_cycles;
 	}
-	printf("\nTotal Bandwidth: %.3lf Gbps, Total MOps: %.3lf\n", bandwidth_total, mops_total);
+	printf("\nAverage Cycles/op: %.2lf, Total Bandwidth: %.3lf Gbps, Total MOps: %.3lf\n",
+			(float) avg_cycles_total / nb_workers, bandwidth_total, mops_total);
 	snprintf(output_str[MAX_WORKER_NB], MAX_OUTPUT_STR_LEN, CSV_TOTAL_LINE_FMT,
 			cfg->scenario_id, nr_buf, memory * nb_workers,
 			avg_cycles_total / nb_workers, bandwidth_total, mops_total);
--
2.34.1
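If wall-clock latency is wanted from the new cycles/op figure, it can be derived from the TSC frequency, for example (illustrative only, not part of the patch):

#include <rte_cycles.h>

static double
cycles_to_ns(double cycles_per_op)
{
    return cycles_per_op * 1e9 / (double)rte_get_tsc_hz();
}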
Re: [PATCH v4 1/7] dts: add required methods to testpmd_shell
The subject could be improved. That these methods are required is kinda obvious. We should try to actually include some useful information in the subject, such as "add basic methods to testpmd shell", but even that is not saying much. Maybe "add startup verification and forwarding to testpmd shell" - I actually like something like this. On Mon, Dec 18, 2023 at 7:13 PM wrote: > > From: Jeremy Spewock > > Added a method within the testpmd interactive shell that polls the > status of ports and verifies that the link status on a given port is > "up." Polling will continue until either the link comes up, or the > timeout is reached. Also added methods for starting and stopping packet > forwarding in testpmd and a method for setting the forwarding mode on > testpmd. The method for starting packet forwarding will also attempt to > verify that forwarding did indeed start by default. > The body should not explain what we're adding, but why we're adding it. > Signed-off-by: Jeremy Spewock > --- > dts/framework/exception.py| 4 + > .../remote_session/remote/testpmd_shell.py| 92 +++ > 2 files changed, 96 insertions(+) > > diff --git a/dts/framework/exception.py b/dts/framework/exception.py > index b362e42924..e36db20e32 100644 > --- a/dts/framework/exception.py > +++ b/dts/framework/exception.py > @@ -119,6 +119,10 @@ def __str__(self) -> str: > return f"Command {self.command} returned a non-zero exit code: > {self.command_return_code}" > > > +class InteractiveCommandExecutionError(DTSError): > +severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR > + > + > class RemoteDirectoryExistsError(DTSError): > """ > Raised when a remote directory to be created already exists. > diff --git a/dts/framework/remote_session/remote/testpmd_shell.py > b/dts/framework/remote_session/remote/testpmd_shell.py > index 08ac311016..b5e4cba9b3 100644 > --- a/dts/framework/remote_session/remote/testpmd_shell.py > +++ b/dts/framework/remote_session/remote/testpmd_shell.py > @@ -1,9 +1,15 @@ > # SPDX-License-Identifier: BSD-3-Clause > # Copyright(c) 2023 University of New Hampshire > > +import time > +from enum import auto > from pathlib import PurePath > from typing import Callable > > +from framework.exception import InteractiveCommandExecutionError > +from framework.settings import SETTINGS > +from framework.utils import StrEnum > + > from .interactive_shell import InteractiveShell > > > @@ -17,6 +23,37 @@ def __str__(self) -> str: > return self.pci_address > > > +class TestPmdForwardingModes(StrEnum): > +r"""The supported packet forwarding modes for :class:`~TestPmdShell`\s""" > + > +#: > +io = auto() > +#: > +mac = auto() > +#: > +macswap = auto() > +#: > +flowgen = auto() > +#: > +rxonly = auto() > +#: > +txonly = auto() > +#: > +csum = auto() > +#: > +icmpecho = auto() > +#: > +ieee1588 = auto() > +#: > +noisy = auto() > +#: > +fivetswap = "5tswap" > +#: > +shared_rxq = "shared-rxq" > +#: > +recycle_mbufs = auto() > + > + > class TestPmdShell(InteractiveShell): > path: PurePath = PurePath("app", "dpdk-testpmd") > dpdk_app: bool = True > @@ -28,6 +65,27 @@ def _start_application(self, get_privileged_command: > Callable[[str], str] | None > self._app_args += " -- -i" > super()._start_application(get_privileged_command) > > +def start(self, verify: bool = True) -> None: > +"""Start packet forwarding with the current configuration. > + > +Args: > +verify: If :data:`True` , a second start command will be sent in > an attempt to verify > +packet forwarding started as expected. 
> + Isn't there a better way to verify this? Like with some show command? Or is this how it's supposed to be used? > +Raises: > +InteractiveCommandExecutionError: If `verify` is :data:`True` > and forwarding fails to > +start. > +""" > +self.send_command("start") > +if verify: > +# If forwarding was already started, sending "start" again > should tell us > +if "Packet forwarding already started" not in > self.send_command("start"): > +raise InteractiveCommandExecutionError("Testpmd failed to > start packet forwarding.") > + > +def stop(self) -> None: > +"""Stop packet forwarding.""" > +self.send_command("stop") > + Do we want to do verification here as well? Is there a reason to do such verification? > def get_devices(self) -> list[TestPmdDevice]: > """Get a list of device names that are known to testpmd > > @@ -43,3 +101,37 @@ def get_devices(self) -> list[TestPmdDevice]: > if "device name:" in line.lower(): > dev_list.append(TestPmdDevice(line)) > return dev_list > + > +def wait_link_status_up(self
Re: [PATCH v4 2/7] dts: allow passing parameters into interactive apps
We should also update the subject and the body based on our previous discussion. I don't think they properly describe the change, as we're also updating the method's behavior of DPDK apps. On Mon, Dec 18, 2023 at 7:13 PM wrote: > > From: Jeremy Spewock > > Modified interactive applications to allow for the ability to pass > parameters into the app on start up. Also modified the way EAL > parameters are handled so that the trailing "--" separator is added be > default after all EAL parameters. > > Signed-off-by: Jeremy Spewock > --- > .../remote_session/remote/testpmd_shell.py | 2 +- > dts/framework/testbed_model/sut_node.py | 16 ++-- > 2 files changed, 11 insertions(+), 7 deletions(-) > > diff --git a/dts/framework/remote_session/remote/testpmd_shell.py > b/dts/framework/remote_session/remote/testpmd_shell.py > index b5e4cba9b3..369807a33e 100644 > --- a/dts/framework/remote_session/remote/testpmd_shell.py > +++ b/dts/framework/remote_session/remote/testpmd_shell.py > @@ -62,7 +62,7 @@ class TestPmdShell(InteractiveShell): > > def _start_application(self, get_privileged_command: Callable[[str], > str] | None) -> None: > """See "_start_application" in InteractiveShell.""" > -self._app_args += " -- -i" > +self._app_args += " -i" > super()._start_application(get_privileged_command) > > def start(self, verify: bool = True) -> None: > diff --git a/dts/framework/testbed_model/sut_node.py > b/dts/framework/testbed_model/sut_node.py > index 7f75043bd3..9c92232d9e 100644 > --- a/dts/framework/testbed_model/sut_node.py > +++ b/dts/framework/testbed_model/sut_node.py > @@ -361,7 +361,8 @@ def create_interactive_shell( > shell_cls: Type[InteractiveShellType], > timeout: float = SETTINGS.timeout, > privileged: bool = False, > -eal_parameters: EalParameters | str | None = None, > +eal_parameters: EalParameters | None = None, > +app_parameters: str = "", > ) -> InteractiveShellType: > """Factory method for creating a handler for an interactive session. > > @@ -376,19 +377,22 @@ def create_interactive_shell( > eal_parameters: List of EAL parameters to use to launch the app. > If this > isn't provided or an empty string is passed, it will default > to calling > create_eal_parameters(). > +app_parameters: Additional arguments to pass into the > application on the > +command-line. > Returns: > Instance of the desired interactive application. > """ > -if not eal_parameters: > -eal_parameters = self.create_eal_parameters() > - > -# We need to append the build directory for DPDK apps > +# We need to append the build directory and add EAL parameters for > DPDK apps > if shell_cls.dpdk_app: > +if not eal_parameters: > +eal_parameters = self.create_eal_parameters() > +app_parameters = f"{eal_parameters} -- {app_parameters}" > + > shell_cls.path = self.main_session.join_remote_path( > self.remote_dpdk_build_dir, shell_cls.path > ) > > -return super().create_interactive_shell(shell_cls, timeout, > privileged, str(eal_parameters)) > +return super().create_interactive_shell(shell_cls, timeout, > privileged, app_parameters) > > def bind_ports_to_driver(self, for_dpdk: bool = True) -> None: > """Bind all ports on the SUT to a driver. > -- > 2.43.0 >
Re: [PATCH v4 3/7] dts: add optional packet filtering to scapy sniffer
Reviewed-by: Juraj Linkeš On Mon, Dec 18, 2023 at 7:13 PM wrote: > > From: Jeremy Spewock > > Added the options to filter out LLDP and ARP packets when > sniffing for packets with scapy. This was done using BPF filters to > ensure that the noise these packets provide does not interfere with test > cases. > > Signed-off-by: Jeremy Spewock > --- > dts/framework/test_suite.py | 14 -- > .../capturing_traffic_generator.py| 22 ++- > dts/framework/testbed_model/scapy.py | 28 ++- > dts/framework/testbed_model/tg_node.py| 12 ++-- > 4 files changed, 70 insertions(+), 6 deletions(-) > > diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py > index 4a7907ec33..6dfa570041 100644 > --- a/dts/framework/test_suite.py > +++ b/dts/framework/test_suite.py > @@ -27,6 +27,7 @@ > from .settings import SETTINGS > from .test_result import BuildTargetResult, Result, TestCaseResult, > TestSuiteResult > from .testbed_model import SutNode, TGNode > +from .testbed_model.capturing_traffic_generator import PacketFilteringConfig > from .testbed_model.hw.port import Port, PortLink > from .utils import get_packet_summaries > > @@ -149,7 +150,12 @@ def configure_testbed_ipv4(self, restore: bool = False) > -> None: > def _configure_ipv4_forwarding(self, enable: bool) -> None: > self.sut_node.configure_ipv4_forwarding(enable) > > -def send_packet_and_capture(self, packet: Packet, duration: float = 1) > -> list[Packet]: > +def send_packet_and_capture( > +self, > +packet: Packet, > +filter_config: PacketFilteringConfig = PacketFilteringConfig(), > +duration: float = 1, > +) -> list[Packet]: > """ > Send a packet through the appropriate interface and > receive on the appropriate interface. > @@ -158,7 +164,11 @@ def send_packet_and_capture(self, packet: Packet, > duration: float = 1) -> list[P > """ > packet = self._adjust_addresses(packet) > return self.tg_node.send_packet_and_capture( > -packet, self._tg_port_egress, self._tg_port_ingress, duration > +packet, > +self._tg_port_egress, > +self._tg_port_ingress, > +filter_config, > +duration, > ) > > def get_expected_packet(self, packet: Packet) -> Packet: > diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py > b/dts/framework/testbed_model/capturing_traffic_generator.py > index e6512061d7..c40b030fe4 100644 > --- a/dts/framework/testbed_model/capturing_traffic_generator.py > +++ b/dts/framework/testbed_model/capturing_traffic_generator.py > @@ -11,6 +11,7 @@ > > import uuid > from abc import abstractmethod > +from dataclasses import dataclass > > import scapy.utils # type: ignore[import] > from scapy.packet import Packet # type: ignore[import] > @@ -29,6 +30,19 @@ def _get_default_capture_name() -> str: > return str(uuid.uuid4()) > > > +@dataclass(slots=True) > +class PacketFilteringConfig: > +"""The supported filtering options for > :class:`CapturingTrafficGenerator`. > + > +Attributes: > +no_lldp: If :data:`True`, LLDP packets will be filtered out when > capturing. > +no_arp: If :data:`True`, ARP packets will be filtered out when > capturing. > +""" > + > +no_lldp: bool = True > +no_arp: bool = True > + > + > class CapturingTrafficGenerator(TrafficGenerator): > """Capture packets after sending traffic. > > @@ -51,6 +65,7 @@ def send_packet_and_capture( > packet: Packet, > send_port: Port, > receive_port: Port, > +filter_config: PacketFilteringConfig, > duration: float, > capture_name: str = _get_default_capture_name(), > ) -> list[Packet]: > @@ -64,6 +79,7 @@ def send_packet_and_capture( > packet: The packet to send. 
> send_port: The egress port on the TG node. > receive_port: The ingress port in the TG node. > +filter_config: Filters to apply when capturing packets. > duration: Capture traffic for this amount of time after sending > the packet. > capture_name: The name of the .pcap file where to store the > capture. > > @@ -71,7 +87,7 @@ def send_packet_and_capture( > A list of received packets. May be empty if no packets are > captured. > """ > return self.send_packets_and_capture( > -[packet], send_port, receive_port, duration, capture_name > +[packet], send_port, receive_port, filter_config, duration, > capture_name > ) > > def send_packets_and_capture( > @@ -79,6 +95,7 @@ def send_packets_and_capture( > packets: list[Packet], > send_port: Port, > receive_port: Port, > +filter_config: PacketFilteringConfig, > duration: float, > capture_name:
Re: [PATCH v4 4/7] dts: add pci addresses to EAL parameters
Reviewed-by: Juraj Linkeš On Mon, Dec 18, 2023 at 7:13 PM wrote: > > From: Jeremy Spewock > > Added allow list to the EAL parameters created in DTS to ensure that > only the relevant PCI devices are considered when launching DPDK > applications. > > Signed-off-by: Jeremy Spewock > --- > dts/framework/testbed_model/sut_node.py | 11 +++ > 1 file changed, 11 insertions(+) > > diff --git a/dts/framework/testbed_model/sut_node.py > b/dts/framework/testbed_model/sut_node.py > index 9c92232d9e..77caea2fc9 100644 > --- a/dts/framework/testbed_model/sut_node.py > +++ b/dts/framework/testbed_model/sut_node.py > @@ -20,6 +20,7 @@ > from framework.utils import MesonArgs > > from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice > +from .hw.port import Port > from .node import Node > > > @@ -31,6 +32,7 @@ def __init__( > prefix: str, > no_pci: bool, > vdevs: list[VirtualDevice], > +ports: list[Port], > other_eal_param: str, > ): > """ > @@ -46,6 +48,7 @@ def __init__( > VirtualDevice('net_ring0'), > VirtualDevice('net_ring1') > ] > +:param ports: the list of ports to allow. > :param other_eal_param: user defined DPDK eal parameters, eg: > other_eal_param='--single-file-segments' > """ > @@ -56,6 +59,7 @@ def __init__( > self._prefix = f"--file-prefix={prefix}" > self._no_pci = "--no-pci" if no_pci else "" > self._vdevs = " ".join(f"--vdev {vdev}" for vdev in vdevs) > +self._ports = " ".join(f"-a {port.pci}" for port in ports) > self._other_eal_param = other_eal_param > > def __str__(self) -> str: > @@ -65,6 +69,7 @@ def __str__(self) -> str: > f"{self._prefix} " > f"{self._no_pci} " > f"{self._vdevs} " > +f"{self._ports} " > f"{self._other_eal_param}" > ) > > @@ -294,6 +299,7 @@ def create_eal_parameters( > append_prefix_timestamp: bool = True, > no_pci: bool = False, > vdevs: list[VirtualDevice] = None, > +ports: list[Port] | None = None, > other_eal_param: str = "", > ) -> "EalParameters": > """ > @@ -317,6 +323,7 @@ def create_eal_parameters( > VirtualDevice('net_ring0'), > VirtualDevice('net_ring1') > ] > +:param ports: the list of ports to allow. > :param other_eal_param: user defined DPDK eal parameters, eg: > other_eal_param='--single-file-segments' > :return: eal param string, eg: > @@ -334,12 +341,16 @@ def create_eal_parameters( > if vdevs is None: > vdevs = [] > > +if ports is None: > +ports = self.ports > + > return EalParameters( > lcore_list=lcore_list, > memory_channels=self.config.memory_channels, > prefix=prefix, > no_pci=no_pci, > vdevs=vdevs, > +ports=ports, > other_eal_param=other_eal_param, > ) > > -- > 2.43.0 >
Re: [PATCH v4 5/7] dts: allow configuring MTU of ports
On Mon, Dec 18, 2023 at 7:13 PM wrote: > > From: Jeremy Spewock > > Adds methods in both os_session and linux session to allow for setting > MTU of port interfaces in an OS agnostic way. > The previous two commit messages had a little bit of an explanation, but this one is missing one. Something like why a test case/suite needs to set the MTU. > Signed-off-by: Jeremy Spewock > --- > dts/framework/remote_session/linux_session.py | 8 > dts/framework/remote_session/os_session.py| 9 + > 2 files changed, 17 insertions(+) > > diff --git a/dts/framework/remote_session/linux_session.py > b/dts/framework/remote_session/linux_session.py > index fd877fbfae..aaa4d57a36 100644 > --- a/dts/framework/remote_session/linux_session.py > +++ b/dts/framework/remote_session/linux_session.py > @@ -177,6 +177,14 @@ def configure_port_ip_address( > verify=True, > ) > > +def configure_port_mtu(self, mtu: int, port: Port) -> None: > +"""Overrides :meth:`~.os_session.OSSession.configure_port_mtu`.""" > +self.send_command( > +f"ip link set dev {port.logical_name} mtu {mtu}", > +privileged=True, > +verify=True, > +) > + > def configure_ipv4_forwarding(self, enable: bool) -> None: > state = 1 if enable else 0 > self.send_command(f"sysctl -w net.ipv4.ip_forward={state}", > privileged=True) > diff --git a/dts/framework/remote_session/os_session.py > b/dts/framework/remote_session/os_session.py > index 8a709eac1c..cd073f5774 100644 > --- a/dts/framework/remote_session/os_session.py > +++ b/dts/framework/remote_session/os_session.py > @@ -277,6 +277,15 @@ def configure_port_ip_address( > Configure (add or delete) an IP address of the input port. > """ > > +@abstractmethod > +def configure_port_mtu(self, mtu: int, port: Port) -> None: > +"""Configure `mtu` on `port`. > + > +Args: > +mtu: Desired MTU value. > +port: Port to set `mtu` on. > +""" > + > @abstractmethod > def configure_ipv4_forwarding(self, enable: bool) -> None: > """ > -- > 2.43.0 >
Re: [PATCH v4 6/7] dts: add scatter to the yaml schema
Reviewed-by: Juraj Linkeš On Mon, Dec 18, 2023 at 7:13 PM wrote: > > From: Jeremy Spewock > > Allow for scatter to be specified in the configuration file. > > Signed-off-by: Jeremy Spewock > --- > dts/framework/config/conf_yaml_schema.json | 3 ++- > 1 file changed, 2 insertions(+), 1 deletion(-) > > diff --git a/dts/framework/config/conf_yaml_schema.json > b/dts/framework/config/conf_yaml_schema.json > index 84e45fe3c2..e6dc50ca7f 100644 > --- a/dts/framework/config/conf_yaml_schema.json > +++ b/dts/framework/config/conf_yaml_schema.json > @@ -186,7 +186,8 @@ >"type": "string", >"enum": [ > "hello_world", > -"os_udp" > +"os_udp", > +"pmd_buffer_scatter" >] > }, > "test_target": { > -- > 2.43.0 >
Re: [Bug 1335] [dpdk-24.03-rc0] freebsd/nic_uio meson build error with clang16.0.6 and gcc12.2.0 on FreeBSD14
On Tue, 19 Dec 2023 07:46:18 + bugzi...@dpdk.org wrote: > # git log -1 > commit e5dc404d33ac1c6cea5c62a88489746c5cb5e35e (HEAD, origin/main, > origin/HEAD, main) > Author: Stephen Hemminger > Date: Mon Dec 11 12:17:32 2023 -0800 > > cryptodev: use a dynamic logtype > > The cryptodev logs are all referenced via rte_cryptodev.h, > so make it dynamic there. > > Signed-off-by: Stephen Hemminger > Acked-by: Akhil Goyal This patch did not touch the FreeBSD driver, it is a pre-existing condition.
Re: [PATCH 2/2] examples/ipsec-secgw: update stats when freeing packets
On Tue, 19 Dec 2023 10:59:23 +0530 Anoob Joseph wrote: > Instead of freeing directly, use commonly used function which also > updates stats. > > Signed-off-by: Anoob Joseph > --- > examples/ipsec-secgw/ipsec_process.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/examples/ipsec-secgw/ipsec_process.c > b/examples/ipsec-secgw/ipsec_process.c > index b0cece3ad1..ddbe30745b 100644 > --- a/examples/ipsec-secgw/ipsec_process.c > +++ b/examples/ipsec-secgw/ipsec_process.c > @@ -22,7 +22,7 @@ free_cops(struct rte_crypto_op *cop[], uint32_t n) > uint32_t i; > > for (i = 0; i != n; i++) > - rte_pktmbuf_free(cop[i]->sym->m_src); > + free_pkts(&cop[i]->sym->m_src, 1); Also, free_pkts is using a loop and should be using rte_pktmbuf_free_bulk() instead.
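As a rough sketch of that suggestion (illustration only, not a proposed patch), free_pkts() could keep its statistics update and free the whole burst with one library call; core_stats_update_drop() is assumed here to be the stats helper that free_pkts() already relies on in ipsec-secgw.h:

#include <rte_mbuf.h>

#include "ipsec-secgw.h"	/* assumed to provide core_stats_update_drop() */

static inline void
free_pkts(struct rte_mbuf *mb[], uint32_t n)
{
	/* keep the statistics update that free_pkts() performs today */
	core_stats_update_drop(n);
	/* one bulk call instead of n individual rte_pktmbuf_free() calls */
	rte_pktmbuf_free_bulk(mb, n);
}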
[PATCH 0/2] remove __typeof__ from expansion of per lcore macros
The design of the macros requires a type to be provided to the macro. Because the type parameter is expanded inside typeof, the macros also inadvertently accept an expression, which, judging by the parameter name and existing macro use, does not appear to have been intended. Technically this is an API break, but only for applications that were using these macros outside of the original design intent. Tyler Retzlaff (2): eal: provide type instead of expression to per lcore macro eal: remove typeof from per lcore macros lib/eal/common/eal_common_errno.c | 2 +- lib/eal/include/rte_per_lcore.h | 8 ++++---- 2 files changed, 5 insertions(+), 5 deletions(-) -- 1.8.3.1
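To make the expression point concrete, here is a minimal sketch against the current, typeof-based macros (the variable names are made up for illustration):

#include <rte_per_lcore.h>

static int burst_size;	/* an ordinary object, not a type */

/* Accepted today only because __typeof__(burst_size) resolves to int;
 * once __typeof__ is removed, only a real type name compiles here.
 */
static RTE_DEFINE_PER_LCORE(burst_size, shadow_burst);

/* The intended usage, which works both before and after this series. */
static RTE_DEFINE_PER_LCORE(int, pkt_count);

After the series the first definition fails to compile, which matches the stated design intent of the macros.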
[PATCH 1/2] eal: provide type instead of expression to per lcore macro
Adjust the use of the per lcore macro to provide the type as the first argument, so that __typeof__ is not required during expansion. Signed-off-by: Tyler Retzlaff --- lib/eal/common/eal_common_errno.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/eal/common/eal_common_errno.c b/lib/eal/common/eal_common_errno.c index b30e2f0..fff8c1f 100644 --- a/lib/eal/common/eal_common_errno.c +++ b/lib/eal/common/eal_common_errno.c @@ -28,7 +28,7 @@ static const char *sep = ""; #endif #define RETVAL_SZ 256 - static RTE_DEFINE_PER_LCORE(char[RETVAL_SZ], retval); + static RTE_DEFINE_PER_LCORE(char, retval[RETVAL_SZ]); char *ret = RTE_PER_LCORE(retval); /* since some implementations of strerror_r throw an error -- 1.8.3.1
[PATCH 2/2] eal: remove typeof from per lcore macros
The design of the macros requires a type to be provided to the macro. Because the type parameter is expanded inside typeof, the macros also inadvertently accept an expression, which, judging by the parameter name and existing macro use, does not appear to have been intended. Technically this is an API break, but only for applications that were using these macros outside of the original design intent. Signed-off-by: Tyler Retzlaff --- lib/eal/include/rte_per_lcore.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/lib/eal/include/rte_per_lcore.h b/lib/eal/include/rte_per_lcore.h index 41fe1f0..529995e 100644 --- a/lib/eal/include/rte_per_lcore.h +++ b/lib/eal/include/rte_per_lcore.h @@ -24,10 +24,10 @@ #ifdef RTE_TOOLCHAIN_MSVC #define RTE_DEFINE_PER_LCORE(type, name) \ - __declspec(thread) typeof(type) per_lcore_##name + __declspec(thread) type per_lcore_##name #define RTE_DECLARE_PER_LCORE(type, name) \ - extern __declspec(thread) typeof(type) per_lcore_##name + extern __declspec(thread) type per_lcore_##name #else /** * Macro to define a per lcore variable "var" of type "type", don't @@ -35,13 +35,13 @@ * whole macro. */ #define RTE_DEFINE_PER_LCORE(type, name) \ - __thread __typeof__(type) per_lcore_##name + __thread type per_lcore_##name /** * Macro to declare an extern per lcore variable "var" of type "type" */ #define RTE_DECLARE_PER_LCORE(type, name) \ - extern __thread __typeof__(type) per_lcore_##name + extern __thread type per_lcore_##name #endif /** -- 1.8.3.1
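For the eal_common_errno.c call site adjusted in patch 1/2, the expansions work out roughly as follows (a sketch; RETVAL_SZ is the existing 256-byte buffer size):

#include <rte_per_lcore.h>

#define RETVAL_SZ 256

/* Old macro, old call site:
 *   RTE_DEFINE_PER_LCORE(char[RETVAL_SZ], retval)
 *     -> __thread __typeof__(char[RETVAL_SZ]) per_lcore_retval;    (valid)
 * New macro, old call site:
 *     -> __thread char[RETVAL_SZ] per_lcore_retval;                (does not compile)
 * which is why patch 1/2 moves the array suffix onto the name first:
 */
static RTE_DEFINE_PER_LCORE(char, retval[RETVAL_SZ]);
/* expands with the new macros to: static __thread char per_lcore_retval[RETVAL_SZ]; */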
Re: [PATCH v4 7/7] dts: add scatter test suite
Should we use the full name (pmd_buffer_scatter) in the subject? I lean towards the full name. On Mon, Dec 18, 2023 at 7:13 PM wrote: > > From: Jeremy Spewock > > This test suite provides testing the support of scattered packets by > Poll Mode Drivers using testpmd. It incorporates 5 different test cases > which test the sending and receiving of packets with lengths that are > less than the mbuf data buffer size, the same as the mbuf data buffer > size, and the mbuf data buffer size plus 1, 4, and 5. The goal of this > test suite is to align with the existing dts test plan for scattered > packets within DTS. > Again, we need to describe why we're adding this commit. In the case of test suites, the why is obvious, so we should give a good high level description of what the tests test (something like the test suite tests the x feature by doing y, y being the salient part of the tests). The original test plan is actually pretty good, so we can extract the rationale from it. > Signed-off-by: Jeremy Spewock > --- > dts/tests/TestSuite_pmd_buffer_scatter.py | 105 ++ > 1 file changed, 105 insertions(+) > create mode 100644 dts/tests/TestSuite_pmd_buffer_scatter.py > > diff --git a/dts/tests/TestSuite_pmd_buffer_scatter.py > b/dts/tests/TestSuite_pmd_buffer_scatter.py > new file mode 100644 > index 00..8e2a32a1aa > --- /dev/null > +++ b/dts/tests/TestSuite_pmd_buffer_scatter.py > @@ -0,0 +1,105 @@ > +# SPDX-License-Identifier: BSD-3-Clause > +# Copyright(c) 2023 University of New Hampshire > + > +"""Multi-segment packet scattering testing suite. > + > +Configure the Rx queues to have mbuf data buffers whose sizes are smaller > than the maximum packet > +size (currently set to 2048 to fit a full 1512-byte ethernet frame) and send > a total of 5 packets > +with lengths less than, equal to, and greater than the mbuf size (CRC > included). > +""" Let's expand this. I'll point to the original test plan again, let's use some of it here. I think it makes sense to make this docstring a kind of a test plan with high level description. > +import struct > + > +from scapy.layers.inet import IP # type: ignore[import] > +from scapy.layers.l2 import Ether # type: ignore[import] > +from scapy.packet import Raw # type: ignore[import] > +from scapy.utils import hexstr # type: ignore[import] > + > +from framework.remote_session.remote.testpmd_shell import ( > +TestPmdForwardingModes, > +TestPmdShell, > +) > +from framework.test_suite import TestSuite > + > + > +class PmdBufferScatter(TestSuite): > +"""DPDK packet scattering test suite. > + And here we could add some more specifics. I'd like to utilize the original test plans and a split like this makes sense at a first glance. > +Attributes: > +mbsize: The size to se the mbuf to be. > +""" > + > +mbsize: int > + > +def set_up_suite(self) -> None: > +self.verify( > +len(self._port_links) > 1, > +"Must have at least two port links to run scatter", > +) > + > +self.tg_node.main_session.configure_port_mtu(9000, > self._tg_port_egress) > +self.tg_node.main_session.configure_port_mtu(9000, > self._tg_port_ingress) > + > +def scatter_pktgen_send_packet(self, pktsize: int) -> str: > +"""Generate and send packet to the SUT. > + > +Functional test for scatter packets. > + > +Args: > +pktsize: Size of the packet to generate and send. 
> +""" > +packet = Ether() / IP() / Raw() > +packet.getlayer(2).load = "" > +payload_len = pktsize - len(packet) - 4 > +payload = ["58"] * payload_len > +# pack the payload > +for X_in_hex in payload: > +packet.load += struct.pack("=B", int("%s%s" % (X_in_hex[0], > X_in_hex[1]), 16)) > +load = hexstr(packet.getlayer(2), onlyhex=1) > +received_packets = self.send_packet_and_capture(packet) > +self.verify(len(received_packets) > 0, "Did not receive any > packets.") > +load = hexstr(received_packets[0].getlayer(2), onlyhex=1) > + > +return load > + > +def pmd_scatter(self) -> None: > +"""Testpmd support of receiving and sending scattered multi-segment > packets. > + > +Support for scattered packets is shown by sending 5 packets of > differing length > +where the length of the packet is calculated by taking mbuf-size + > an offset. The > +offsets used in the test case are -1, 0, 1, 4, 5 respectively. > + > +Test: > +Start testpmd and run functional test with preset mbsize. > +""" > +testpmd = self.sut_node.create_interactive_shell( > +TestPmdShell, > +app_parameters=( > +"--mbcache=200 " > +f"--mbuf-size={self.mbsize} " > +"--max-pkt-len=9000 " > +"--port-topology=paired " > +"--tx-offloads=0x8000" >
[dpdk-dev] [RFC] ethdev: support Tx queue free descriptor query
From: Jerin Jacob Introduce a new API to retrieve the number of available free descriptors in a Tx queue. Applications can leverage this API in the fast path to inspect the Tx queue occupancy and take appropriate actions based on the available free descriptors. A notable use case could be implementing Random Early Discard (RED) in software based on Tx queue occupancy. Signed-off-by: Jerin Jacob --- doc/guides/nics/features.rst | 10 doc/guides/nics/features/default.ini | 1 + lib/ethdev/ethdev_trace_points.c | 3 ++ lib/ethdev/rte_ethdev.h | 78 lib/ethdev/rte_ethdev_core.h | 7 ++- lib/ethdev/rte_ethdev_trace_fp.h | 8 +++ 6 files changed, 106 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst index f7d9980849..9d6655473a 100644 --- a/doc/guides/nics/features.rst +++ b/doc/guides/nics/features.rst @@ -962,6 +962,16 @@ management (see :doc:`../prog_guide/power_man` for more details). * **[implements] eth_dev_ops**: ``get_monitor_addr`` +.. _nic_features_tx_queue_free_desc_query: + +Tx queue free descriptor query +-- + +Supports to get the number of free descriptors in a Tx queue. + +* **[implements] eth_dev_ops**: ``tx_queue_free_desc_get``. +* **[related] API**: ``rte_eth_tx_queue_free_desc_get()``. + .. _nic_features_other: Other dev ops not represented by a Feature diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini index 806cb033ff..b30002b1c1 100644 --- a/doc/guides/nics/features/default.ini +++ b/doc/guides/nics/features/default.ini @@ -59,6 +59,7 @@ Packet type parsing = Timesync = Rx descriptor status = Tx descriptor status = +Tx free descriptor query = Basic stats = Extended stats = Stats per queue = diff --git a/lib/ethdev/ethdev_trace_points.c b/lib/ethdev/ethdev_trace_points.c index 91f71d868b..346f37f2e4 100644 --- a/lib/ethdev/ethdev_trace_points.c +++ b/lib/ethdev/ethdev_trace_points.c @@ -481,6 +481,9 @@ RTE_TRACE_POINT_REGISTER(rte_eth_trace_count_aggr_ports, RTE_TRACE_POINT_REGISTER(rte_eth_trace_map_aggr_tx_affinity, lib.ethdev.map_aggr_tx_affinity) +RTE_TRACE_POINT_REGISTER(rte_eth_trace_tx_queue_free_desc_get, + lib.ethdev.tx_queue_free_desc_get) + RTE_TRACE_POINT_REGISTER(rte_flow_trace_copy, lib.ethdev.flow.copy) diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 77331ce652..033fcb8c9b 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -6802,6 +6802,84 @@ rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id, __rte_experimental int rte_eth_buffer_split_get_supported_hdr_ptypes(uint16_t port_id, uint32_t *ptypes, int num); +/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * + * Get the number of free descriptors in a Tx queue. + * + * This function retrieves the number of available free descriptors in a + * transmit queue. Applications can use this API in the fast path to inspect + * Tx queue occupancy and take appropriate actions based on the available + * free descriptors. An example action could be implementing the + * Random Early Discard (RED). + * + * If there are no packets in the Tx queue, the function returns the value + * of `nb_tx_desc` provided during the initialization of the Tx queue using + * rte_eth_tx_queue_setup(), signifying that all descriptors are free. + * + * @param port_id + * The port identifier of the device. + * @param tx_queue_id + * The index of the transmit queue. 
+ * The value must be in the range [0, nb_tx_queue - 1] previously supplied + * to rte_eth_dev_configure(). + * @return + * - (<= UINT16_MAX) Number of free descriptors in a Tx queue + * - (> UINT16_MAX) if error. Enabled only when RTE_ETHDEV_DEBUG_TX is enabled + * + * @note This function is designed for fast-path use. + * + */ +__rte_experimental +static inline uint32_t +rte_eth_tx_queue_free_desc_get(uint16_t port_id, uint16_t tx_queue_id) +{ + struct rte_eth_fp_ops *fops; + uint32_t rc; + void *qd; + +#ifdef RTE_ETHDEV_DEBUG_TX + rc = UINT32_MAX; + if (port_id >= RTE_MAX_ETHPORTS || tx_queue_id >= RTE_MAX_QUEUES_PER_PORT) { + RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u or tx_queue_id=%u\n", + port_id, tx_queue_id); + + rte_eth_trace_tx_queue_free_desc_get(port_id, tx_queue_id, rc); + return rc; + } +#endif + + /* Fetch pointer to Tx queue data */ + fops = &rte_eth_fp_ops[port_id]; + qd = fops->txq.data[tx_queue_id]; + +#ifdef RTE_ETHDEV_DEBUG_TX + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0); + + if (qd == NULL) { + RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n", + tx_queue_id, port_id); + + rte_eth_trace_
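A minimal usage sketch (not part of the RFC) of how a fast-path caller might consume the proposed API for a simple early-drop policy; the helper name, the queue-depth argument and the ~90% threshold are arbitrary choices for illustration:

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Send a burst, shedding load early once the Tx queue is about 90% full. */
static uint16_t
tx_burst_with_early_drop(uint16_t port, uint16_t queue, struct rte_mbuf **pkts,
			 uint16_t nb_pkts, uint16_t nb_tx_desc)
{
	uint32_t free_desc = rte_eth_tx_queue_free_desc_get(port, queue);
	uint16_t to_send = nb_pkts;
	uint16_t nb_tx;

	/* Queue more than ~90% full: send no more than the free slots. */
	if (free_desc < (uint32_t)nb_tx_desc / 10)
		to_send = RTE_MIN(nb_pkts, (uint16_t)free_desc);

	nb_tx = rte_eth_tx_burst(port, queue, pkts, to_send);

	/* Drop both the shed packets and anything the PMD did not accept. */
	if (nb_tx < nb_pkts)
		rte_pktmbuf_free_bulk(pkts + nb_tx, nb_pkts - nb_tx);

	return nb_tx;
}

A full RED policy would replace the hard threshold with a probabilistic drop driven by an averaged queue depth, but the shape of the call site stays the same.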
Re: [PATCH 0/2] remove __typeof__ from expansion of per lcore macros
On Tue, Dec 19, 2023 at 09:27:42AM -0800, Tyler Retzlaff wrote: > The design of the macros requires a type to be provided to the macro. > > Because the type parameter is expanded inside typeof, the macros also > inadvertently accept an expression, which, judging by the parameter name > and existing macro use, does not appear to have been intended. > > Technically this is an API break, but only for applications that were > using these macros outside of the original design intent. > > Tyler Retzlaff (2): > eal: provide type instead of expression to per lcore macro > eal: remove typeof from per lcore macros > Series-acked-by: Bruce Richardson
[PATCH v2 00/24] net/cnxk: support for port representors
Introducing port representor support to CNXK drivers by adding virtual ethernet ports providing a logical representation in DPDK for physical function(PF) or SR-IOV virtual function (VF) devices for control and monitoring. These port representor ethdev instances can be spawned on an as needed basis through configuration parameters passed to the driver of the underlying base device using devargs ``-a ,representor=pf*vf*`` In case of exception path (i.e. until the flow definition is offloaded to the hardware), packets transmitted by the VFs shall be received by these representor port, while packets transmitted by representor ports shall be received by respective VFs. On receiving the VF traffic via these representor ports, applications holding these representor ports can decide to offload the traffic flow into the HW. Henceforth the matching traffic shall be directly steered to the respective VFs without being received by the application. Current virtual representor port PMD supports following operations: - Get represented port statistics - Set mac address - Flow operations - create, validate, destroy, query, flush, dump Changes since V1: * Updated communication layer between representor and represented port. * Added support for native represented ports * Port representor and represented port item and action support * Build failure fixes -- Harman Kalra (24): common/cnxk: add support for representors net/cnxk: implementing eswitch device net/cnxk: eswitch HW resource configuration net/cnxk: eswitch devargs parsing net/cnxk: probing representor ports common/cnxk: common NPC changes for eswitch common/cnxk: interface to update VLAN TPID net/cnxk: eswitch flow configurations net/cnxk: eswitch fastpath routines net/cnxk: add representor control plane common/cnxk: representee notification callback net/cnxk: handling representee notification net/cnxk: representor ethdev ops common/cnxk: get representees ethernet stats net/cnxk: ethernet statistic for representor common/cnxk: base support for eswitch VF net/cnxk: eswitch VF as ethernet device common/cnxk: support port representor and represented port net/cnxk: add represented port pattern and action net/cnxk: add port representor pattern and action net/cnxk: generalize flow operation APIs net/cnxk: flow create on representor ports net/cnxk: other flow operations doc: port representors in cnxk MAINTAINERS | 1 + doc/guides/nics/cnxk.rst| 58 ++ doc/guides/nics/features/cnxk.ini | 3 + doc/guides/nics/features/cnxk_vf.ini| 4 + drivers/common/cnxk/meson.build | 1 + drivers/common/cnxk/roc_api.h | 3 + drivers/common/cnxk/roc_constants.h | 2 + drivers/common/cnxk/roc_dev.c | 25 + drivers/common/cnxk/roc_dev_priv.h | 3 + drivers/common/cnxk/roc_eswitch.c | 368 ++ drivers/common/cnxk/roc_eswitch.h | 33 + drivers/common/cnxk/roc_mbox.c | 2 + drivers/common/cnxk/roc_mbox.h | 73 +- drivers/common/cnxk/roc_nix.c | 46 +- drivers/common/cnxk/roc_nix.h | 4 + drivers/common/cnxk/roc_nix_priv.h | 5 +- drivers/common/cnxk/roc_nix_vlan.c | 23 +- drivers/common/cnxk/roc_npc.c | 89 ++- drivers/common/cnxk/roc_npc.h | 18 +- drivers/common/cnxk/roc_npc_mcam.c | 64 +- drivers/common/cnxk/roc_npc_parse.c | 28 +- drivers/common/cnxk/roc_npc_priv.h | 5 +- drivers/common/cnxk/roc_platform.c | 2 + drivers/common/cnxk/roc_platform.h | 4 + drivers/common/cnxk/version.map | 14 + drivers/net/cnxk/cn10k_ethdev.c | 1 + drivers/net/cnxk/cnxk_eswitch.c | 871 drivers/net/cnxk/cnxk_eswitch.h | 213 ++ drivers/net/cnxk/cnxk_eswitch_devargs.c | 237 +++ drivers/net/cnxk/cnxk_eswitch_flow.c| 
445 drivers/net/cnxk/cnxk_eswitch_rxtx.c| 212 ++ drivers/net/cnxk/cnxk_ethdev.c | 39 +- drivers/net/cnxk/cnxk_ethdev.h | 3 + drivers/net/cnxk/cnxk_ethdev_ops.c | 4 + drivers/net/cnxk/cnxk_flow.c| 521 +++--- drivers/net/cnxk/cnxk_flow.h| 28 +- drivers/net/cnxk/cnxk_link.c| 3 +- drivers/net/cnxk/cnxk_rep.c | 555 +++ drivers/net/cnxk/cnxk_rep.h | 141 drivers/net/cnxk/cnxk_rep_flow.c| 813 ++ drivers/net/cnxk/cnxk_rep_msg.c | 823 ++ drivers/net/cnxk/cnxk_rep_msg.h | 169 + drivers/net/cnxk/cnxk_rep_ops.c | 715 +++ drivers/net/cnxk/meson.build| 8 + 44 files changed, 6498 insertions(+), 181 deletions(-) create mode 100644 drivers/common/cnxk/roc_eswitch.c create mode 100644 drivers/common/cnxk/roc_eswitch.h create mode 100644 drivers/net/cnxk/cnxk_eswitch.c create mode 100644 drivers
[PATCH v2 01/24] common/cnxk: add support for representors
- Mailbox for registering base device behind all representors - Registering debug log type for representors Signed-off-by: Harman Kalra --- drivers/common/cnxk/roc_constants.h | 1 + drivers/common/cnxk/roc_mbox.h | 8 drivers/common/cnxk/roc_nix.c | 31 + drivers/common/cnxk/roc_nix.h | 3 +++ drivers/common/cnxk/roc_platform.c | 2 ++ drivers/common/cnxk/roc_platform.h | 4 drivers/common/cnxk/version.map | 3 +++ 7 files changed, 52 insertions(+) diff --git a/drivers/common/cnxk/roc_constants.h b/drivers/common/cnxk/roc_constants.h index 291b6a4bc9..cb4edbea58 100644 --- a/drivers/common/cnxk/roc_constants.h +++ b/drivers/common/cnxk/roc_constants.h @@ -43,6 +43,7 @@ #define PCI_DEVID_CNXK_RVU_NIX_INL_VF 0xA0F1 #define PCI_DEVID_CNXK_RVU_REE_PF 0xA0f4 #define PCI_DEVID_CNXK_RVU_REE_VF 0xA0f5 +#define PCI_DEVID_CNXK_RVU_ESWITCH_PF 0xA0E0 #define PCI_DEVID_CN9K_CGX 0xA059 #define PCI_DEVID_CN10K_RPM 0xA060 diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h index 3257a370bc..b7e2f43d45 100644 --- a/drivers/common/cnxk/roc_mbox.h +++ b/drivers/common/cnxk/roc_mbox.h @@ -68,6 +68,7 @@ struct mbox_msghdr { M(NDC_SYNC_OP, 0x009, ndc_sync_op, ndc_sync_op, msg_rsp) \ M(LMTST_TBL_SETUP, 0x00a, lmtst_tbl_setup, lmtst_tbl_setup_req,\ msg_rsp) \ + M(GET_REP_CNT, 0x00d, get_rep_cnt, msg_req, get_rep_cnt_rsp) \ /* CGX mbox IDs (range 0x200 - 0x3FF) */ \ M(CGX_START_RXTX, 0x200, cgx_start_rxtx, msg_req, msg_rsp) \ M(CGX_STOP_RXTX, 0x201, cgx_stop_rxtx, msg_req, msg_rsp) \ @@ -546,6 +547,13 @@ struct lmtst_tbl_setup_req { uint64_t __io rsvd[2]; /* Future use */ }; +#define MAX_PFVF_REP 64 +struct get_rep_cnt_rsp { + struct mbox_msghdr hdr; + uint16_t __io rep_cnt; + uint16_t __io rep_pfvf_map[MAX_PFVF_REP]; +}; + /* CGX mbox message formats */ /* CGX mailbox error codes * Range 1101 - 1200. 
diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c index f64933a1d9..7e327a7e6e 100644 --- a/drivers/common/cnxk/roc_nix.c +++ b/drivers/common/cnxk/roc_nix.c @@ -531,3 +531,34 @@ roc_nix_dev_fini(struct roc_nix *roc_nix) rc |= dev_fini(&nix->dev, nix->pci_dev); return rc; } + +int +roc_nix_max_rep_count(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + struct mbox *mbox = mbox_get(dev->mbox); + struct get_rep_cnt_rsp *rsp; + struct msg_req *req; + int rc, i; + + req = mbox_alloc_msg_get_rep_cnt(mbox); + if (!req) { + rc = -ENOSPC; + goto exit; + } + + req->hdr.pcifunc = roc_nix_get_pf_func(roc_nix); + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + goto exit; + + roc_nix->rep_cnt = rsp->rep_cnt; + for (i = 0; i < rsp->rep_cnt; i++) + roc_nix->rep_pfvf_map[i] = rsp->rep_pfvf_map[i]; + +exit: + mbox_put(mbox); + return rc; +} diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 84e6fc3df5..b369335fc4 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -483,6 +483,8 @@ struct roc_nix { uint32_t buf_sz; uint64_t meta_aura_handle; uintptr_t meta_mempool; + uint16_t rep_cnt; + uint16_t rep_pfvf_map[MAX_PFVF_REP]; TAILQ_ENTRY(roc_nix) next; #define ROC_NIX_MEM_SZ (6 * 1070) @@ -1013,4 +1015,5 @@ int __roc_api roc_nix_mcast_list_setup(struct mbox *mbox, uint8_t intf, int nb_e uint16_t *pf_funcs, uint16_t *channels, uint32_t *rqs, uint32_t *grp_index, uint32_t *start_index); int __roc_api roc_nix_mcast_list_free(struct mbox *mbox, uint32_t mcast_grp_index); +int __roc_api roc_nix_max_rep_count(struct roc_nix *roc_nix); #endif /* _ROC_NIX_H_ */ diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c index 15cbb6d68f..181902a585 100644 --- a/drivers/common/cnxk/roc_platform.c +++ b/drivers/common/cnxk/roc_platform.c @@ -96,4 +96,6 @@ RTE_LOG_REGISTER_DEFAULT(cnxk_logtype_sso, NOTICE); RTE_LOG_REGISTER_DEFAULT(cnxk_logtype_tim, NOTICE); RTE_LOG_REGISTER_DEFAULT(cnxk_logtype_tm, NOTICE); RTE_LOG_REGISTER_DEFAULT(cnxk_logtype_dpi, NOTICE); +RTE_LOG_REGISTER_DEFAULT(cnxk_logtype_rep, NOTICE); +RTE_LOG_REGISTER_DEFAULT(cnxk_logtype_esw, NOTICE); RTE_LOG_REGISTER_DEFAULT(cnxk_logtype_ree, NOTICE); diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h index ba23b2e0d7..e08eb7f6ba 100644 --- a/drivers/common/cnxk/roc_platform.h +++ b/drivers/common/cnxk/roc_platform.h @@ -264,6 +264,8 @@ extern
[PATCH v2 02/24] net/cnxk: implementing eswitch device
Eswitch device is a parent or base device behind all the representors, acting as transport layer between representors and representees Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_eswitch.c | 465 drivers/net/cnxk/cnxk_eswitch.h | 103 +++ drivers/net/cnxk/meson.build| 1 + 3 files changed, 569 insertions(+) create mode 100644 drivers/net/cnxk/cnxk_eswitch.c create mode 100644 drivers/net/cnxk/cnxk_eswitch.h diff --git a/drivers/net/cnxk/cnxk_eswitch.c b/drivers/net/cnxk/cnxk_eswitch.c new file mode 100644 index 00..51110a762d --- /dev/null +++ b/drivers/net/cnxk/cnxk_eswitch.c @@ -0,0 +1,465 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#include + +#define CNXK_NIX_DEF_SQ_COUNT 512 + +static int +cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev) +{ + struct cnxk_eswitch_dev *eswitch_dev; + struct roc_nix *nix; + int rc = 0; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + eswitch_dev = cnxk_eswitch_pmd_priv(); + + /* Check if this device is hosting common resource */ + nix = roc_idev_npa_nix_get(); + if (!nix || nix->pci_dev != pci_dev) { + rc = -EINVAL; + goto exit; + } + + /* Try nix fini now */ + rc = roc_nix_dev_fini(&eswitch_dev->nix); + if (rc == -EAGAIN) { + plt_info("%s: common resource in use by other devices", pci_dev->name); + goto exit; + } else if (rc) { + plt_err("Failed in nix dev fini, rc=%d", rc); + goto exit; + } + + rte_free(eswitch_dev); +exit: + return rc; +} + +static int +eswitch_dev_nix_flow_ctrl_set(struct cnxk_eswitch_dev *eswitch_dev) +{ + /* TODO enable flow control */ + return 0; + enum roc_nix_fc_mode mode_map[] = {ROC_NIX_FC_NONE, ROC_NIX_FC_RX, ROC_NIX_FC_TX, + ROC_NIX_FC_FULL}; + struct roc_nix *nix = &eswitch_dev->nix; + struct roc_nix_fc_cfg fc_cfg; + uint8_t rx_pause, tx_pause; + struct roc_nix_sq *sq; + struct roc_nix_cq *cq; + struct roc_nix_rq *rq; + uint8_t tc; + int rc, i; + + rx_pause = 1; + tx_pause = 1; + + /* Check if TX pause frame is already enabled or not */ + tc = tx_pause ? 0 : ROC_NIX_PFC_CLASS_INVALID; + + for (i = 0; i < eswitch_dev->nb_rxq; i++) { + memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg)); + + rq = &eswitch_dev->rxq[i].rqs; + cq = &eswitch_dev->cxq[i].cqs; + + fc_cfg.type = ROC_NIX_FC_RQ_CFG; + fc_cfg.rq_cfg.enable = !!tx_pause; + fc_cfg.rq_cfg.tc = tc; + fc_cfg.rq_cfg.rq = rq->qid; + fc_cfg.rq_cfg.pool = rq->aura_handle; + fc_cfg.rq_cfg.spb_pool = rq->spb_aura_handle; + fc_cfg.rq_cfg.cq_drop = cq->drop_thresh; + fc_cfg.rq_cfg.pool_drop_pct = ROC_NIX_AURA_THRESH; + + rc = roc_nix_fc_config_set(nix, &fc_cfg); + if (rc) + return rc; + } + + /* Check if RX pause frame is enabled or not */ + tc = rx_pause ? 0 : ROC_NIX_PFC_CLASS_INVALID; + for (i = 0; i < eswitch_dev->nb_txq; i++) { + memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg)); + + sq = &eswitch_dev->txq[i].sqs; + + fc_cfg.type = ROC_NIX_FC_TM_CFG; + fc_cfg.tm_cfg.sq = sq->qid; + fc_cfg.tm_cfg.tc = tc; + fc_cfg.tm_cfg.enable = !!rx_pause; + rc = roc_nix_fc_config_set(nix, &fc_cfg); + if (rc && rc != EEXIST) + return rc; + } + + rc = roc_nix_fc_mode_set(nix, mode_map[ROC_NIX_FC_FULL]); + if (rc) + return rc; + + return rc; +} + +int +cnxk_eswitch_nix_rsrc_start(struct cnxk_eswitch_dev *eswitch_dev) +{ + int rc; + + /* Update Flow control configuration */ + rc = eswitch_dev_nix_flow_ctrl_set(eswitch_dev); + if (rc) { + plt_err("Failed to enable flow control. 
error code(%d)", rc); + goto done; + } + + /* Enable Rx in NPC */ + rc = roc_nix_npc_rx_ena_dis(&eswitch_dev->nix, true); + if (rc) { + plt_err("Failed to enable NPC rx %d", rc); + goto done; + } + + rc = roc_npc_mcam_enable_all_entries(&eswitch_dev->npc, 1); + if (rc) { + plt_err("Failed to enable NPC entries %d", rc); + goto done; + } + +done: + return 0; +} + +int +cnxk_eswitch_txq_start(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid) +{ + struct roc_nix_sq *sq = &eswitch_dev->txq[qid].sqs; + int rc = -EINVAL; + + if (eswitch_dev->txq[qid].state == CNXK_ESWITCH_QUEUE_STATE_STARTED) + return 0;
[PATCH v2 03/24] net/cnxk: eswitch HW resource configuration
Configuring the hardware resources used by the eswitch device. Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_eswitch.c | 206 1 file changed, 206 insertions(+) diff --git a/drivers/net/cnxk/cnxk_eswitch.c b/drivers/net/cnxk/cnxk_eswitch.c index 51110a762d..306edc6037 100644 --- a/drivers/net/cnxk/cnxk_eswitch.c +++ b/drivers/net/cnxk/cnxk_eswitch.c @@ -6,6 +6,30 @@ #define CNXK_NIX_DEF_SQ_COUNT 512 +static int +eswitch_hw_rsrc_cleanup(struct cnxk_eswitch_dev *eswitch_dev) +{ + struct roc_nix *nix; + int rc = 0; + + nix = &eswitch_dev->nix; + + roc_nix_unregister_queue_irqs(nix); + roc_nix_tm_fini(nix); + rc = roc_nix_lf_free(nix); + if (rc) { + plt_err("Failed to cleanup sq, rc %d", rc); + goto exit; + } + + rte_free(eswitch_dev->txq); + rte_free(eswitch_dev->rxq); + rte_free(eswitch_dev->cxq); + +exit: + return rc; +} + static int cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev) { @@ -18,6 +42,7 @@ cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev) eswitch_dev = cnxk_eswitch_pmd_priv(); + eswitch_hw_rsrc_cleanup(eswitch_dev); /* Check if this device is hosting common resource */ nix = roc_idev_npa_nix_get(); if (!nix || nix->pci_dev != pci_dev) { @@ -404,6 +429,178 @@ cnxk_eswitch_txq_setup(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid, uint1 return rc; } +static int +nix_lf_setup(struct cnxk_eswitch_dev *eswitch_dev) +{ + uint16_t nb_rxq, nb_txq, nb_cq; + struct roc_nix_fc_cfg fc_cfg; + struct roc_nix *nix; + uint64_t rx_cfg; + void *qs; + int rc; + + /* Initialize base roc nix */ + nix = &eswitch_dev->nix; + nix->pci_dev = eswitch_dev->pci_dev; + nix->hw_vlan_ins = true; + nix->reta_sz = ROC_NIX_RSS_RETA_SZ_256; + rc = roc_nix_dev_init(nix); + if (rc) { + plt_err("Failed to init nix eswitch device, rc=%d(%s)", rc, roc_error_msg_get(rc)); + goto fail; + } + + /* Get the representors count */ + rc = roc_nix_max_rep_count(&eswitch_dev->nix); + if (rc) { + plt_err("Failed to get rep cnt, rc=%d(%s)", rc, roc_error_msg_get(rc)); + goto free_cqs; + } + + /* Allocating an NIX LF */ + nb_rxq = CNXK_ESWITCH_MAX_RXQ; + nb_txq = CNXK_ESWITCH_MAX_TXQ; + nb_cq = CNXK_ESWITCH_MAX_RXQ; + rx_cfg = ROC_NIX_LF_RX_CFG_DIS_APAD; + rc = roc_nix_lf_alloc(nix, nb_rxq, nb_txq, rx_cfg); + if (rc) { + plt_err("lf alloc failed = %s(%d)", roc_error_msg_get(rc), rc); + goto dev_fini; + } + + if (nb_rxq) { + /* Allocate memory for eswitch rq's and cq's */ + qs = plt_zmalloc(sizeof(struct cnxk_eswitch_rxq) * nb_rxq, 0); + if (!qs) { + plt_err("Failed to alloc eswitch rxq"); + goto lf_free; + } + eswitch_dev->rxq = qs; + } + + if (nb_txq) { + /* Allocate memory for roc sq's */ + qs = plt_zmalloc(sizeof(struct cnxk_eswitch_txq) * nb_txq, 0); + if (!qs) { + plt_err("Failed to alloc eswitch txq"); + goto free_rqs; + } + eswitch_dev->txq = qs; + } + + if (nb_cq) { + qs = plt_zmalloc(sizeof(struct cnxk_eswitch_cxq) * nb_cq, 0); + if (!qs) { + plt_err("Failed to alloc eswitch cxq"); + goto free_sqs; + } + eswitch_dev->cxq = qs; + } + + eswitch_dev->nb_rxq = nb_rxq; + eswitch_dev->nb_txq = nb_txq; + + /* Re-enable NIX LF error interrupts */ + roc_nix_err_intr_ena_dis(nix, true); + roc_nix_ras_intr_ena_dis(nix, true); + + rc = roc_nix_lso_fmt_setup(nix); + if (rc) { + plt_err("lso setup failed = %s(%d)", roc_error_msg_get(rc), rc); + goto free_cqs; + } + + rc = roc_nix_switch_hdr_set(nix, 0, 0, 0, 0); + if (rc) { + plt_err("switch hdr set failed = %s(%d)", roc_error_msg_get(rc), rc); + goto free_cqs; + } + + rc = roc_nix_rss_default_setup(nix, + FLOW_KEY_TYPE_IPV4 | FLOW_KEY_TYPE_TCP | 
FLOW_KEY_TYPE_UDP); + if (rc) { + plt_err("rss default setup failed = %s(%d)", roc_error_msg_get(rc), rc); + goto free_cqs; + } + + rc = roc_nix_tm_init(nix); + if (rc) { + plt_err("tm failed = %s(%d)", roc_error_msg_get(rc), rc); + goto free_cqs; + } + + /* Register queue IRQs */ + rc = roc_nix_register_queue_irqs(nix); + if (rc) { + plt_err("Failed to register queue
[PATCH v2 04/24] net/cnxk: eswitch devargs parsing
Implementing the devargs parsing logic via which the representors pattern is provided. These patterns define for which representies representors shall be created. Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_eswitch.c | 88 + drivers/net/cnxk/cnxk_eswitch.h | 52 ++ drivers/net/cnxk/cnxk_eswitch_devargs.c | 236 drivers/net/cnxk/meson.build| 1 + 4 files changed, 377 insertions(+) create mode 100644 drivers/net/cnxk/cnxk_eswitch_devargs.c diff --git a/drivers/net/cnxk/cnxk_eswitch.c b/drivers/net/cnxk/cnxk_eswitch.c index 306edc6037..739a09c034 100644 --- a/drivers/net/cnxk/cnxk_eswitch.c +++ b/drivers/net/cnxk/cnxk_eswitch.c @@ -456,6 +456,7 @@ nix_lf_setup(struct cnxk_eswitch_dev *eswitch_dev) plt_err("Failed to get rep cnt, rc=%d(%s)", rc, roc_error_msg_get(rc)); goto free_cqs; } + eswitch_dev->repr_cnt.max_repr = eswitch_dev->nix.rep_cnt; /* Allocating an NIX LF */ nb_rxq = CNXK_ESWITCH_MAX_RXQ; @@ -601,11 +602,73 @@ eswitch_hw_rsrc_setup(struct cnxk_eswitch_dev *eswitch_dev) return rc; } +int +cnxk_eswitch_representor_info_get(struct cnxk_eswitch_dev *eswitch_dev, + struct rte_eth_representor_info *info) +{ + struct cnxk_eswitch_devargs *esw_da; + int rc = 0, n_entries, i, j = 0, k = 0; + + for (i = 0; i < eswitch_dev->nb_esw_da; i++) { + for (j = 0; j < eswitch_dev->esw_da[i].nb_repr_ports; j++) + k++; + } + n_entries = k; + + if (info == NULL) + goto out; + + if ((uint32_t)n_entries > info->nb_ranges_alloc) + n_entries = info->nb_ranges_alloc; + + k = 0; + info->controller = 0; + info->pf = 0; + for (i = 0; i < eswitch_dev->nb_esw_da; i++) { + esw_da = &eswitch_dev->esw_da[i]; + info->ranges[k].type = esw_da->da.type; + switch (esw_da->da.type) { + case RTE_ETH_REPRESENTOR_PF: + info->ranges[k].controller = 0; + info->ranges[k].pf = esw_da->repr_hw_info[0].pfvf; + info->ranges[k].vf = 0; + info->ranges[k].id_base = info->ranges[i].pf; + info->ranges[k].id_end = info->ranges[i].pf; + snprintf(info->ranges[k].name, sizeof(info->ranges[k].name), "pf%d", +info->ranges[k].pf); + k++; + break; + case RTE_ETH_REPRESENTOR_VF: + for (j = 0; j < esw_da->nb_repr_ports; j++) { + info->ranges[k].controller = 0; + info->ranges[k].pf = esw_da->da.ports[0]; + info->ranges[k].vf = esw_da->repr_hw_info[j].pfvf; + info->ranges[k].id_base = esw_da->repr_hw_info[j].port_id; + info->ranges[k].id_end = esw_da->repr_hw_info[j].port_id; + snprintf(info->ranges[k].name, sizeof(info->ranges[k].name), +"pf%dvf%d", info->ranges[k].pf, info->ranges[k].vf); + k++; + } + break; + default: + plt_err("Invalid type %d", esw_da->da.type); + rc = 0; + goto fail; + }; + } + info->nb_ranges = k; +fail: + return rc; +out: + return n_entries; +} + static int cnxk_eswitch_dev_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev) { struct cnxk_eswitch_dev *eswitch_dev; const struct rte_memzone *mz = NULL; + uint16_t num_reps; int rc = -ENOMEM; RTE_SET_USED(pci_drv); @@ -638,12 +701,37 @@ cnxk_eswitch_dev_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pc } } + if (pci_dev->device.devargs) { + rc = cnxk_eswitch_repr_devargs(pci_dev, eswitch_dev); + if (rc) + goto rsrc_cleanup; + } + + if (eswitch_dev->repr_cnt.nb_repr_created > eswitch_dev->repr_cnt.max_repr) { + plt_err("Representors to be created %d can be greater than max allowed %d", + eswitch_dev->repr_cnt.nb_repr_created, eswitch_dev->repr_cnt.max_repr); + rc = -EINVAL; + goto rsrc_cleanup; + } + + num_reps = eswitch_dev->repr_cnt.nb_repr_created; + if (!num_reps) { + plt_err("No representors enabled"); + goto fail; + } + + 
plt_esw_dbg("Max no of reps %d reps to be created %d Eswtch pfunc %x", + eswitch_dev->repr_cnt.max_repr, eswitch_dev->repr_cnt.nb_repr_crea
[PATCH v2 05/24] net/cnxk: probing representor ports
Basic skeleton for probing representor devices. If PF device is passed with "representor" devargs, representor ports gets probed as a separate ethdev device. Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_eswitch.c | 12 ++ drivers/net/cnxk/cnxk_eswitch.h | 8 +- drivers/net/cnxk/cnxk_rep.c | 256 drivers/net/cnxk/cnxk_rep.h | 50 +++ drivers/net/cnxk/cnxk_rep_ops.c | 129 drivers/net/cnxk/meson.build| 2 + 6 files changed, 456 insertions(+), 1 deletion(-) create mode 100644 drivers/net/cnxk/cnxk_rep.c create mode 100644 drivers/net/cnxk/cnxk_rep.h create mode 100644 drivers/net/cnxk/cnxk_rep_ops.c diff --git a/drivers/net/cnxk/cnxk_eswitch.c b/drivers/net/cnxk/cnxk_eswitch.c index 739a09c034..563b224a6c 100644 --- a/drivers/net/cnxk/cnxk_eswitch.c +++ b/drivers/net/cnxk/cnxk_eswitch.c @@ -3,6 +3,7 @@ */ #include +#include #define CNXK_NIX_DEF_SQ_COUNT 512 @@ -42,6 +43,10 @@ cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev) eswitch_dev = cnxk_eswitch_pmd_priv(); + /* Remove representor devices associated with PF */ + if (eswitch_dev->repr_cnt.nb_repr_created) + cnxk_rep_dev_remove(eswitch_dev); + eswitch_hw_rsrc_cleanup(eswitch_dev); /* Check if this device is hosting common resource */ nix = roc_idev_npa_nix_get(); @@ -724,6 +729,13 @@ cnxk_eswitch_dev_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pc eswitch_dev->repr_cnt.max_repr, eswitch_dev->repr_cnt.nb_repr_created, roc_nix_get_pf_func(&eswitch_dev->nix)); + /* Probe representor ports */ + rc = cnxk_rep_dev_probe(pci_dev, eswitch_dev); + if (rc) { + plt_err("Failed to probe representor ports"); + goto rsrc_cleanup; + } + /* Spinlock for synchronization between representors traffic and control * messages */ diff --git a/drivers/net/cnxk/cnxk_eswitch.h b/drivers/net/cnxk/cnxk_eswitch.h index dcb787cf02..4908c3ba95 100644 --- a/drivers/net/cnxk/cnxk_eswitch.h +++ b/drivers/net/cnxk/cnxk_eswitch.h @@ -66,6 +66,11 @@ struct cnxk_eswitch_repr_cnt { uint16_t nb_repr_started; }; +struct cnxk_eswitch_switch_domain { + uint16_t switch_domain_id; + uint16_t pf; +}; + struct cnxk_rep_info { struct rte_eth_dev *rep_eth_dev; }; @@ -121,7 +126,8 @@ struct cnxk_eswitch_dev { /* Port representor fields */ rte_spinlock_t rep_lock; - uint16_t switch_domain_id; + uint16_t nb_switch_domain; + struct cnxk_eswitch_switch_domain sw_dom[RTE_MAX_ETHPORTS]; uint16_t eswitch_vdev; struct cnxk_rep_info *rep_info; }; diff --git a/drivers/net/cnxk/cnxk_rep.c b/drivers/net/cnxk/cnxk_rep.c new file mode 100644 index 00..295bea3724 --- /dev/null +++ b/drivers/net/cnxk/cnxk_rep.c @@ -0,0 +1,256 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. 
+ */ +#include + +#define PF_SHIFT 10 +#define PF_MASK 0x3F + +static uint16_t +get_pf(uint16_t hw_func) +{ + return (hw_func >> PF_SHIFT) & PF_MASK; +} + +static uint16_t +switch_domain_id_allocate(struct cnxk_eswitch_dev *eswitch_dev, uint16_t pf) +{ + int i = 0; + + for (i = 0; i < eswitch_dev->nb_switch_domain; i++) { + if (eswitch_dev->sw_dom[i].pf == pf) + return eswitch_dev->sw_dom[i].switch_domain_id; + } + + return RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID; +} + +int +cnxk_rep_dev_uninit(struct rte_eth_dev *ethdev) +{ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + plt_rep_dbg("Representor port:%d uninit", ethdev->data->port_id); + rte_free(ethdev->data->mac_addrs); + ethdev->data->mac_addrs = NULL; + + return 0; +} + +int +cnxk_rep_dev_remove(struct cnxk_eswitch_dev *eswitch_dev) +{ + int i, rc = 0; + + for (i = 0; i < eswitch_dev->nb_switch_domain; i++) { + rc = rte_eth_switch_domain_free(eswitch_dev->sw_dom[i].switch_domain_id); + if (rc) + plt_err("Failed to alloc switch domain: %d", rc); + } + + return rc; +} + +static int +cnxk_rep_parent_setup(struct cnxk_eswitch_dev *eswitch_dev) +{ + uint16_t pf, prev_pf = 0, switch_domain_id; + int rc, i, j = 0; + + if (eswitch_dev->rep_info) + return 0; + + eswitch_dev->rep_info = + plt_zmalloc(sizeof(eswitch_dev->rep_info[0]) * eswitch_dev->repr_cnt.max_repr, 0); + if (!eswitch_dev->rep_info) { + plt_err("Failed to alloc memory for rep info"); + rc = -ENOMEM; + goto fail; + } + + /* Allocate switch domain for all PFs (VFs will be under same domain as PF) */ + for (i = 0; i < eswitch_dev->repr_cnt.max_repr; i++) { + pf = get_pf(eswitch_dev->nix
[PATCH v2 06/24] common/cnxk: common NPC changes for eswitch
- adding support for installing flow using npc_install_flow mbox - rss action configuration for eswitch - new mcam helper apis Signed-off-by: Harman Kalra --- drivers/common/cnxk/meson.build| 1 + drivers/common/cnxk/roc_api.h | 3 + drivers/common/cnxk/roc_eswitch.c | 285 + drivers/common/cnxk/roc_eswitch.h | 21 +++ drivers/common/cnxk/roc_mbox.h | 25 +++ drivers/common/cnxk/roc_npc.c | 26 ++- drivers/common/cnxk/roc_npc.h | 5 +- drivers/common/cnxk/roc_npc_mcam.c | 2 +- drivers/common/cnxk/roc_npc_priv.h | 3 +- drivers/common/cnxk/version.map| 6 + 10 files changed, 368 insertions(+), 9 deletions(-) create mode 100644 drivers/common/cnxk/roc_eswitch.c create mode 100644 drivers/common/cnxk/roc_eswitch.h diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build index 56eea52909..e0e4600989 100644 --- a/drivers/common/cnxk/meson.build +++ b/drivers/common/cnxk/meson.build @@ -20,6 +20,7 @@ sources = files( 'roc_cpt_debug.c', 'roc_dev.c', 'roc_dpi.c', +'roc_eswitch.c', 'roc_hash.c', 'roc_idev.c', 'roc_irq.c', diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h index f630853088..6a86863c57 100644 --- a/drivers/common/cnxk/roc_api.h +++ b/drivers/common/cnxk/roc_api.h @@ -117,4 +117,7 @@ /* MACsec */ #include "roc_mcs.h" +/* Eswitch */ +#include "roc_eswitch.h" + #endif /* _ROC_API_H_ */ diff --git a/drivers/common/cnxk/roc_eswitch.c b/drivers/common/cnxk/roc_eswitch.c new file mode 100644 index 00..42a27e7442 --- /dev/null +++ b/drivers/common/cnxk/roc_eswitch.c @@ -0,0 +1,285 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#include + +#include "roc_api.h" +#include "roc_priv.h" + +static int +eswitch_vlan_rx_cfg(uint16_t pcifunc, struct mbox *mbox) +{ + struct nix_vtag_config *vtag_cfg; + int rc; + + vtag_cfg = mbox_alloc_msg_nix_vtag_cfg(mbox_get(mbox)); + + /* config strip, capture and size */ + vtag_cfg->hdr.pcifunc = pcifunc; + vtag_cfg->vtag_size = NIX_VTAGSIZE_T4; + vtag_cfg->cfg_type = VTAG_RX; /* rx vlan cfg */ + vtag_cfg->rx.vtag_type = NIX_RX_VTAG_TYPE0; + vtag_cfg->rx.strip_vtag = true; + vtag_cfg->rx.capture_vtag = true; + + rc = mbox_process(mbox); + if (rc) + goto exit; + + rc = 0; +exit: + mbox_put(mbox); + return rc; +} + +static int +eswitch_vlan_tx_cfg(struct roc_npc_flow *flow, uint16_t pcifunc, struct mbox *mbox, + uint16_t vlan_tci, uint16_t *vidx) +{ + struct nix_vtag_config *vtag_cfg; + struct nix_vtag_config_rsp *rsp; + int rc; + + union { + uint64_t reg; + struct nix_tx_vtag_action_s act; + } tx_vtag_action; + + vtag_cfg = mbox_alloc_msg_nix_vtag_cfg(mbox_get(mbox)); + + /* Insert vlan tag */ + vtag_cfg->hdr.pcifunc = pcifunc; + vtag_cfg->vtag_size = NIX_VTAGSIZE_T4; + vtag_cfg->cfg_type = VTAG_TX; /* tx vlan cfg */ + vtag_cfg->tx.cfg_vtag0 = true; + vtag_cfg->tx.vtag0 = (((uint32_t)ROC_ESWITCH_VLAN_TPID << 16) | vlan_tci); + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + goto exit; + + if (rsp->vtag0_idx < 0) { + plt_err("Failed to config TX VTAG action"); + rc = -EINVAL; + goto exit; + } + + *vidx = rsp->vtag0_idx; + tx_vtag_action.reg = 0; + tx_vtag_action.act.vtag0_def = rsp->vtag0_idx; + tx_vtag_action.act.vtag0_lid = NPC_LID_LA; + tx_vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT; + tx_vtag_action.act.vtag0_relptr = NIX_TX_VTAGACTION_VTAG0_RELPTR; + + flow->vtag_action = tx_vtag_action.reg; + + rc = 0; +exit: + mbox_put(mbox); + return rc; + + return 0; +} + +int +roc_eswitch_npc_mcam_tx_rule(struct roc_npc *roc_npc, struct roc_npc_flow *flow, uint16_t pcifunc, +uint32_t 
vlan_tci) +{ + struct npc *npc = roc_npc_to_npc_priv(roc_npc); + struct npc_install_flow_req *req; + struct npc_install_flow_rsp *rsp; + struct mbox *mbox = npc->mbox; + uint16_t vidx = 0, lbkid; + int rc; + + rc = eswitch_vlan_tx_cfg(flow, roc_npc->pf_func, mbox, vlan_tci, &vidx); + if (rc) { + plt_err("Failed to configure VLAN TX, err %d", rc); + goto fail; + } + + req = mbox_alloc_msg_npc_install_flow(mbox_get(mbox)); + + lbkid = 0; + req->hdr.pcifunc = roc_npc->pf_func; /* Eswitch PF is requester */ + req->vf = pcifunc; + req->entry = flow->mcam_id; + req->intf = NPC_MCAM_TX; + req->op = NIX_TX_ACTIONOP_UCAST_CHAN; + req->index = (lbkid << 8) | ROC_ESWITCH_LBK_CHAN; + req->set_cntr = 1; + req->vtag0_def = vidx; + req->vta
[PATCH v2 07/24] common/cnxk: interface to update VLAN TPID
Introducing an eswitch variant of the set VLAN TPID API, which can be used for both PF and VF. Signed-off-by: Harman Kalra --- drivers/common/cnxk/roc_eswitch.c | 15 +++ drivers/common/cnxk/roc_eswitch.h | 4 drivers/common/cnxk/roc_nix_priv.h | 4 ++-- drivers/common/cnxk/roc_nix_vlan.c | 23 ++- drivers/common/cnxk/version.map| 1 + 5 files changed, 40 insertions(+), 7 deletions(-) diff --git a/drivers/common/cnxk/roc_eswitch.c b/drivers/common/cnxk/roc_eswitch.c index 42a27e7442..7f2a8e6c06 100644 --- a/drivers/common/cnxk/roc_eswitch.c +++ b/drivers/common/cnxk/roc_eswitch.c @@ -283,3 +283,18 @@ roc_eswitch_npc_rss_action_configure(struct roc_npc *roc_npc, struct roc_npc_flo ((uint64_t)(rss_grp_idx & NPC_RSS_ACT_GRP_MASK) << NPC_RSS_ACT_GRP_OFFSET); return 0; } + +int +roc_eswitch_nix_vlan_tpid_set(struct roc_nix *roc_nix, uint32_t type, uint16_t tpid, bool is_vf) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + int rc = 0; + + /* Configuring for PF/VF */ + rc = nix_vlan_tpid_set(dev->mbox, dev->pf_func | is_vf, type, tpid); + if (rc) + plt_err("Failed to set tpid for PF, rc %d", rc); + + return rc; +} diff --git a/drivers/common/cnxk/roc_eswitch.h b/drivers/common/cnxk/roc_eswitch.h index 35976b7ff6..0dd23ff76a 100644 --- a/drivers/common/cnxk/roc_eswitch.h +++ b/drivers/common/cnxk/roc_eswitch.h @@ -18,4 +18,8 @@ int __roc_api roc_eswitch_npc_mcam_delete_rule(struct roc_npc *roc_npc, struct r int __roc_api roc_eswitch_npc_rss_action_configure(struct roc_npc *roc_npc, struct roc_npc_flow *flow, uint32_t flowkey_cfg, uint16_t *reta_tbl); + +/* NIX */ +int __roc_api roc_eswitch_nix_vlan_tpid_set(struct roc_nix *nix, uint32_t type, uint16_t tpid, + bool is_vf); #endif /* __ROC_ESWITCH_H__ */ diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h index a582b9df33..8767a62577 100644 --- a/drivers/common/cnxk/roc_nix_priv.h +++ b/drivers/common/cnxk/roc_nix_priv.h @@ -473,9 +473,9 @@ int nix_lf_stat_reg_dump(uintptr_t nix_lf_base, uint64_t *data, uint8_t lf_tx_stats, uint8_t lf_rx_stats); int nix_lf_int_reg_dump(uintptr_t nix_lf_base, uint64_t *data, uint16_t qints, uint16_t cints); -int nix_q_ctx_get(struct dev *dev, uint8_t ctype, uint16_t qid, - __io void **ctx_p); +int nix_q_ctx_get(struct dev *dev, uint8_t ctype, uint16_t qid, __io void **ctx_p); uint8_t nix_tm_lbk_relchan_get(struct nix *nix); +int nix_vlan_tpid_set(struct mbox *mbox, uint16_t pcifunc, uint32_t type, uint16_t tpid); /* * Telemetry diff --git a/drivers/common/cnxk/roc_nix_vlan.c b/drivers/common/cnxk/roc_nix_vlan.c index abd2eb0571..db218593ad 100644 --- a/drivers/common/cnxk/roc_nix_vlan.c +++ b/drivers/common/cnxk/roc_nix_vlan.c @@ -211,18 +211,17 @@ roc_nix_vlan_insert_ena_dis(struct roc_nix *roc_nix, } int -roc_nix_vlan_tpid_set(struct roc_nix *roc_nix, uint32_t type, uint16_t tpid) +nix_vlan_tpid_set(struct mbox *mbox, uint16_t pcifunc, uint32_t type, uint16_t tpid) { - struct nix *nix = roc_nix_to_nix_priv(roc_nix); - struct dev *dev = &nix->dev; - struct mbox *mbox = mbox_get(dev->mbox); struct nix_set_vlan_tpid *tpid_cfg; int rc = -ENOSPC; - tpid_cfg = mbox_alloc_msg_nix_set_vlan_tpid(mbox); + /* Configuring for PF */ + tpid_cfg = mbox_alloc_msg_nix_set_vlan_tpid(mbox_get(mbox)); if (tpid_cfg == NULL) goto exit; tpid_cfg->tpid = tpid; + tpid_cfg->hdr.pcifunc = pcifunc; if (type & ROC_NIX_VLAN_TYPE_OUTER) tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER; @@ -234,3 +233,17 @@ roc_nix_vlan_tpid_set(struct roc_nix *roc_nix, uint32_t type, uint16_t tpid) mbox_put(mbox);
return rc; } + +int +roc_nix_vlan_tpid_set(struct roc_nix *roc_nix, uint32_t type, uint16_t tpid) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + int rc; + + rc = nix_vlan_tpid_set(dev->mbox, dev->pf_func, type, tpid); + if (rc) + plt_err("Failed to set tpid for PF, rc %d", rc); + + return rc; +} diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index feda34b852..78c421677d 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -91,6 +91,7 @@ INTERNAL { roc_dpi_disable; roc_dpi_enable; roc_error_msg_get; + roc_eswitch_nix_vlan_tpid_set; roc_eswitch_npc_mcam_delete_rule; roc_eswitch_npc_mcam_rx_rule; roc_eswitch_npc_mcam_tx_rule; -- 2.18.0
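A possible call site for the new API, with the signature taken from the patch; this is only a sketch that assumes the cnxk common code is built and the roc_nix handle is already initialized.

#include "roc_api.h"

/* Sketch: set the outer TPID on an eswitch PF or VF LF. The TPID value
 * 0x88a8 is just an example (S-TAG); any 16-bit TPID can be passed. */
static int
demo_set_outer_tpid(struct roc_nix *nix, bool configure_vf)
{
	int rc;

	rc = roc_eswitch_nix_vlan_tpid_set(nix, ROC_NIX_VLAN_TYPE_OUTER,
					   0x88a8, configure_vf);
	if (rc)
		plt_err("Failed to set TPID, rc=%d", rc);
	return rc;
}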
[PATCH v2 08/24] net/cnxk: eswitch flow configurations
- Adding flow rules for eswitch PF and VF - Interfaces to delete shift flow rules Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_eswitch.c | 43 ++- drivers/net/cnxk/cnxk_eswitch.h | 25 +- drivers/net/cnxk/cnxk_eswitch_devargs.c | 1 + drivers/net/cnxk/cnxk_eswitch_flow.c| 445 drivers/net/cnxk/meson.build| 1 + 5 files changed, 511 insertions(+), 4 deletions(-) create mode 100644 drivers/net/cnxk/cnxk_eswitch_flow.c diff --git a/drivers/net/cnxk/cnxk_eswitch.c b/drivers/net/cnxk/cnxk_eswitch.c index 563b224a6c..1cb0f0310a 100644 --- a/drivers/net/cnxk/cnxk_eswitch.c +++ b/drivers/net/cnxk/cnxk_eswitch.c @@ -2,11 +2,30 @@ * Copyright(C) 2023 Marvell. */ +#include + #include #include #define CNXK_NIX_DEF_SQ_COUNT 512 +struct cnxk_esw_repr_hw_info * +cnxk_eswitch_representor_hw_info(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func) +{ + struct cnxk_eswitch_devargs *esw_da; + int i, j; + + /* Traversing the initialized represented list */ + for (i = 0; i < eswitch_dev->nb_esw_da; i++) { + esw_da = &eswitch_dev->esw_da[i]; + for (j = 0; j < esw_da->nb_repr_ports; j++) { + if (esw_da->repr_hw_info[j].hw_func == hw_func) + return &esw_da->repr_hw_info[j]; + } + } + return NULL; +} + static int eswitch_hw_rsrc_cleanup(struct cnxk_eswitch_dev *eswitch_dev) { @@ -48,6 +67,10 @@ cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev) cnxk_rep_dev_remove(eswitch_dev); eswitch_hw_rsrc_cleanup(eswitch_dev); + + /* Cleanup NPC rxtx flow rules */ + cnxk_eswitch_flow_rules_remove_list(eswitch_dev, &eswitch_dev->esw_flow_list); + /* Check if this device is hosting common resource */ nix = roc_idev_npa_nix_get(); if (!nix || nix->pci_dev != pci_dev) { @@ -58,7 +81,7 @@ cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev) /* Try nix fini now */ rc = roc_nix_dev_fini(&eswitch_dev->nix); if (rc == -EAGAIN) { - plt_info("%s: common resource in use by other devices", pci_dev->name); + plt_esw_dbg("%s: common resource in use by other devices", pci_dev->name); goto exit; } else if (rc) { plt_err("Failed in nix dev fini, rc=%d", rc); @@ -154,6 +177,21 @@ cnxk_eswitch_nix_rsrc_start(struct cnxk_eswitch_dev *eswitch_dev) goto done; } + /* Install eswitch PF mcam rules */ + rc = cnxk_eswitch_pfvf_flow_rules_install(eswitch_dev, false); + if (rc) { + plt_err("Failed to install rxtx rules, rc %d", rc); + goto done; + } + + /* Configure TPID for Eswitch PF LFs */ + rc = roc_eswitch_nix_vlan_tpid_set(&eswitch_dev->nix, ROC_NIX_VLAN_TYPE_OUTER, + CNXK_ESWITCH_VLAN_TPID, false); + if (rc) { + plt_err("Failed to configure tpid, rc %d", rc); + goto done; + } + rc = roc_npc_mcam_enable_all_entries(&eswitch_dev->npc, 1); if (rc) { plt_err("Failed to enable NPC entries %d", rc); @@ -600,6 +638,9 @@ eswitch_hw_rsrc_setup(struct cnxk_eswitch_dev *eswitch_dev) if (rc) goto rsrc_cleanup; + /* List for eswitch default flows */ + TAILQ_INIT(&eswitch_dev->esw_flow_list); + return rc; rsrc_cleanup: eswitch_hw_rsrc_cleanup(eswitch_dev); diff --git a/drivers/net/cnxk/cnxk_eswitch.h b/drivers/net/cnxk/cnxk_eswitch.h index 4908c3ba95..470e4035bf 100644 --- a/drivers/net/cnxk/cnxk_eswitch.h +++ b/drivers/net/cnxk/cnxk_eswitch.h @@ -13,11 +13,10 @@ #include "cn10k_tx.h" #define CNXK_ESWITCH_CTRL_MSG_SOCK_PATH "/tmp/cxk_rep_ctrl_msg_sock" +#define CNXK_ESWITCH_VLAN_TPID ROC_ESWITCH_VLAN_TPID #define CNXK_REP_ESWITCH_DEV_MZ"cnxk_eswitch_dev" -#define CNXK_ESWITCH_VLAN_TPID 0x8100 /* TODO change */ #define CNXK_ESWITCH_MAX_TXQ 256 #define CNXK_ESWITCH_MAX_RXQ 256 -#define CNXK_ESWITCH_LBK_CHAN 63 #define CNXK_ESWITCH_VFPF_SHIFT8 
#define CNXK_ESWITCH_QUEUE_STATE_RELEASED 0 @@ -25,6 +24,7 @@ #define CNXK_ESWITCH_QUEUE_STATE_STARTED2 #define CNXK_ESWITCH_QUEUE_STATE_STOPPED3 +TAILQ_HEAD(eswitch_flow_list, roc_npc_flow); enum cnxk_esw_da_pattern_type { CNXK_ESW_DA_TYPE_LIST = 0, CNXK_ESW_DA_TYPE_PFVF, @@ -39,6 +39,9 @@ struct cnxk_esw_repr_hw_info { uint16_t pfvf; /* representor port id assigned to representee */ uint16_t port_id; + uint16_t num_flow_entries; + + TAILQ_HEAD(flow_list, roc_npc_flow) repr_flow_list; }; /* Structure representing per devarg information - this can be per representee @@ -90,7 +93,6 @@ struct cnxk_eswitch_cxq { uint8_t state; }; -TAI
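The eswitch keeps its default rules on a TAILQ (esw_flow_list) and each representor keeps a per-port repr_flow_list; removal walks the list and uninstalls every entry. A self-contained sketch of that list-management pattern is below, with a hypothetical demo_flow standing in for roc_npc_flow.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/queue.h>

/* Hypothetical stand-in for roc_npc_flow: only what the list needs. */
struct demo_flow {
	uint32_t mcam_id;
	TAILQ_ENTRY(demo_flow) next;
};

TAILQ_HEAD(demo_flow_list, demo_flow);

/* Same shape as cnxk_eswitch_flow_rules_remove_list(): pop each entry,
 * undo its hardware state (omitted here), then free it. */
static void
demo_flow_list_remove_all(struct demo_flow_list *list)
{
	struct demo_flow *flow;

	while ((flow = TAILQ_FIRST(list)) != NULL) {
		TAILQ_REMOVE(list, flow, next);
		free(flow);
	}
}

int main(void)
{
	struct demo_flow_list list = TAILQ_HEAD_INITIALIZER(list);
	struct demo_flow *f = calloc(1, sizeof(*f));

	if (f == NULL)
		return 1;
	f->mcam_id = 7;
	TAILQ_INSERT_TAIL(&list, f, next);
	printf("installed flow, mcam_id=%u\n", TAILQ_FIRST(&list)->mcam_id);
	demo_flow_list_remove_all(&list);
	return 0;
}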
[PATCH v2 09/24] net/cnxk: eswitch fastpath routines
Implementing fastpath RX and TX fast path routines which can be invoked from respective representors rx burst and tx burst Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_eswitch.h | 5 + drivers/net/cnxk/cnxk_eswitch_rxtx.c | 212 +++ drivers/net/cnxk/meson.build | 1 + 3 files changed, 218 insertions(+) create mode 100644 drivers/net/cnxk/cnxk_eswitch_rxtx.c diff --git a/drivers/net/cnxk/cnxk_eswitch.h b/drivers/net/cnxk/cnxk_eswitch.h index 470e4035bf..d92c4f4778 100644 --- a/drivers/net/cnxk/cnxk_eswitch.h +++ b/drivers/net/cnxk/cnxk_eswitch.h @@ -177,4 +177,9 @@ int cnxk_eswitch_pfvf_flow_rules_install(struct cnxk_eswitch_dev *eswitch_dev, b int cnxk_eswitch_flow_rule_shift(uint16_t hw_func, uint16_t *new_entry); int cnxk_eswitch_flow_rules_remove_list(struct cnxk_eswitch_dev *eswitch_dev, struct flow_list *list); +/* RX TX fastpath routines */ +uint16_t cnxk_eswitch_dev_tx_burst(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid, + struct rte_mbuf **pkts, uint16_t nb_tx, const uint16_t flags); +uint16_t cnxk_eswitch_dev_rx_burst(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid, + struct rte_mbuf **pkts, uint16_t nb_pkts); #endif /* __CNXK_ESWITCH_H__ */ diff --git a/drivers/net/cnxk/cnxk_eswitch_rxtx.c b/drivers/net/cnxk/cnxk_eswitch_rxtx.c new file mode 100644 index 00..b5a69e3338 --- /dev/null +++ b/drivers/net/cnxk/cnxk_eswitch_rxtx.c @@ -0,0 +1,212 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#include + +static __rte_always_inline struct rte_mbuf * +eswitch_nix_get_mbuf_from_cqe(void *cq, const uint64_t data_off) +{ + rte_iova_t buff; + + /* Skip CQE, NIX_RX_PARSE_S and SG HDR(9 DWORDs) and peek buff addr */ + buff = *((rte_iova_t *)((uint64_t *)cq + 9)); + return (struct rte_mbuf *)(buff - data_off); +} + +static inline uint64_t +eswitch_nix_rx_nb_pkts(struct roc_nix_cq *cq, const uint64_t wdata, const uint32_t qmask) +{ + uint64_t reg, head, tail; + uint32_t available; + + /* Update the available count if cached value is not enough */ + + /* Use LDADDA version to avoid reorder */ + reg = roc_atomic64_add_sync(wdata, cq->status); + /* CQ_OP_STATUS operation error */ + if (reg & BIT_ULL(NIX_CQ_OP_STAT_OP_ERR) || reg & BIT_ULL(NIX_CQ_OP_STAT_CQ_ERR)) + return 0; + + tail = reg & 0xF; + head = (reg >> 20) & 0xF; + if (tail < head) + available = tail - head + qmask + 1; + else + available = tail - head; + + return available; +} + +static inline void +nix_cn9k_xmit_one(uint64_t *cmd, void *lmt_addr, const plt_iova_t io_addr) +{ + uint64_t lmt_status; + + do { + roc_lmt_mov(lmt_addr, cmd, 0); + lmt_status = roc_lmt_submit_ldeor(io_addr); + } while (lmt_status == 0); +} + +uint16_t +cnxk_eswitch_dev_tx_burst(struct cnxk_eswitch_dev *eswitch_dev, uint16_t qid, + struct rte_mbuf **pkts, uint16_t nb_xmit, const uint16_t flags) +{ + struct roc_nix_sq *sq = &eswitch_dev->txq[qid].sqs; + struct roc_nix_rq *rq = &eswitch_dev->rxq[qid].rqs; + uint16_t lmt_id, pkt = 0, nb_tx = 0; + struct nix_send_ext_s *send_hdr_ext; + uint64_t aura_handle, cmd[6], data; + struct nix_send_hdr_s *send_hdr; + uint16_t vlan_tci = qid; + union nix_send_sg_s *sg; + uintptr_t lmt_base, pa; + int64_t fc_pkts, dw_m1; + rte_iova_t io_addr; + + if (unlikely(eswitch_dev->txq[qid].state != CNXK_ESWITCH_QUEUE_STATE_STARTED)) + return 0; + + lmt_base = sq->roc_nix->lmt_base; + io_addr = sq->io_addr; + aura_handle = rq->aura_handle; + /* Get LMT base address and LMT ID as per thread ID */ + lmt_id = roc_plt_control_lmt_id_get(); + lmt_base += ((uint64_t)lmt_id << 
ROC_LMT_LINE_SIZE_LOG2); + /* Double word minus 1: LMTST size-1 in units of 128 bits */ + /* 2(HDR) + 2(EXT_HDR) + 1(SG) + 1(IOVA) = 6/2 - 1 = 2 */ + dw_m1 = cn10k_nix_tx_ext_subs(flags) + 1; + + memset(cmd, 0, sizeof(cmd)); + send_hdr = (struct nix_send_hdr_s *)&cmd[0]; + send_hdr->w0.sizem1 = dw_m1; + send_hdr->w0.sq = sq->qid; + + if (dw_m1 >= 2) { + send_hdr_ext = (struct nix_send_ext_s *)&cmd[2]; + send_hdr_ext->w0.subdc = NIX_SUBDC_EXT; + if (flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) { + send_hdr_ext->w1.vlan0_ins_ena = true; + /* 2B before end of l2 header */ + send_hdr_ext->w1.vlan0_ins_ptr = 12; + send_hdr_ext->w1.vlan0_ins_tci = 0; + } + sg = (union nix_send_sg_s *)&cmd[4]; + } else { + sg = (union nix_send_sg
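The RX routine derives the number of pending CQEs from the head/tail pair read out of the CQ status register and wraps the difference with qmask. The arithmetic is easy to get wrong, so here is a self-contained sketch of just that ring calculation; the real register field positions are taken on trust from the driver and are not modelled here.

#include <stdint.h>
#include <stdio.h>

/* Ring-occupancy helper in the same shape as eswitch_nix_rx_nb_pkts():
 * head and tail are free-running indexes masked to the queue size, so
 * the wrapped case needs qmask + 1 added back. */
static uint32_t
demo_cq_pending(uint32_t head, uint32_t tail, uint32_t qmask)
{
	if (tail < head)
		return tail - head + qmask + 1;
	return tail - head;
}

int main(void)
{
	uint32_t qmask = 1023;				/* queue of 1024 entries */

	printf("%u\n", demo_cq_pending(10, 14, qmask));		/* 4 pending  */
	printf("%u\n", demo_cq_pending(1020, 2, qmask));	/* wrapped: 6 */
	return 0;
}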
[PATCH v2 10/24] net/cnxk: add representor control plane
Implementing the control path for representor ports, where represented ports can be configured using TLV messaging. Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_eswitch.c | 67 ++- drivers/net/cnxk/cnxk_eswitch.h | 8 + drivers/net/cnxk/cnxk_rep.c | 52 ++ drivers/net/cnxk/cnxk_rep.h | 3 + drivers/net/cnxk/cnxk_rep_msg.c | 823 drivers/net/cnxk/cnxk_rep_msg.h | 95 drivers/net/cnxk/meson.build| 1 + 7 files changed, 1041 insertions(+), 8 deletions(-) create mode 100644 drivers/net/cnxk/cnxk_rep_msg.c create mode 100644 drivers/net/cnxk/cnxk_rep_msg.h diff --git a/drivers/net/cnxk/cnxk_eswitch.c b/drivers/net/cnxk/cnxk_eswitch.c index 1cb0f0310a..ffcf89b1b1 100644 --- a/drivers/net/cnxk/cnxk_eswitch.c +++ b/drivers/net/cnxk/cnxk_eswitch.c @@ -9,6 +9,27 @@ #define CNXK_NIX_DEF_SQ_COUNT 512 +int +cnxk_eswitch_representor_id(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func, + uint16_t *rep_id) +{ + struct cnxk_esw_repr_hw_info *repr_info; + int rc = 0; + + repr_info = cnxk_eswitch_representor_hw_info(eswitch_dev, hw_func); + if (!repr_info) { + plt_warn("Failed to get representor group for %x", hw_func); + rc = -ENOENT; + goto fail; + } + + *rep_id = repr_info->rep_id; + + return 0; +fail: + return rc; +} + struct cnxk_esw_repr_hw_info * cnxk_eswitch_representor_hw_info(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func) { @@ -63,8 +84,38 @@ cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev) eswitch_dev = cnxk_eswitch_pmd_priv(); /* Remove representor devices associated with PF */ - if (eswitch_dev->repr_cnt.nb_repr_created) + if (eswitch_dev->repr_cnt.nb_repr_created) { + /* Exiting the rep msg ctrl thread */ + if (eswitch_dev->start_ctrl_msg_thrd) { + uint32_t sunlen; + struct sockaddr_un sun = {0}; + int sock_fd; + + eswitch_dev->start_ctrl_msg_thrd = false; + if (!eswitch_dev->client_connected) { + plt_esw_dbg("Establishing connection for teardown"); + sock_fd = socket(AF_UNIX, SOCK_STREAM, 0); + if (sock_fd == -1) { + plt_err("Failed to open socket. 
err %d", -errno); + return -errno; + } + sun.sun_family = AF_UNIX; + sunlen = sizeof(struct sockaddr_un); + strncpy(sun.sun_path, CNXK_ESWITCH_CTRL_MSG_SOCK_PATH, + sizeof(sun.sun_path) - 1); + + if (connect(sock_fd, (struct sockaddr *)&sun, sunlen) < 0) { + plt_err("Failed to connect socket: %s, err %d", + CNXK_ESWITCH_CTRL_MSG_SOCK_PATH, errno); + return -errno; + } + } + rte_thread_join(eswitch_dev->rep_ctrl_msg_thread, NULL); + } + + /* Remove representor devices associated with PF */ cnxk_rep_dev_remove(eswitch_dev); + } eswitch_hw_rsrc_cleanup(eswitch_dev); @@ -170,13 +221,6 @@ cnxk_eswitch_nix_rsrc_start(struct cnxk_eswitch_dev *eswitch_dev) goto done; } - /* Enable Rx in NPC */ - rc = roc_nix_npc_rx_ena_dis(&eswitch_dev->nix, true); - if (rc) { - plt_err("Failed to enable NPC rx %d", rc); - goto done; - } - /* Install eswitch PF mcam rules */ rc = cnxk_eswitch_pfvf_flow_rules_install(eswitch_dev, false); if (rc) { @@ -192,6 +236,13 @@ cnxk_eswitch_nix_rsrc_start(struct cnxk_eswitch_dev *eswitch_dev) goto done; } + /* Enable Rx in NPC */ + rc = roc_nix_npc_rx_ena_dis(&eswitch_dev->nix, true); + if (rc) { + plt_err("Failed to enable NPC rx %d", rc); + goto done; + } + rc = roc_npc_mcam_enable_all_entries(&eswitch_dev->npc, 1); if (rc) { plt_err("Failed to enable NPC entries %d", rc); diff --git a/drivers/net/cnxk/cnxk_eswitch.h b/drivers/net/cnxk/cnxk_eswitch.h index d92c4f4778..a2f4aa0fcc 100644 --- a/drivers/net/cnxk/cnxk_eswitch.h +++ b/drivers/net/cnxk/cnxk_eswitch.h @@ -133,6 +133,12 @@ struct cnxk_eswitch_dev { /* No of representors */ struct cnxk_eswitch_repr_cnt repr_cnt; + /* Representor control channel field */ + bool start_ctrl_msg_thrd; + rte_thread_t rep_ctrl_msg_thread; + bool client_connected; + int
[PATCH v2 11/24] common/cnxk: representee notification callback
Setting up a callback which gets invoked every time a representee comes up or goes down. This callback is later handled by the networking counterpart. Signed-off-by: Harman Kalra --- drivers/common/cnxk/roc_dev.c | 24 drivers/common/cnxk/roc_dev_priv.h | 3 +++ drivers/common/cnxk/roc_eswitch.c | 23 +++ drivers/common/cnxk/roc_eswitch.h | 6 ++ drivers/common/cnxk/roc_mbox.c | 2 ++ drivers/common/cnxk/roc_mbox.h | 10 +- drivers/common/cnxk/version.map| 2 ++ 7 files changed, 69 insertions(+), 1 deletion(-) diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c index e7e89bf3d6..b12732de34 100644 --- a/drivers/common/cnxk/roc_dev.c +++ b/drivers/common/cnxk/roc_dev.c @@ -538,6 +538,29 @@ pf_vf_mbox_send_up_msg(struct dev *dev, void *rec_msg) } } +static int +mbox_up_handler_esw_repte_notify(struct dev *dev, struct esw_repte_req *req, struct msg_rsp *rsp) +{ + int rc = 0; + + plt_base_dbg("pf:%d/vf:%d msg id 0x%x (%s) from: pf:%d/vf:%d", dev_get_pf(dev->pf_func), +dev_get_vf(dev->pf_func), req->hdr.id, mbox_id2name(req->hdr.id), +dev_get_pf(req->hdr.pcifunc), dev_get_vf(req->hdr.pcifunc)); + + plt_base_dbg("repte pcifunc %x, enable %d", req->repte_pcifunc, req->enable); + + if (dev->ops && dev->ops->repte_notify) { + rc = dev->ops->repte_notify(dev->roc_nix, req->repte_pcifunc, + req->enable); + if (rc < 0) + plt_err("Failed to send new representee %x notification to %s", + req->repte_pcifunc, (req->enable == true) ? "enable" : "disable"); + } + + rsp->hdr.rc = rc; + return rc; +} + static int mbox_up_handler_mcs_intr_notify(struct dev *dev, struct mcs_intr_info *info, struct msg_rsp *rsp) { @@ -712,6 +735,7 @@ mbox_process_msgs_up(struct dev *dev, struct mbox_msghdr *req) } MBOX_UP_CGX_MESSAGES MBOX_UP_MCS_MESSAGES + MBOX_UP_ESW_MESSAGES #undef M } diff --git a/drivers/common/cnxk/roc_dev_priv.h b/drivers/common/cnxk/roc_dev_priv.h index 5b2c5096f8..dd694b8572 100644 --- a/drivers/common/cnxk/roc_dev_priv.h +++ b/drivers/common/cnxk/roc_dev_priv.h @@ -36,12 +36,15 @@ typedef void (*q_err_cb_t)(void *roc_nix, void *data); /* Link status get callback */ typedef void (*link_status_get_t)(void *roc_nix, struct cgx_link_user_info *link); +/* Representee notification callback */ +typedef int (*repte_notify_t)(void *roc_nix, uint16_t pf_func, bool enable); struct dev_ops { link_info_t link_status_update; ptp_info_t ptp_info_update; link_status_get_t link_status_get; q_err_cb_t q_err_cb; + repte_notify_t repte_notify; }; #define dev_is_vf(dev) ((dev)->hwcap & DEV_HWCAP_F_VF) diff --git a/drivers/common/cnxk/roc_eswitch.c b/drivers/common/cnxk/roc_eswitch.c index 7f2a8e6c06..31bdba3985 100644 --- a/drivers/common/cnxk/roc_eswitch.c +++ b/drivers/common/cnxk/roc_eswitch.c @@ -298,3 +298,26 @@ roc_eswitch_nix_vlan_tpid_set(struct roc_nix *roc_nix, uint32_t type, uint16_t t return rc; } + +int +roc_eswitch_nix_process_repte_notify_cb_register(struct roc_nix *roc_nix, +process_repte_notify_t proc_repte_nt) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + if (proc_repte_nt == NULL) + return NIX_ERR_PARAM; + + dev->ops->repte_notify = (repte_notify_t)proc_repte_nt; + return 0; +} + +void +roc_eswitch_nix_process_repte_notify_cb_unregister(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + + dev->ops->repte_notify = NULL; +} diff --git a/drivers/common/cnxk/roc_eswitch.h b/drivers/common/cnxk/roc_eswitch.h index 0dd23ff76a..8837e19b22 100644 --- a/drivers/common/cnxk/roc_eswitch.h +++
b/drivers/common/cnxk/roc_eswitch.h @@ -8,6 +8,9 @@ #define ROC_ESWITCH_VLAN_TPID 0x8100 #define ROC_ESWITCH_LBK_CHAN 63 +/* Process representee notification callback */ +typedef int (*process_repte_notify_t)(void *roc_nix, uint16_t pf_func, bool enable); + /* NPC */ int __roc_api roc_eswitch_npc_mcam_rx_rule(struct roc_npc *roc_npc, struct roc_npc_flow *flow, uint16_t pcifunc, uint16_t vlan_tci, @@ -22,4 +25,7 @@ int __roc_api roc_eswitch_npc_rss_action_configure(struct roc_npc *roc_npc, /* NIX */ int __roc_api roc_eswitch_nix_vlan_tpid_set(struct roc_nix *nix, uint32_t type, uint16_t tpid, bool is_vf); +int __roc_api roc_eswitch_nix_process_repte_notify_cb_register(struct roc_nix *roc_nix, + process_repte_
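The new register/unregister helpers simply park a function pointer in dev_ops so the mbox up-handler can forward kernel events to the net layer. A reduced, self-contained sketch of that registration pattern (the types are hypothetical mirrors of the ones added by the patch):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef int (*demo_repte_notify_t)(void *ctx, uint16_t pf_func, bool enable);

/* Hypothetical mirror of the dev_ops slot added by the patch. */
static struct {
	demo_repte_notify_t repte_notify;
} demo_ops;

static int
demo_notify_cb_register(demo_repte_notify_t cb)
{
	if (cb == NULL)
		return -1;	/* mirrors the NIX_ERR_PARAM check */
	demo_ops.repte_notify = cb;
	return 0;
}

static void
demo_notify_cb_unregister(void)
{
	demo_ops.repte_notify = NULL;
}

/* What the mbox up-handler does when a representee event arrives. */
static int
demo_deliver_event(void *ctx, uint16_t pf_func, bool enable)
{
	if (demo_ops.repte_notify == NULL)
		return 0;
	return demo_ops.repte_notify(ctx, pf_func, enable);
}

static int
demo_cb(void *ctx, uint16_t pf_func, bool enable)
{
	(void)ctx;
	printf("representee %#x is %s\n", pf_func, enable ? "up" : "down");
	return 0;
}

int main(void)
{
	demo_notify_cb_register(demo_cb);
	demo_deliver_event(NULL, 0x400, true);
	demo_notify_cb_unregister();
	return 0;
}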
[PATCH v2 12/24] net/cnxk: handling representee notification
In case of any representee coming up or going down, kernel sends a mbox up call which signals a thread to process these messages and enable/disable HW resources accordingly. Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_eswitch.c | 8 + drivers/net/cnxk/cnxk_eswitch.h | 20 +++ drivers/net/cnxk/cnxk_rep.c | 263 drivers/net/cnxk/cnxk_rep.h | 36 + 4 files changed, 327 insertions(+) diff --git a/drivers/net/cnxk/cnxk_eswitch.c b/drivers/net/cnxk/cnxk_eswitch.c index ffcf89b1b1..35c517f124 100644 --- a/drivers/net/cnxk/cnxk_eswitch.c +++ b/drivers/net/cnxk/cnxk_eswitch.c @@ -113,6 +113,14 @@ cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev) rte_thread_join(eswitch_dev->rep_ctrl_msg_thread, NULL); } + if (eswitch_dev->repte_msg_proc.start_thread) { + eswitch_dev->repte_msg_proc.start_thread = false; + pthread_cond_signal(&eswitch_dev->repte_msg_proc.repte_msg_cond); + rte_thread_join(eswitch_dev->repte_msg_proc.repte_msg_thread, NULL); + pthread_mutex_destroy(&eswitch_dev->repte_msg_proc.mutex); + pthread_cond_destroy(&eswitch_dev->repte_msg_proc.repte_msg_cond); + } + /* Remove representor devices associated with PF */ cnxk_rep_dev_remove(eswitch_dev); } diff --git a/drivers/net/cnxk/cnxk_eswitch.h b/drivers/net/cnxk/cnxk_eswitch.h index a2f4aa0fcc..8aab3e8a72 100644 --- a/drivers/net/cnxk/cnxk_eswitch.h +++ b/drivers/net/cnxk/cnxk_eswitch.h @@ -30,6 +30,23 @@ enum cnxk_esw_da_pattern_type { CNXK_ESW_DA_TYPE_PFVF, }; +struct cnxk_esw_repte_msg { + uint16_t hw_func; + bool enable; + + TAILQ_ENTRY(cnxk_esw_repte_msg) next; +}; + +struct cnxk_esw_repte_msg_proc { + bool start_thread; + uint8_t msg_avail; + rte_thread_t repte_msg_thread; + pthread_cond_t repte_msg_cond; + pthread_mutex_t mutex; + + TAILQ_HEAD(esw_repte_msg_list, cnxk_esw_repte_msg) msg_list; +}; + struct cnxk_esw_repr_hw_info { /* Representee pcifunc value */ uint16_t hw_func; @@ -139,6 +156,9 @@ struct cnxk_eswitch_dev { bool client_connected; int sock_fd; + /* Representee notification */ + struct cnxk_esw_repte_msg_proc repte_msg_proc; + /* Port representor fields */ rte_spinlock_t rep_lock; uint16_t nb_switch_domain; diff --git a/drivers/net/cnxk/cnxk_rep.c b/drivers/net/cnxk/cnxk_rep.c index f8e1d5b965..3b01856bc8 100644 --- a/drivers/net/cnxk/cnxk_rep.c +++ b/drivers/net/cnxk/cnxk_rep.c @@ -4,6 +4,8 @@ #include #include +#define REPTE_MSG_PROC_THRD_NAME_MAX_LEN 30 + #define PF_SHIFT 10 #define PF_MASK 0x3F @@ -86,6 +88,7 @@ cnxk_rep_dev_remove(struct cnxk_eswitch_dev *eswitch_dev) { int i, rc = 0; + roc_eswitch_nix_process_repte_notify_cb_unregister(&eswitch_dev->nix); for (i = 0; i < eswitch_dev->nb_switch_domain; i++) { rc = rte_eth_switch_domain_free(eswitch_dev->sw_dom[i].switch_domain_id); if (rc) @@ -95,6 +98,236 @@ cnxk_rep_dev_remove(struct cnxk_eswitch_dev *eswitch_dev) return rc; } +static int +cnxk_representee_release(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func) +{ + struct cnxk_rep_dev *rep_dev = NULL; + struct rte_eth_dev *rep_eth_dev; + int i, rc = 0; + + for (i = 0; i < eswitch_dev->repr_cnt.nb_repr_probed; i++) { + rep_eth_dev = eswitch_dev->rep_info[i].rep_eth_dev; + if (!rep_eth_dev) { + plt_err("Failed to get rep ethdev handle"); + rc = -EINVAL; + goto done; + } + + rep_dev = cnxk_rep_pmd_priv(rep_eth_dev); + if (rep_dev->hw_func == hw_func && + (!rep_dev->native_repte || rep_dev->is_vf_active)) { + rep_dev->is_vf_active = false; + rc = cnxk_rep_dev_stop(rep_eth_dev); + if (rc) { + plt_err("Failed to stop repr port %d, rep id %d", rep_dev->port_id, + rep_dev->rep_id); + goto done; + } 
+ + cnxk_rep_rx_queue_release(rep_eth_dev, 0); + cnxk_rep_tx_queue_release(rep_eth_dev, 0); + plt_rep_dbg("Released representor ID %d representing %x", rep_dev->rep_id, + hw_func); + break; + } + } +done: + return rc; +} + +static int +cnxk_representee_setup(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func, uint16_t rep_id) +{ + struct cnxk_rep_dev *rep_dev = NULL; + struct rte_eth_dev *rep_eth_dev; +
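The notification handler only queues a small message and signals a worker; the worker performs the actual setup/teardown outside the mbox context, and device removal flips the run flag, signals once more and joins the thread. A compact, self-contained sketch of that producer/consumer shape (plain pthreads and sys/queue.h, not the driver's types):

#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/queue.h>

struct demo_msg {
	uint16_t hw_func;
	bool enable;
	TAILQ_ENTRY(demo_msg) next;
};

static TAILQ_HEAD(, demo_msg) msg_list = TAILQ_HEAD_INITIALIZER(msg_list);
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool running = true;

static void *
worker(void *arg)
{
	struct demo_msg *m;

	(void)arg;
	pthread_mutex_lock(&lock);
	while (running || !TAILQ_EMPTY(&msg_list)) {
		while (running && TAILQ_EMPTY(&msg_list))
			pthread_cond_wait(&cond, &lock);
		while ((m = TAILQ_FIRST(&msg_list)) != NULL) {
			TAILQ_REMOVE(&msg_list, m, next);
			pthread_mutex_unlock(&lock);
			/* Enable/disable of HW resources would happen here. */
			printf("repte %#x -> %s\n", m->hw_func,
			       m->enable ? "setup" : "release");
			free(m);
			pthread_mutex_lock(&lock);
		}
	}
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t tid;
	struct demo_msg *m = calloc(1, sizeof(*m));

	if (m == NULL || pthread_create(&tid, NULL, worker, NULL) != 0)
		return 1;

	m->hw_func = 0x401;
	m->enable = true;
	pthread_mutex_lock(&lock);
	TAILQ_INSERT_TAIL(&msg_list, m, next);
	pthread_cond_signal(&cond);
	pthread_mutex_unlock(&lock);

	/* Teardown: clear the run flag, wake the worker, join it. */
	pthread_mutex_lock(&lock);
	running = false;
	pthread_cond_signal(&cond);
	pthread_mutex_unlock(&lock);
	pthread_join(tid, NULL);
	return 0;
}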
[PATCH v2 13/24] net/cnxk: representor ethdev ops
Implementing ethernet device operation callbacks for port representors PMD Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_rep.c | 28 +- drivers/net/cnxk/cnxk_rep.h | 35 +++ drivers/net/cnxk/cnxk_rep_msg.h | 8 + drivers/net/cnxk/cnxk_rep_ops.c | 495 ++-- 4 files changed, 523 insertions(+), 43 deletions(-) diff --git a/drivers/net/cnxk/cnxk_rep.c b/drivers/net/cnxk/cnxk_rep.c index 3b01856bc8..6e2424db40 100644 --- a/drivers/net/cnxk/cnxk_rep.c +++ b/drivers/net/cnxk/cnxk_rep.c @@ -73,6 +73,8 @@ cnxk_rep_state_update(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func, ui int cnxk_rep_dev_uninit(struct rte_eth_dev *ethdev) { + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; @@ -80,6 +82,8 @@ cnxk_rep_dev_uninit(struct rte_eth_dev *ethdev) rte_free(ethdev->data->mac_addrs); ethdev->data->mac_addrs = NULL; + rep_dev->parent_dev->repr_cnt.nb_repr_probed--; + return 0; } @@ -369,26 +373,6 @@ cnxk_rep_parent_setup(struct cnxk_eswitch_dev *eswitch_dev) return rc; } -static uint16_t -cnxk_rep_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) -{ - PLT_SET_USED(tx_queue); - PLT_SET_USED(tx_pkts); - PLT_SET_USED(nb_pkts); - - return 0; -} - -static uint16_t -cnxk_rep_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) -{ - PLT_SET_USED(rx_queue); - PLT_SET_USED(rx_pkts); - PLT_SET_USED(nb_pkts); - - return 0; -} - static int cnxk_rep_dev_init(struct rte_eth_dev *eth_dev, void *params) { @@ -418,8 +402,8 @@ cnxk_rep_dev_init(struct rte_eth_dev *eth_dev, void *params) eth_dev->dev_ops = &cnxk_rep_dev_ops; /* Rx/Tx functions stubs to avoid crashing */ - eth_dev->rx_pkt_burst = cnxk_rep_rx_burst; - eth_dev->tx_pkt_burst = cnxk_rep_tx_burst; + eth_dev->rx_pkt_burst = cnxk_rep_rx_burst_dummy; + eth_dev->tx_pkt_burst = cnxk_rep_tx_burst_dummy; /* Only single queues for representor devices */ eth_dev->data->nb_rx_queues = 1; diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h index 9172fae641..266dd4a688 100644 --- a/drivers/net/cnxk/cnxk_rep.h +++ b/drivers/net/cnxk/cnxk_rep.h @@ -7,6 +7,13 @@ #ifndef __CNXK_REP_H__ #define __CNXK_REP_H__ +#define CNXK_REP_TX_OFFLOAD_CAPA \ + (RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \ +RTE_ETH_TX_OFFLOAD_MULTI_SEGS) + +#define CNXK_REP_RX_OFFLOAD_CAPA \ + (RTE_ETH_RX_OFFLOAD_SCATTER | RTE_ETH_RX_OFFLOAD_RSS_HASH | RTE_ETH_RX_OFFLOAD_VLAN_STRIP) + /* Common ethdev ops */ extern struct eth_dev_ops cnxk_rep_dev_ops; @@ -57,12 +64,33 @@ struct cnxk_rep_dev { uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; }; +/* Inline functions */ +static inline void +cnxk_rep_lock(struct cnxk_rep_dev *rep) +{ + rte_spinlock_lock(&rep->parent_dev->rep_lock); +} + +static inline void +cnxk_rep_unlock(struct cnxk_rep_dev *rep) +{ + rte_spinlock_unlock(&rep->parent_dev->rep_lock); +} + static inline struct cnxk_rep_dev * cnxk_rep_pmd_priv(const struct rte_eth_dev *eth_dev) { return eth_dev->data->dev_private; } +static __rte_always_inline void +cnxk_rep_pool_buffer_stats(struct rte_mempool *pool) +{ + plt_rep_dbg("pool %s size %d buffer count in use %d available %d\n", pool->name, + pool->size, rte_mempool_in_use_count(pool), rte_mempool_avail_count(pool)); +} + +/* Prototypes */ int cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswitch_dev); int cnxk_rep_dev_remove(struct cnxk_eswitch_dev *eswitch_dev); int cnxk_rep_dev_uninit(struct rte_eth_dev *ethdev); @@ -85,5 +113,12 @@ int cnxk_rep_stats_get(struct 
rte_eth_dev *eth_dev, struct rte_eth_stats *stats) int cnxk_rep_stats_reset(struct rte_eth_dev *eth_dev); int cnxk_rep_flow_ops_get(struct rte_eth_dev *ethdev, const struct rte_flow_ops **ops); int cnxk_rep_state_update(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func, uint16_t *rep_id); +int cnxk_rep_promiscuous_enable(struct rte_eth_dev *ethdev); +int cnxk_rep_promiscuous_disable(struct rte_eth_dev *ethdev); +int cnxk_rep_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr); +uint16_t cnxk_rep_tx_burst_dummy(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t cnxk_rep_rx_burst_dummy(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +void cnxk_rep_tx_queue_stop(struct rte_eth_dev *ethdev, uint16_t queue_id); +void cnxk_rep_rx_queue_stop(struct rte_eth_dev *ethdev, uint16_t queue_id); #endif /* __CNXK_REP_H__ */ diff --git a/drivers/net/cnxk/cnxk_rep_msg.h b/drivers/net/c
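The stubs the representor installs before the real datapath is wired up only have to satisfy the ethdev burst prototype and return zero. A minimal sketch of such no-op burst functions, assuming a DPDK build environment (names here are illustrative; the driver's are cnxk_rep_rx_burst_dummy/cnxk_rep_tx_burst_dummy):

#include <rte_common.h>
#include <rte_mbuf.h>

static uint16_t
demo_rep_rx_burst_dummy(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
	RTE_SET_USED(rx_queue);
	RTE_SET_USED(rx_pkts);
	RTE_SET_USED(nb_pkts);
	return 0;	/* nothing received */
}

static uint16_t
demo_rep_tx_burst_dummy(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
	RTE_SET_USED(tx_queue);
	RTE_SET_USED(tx_pkts);
	RTE_SET_USED(nb_pkts);
	return 0;	/* nothing sent; caller keeps ownership of the mbufs */
}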
[PATCH v2 14/24] common/cnxk: get representees ethernet stats
Implementing an mbox interface to fetch the representees's ethernet stats from the kernel. Signed-off-by: Harman Kalra --- drivers/common/cnxk/roc_eswitch.c | 45 +++ drivers/common/cnxk/roc_eswitch.h | 2 ++ drivers/common/cnxk/roc_mbox.h| 30 + drivers/common/cnxk/version.map | 1 + 4 files changed, 78 insertions(+) diff --git a/drivers/common/cnxk/roc_eswitch.c b/drivers/common/cnxk/roc_eswitch.c index 31bdba3985..034a5e6c92 100644 --- a/drivers/common/cnxk/roc_eswitch.c +++ b/drivers/common/cnxk/roc_eswitch.c @@ -321,3 +321,48 @@ roc_eswitch_nix_process_repte_notify_cb_unregister(struct roc_nix *roc_nix) dev->ops->repte_notify = NULL; } + +int +roc_eswitch_nix_repte_stats(struct roc_nix *roc_nix, uint16_t pf_func, struct roc_nix_stats *stats) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct dev *dev = &nix->dev; + struct nix_get_lf_stats_req *req; + struct nix_lf_stats_rsp *rsp; + struct mbox *mbox; + int rc; + + mbox = mbox_get(dev->mbox); + req = mbox_alloc_msg_nix_get_lf_stats(mbox); + if (!req) { + rc = -ENOSPC; + goto exit; + } + + req->hdr.pcifunc = roc_nix_get_pf_func(roc_nix); + req->pcifunc = pf_func; + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + goto exit; + + stats->rx_octs = rsp->rx.octs; + stats->rx_ucast = rsp->rx.ucast; + stats->rx_bcast = rsp->rx.bcast; + stats->rx_mcast = rsp->rx.mcast; + stats->rx_drop = rsp->rx.drop; + stats->rx_drop_octs = rsp->rx.drop_octs; + stats->rx_drop_bcast = rsp->rx.drop_bcast; + stats->rx_drop_mcast = rsp->rx.drop_mcast; + stats->rx_err = rsp->rx.err; + + stats->tx_ucast = rsp->tx.ucast; + stats->tx_bcast = rsp->tx.bcast; + stats->tx_mcast = rsp->tx.mcast; + stats->tx_drop = rsp->tx.drop; + stats->tx_octs = rsp->tx.octs; + +exit: + mbox_put(mbox); + return rc; +} diff --git a/drivers/common/cnxk/roc_eswitch.h b/drivers/common/cnxk/roc_eswitch.h index 8837e19b22..907e6c37c6 100644 --- a/drivers/common/cnxk/roc_eswitch.h +++ b/drivers/common/cnxk/roc_eswitch.h @@ -25,6 +25,8 @@ int __roc_api roc_eswitch_npc_rss_action_configure(struct roc_npc *roc_npc, /* NIX */ int __roc_api roc_eswitch_nix_vlan_tpid_set(struct roc_nix *nix, uint32_t type, uint16_t tpid, bool is_vf); +int __roc_api roc_eswitch_nix_repte_stats(struct roc_nix *roc_nix, uint16_t pf_func, + struct roc_nix_stats *stats); int __roc_api roc_eswitch_nix_process_repte_notify_cb_register(struct roc_nix *roc_nix, process_repte_notify_t proc_repte_nt); void __roc_api roc_eswitch_nix_process_repte_notify_cb_unregister(struct roc_nix *roc_nix); diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h index 2bedf1fb81..1a6bb2f5a2 100644 --- a/drivers/common/cnxk/roc_mbox.h +++ b/drivers/common/cnxk/roc_mbox.h @@ -304,6 +304,7 @@ struct mbox_msghdr { M(NIX_MCAST_GRP_DESTROY, 0x802c, nix_mcast_grp_destroy, nix_mcast_grp_destroy_req, msg_rsp)\ M(NIX_MCAST_GRP_UPDATE, 0x802d, nix_mcast_grp_update, nix_mcast_grp_update_req,\ nix_mcast_grp_update_rsp) \ + M(NIX_GET_LF_STATS,0x802e, nix_get_lf_stats, nix_get_lf_stats_req, nix_lf_stats_rsp) \ /* MCS mbox IDs (range 0xa000 - 0xbFFF) */ \ M(MCS_ALLOC_RESOURCES, 0xa000, mcs_alloc_resources, mcs_alloc_rsrc_req, \ mcs_alloc_rsrc_rsp) \ @@ -1846,6 +1847,35 @@ struct nix_mcast_grp_update_rsp { uint32_t __io mce_start_index; }; +struct nix_get_lf_stats_req { + struct mbox_msghdr hdr; + uint16_t __io pcifunc; + uint64_t __io rsvd; +}; + +struct nix_lf_stats_rsp { + struct mbox_msghdr hdr; + struct { + uint64_t __io octs; + uint64_t __io ucast; + uint64_t __io bcast; + uint64_t __io mcast; + uint64_t __io drop; + 
uint64_t __io drop_octs; + uint64_t __io drop_mcast; + uint64_t __io drop_bcast; + uint64_t __io err; + uint64_t __io rsvd[5]; + } rx; + struct { + uint64_t __io ucast; + uint64_t __io bcast; + uint64_t __io mcast; + uint64_t __io drop; + uint64_t __io octs; + } tx; +}; + /* Global NIX inline IPSec configuration */ struct nix_inline_ipsec_cfg { struct mbox_msghdr hdr; diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/v
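A possible caller of the new stats mbox helper, with the signature and the roc_nix_stats field names taken from the patch; this is only a sketch and assumes an initialized eswitch roc_nix and a valid representee pf_func.

#include <inttypes.h>
#include <string.h>

#include "roc_api.h"

static void
demo_dump_repte_stats(struct roc_nix *esw_nix, uint16_t repte_pf_func)
{
	struct roc_nix_stats st;
	int rc;

	memset(&st, 0, sizeof(st));
	rc = roc_eswitch_nix_repte_stats(esw_nix, repte_pf_func, &st);
	if (rc) {
		plt_err("repte %#x stats failed, rc=%d", repte_pf_func, rc);
		return;
	}
	plt_info("repte %#x: rx_ucast=%" PRIu64 " tx_ucast=%" PRIu64,
		 repte_pf_func, st.rx_ucast, st.tx_ucast);
}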
[PATCH v2 15/24] net/cnxk: ethernet statistic for representor
Adding representor ethernet statistics support which can fetch stats for representees which are operating independently or part of companian app. Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_rep_msg.h | 7 ++ drivers/net/cnxk/cnxk_rep_ops.c | 140 +++- 2 files changed, 143 insertions(+), 4 deletions(-) diff --git a/drivers/net/cnxk/cnxk_rep_msg.h b/drivers/net/cnxk/cnxk_rep_msg.h index 37953ac74f..3236de50ad 100644 --- a/drivers/net/cnxk/cnxk_rep_msg.h +++ b/drivers/net/cnxk/cnxk_rep_msg.h @@ -21,6 +21,8 @@ typedef enum CNXK_REP_MSG { CNXK_REP_MSG_EXIT, /* Ethernet operation msgs */ CNXK_REP_MSG_ETH_SET_MAC, + CNXK_REP_MSG_ETH_STATS_GET, + CNXK_REP_MSG_ETH_STATS_CLEAR, /* End of messaging sequence */ CNXK_REP_MSG_END, } cnxk_rep_msg_t; @@ -89,6 +91,11 @@ typedef struct cnxk_rep_msg_eth_mac_set_meta { uint8_t addr_bytes[RTE_ETHER_ADDR_LEN]; } __rte_packed cnxk_rep_msg_eth_set_mac_meta_t; +/* Ethernet op - get/clear stats */ +typedef struct cnxk_rep_msg_eth_stats_meta { + uint16_t portid; +} __rte_packed cnxk_rep_msg_eth_stats_meta_t; + void cnxk_rep_msg_populate_command(void *buffer, uint32_t *length, cnxk_rep_msg_t type, uint32_t size); void cnxk_rep_msg_populate_command_meta(void *buffer, uint32_t *length, void *msg_meta, uint32_t sz, diff --git a/drivers/net/cnxk/cnxk_rep_ops.c b/drivers/net/cnxk/cnxk_rep_ops.c index 4b3fe28acc..e07c63dcb2 100644 --- a/drivers/net/cnxk/cnxk_rep_ops.c +++ b/drivers/net/cnxk/cnxk_rep_ops.c @@ -486,19 +486,151 @@ cnxk_rep_tx_queue_release(struct rte_eth_dev *ethdev, uint16_t queue_id) plt_err("Failed to release txq %d, rc=%d", rc, txq->qid); } +static int +process_eth_stats(struct cnxk_rep_dev *rep_dev, cnxk_rep_msg_ack_data_t *adata, cnxk_rep_msg_t msg) +{ + cnxk_rep_msg_eth_stats_meta_t msg_st_meta; + uint32_t len = 0, rc; + void *buffer; + size_t size; + + size = CNXK_REP_MSG_MAX_BUFFER_SZ; + buffer = plt_zmalloc(size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + rc = -ENOMEM; + goto fail; + } + + cnxk_rep_msg_populate_header(buffer, &len); + + msg_st_meta.portid = rep_dev->rep_id; + cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_st_meta, + sizeof(cnxk_rep_msg_eth_stats_meta_t), msg); + cnxk_rep_msg_populate_msg_end(buffer, &len); + + rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata); + if (rc) { + plt_err("Failed to process the message, err %d", rc); + goto fail; + } + + rte_free(buffer); + + return 0; +fail: + rte_free(buffer); + return rc; +} + +static int +native_repte_eth_stats(struct cnxk_rep_dev *rep_dev, struct rte_eth_stats *stats) +{ + struct roc_nix_stats nix_stats; + int rc = 0; + + rc = roc_eswitch_nix_repte_stats(&rep_dev->parent_dev->nix, rep_dev->hw_func, &nix_stats); + if (rc) { + plt_err("Failed to get stats for representee %x, err %d", rep_dev->hw_func, rc); + goto fail; + } + + memset(stats, 0, sizeof(struct rte_eth_stats)); + stats->opackets = nix_stats.tx_ucast; + stats->opackets += nix_stats.tx_mcast; + stats->opackets += nix_stats.tx_bcast; + stats->oerrors = nix_stats.tx_drop; + stats->obytes = nix_stats.tx_octs; + + stats->ipackets = nix_stats.rx_ucast; + stats->ipackets += nix_stats.rx_mcast; + stats->ipackets += nix_stats.rx_bcast; + stats->imissed = nix_stats.rx_drop; + stats->ibytes = nix_stats.rx_octs; + stats->ierrors = nix_stats.rx_err; + + return 0; +fail: + return rc; +} + int cnxk_rep_stats_get(struct rte_eth_dev *ethdev, struct rte_eth_stats *stats) { - PLT_SET_USED(ethdev); - PLT_SET_USED(stats); + struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(ethdev); + struct rte_eth_stats 
vf_stats; + cnxk_rep_msg_ack_data_t adata; + int rc; + + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) + return 0; + + if (rep_dev->native_repte) { + /* For representees which are independent */ + rc = native_repte_eth_stats(rep_dev, &vf_stats); + if (rc) { + plt_err("Failed to get stats for vf rep %x (hw_func %x), err %d", + rep_dev->port_id, rep_dev->hw_func, rc); + goto fail; + } + } else { + /* For representees which are part of companian app */ + rc = process_eth_stats(rep_dev, &adata, CNXK_REP_MSG_ETH_STATS_GET); + if (rc || adata.u.sval < 0) { + if (adata.u.sval < 0) +
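The native path folds the per-class hardware counters (ucast/mcast/bcast plus drops) into the ethdev totals. The mapping is simple but worth seeing in isolation; below is a self-contained sketch with hypothetical structs mirroring the shape used above.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-class counters as returned by the hardware. */
struct demo_hw_stats {
	uint64_t rx_ucast, rx_mcast, rx_bcast, rx_drop, rx_octs, rx_err;
	uint64_t tx_ucast, tx_mcast, tx_bcast, tx_drop, tx_octs;
};

struct demo_eth_stats {
	uint64_t ipackets, opackets, ibytes, obytes, imissed, ierrors, oerrors;
};

static void
demo_fold_stats(const struct demo_hw_stats *hw, struct demo_eth_stats *out)
{
	/* Packets are kept per traffic class in HW; ethdev wants totals. */
	out->ipackets = hw->rx_ucast + hw->rx_mcast + hw->rx_bcast;
	out->opackets = hw->tx_ucast + hw->tx_mcast + hw->tx_bcast;
	out->ibytes = hw->rx_octs;
	out->obytes = hw->tx_octs;
	out->imissed = hw->rx_drop;
	out->ierrors = hw->rx_err;
	out->oerrors = hw->tx_drop;
}

int main(void)
{
	struct demo_hw_stats hw = { .rx_ucast = 10, .rx_mcast = 2, .rx_bcast = 1 };
	struct demo_eth_stats out = {0};

	demo_fold_stats(&hw, &out);
	printf("ipackets=%llu\n", (unsigned long long)out.ipackets);
	return 0;
}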
[PATCH v2 16/24] common/cnxk: base support for eswitch VF
- ROC layer changes for supporting eswitch VF - NIX lbk changes for esw Signed-off-by: Harman Kalra --- drivers/common/cnxk/roc_constants.h | 1 + drivers/common/cnxk/roc_dev.c | 1 + drivers/common/cnxk/roc_nix.c | 15 +-- drivers/common/cnxk/roc_nix.h | 1 + drivers/common/cnxk/roc_nix_priv.h | 1 + drivers/common/cnxk/version.map | 1 + 6 files changed, 18 insertions(+), 2 deletions(-) diff --git a/drivers/common/cnxk/roc_constants.h b/drivers/common/cnxk/roc_constants.h index cb4edbea58..21b3998cee 100644 --- a/drivers/common/cnxk/roc_constants.h +++ b/drivers/common/cnxk/roc_constants.h @@ -44,6 +44,7 @@ #define PCI_DEVID_CNXK_RVU_REE_PF 0xA0f4 #define PCI_DEVID_CNXK_RVU_REE_VF 0xA0f5 #define PCI_DEVID_CNXK_RVU_ESWITCH_PF 0xA0E0 +#define PCI_DEVID_CNXK_RVU_ESWITCH_VF 0xA0E1 #define PCI_DEVID_CN9K_CGX 0xA059 #define PCI_DEVID_CN10K_RPM 0xA060 diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c index b12732de34..4d4cfeaaca 100644 --- a/drivers/common/cnxk/roc_dev.c +++ b/drivers/common/cnxk/roc_dev.c @@ -1225,6 +1225,7 @@ dev_vf_hwcap_update(struct plt_pci_device *pci_dev, struct dev *dev) case PCI_DEVID_CNXK_RVU_VF: case PCI_DEVID_CNXK_RVU_SDP_VF: case PCI_DEVID_CNXK_RVU_NIX_INL_VF: + case PCI_DEVID_CNXK_RVU_ESWITCH_VF: dev->hwcap |= DEV_HWCAP_F_VF; break; } diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c index 7e327a7e6e..f1eaca3ab4 100644 --- a/drivers/common/cnxk/roc_nix.c +++ b/drivers/common/cnxk/roc_nix.c @@ -13,6 +13,14 @@ roc_nix_is_lbk(struct roc_nix *roc_nix) return nix->lbk_link; } +bool +roc_nix_is_esw(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + return nix->esw_link; +} + int roc_nix_get_base_chan(struct roc_nix *roc_nix) { @@ -156,7 +164,7 @@ roc_nix_max_pkt_len(struct roc_nix *roc_nix) if (roc_model_is_cn9k()) return NIX_CN9K_MAX_HW_FRS; - if (nix->lbk_link) + if (nix->lbk_link || nix->esw_link) return NIX_LBK_MAX_HW_FRS; return NIX_RPM_MAX_HW_FRS; @@ -349,7 +357,7 @@ roc_nix_get_hw_info(struct roc_nix *roc_nix) rc = mbox_process_msg(mbox, (void *)&hw_info); if (rc == 0) { nix->vwqe_interval = hw_info->vwqe_delay; - if (nix->lbk_link) + if (nix->lbk_link || nix->esw_link) roc_nix->dwrr_mtu = hw_info->lbk_dwrr_mtu; else if (nix->sdp_link) roc_nix->dwrr_mtu = hw_info->sdp_dwrr_mtu; @@ -366,6 +374,7 @@ sdp_lbk_id_update(struct plt_pci_device *pci_dev, struct nix *nix) { nix->sdp_link = false; nix->lbk_link = false; + nix->esw_link = false; /* Update SDP/LBK link based on PCI device id */ switch (pci_dev->id.device_id) { @@ -374,7 +383,9 @@ sdp_lbk_id_update(struct plt_pci_device *pci_dev, struct nix *nix) nix->sdp_link = true; break; case PCI_DEVID_CNXK_RVU_AF_VF: + case PCI_DEVID_CNXK_RVU_ESWITCH_VF: nix->lbk_link = true; + nix->esw_link = true; break; default: break; diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index b369335fc4..ffea84dae8 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -527,6 +527,7 @@ int __roc_api roc_nix_dev_fini(struct roc_nix *roc_nix); /* Type */ bool __roc_api roc_nix_is_lbk(struct roc_nix *roc_nix); +bool __roc_api roc_nix_is_esw(struct roc_nix *roc_nix); bool __roc_api roc_nix_is_sdp(struct roc_nix *roc_nix); bool __roc_api roc_nix_is_pf(struct roc_nix *roc_nix); bool __roc_api roc_nix_is_vf_or_sdp(struct roc_nix *roc_nix); diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h index 8767a62577..e2f65a49c8 100644 --- a/drivers/common/cnxk/roc_nix_priv.h +++ 
b/drivers/common/cnxk/roc_nix_priv.h @@ -170,6 +170,7 @@ struct nix { uintptr_t base; bool sdp_link; bool lbk_link; + bool esw_link; bool ptp_en; bool is_nix1; diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index 87c9d7511f..cdb46d8739 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -276,6 +276,7 @@ INTERNAL { roc_nix_inl_outb_cpt_lfs_dump; roc_nix_cpt_ctx_cache_sync; roc_nix_is_lbk; + roc_nix_is_esw; roc_nix_is_pf; roc_nix_is_sdp; roc_nix_is_vf_or_sdp; -- 2.18.0
[PATCH v2 17/24] net/cnxk: eswitch VF as ethernet device
Adding support for eswitch VF to probe as normal cnxk ethernet device Signed-off-by: Harman Kalra --- drivers/net/cnxk/cn10k_ethdev.c| 1 + drivers/net/cnxk/cnxk_ethdev.c | 39 ++ drivers/net/cnxk/cnxk_ethdev.h | 3 +++ drivers/net/cnxk/cnxk_ethdev_ops.c | 4 +++ drivers/net/cnxk/cnxk_link.c | 3 ++- 5 files changed, 39 insertions(+), 11 deletions(-) diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c index a2e943a3d0..9a072b72a7 100644 --- a/drivers/net/cnxk/cn10k_ethdev.c +++ b/drivers/net/cnxk/cn10k_ethdev.c @@ -963,6 +963,7 @@ static const struct rte_pci_id cn10k_pci_nix_map[] = { CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KB, PCI_DEVID_CNXK_RVU_PF), CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CNF10KB, PCI_DEVID_CNXK_RVU_PF), CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_VF), + CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_ESWITCH_VF), CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_VF), CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CNF10KA, PCI_DEVID_CNXK_RVU_VF), CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KB, PCI_DEVID_CNXK_RVU_VF), diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c index 2372a4e793..50f1641c38 100644 --- a/drivers/net/cnxk/cnxk_ethdev.c +++ b/drivers/net/cnxk/cnxk_ethdev.c @@ -1449,12 +1449,14 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev) goto cq_fini; /* Init flow control configuration */ - fc_cfg.type = ROC_NIX_FC_RXCHAN_CFG; - fc_cfg.rxchan_cfg.enable = true; - rc = roc_nix_fc_config_set(nix, &fc_cfg); - if (rc) { - plt_err("Failed to initialize flow control rc=%d", rc); - goto cq_fini; + if (!roc_nix_is_esw(nix)) { + fc_cfg.type = ROC_NIX_FC_RXCHAN_CFG; + fc_cfg.rxchan_cfg.enable = true; + rc = roc_nix_fc_config_set(nix, &fc_cfg); + if (rc) { + plt_err("Failed to initialize flow control rc=%d", rc); + goto cq_fini; + } } /* Update flow control configuration to PMD */ @@ -1688,10 +1690,12 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev) } /* Update Flow control configuration */ - rc = nix_update_flow_ctrl_config(eth_dev); - if (rc) { - plt_err("Failed to enable flow control. error code(%d)", rc); - return rc; + if (!roc_nix_is_esw(&dev->nix)) { + rc = nix_update_flow_ctrl_config(eth_dev); + if (rc) { + plt_err("Failed to enable flow control. 
error code(%d)", rc); + return rc; + } } /* Enable Rx in NPC */ @@ -1976,6 +1980,16 @@ cnxk_eth_dev_init(struct rte_eth_dev *eth_dev) TAILQ_INIT(&dev->mcs_list); } + /* Reserve a switch domain for eswitch device */ + if (pci_dev->id.device_id == PCI_DEVID_CNXK_RVU_ESWITCH_VF) { + eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR; + rc = rte_eth_switch_domain_alloc(&dev->switch_domain_id); + if (rc) { + plt_err("Failed to alloc switch domain: %d", rc); + goto free_mac_addrs; + } + } + plt_nix_dbg("Port=%d pf=%d vf=%d ver=%s hwcap=0x%" PRIx64 " rxoffload_capa=0x%" PRIx64 " txoffload_capa=0x%" PRIx64, eth_dev->data->port_id, roc_nix_get_pf(nix), @@ -2046,6 +2060,11 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool reset) } } + /* Free switch domain ID reserved for eswitch device */ + if ((eth_dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR) && + rte_eth_switch_domain_free(dev->switch_domain_id)) + plt_err("Failed to free switch domain"); + /* Disable and free rte_meter entries */ nix_meter_fini(dev); diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h index 4d3ebf123b..d8eba5e1dd 100644 --- a/drivers/net/cnxk/cnxk_ethdev.h +++ b/drivers/net/cnxk/cnxk_ethdev.h @@ -424,6 +424,9 @@ struct cnxk_eth_dev { /* MCS device */ struct cnxk_mcs_dev *mcs_dev; struct cnxk_macsec_sess_list mcs_list; + + /* Eswitch domain ID */ + uint16_t switch_domain_id; }; struct cnxk_eth_rxq_sp { diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c index 5de2919047..67fbf7c269 100644 --- a/drivers/net/cnxk/cnxk_ethdev_ops.c +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c @@ -71,6 +71,10 @@ cnxk_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo) RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; devinfo->max_rx_mempools = CNXK_NIX_NUM_POOLS_MAX; + if (eth_dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR) { +
[PATCH v2 18/24] common/cnxk: support port representor and represented port
Implementing the common infrastructural changes for supporting port representors and represented ports used as action and pattern in net layer. Signed-off-by: Kiran Kumar K Signed-off-by: Satheesh Paul Signed-off-by: Harman Kalra --- drivers/common/cnxk/roc_npc.c | 63 +++-- drivers/common/cnxk/roc_npc.h | 13 +- drivers/common/cnxk/roc_npc_mcam.c | 62 +++- drivers/common/cnxk/roc_npc_parse.c | 28 - drivers/common/cnxk/roc_npc_priv.h | 2 + drivers/net/cnxk/cnxk_flow.c| 2 +- 6 files changed, 125 insertions(+), 45 deletions(-) diff --git a/drivers/common/cnxk/roc_npc.c b/drivers/common/cnxk/roc_npc.c index 67a660a2bc..5a836f16f5 100644 --- a/drivers/common/cnxk/roc_npc.c +++ b/drivers/common/cnxk/roc_npc.c @@ -570,6 +570,8 @@ npc_parse_actions(struct roc_npc *roc_npc, const struct roc_npc_attr *attr, flow->ctr_id = NPC_COUNTER_NONE; flow->mtr_id = ROC_NIX_MTR_ID_INVALID; pf_func = npc->pf_func; + if (flow->has_rep) + pf_func = flow->rep_pf_func; for (; actions->type != ROC_NPC_ACTION_TYPE_END; actions++) { switch (actions->type) { @@ -898,10 +900,14 @@ npc_parse_pattern(struct npc *npc, const struct roc_npc_item_info pattern[], struct roc_npc_flow *flow, struct npc_parse_state *pst) { npc_parse_stage_func_t parse_stage_funcs[] = { - npc_parse_meta_items, npc_parse_mark_item, npc_parse_pre_l2, npc_parse_cpt_hdr, - npc_parse_higig2_hdr, npc_parse_tx_queue, npc_parse_la, npc_parse_lb, - npc_parse_lc, npc_parse_ld,npc_parse_le, npc_parse_lf, - npc_parse_lg, npc_parse_lh, + npc_parse_meta_items, npc_parse_port_representor_id, + npc_parse_mark_item, npc_parse_pre_l2, + npc_parse_cpt_hdr,npc_parse_higig2_hdr, + npc_parse_tx_queue, npc_parse_la, + npc_parse_lb, npc_parse_lc, + npc_parse_ld, npc_parse_le, + npc_parse_lf, npc_parse_lg, + npc_parse_lh, }; uint8_t layer = 0; int key_offset; @@ -1140,15 +1146,20 @@ npc_rss_action_program(struct roc_npc *roc_npc, struct roc_npc_flow *flow) { const struct roc_npc_action_rss *rss; + struct roc_npc *npc = roc_npc; uint32_t rss_grp; uint8_t alg_idx; int rc; + if (flow->has_rep) { + npc = roc_npc->rep_npc; + npc->flowkey_cfg_state = roc_npc->flowkey_cfg_state; + } + for (; actions->type != ROC_NPC_ACTION_TYPE_END; actions++) { if (actions->type == ROC_NPC_ACTION_TYPE_RSS) { rss = (const struct roc_npc_action_rss *)actions->conf; - rc = npc_rss_action_configure(roc_npc, rss, &alg_idx, - &rss_grp, flow->mcam_id); + rc = npc_rss_action_configure(npc, rss, &alg_idx, &rss_grp, flow->mcam_id); if (rc) return rc; @@ -1171,7 +1182,7 @@ npc_vtag_cfg_delete(struct roc_npc *roc_npc, struct roc_npc_flow *flow) struct roc_nix *roc_nix = roc_npc->roc_nix; struct nix_vtag_config *vtag_cfg; struct nix_vtag_config_rsp *rsp; - struct mbox *mbox; + struct mbox *mbox, *ombox; struct nix *nix; int rc = 0; @@ -1181,7 +1192,10 @@ npc_vtag_cfg_delete(struct roc_npc *roc_npc, struct roc_npc_flow *flow) } tx_vtag_action; nix = roc_nix_to_nix_priv(roc_nix); - mbox = mbox_get((&nix->dev)->mbox); + ombox = (&nix->dev)->mbox; + if (flow->has_rep) + ombox = flow->rep_mbox; + mbox = mbox_get(ombox); tx_vtag_action.reg = flow->vtag_action; vtag_cfg = mbox_alloc_msg_nix_vtag_cfg(mbox); @@ -1400,6 +1414,7 @@ npc_vtag_strip_action_configure(struct mbox *mbox, rx_vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15); rx_vtag_action |= ((uint64_t)NPC_LID_LB << 8); + rx_vtag_action |= (NIX_RX_VTAG_TYPE7 << 12); rx_vtag_action |= NIX_RX_VTAGACTION_VTAG0_RELPTR; if (*strip_cnt == 2) { @@ -1432,6 +1447,8 @@ npc_vtag_action_program(struct roc_npc *roc_npc, nix = roc_nix_to_nix_priv(roc_nix); mbox = 
(&nix->dev)->mbox; + if (flow->has_rep) + mbox = flow->rep_mbox; memset(vlan_info, 0, sizeof(vlan_info)); @@ -1448,6 +1465,7 @@ npc_vtag_action_program(struct roc_npc *roc_npc, if (rc) return rc; + plt_npc_dbg("VLAN strip action, strip_cnt %d", strip_cnt); if (strip_cnt == 2) actions++; @@ -1587,6 +1605,17 @@ roc_npc_flow_cr
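Pattern parsing in roc_npc walks an ordered table of stage functions (parse_stage_funcs[]), to which the patch adds npc_parse_port_representor_id. A self-contained sketch of that function-pointer pipeline, with hypothetical stages in place of the real ones:

#include <stddef.h>
#include <stdio.h>

/* Hypothetical parse state; the real npc_parse_state carries the
 * pattern pointer, key buffers and so on. */
struct demo_parse_state {
	int layers;
};

typedef int (*demo_parse_stage_t)(struct demo_parse_state *st);

static int demo_parse_meta(struct demo_parse_state *st) { (void)st; return 0; }
static int demo_parse_l2(struct demo_parse_state *st)   { st->layers++; return 0; }
static int demo_parse_l3(struct demo_parse_state *st)   { st->layers++; return 0; }

/* Same idea as parse_stage_funcs[] in npc_parse_pattern(): run each
 * stage in order and stop on the first error. */
static int
demo_parse_pattern(struct demo_parse_state *st)
{
	static const demo_parse_stage_t stages[] = {
		demo_parse_meta, demo_parse_l2, demo_parse_l3,
	};
	size_t i;
	int rc;

	for (i = 0; i < sizeof(stages) / sizeof(stages[0]); i++) {
		rc = stages[i](st);
		if (rc)
			return rc;
	}
	return 0;
}

int main(void)
{
	struct demo_parse_state st = {0};

	demo_parse_pattern(&st);
	printf("layers consumed: %d\n", st.layers);
	return 0;
}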
[PATCH v2 19/24] net/cnxk: add represented port pattern and action
Adding support for represented_port item matching and action. Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_flow.c | 107 +++ 1 file changed, 57 insertions(+), 50 deletions(-) diff --git a/drivers/net/cnxk/cnxk_flow.c b/drivers/net/cnxk/cnxk_flow.c index 5f74c356b1..a3b21f761f 100644 --- a/drivers/net/cnxk/cnxk_flow.c +++ b/drivers/net/cnxk/cnxk_flow.c @@ -4,67 +4,48 @@ #include const struct cnxk_rte_flow_term_info term[] = { - [RTE_FLOW_ITEM_TYPE_ETH] = {ROC_NPC_ITEM_TYPE_ETH, - sizeof(struct rte_flow_item_eth)}, - [RTE_FLOW_ITEM_TYPE_VLAN] = {ROC_NPC_ITEM_TYPE_VLAN, -sizeof(struct rte_flow_item_vlan)}, - [RTE_FLOW_ITEM_TYPE_E_TAG] = {ROC_NPC_ITEM_TYPE_E_TAG, - sizeof(struct rte_flow_item_e_tag)}, - [RTE_FLOW_ITEM_TYPE_IPV4] = {ROC_NPC_ITEM_TYPE_IPV4, -sizeof(struct rte_flow_item_ipv4)}, - [RTE_FLOW_ITEM_TYPE_IPV6] = {ROC_NPC_ITEM_TYPE_IPV6, -sizeof(struct rte_flow_item_ipv6)}, - [RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT] = { - ROC_NPC_ITEM_TYPE_IPV6_FRAG_EXT, - sizeof(struct rte_flow_item_ipv6_frag_ext)}, - [RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4] = { - ROC_NPC_ITEM_TYPE_ARP_ETH_IPV4, - sizeof(struct rte_flow_item_arp_eth_ipv4)}, - [RTE_FLOW_ITEM_TYPE_MPLS] = {ROC_NPC_ITEM_TYPE_MPLS, -sizeof(struct rte_flow_item_mpls)}, - [RTE_FLOW_ITEM_TYPE_ICMP] = {ROC_NPC_ITEM_TYPE_ICMP, -sizeof(struct rte_flow_item_icmp)}, - [RTE_FLOW_ITEM_TYPE_UDP] = {ROC_NPC_ITEM_TYPE_UDP, - sizeof(struct rte_flow_item_udp)}, - [RTE_FLOW_ITEM_TYPE_TCP] = {ROC_NPC_ITEM_TYPE_TCP, - sizeof(struct rte_flow_item_tcp)}, - [RTE_FLOW_ITEM_TYPE_SCTP] = {ROC_NPC_ITEM_TYPE_SCTP, -sizeof(struct rte_flow_item_sctp)}, - [RTE_FLOW_ITEM_TYPE_ESP] = {ROC_NPC_ITEM_TYPE_ESP, - sizeof(struct rte_flow_item_esp)}, - [RTE_FLOW_ITEM_TYPE_GRE] = {ROC_NPC_ITEM_TYPE_GRE, - sizeof(struct rte_flow_item_gre)}, - [RTE_FLOW_ITEM_TYPE_NVGRE] = {ROC_NPC_ITEM_TYPE_NVGRE, - sizeof(struct rte_flow_item_nvgre)}, - [RTE_FLOW_ITEM_TYPE_VXLAN] = {ROC_NPC_ITEM_TYPE_VXLAN, - sizeof(struct rte_flow_item_vxlan)}, - [RTE_FLOW_ITEM_TYPE_GTPC] = {ROC_NPC_ITEM_TYPE_GTPC, -sizeof(struct rte_flow_item_gtp)}, - [RTE_FLOW_ITEM_TYPE_GTPU] = {ROC_NPC_ITEM_TYPE_GTPU, -sizeof(struct rte_flow_item_gtp)}, + [RTE_FLOW_ITEM_TYPE_ETH] = {ROC_NPC_ITEM_TYPE_ETH, sizeof(struct rte_flow_item_eth)}, + [RTE_FLOW_ITEM_TYPE_VLAN] = {ROC_NPC_ITEM_TYPE_VLAN, sizeof(struct rte_flow_item_vlan)}, + [RTE_FLOW_ITEM_TYPE_E_TAG] = {ROC_NPC_ITEM_TYPE_E_TAG, sizeof(struct rte_flow_item_e_tag)}, + [RTE_FLOW_ITEM_TYPE_IPV4] = {ROC_NPC_ITEM_TYPE_IPV4, sizeof(struct rte_flow_item_ipv4)}, + [RTE_FLOW_ITEM_TYPE_IPV6] = {ROC_NPC_ITEM_TYPE_IPV6, sizeof(struct rte_flow_item_ipv6)}, + [RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT] = {ROC_NPC_ITEM_TYPE_IPV6_FRAG_EXT, + sizeof(struct rte_flow_item_ipv6_frag_ext)}, + [RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4] = {ROC_NPC_ITEM_TYPE_ARP_ETH_IPV4, +sizeof(struct rte_flow_item_arp_eth_ipv4)}, + [RTE_FLOW_ITEM_TYPE_MPLS] = {ROC_NPC_ITEM_TYPE_MPLS, sizeof(struct rte_flow_item_mpls)}, + [RTE_FLOW_ITEM_TYPE_ICMP] = {ROC_NPC_ITEM_TYPE_ICMP, sizeof(struct rte_flow_item_icmp)}, + [RTE_FLOW_ITEM_TYPE_UDP] = {ROC_NPC_ITEM_TYPE_UDP, sizeof(struct rte_flow_item_udp)}, + [RTE_FLOW_ITEM_TYPE_TCP] = {ROC_NPC_ITEM_TYPE_TCP, sizeof(struct rte_flow_item_tcp)}, + [RTE_FLOW_ITEM_TYPE_SCTP] = {ROC_NPC_ITEM_TYPE_SCTP, sizeof(struct rte_flow_item_sctp)}, + [RTE_FLOW_ITEM_TYPE_ESP] = {ROC_NPC_ITEM_TYPE_ESP, sizeof(struct rte_flow_item_esp)}, + [RTE_FLOW_ITEM_TYPE_GRE] = {ROC_NPC_ITEM_TYPE_GRE, sizeof(struct rte_flow_item_gre)}, + [RTE_FLOW_ITEM_TYPE_NVGRE] = {ROC_NPC_ITEM_TYPE_NVGRE, sizeof(struct 
rte_flow_item_nvgre)}, + [RTE_FLOW_ITEM_TYPE_VXLAN] = {ROC_NPC_ITEM_TYPE_VXLAN, sizeof(struct rte_flow_item_vxlan)}, + [RTE_FLOW_ITEM_TYPE_GTPC] = {ROC_NPC_ITEM_TYPE_GTPC, sizeof(struct rte_flow_item_gtp)}, + [RTE_FLOW_ITEM_TYPE_GTPU] = {ROC_NPC_ITEM_TYPE_GTPU, sizeof(struct rte_flow_item_gtp)}, [RTE_FLOW_ITEM_TYPE_GENEVE] = {ROC_NPC_ITEM_TYPE_GENEVE, sizeof(struct rte_flow_item_geneve)}, -
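The term[] table above is a designated-initializer array indexed by the RTE flow item type, which is what lets the driver translate items with a single lookup. A self-contained sketch of that table idiom, with hypothetical enums standing in for RTE_FLOW_ITEM_TYPE_* and ROC_NPC_ITEM_TYPE_*:

#include <stdint.h>
#include <stdio.h>

enum demo_rte_item { DEMO_RTE_ETH, DEMO_RTE_VLAN, DEMO_RTE_IPV4, DEMO_RTE_MAX };
enum demo_roc_item { DEMO_ROC_ETH = 100, DEMO_ROC_VLAN, DEMO_ROC_IPV4 };

struct demo_term_info {
	enum demo_roc_item item_type;
	uint16_t item_size;
};

/* Designated initializers keep the table sparse-safe: any foreign type
 * not listed maps to an all-zero entry that the caller can reject. */
static const struct demo_term_info demo_term[DEMO_RTE_MAX] = {
	[DEMO_RTE_ETH]  = {DEMO_ROC_ETH,  14},
	[DEMO_RTE_VLAN] = {DEMO_ROC_VLAN,  4},
	[DEMO_RTE_IPV4] = {DEMO_ROC_IPV4, 20},
};

int main(void)
{
	enum demo_rte_item in = DEMO_RTE_IPV4;

	printf("maps to %d, size %u\n", demo_term[in].item_type,
	       demo_term[in].item_size);
	return 0;
}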
[PATCH v2 20/24] net/cnxk: add port representor pattern and action
Adding support for port_representor as item matching and action. Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_flow.c | 224 +++ drivers/net/cnxk/cnxk_rep.h | 14 +++ 2 files changed, 212 insertions(+), 26 deletions(-) diff --git a/drivers/net/cnxk/cnxk_flow.c b/drivers/net/cnxk/cnxk_flow.c index a3b21f761f..959d773513 100644 --- a/drivers/net/cnxk/cnxk_flow.c +++ b/drivers/net/cnxk/cnxk_flow.c @@ -2,6 +2,7 @@ * Copyright(C) 2021 Marvell. */ #include +#include const struct cnxk_rte_flow_term_info term[] = { [RTE_FLOW_ITEM_TYPE_ETH] = {ROC_NPC_ITEM_TYPE_ETH, sizeof(struct rte_flow_item_eth)}, @@ -185,11 +186,44 @@ roc_npc_parse_sample_subaction(struct rte_eth_dev *eth_dev, const struct rte_flo return 0; } +static int +representor_portid_action(struct roc_npc_action *in_actions, struct rte_eth_dev *portid_eth_dev, + uint16_t *dst_pf_func, uint8_t has_tunnel_pattern, int *act_cnt) +{ + struct rte_eth_dev *rep_eth_dev = portid_eth_dev; + struct rte_flow_action_mark *act_mark; + struct cnxk_rep_dev *rep_dev; + /* For inserting an action in the list */ + int i = *act_cnt; + + rep_dev = cnxk_rep_pmd_priv(rep_eth_dev); + *dst_pf_func = rep_dev->hw_func; + + /* Add Mark action */ + i++; + act_mark = plt_zmalloc(sizeof(struct rte_flow_action_mark), 0); + if (!act_mark) { + plt_err("Error allocation memory"); + return -ENOMEM; + } + + /* Mark ID format: (tunnel type - VxLAN, Geneve << 6) | Tunnel decap */ + act_mark->id = has_tunnel_pattern ? ((has_tunnel_pattern << 6) | 5) : 1; + in_actions[i].type = ROC_NPC_ACTION_TYPE_MARK; + in_actions[i].conf = (struct rte_flow_action_mark *)act_mark; + + *act_cnt = i; + plt_rep_dbg("Rep port %d ID %d mark ID is %d rep_dev->hw_func 0x%x", rep_dev->port_id, + rep_dev->rep_id, act_mark->id, rep_dev->hw_func); + + return 0; +} + static int cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, const struct rte_flow_action actions[], struct roc_npc_action in_actions[], struct roc_npc_action_sample *in_sample_actions, uint32_t *flowkey_cfg, -uint16_t *dst_pf_func) +uint16_t *dst_pf_func, uint8_t has_tunnel_pattern) { struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); const struct rte_flow_action_queue *act_q = NULL; @@ -256,14 +290,27 @@ cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, plt_err("eth_dev not found for output port id"); goto err_exit; } - if (strcmp(portid_eth_dev->device->driver->name, - eth_dev->device->driver->name) != 0) { - plt_err("Output port not under same driver"); - goto err_exit; + + if (cnxk_ethdev_is_representor(if_name)) { + plt_rep_dbg("Representor port %d act port %d", port_act->id, + act_ethdev->port_id); + if (representor_portid_action(in_actions, portid_eth_dev, + dst_pf_func, has_tunnel_pattern, + &i)) { + plt_err("Representor port action set failed"); + goto err_exit; + } + } else { + if (strcmp(portid_eth_dev->device->driver->name, + eth_dev->device->driver->name) != 0) { + plt_err("Output port not under same driver"); + goto err_exit; + } + + hw_dst = portid_eth_dev->data->dev_private; + roc_npc_dst = &hw_dst->npc; + *dst_pf_func = roc_npc_dst->pf_func; } - hw_dst = portid_eth_dev->data->dev_private; - roc_npc_dst = &hw_dst->npc; - *dst_pf_func = roc_npc_dst->pf_func; break; case RTE_FLOW_ACTION_TYPE_QUEUE: @@ -324,6 +371,8 @@ cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, in_actions[i].type = ROC_NPC_ACTION_TYPE_SAMPLE; in_actions[i].conf = in_sample_actions; break; + case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: + continue; default:
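When the destination port is a representor, the driver substitutes a MARK action whose ID encodes whether a tunnel pattern was seen, using the (tunnel_type << 6) | 5 scheme shown above. A self-contained sketch of that encode plus a matching decode; the decode helper is only an illustration of how such a mark could be unpacked, not the driver's datapath code.

#include <stdint.h>
#include <stdio.h>

/* Encoding from representor_portid_action() in the patch:
 *   tunnel present: (tunnel_type << 6) | 5
 *   no tunnel:      1                          */
static uint32_t
demo_mark_encode(uint8_t has_tunnel_pattern)
{
	return has_tunnel_pattern ? (((uint32_t)has_tunnel_pattern << 6) | 5) : 1;
}

static void
demo_mark_decode(uint32_t mark, uint8_t *tunnel_type, int *decap)
{
	*decap = (mark & 0x3f) == 5;
	*tunnel_type = (uint8_t)(mark >> 6);
}

int main(void)
{
	uint8_t tun;
	int decap;

	demo_mark_decode(demo_mark_encode(2), &tun, &decap);
	printf("tunnel_type=%u decap=%d\n", tun, decap);
	return 0;
}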
[PATCH v2 21/24] net/cnxk: generalize flow operation APIs
Flow operations can be performed on cnxk ports as well as representor ports. Since representor ports are not cnxk ports but have eswitch as base device underneath, special handling is required to align with base infra. Introducing a flag to generic flow APIs to discriminate if the operation request made on normal or representor ports. Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_flow.c | 240 +++ drivers/net/cnxk/cnxk_flow.h | 19 +++ 2 files changed, 205 insertions(+), 54 deletions(-) diff --git a/drivers/net/cnxk/cnxk_flow.c b/drivers/net/cnxk/cnxk_flow.c index 959d773513..7959f2ed6b 100644 --- a/drivers/net/cnxk/cnxk_flow.c +++ b/drivers/net/cnxk/cnxk_flow.c @@ -223,7 +223,7 @@ static int cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, const struct rte_flow_action actions[], struct roc_npc_action in_actions[], struct roc_npc_action_sample *in_sample_actions, uint32_t *flowkey_cfg, -uint16_t *dst_pf_func, uint8_t has_tunnel_pattern) +uint16_t *dst_pf_func, uint8_t has_tunnel_pattern, bool is_rep) { struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); const struct rte_flow_action_queue *act_q = NULL; @@ -273,6 +273,9 @@ cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: case RTE_FLOW_ACTION_TYPE_PORT_ID: + /* No port ID action on representor ethdevs */ + if (is_rep) + continue; in_actions[i].type = ROC_NPC_ACTION_TYPE_PORT_ID; in_actions[i].conf = actions->conf; act_ethdev = (const struct rte_flow_action_ethdev *) @@ -320,6 +323,9 @@ cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, break; case RTE_FLOW_ACTION_TYPE_RSS: + /* No RSS action on representor ethdevs */ + if (is_rep) + continue; rc = npc_rss_action_validate(eth_dev, attr, actions); if (rc) goto err_exit; @@ -396,22 +402,37 @@ cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, static int cnxk_map_pattern(struct rte_eth_dev *eth_dev, const struct rte_flow_item pattern[], -struct roc_npc_item_info in_pattern[], uint8_t *has_tunnel_pattern) +struct roc_npc_item_info in_pattern[], uint8_t *has_tunnel_pattern, bool is_rep) { - struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); const struct rte_flow_item_ethdev *rep_eth_dev; struct rte_eth_dev *portid_eth_dev; char if_name[RTE_ETH_NAME_MAX_LEN]; struct cnxk_eth_dev *hw_dst; + struct cnxk_rep_dev *rdev; + struct cnxk_eth_dev *dev; + struct roc_npc *npc; int i = 0; + if (!is_rep) { + dev = cnxk_eth_pmd_priv(eth_dev); + npc = &dev->npc; + } else { + rdev = cnxk_rep_pmd_priv(eth_dev); + npc = &rdev->parent_dev->npc; + + npc->rep_npc = npc; + npc->rep_port_id = rdev->port_id; + npc->rep_pf_func = rdev->hw_func; + } + while (pattern->type != RTE_FLOW_ITEM_TYPE_END) { in_pattern[i].spec = pattern->spec; in_pattern[i].last = pattern->last; in_pattern[i].mask = pattern->mask; in_pattern[i].type = term[pattern->type].item_type; in_pattern[i].size = term[pattern->type].item_size; - if (pattern->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT) { + if (pattern->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT || + pattern->type == RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR) { rep_eth_dev = (const struct rte_flow_item_ethdev *)pattern->spec; if (rte_eth_dev_get_name_by_port(rep_eth_dev->port_id, if_name)) { plt_err("Name not found for output port id"); @@ -422,11 +443,6 @@ cnxk_map_pattern(struct rte_eth_dev *eth_dev, const struct rte_flow_item pattern plt_err("eth_dev not found for output port id"); goto fail; } - if 
(strcmp(portid_eth_dev->device->driver->name, - eth_dev->device->driver->name) != 0) { - plt_err("Output port not under same driver"); - goto fail; - } if (cnxk_ethdev_is_representor(if_name)) { /* Case where represented port not part of same
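In short, the new is_rep flag lets the same mapping code pick its NPC context either from the cnxk ethdev or from the representor's parent eswitch device. A condensed sketch of that selection, based only on what the diff above shows (it assumes the driver-internal cnxk_rep_pmd_priv()/parent_dev fields and is not the literal driver code):

    /* Sketch only: choose the NPC context used for flow mapping. */
    static struct roc_npc *
    flow_npc_ctx(struct rte_eth_dev *eth_dev, bool is_rep)
    {
            if (!is_rep)
                    return &cnxk_eth_pmd_priv(eth_dev)->npc;

            /* Representor: use the parent eswitch device's NPC context. */
            return &cnxk_rep_pmd_priv(eth_dev)->parent_dev->npc;
    }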
[PATCH v2 22/24] net/cnxk: flow create on representor ports
- Implementing base infra for handling flow operations performed on representor ports, where these representor ports may be representing native representees or part of companion apps. - Handling flow create operation Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_flow.h | 9 +- drivers/net/cnxk/cnxk_rep.h | 3 + drivers/net/cnxk/cnxk_rep_flow.c | 399 +++ drivers/net/cnxk/cnxk_rep_msg.h | 27 +++ drivers/net/cnxk/cnxk_rep_ops.c | 3 +- drivers/net/cnxk/meson.build | 1 + 6 files changed, 439 insertions(+), 3 deletions(-) create mode 100644 drivers/net/cnxk/cnxk_rep_flow.c diff --git a/drivers/net/cnxk/cnxk_flow.h b/drivers/net/cnxk/cnxk_flow.h index 84333e7f9d..26384400c1 100644 --- a/drivers/net/cnxk/cnxk_flow.h +++ b/drivers/net/cnxk/cnxk_flow.h @@ -16,8 +16,13 @@ struct cnxk_rte_flow_term_info { uint16_t item_size; }; -struct roc_npc_flow *cnxk_flow_create(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, +struct cnxk_rte_flow_action_info { + uint16_t conf_size; +}; + +extern const struct cnxk_rte_flow_term_info term[]; + +struct roc_npc_flow *cnxk_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error); diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h index 9ac675426e..2b850e7e59 100644 --- a/drivers/net/cnxk/cnxk_rep.h +++ b/drivers/net/cnxk/cnxk_rep.h @@ -20,6 +20,9 @@ /* Common ethdev ops */ extern struct eth_dev_ops cnxk_rep_dev_ops; +/* Flow ops for representor ports */ +extern struct rte_flow_ops cnxk_rep_flow_ops; + struct cnxk_rep_queue_stats { uint64_t pkts; uint64_t bytes; diff --git a/drivers/net/cnxk/cnxk_rep_flow.c b/drivers/net/cnxk/cnxk_rep_flow.c new file mode 100644 index 00..ab9ced6ece --- /dev/null +++ b/drivers/net/cnxk/cnxk_rep_flow.c @@ -0,0 +1,399 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. 
+ */ + +#include +#include + +#include +#include +#include + +#define DEFAULT_DUMP_FILE_NAME "/tmp/fdump" +#define MAX_BUFFER_SIZE 1500 + +const struct cnxk_rte_flow_action_info action_info[] = { + [RTE_FLOW_ACTION_TYPE_MARK] = {sizeof(struct rte_flow_action_mark)}, + [RTE_FLOW_ACTION_TYPE_VF] = {sizeof(struct rte_flow_action_vf)}, + [RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT] = {sizeof(struct rte_flow_action_port_id)}, + [RTE_FLOW_ACTION_TYPE_PORT_ID] = {sizeof(struct rte_flow_action_port_id)}, + [RTE_FLOW_ACTION_TYPE_QUEUE] = {sizeof(struct rte_flow_action_queue)}, + [RTE_FLOW_ACTION_TYPE_RSS] = {sizeof(struct rte_flow_action_rss)}, + [RTE_FLOW_ACTION_TYPE_SECURITY] = {sizeof(struct rte_flow_action_security)}, + [RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID] = {sizeof(struct rte_flow_action_of_set_vlan_vid)}, + [RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = {sizeof(struct rte_flow_action_of_push_vlan)}, + [RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP] = {sizeof(struct rte_flow_action_of_set_vlan_pcp)}, + [RTE_FLOW_ACTION_TYPE_METER] = {sizeof(struct rte_flow_action_meter)}, + [RTE_FLOW_ACTION_TYPE_OF_POP_MPLS] = {sizeof(struct rte_flow_action_of_pop_mpls)}, + [RTE_FLOW_ACTION_TYPE_OF_PUSH_MPLS] = {sizeof(struct rte_flow_action_of_push_mpls)}, + [RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP] = {sizeof(struct rte_flow_action_vxlan_encap)}, + [RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP] = {sizeof(struct rte_flow_action_nvgre_encap)}, + [RTE_FLOW_ACTION_TYPE_RAW_ENCAP] = {sizeof(struct rte_flow_action_raw_encap)}, + [RTE_FLOW_ACTION_TYPE_RAW_DECAP] = {sizeof(struct rte_flow_action_raw_decap)}, + [RTE_FLOW_ACTION_TYPE_COUNT] = {sizeof(struct rte_flow_action_count)}, +}; + +static void +cnxk_flow_params_count(const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + uint16_t *n_pattern, uint16_t *n_action) +{ + int i = 0; + + for (; pattern->type != RTE_FLOW_ITEM_TYPE_END; pattern++) + i++; + + *n_pattern = ++i; + plt_rep_dbg("Total patterns is %d", *n_pattern); + + i = 0; + for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) + i++; + *n_action = ++i; + plt_rep_dbg("Total actions is %d", *n_action); +} + +static void +populate_attr_data(void *buffer, uint32_t *length, const struct rte_flow_attr *attr) +{ + uint32_t sz = sizeof(struct rte_flow_attr); + uint32_t len; + + cnxk_rep_msg_populate_type(buffer, length, CNXK_TYPE_ATTR, sz); + + len = *length; + /* Populate the attribute data */ + rte_memcpy(RTE_PTR_ADD(buffer, len), attr, sz); + len += sz; + + *length = len; +} + +static uint16_t +prepare
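The message assembly above sizes the request from the number of pattern and action entries, counted up to and including their END terminators. A small self-contained sketch of that counting idiom (names are illustrative, not the driver's):

    #include <stdint.h>
    #include <rte_flow.h>

    /* Count flow items up to and including RTE_FLOW_ITEM_TYPE_END. */
    static uint16_t
    count_flow_items(const struct rte_flow_item pattern[])
    {
            uint16_t n = 0;

            while (pattern[n].type != RTE_FLOW_ITEM_TYPE_END)
                    n++;

            return n + 1; /* include the END entry itself */
    }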
[PATCH v2 23/24] net/cnxk: other flow operations
Implementing other flow operations - validate, destroy, query, flush, dump for representor ports Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_rep_flow.c | 414 +++ drivers/net/cnxk/cnxk_rep_msg.h | 32 +++ 2 files changed, 446 insertions(+) diff --git a/drivers/net/cnxk/cnxk_rep_flow.c b/drivers/net/cnxk/cnxk_rep_flow.c index ab9ced6ece..2abec485bc 100644 --- a/drivers/net/cnxk/cnxk_rep_flow.c +++ b/drivers/net/cnxk/cnxk_rep_flow.c @@ -270,6 +270,221 @@ populate_action_data(void *buffer, uint32_t *length, const struct rte_flow_actio *length = len; } +static int +process_flow_destroy(struct cnxk_rep_dev *rep_dev, void *flow, cnxk_rep_msg_ack_data_t *adata) +{ + cnxk_rep_msg_flow_destroy_meta_t msg_fd_meta; + uint32_t len = 0, rc; + void *buffer; + size_t size; + + /* If representor not representing any active VF, return 0 */ + if (!rep_dev->is_vf_active) + return 0; + + size = MAX_BUFFER_SIZE; + buffer = plt_zmalloc(size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + rc = -ENOMEM; + goto fail; + } + + cnxk_rep_msg_populate_header(buffer, &len); + + msg_fd_meta.portid = rep_dev->rep_id; + msg_fd_meta.flow = (uint64_t)flow; + plt_rep_dbg("Flow Destroy: flow 0x%" PRIu64 ", portid %d", msg_fd_meta.flow, + msg_fd_meta.portid); + cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_fd_meta, + sizeof(cnxk_rep_msg_flow_destroy_meta_t), + CNXK_REP_MSG_FLOW_DESTROY); + cnxk_rep_msg_populate_msg_end(buffer, &len); + + rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata); + if (rc) { + plt_err("Failed to process the message, err %d", rc); + goto fail; + } + + return 0; +fail: + return rc; +} + +static int +copy_flow_dump_file(FILE *target) +{ + FILE *source = NULL; + int pos; + char ch; + + source = fopen(DEFAULT_DUMP_FILE_NAME, "r"); + if (source == NULL) { + plt_err("Failed to read default dump file: %s, err %d", DEFAULT_DUMP_FILE_NAME, + errno); + return errno; + } + + fseek(source, 0L, SEEK_END); + pos = ftell(source); + fseek(source, 0L, SEEK_SET); + while (pos--) { + ch = fgetc(source); + fputc(ch, target); + } + + fclose(source); + + /* Remove the default file after reading */ + remove(DEFAULT_DUMP_FILE_NAME); + + return 0; +} + +static int +process_flow_dump(struct cnxk_rep_dev *rep_dev, struct rte_flow *flow, FILE *file, + cnxk_rep_msg_ack_data_t *adata) +{ + cnxk_rep_msg_flow_dump_meta_t msg_fp_meta; + uint32_t len = 0, rc; + void *buffer; + size_t size; + + size = MAX_BUFFER_SIZE; + buffer = plt_zmalloc(size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + rc = -ENOMEM; + goto fail; + } + + cnxk_rep_msg_populate_header(buffer, &len); + + msg_fp_meta.portid = rep_dev->rep_id; + msg_fp_meta.flow = (uint64_t)flow; + msg_fp_meta.is_stdout = (file == stdout) ? 
1 : 0; + + plt_rep_dbg("Flow Dump: flow 0x%" PRIu64 ", portid %d stdout %d", msg_fp_meta.flow, + msg_fp_meta.portid, msg_fp_meta.is_stdout); + cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_fp_meta, + sizeof(cnxk_rep_msg_flow_dump_meta_t), + CNXK_REP_MSG_FLOW_DUMP); + cnxk_rep_msg_populate_msg_end(buffer, &len); + + rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata); + if (rc) { + plt_err("Failed to process the message, err %d", rc); + goto fail; + } + + /* Copy contents from default file to user file */ + if (file != stdout) + copy_flow_dump_file(file); + + return 0; +fail: + return rc; +} + +static int +process_flow_flush(struct cnxk_rep_dev *rep_dev, cnxk_rep_msg_ack_data_t *adata) +{ + cnxk_rep_msg_flow_flush_meta_t msg_ff_meta; + uint32_t len = 0, rc; + void *buffer; + size_t size; + + size = MAX_BUFFER_SIZE; + buffer = plt_zmalloc(size, 0); + if (!buffer) { + plt_err("Failed to allocate mem"); + rc = -ENOMEM; + goto fail; + } + + cnxk_rep_msg_populate_header(buffer, &len); + + msg_ff_meta.portid = rep_dev->rep_id; + plt_rep_dbg("Flow Flush: portid %d", msg_ff_meta.portid); + cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_ff_meta, + sizeof(cnxk_rep_msg_flow_flush_meta_t), +
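The dump path above writes to a temporary default file (/tmp/fdump in the patch) and then copies its contents into the caller-supplied FILE *, one character at a time. A buffered equivalent, shown only as an illustration of the same idea rather than the driver's code:

    #include <stdio.h>

    /* Copy an on-disk dump file into the user-provided stream. */
    static int
    copy_dump_file(const char *src_path, FILE *dst)
    {
            char buf[4096];
            size_t n;
            FILE *src = fopen(src_path, "r");

            if (src == NULL)
                    return -1;

            while ((n = fread(buf, 1, sizeof(buf), src)) > 0)
                    fwrite(buf, 1, n, dst);

            fclose(src);
            return 0;
    }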
[PATCH v2 24/24] doc: port representors in cnxk
Updating the CNXK PMD documentation with the added support for port representors. Signed-off-by: Harman Kalra --- MAINTAINERS | 1 + doc/guides/nics/cnxk.rst | 58 doc/guides/nics/features/cnxk.ini| 3 ++ doc/guides/nics/features/cnxk_vf.ini | 4 ++ 4 files changed, 66 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index 0d1c8126e3..2716178e18 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -827,6 +827,7 @@ M: Nithin Dabilpuram M: Kiran Kumar K M: Sunil Kumar Kori M: Satha Rao +M: Harman Kalra T: git://dpdk.org/next/dpdk-next-net-mrvl F: drivers/common/cnxk/ F: drivers/net/cnxk/ diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst index 9ec52e380f..5fd1f6513a 100644 --- a/doc/guides/nics/cnxk.rst +++ b/doc/guides/nics/cnxk.rst @@ -37,6 +37,9 @@ Features of the CNXK Ethdev PMD are: - Inline IPsec processing support - Ingress meter support - Queue based priority flow control support +- Port representors +- Represented port pattern matching and action +- Port representor pattern matching and action Prerequisites - @@ -613,6 +616,57 @@ Runtime Config Options for inline device With the above configuration, driver would poll for aging flows every 50 seconds. +Port Representors +- + +The CNXK driver supports port representor model by adding virtual ethernet +ports providing a logical representation in DPDK for physical function(PF) or +SR-IOV virtual function (VF) devices for control and monitoring. + +Base device or parent device underneath these representor ports is a eswitch +device which is not a cnxk ethernet device but has NIC RX and TX capabilities. +Each representor port is represented by a RQ and SQ pair of this eswitch +device. + +Current implementation supports representors for both physical function and +virtual function. + +These port representor ethdev instances can be spawned on an as needed basis +through configuration parameters passed to the driver of the underlying +base device using devargs ``-a ,representor=pf*vf*`` + +.. note:: + + Representor ports to be created for respective representees should be + defined via these representor devargs. + Eg. To create a representor for representee PF1VF0, devargs to be passed + is ``-a ,representor=pf0vf0`` + + For PF representor + ``-a ,representor=pf2`` + + For defining range of vfs, say 5 representor ports under a PF + ``-a ,representor=pf0vf[0-4]`` + + For representing different VFs under different PFs + ``-a ,representor=pf0vf[1,2],representor=pf1vf[2-5]`` + +In case of exception path (i.e. until the flow definition is offloaded to the +hardware), packets transmitted by the VFs shall be received by these +representor port, while packets transmitted by representor ports shall be +received by respective VFs. + +On receiving the VF traffic via these representor ports, applications holding +these representor ports can decide to offload the traffic flow into the HW. +Henceforth the matching traffic shall be directly steered to the respective +VFs without being received by the application. 
+ +Current virtual representor port PMD supports following operations: + +- Get and clear VF statistics +- Set mac address +- Flow operations - create, validate, destroy, query, flush, dump + Debugging Options - @@ -627,3 +681,7 @@ Debugging Options +---++---+ | 2 | NPC| --log-level='pmd\.net.cnxk\.flow,8' | +---++---+ + | 3 | REP| --log-level='pmd\.net.cnxk\.rep,8' | + +---++---+ + | 4 | ESW| --log-level='pmd\.net.cnxk\.esw,8' | + +---++---+ diff --git a/doc/guides/nics/features/cnxk.ini b/doc/guides/nics/features/cnxk.ini index 94e7a6ab8d..88d54e 100644 --- a/doc/guides/nics/features/cnxk.ini +++ b/doc/guides/nics/features/cnxk.ini @@ -73,6 +73,8 @@ mpls = Y nvgre= Y pppoes = Y raw = Y +represented_port = Y +port_representor = Y sctp = Y tcp = Y tx_queue = Y @@ -96,6 +98,7 @@ pf = Y port_id = Y queue= Y represented_port = Y +port_representor = Y rss = Y sample = Y security = Y diff --git a/doc/guides/nics/features/cnxk_vf.ini b/doc/guides/nics/features/cnxk_vf.ini index 53aa2a3d0c..7d7a1cad1b 100644 --- a/doc/guides/nics/features/cnxk_vf.ini +++ b/doc/guides/nics/features/cnxk_vf.ini @@ -64,6 +64,8 @@ mpls = Y nvgre= Y pppoes = Y raw = Y +rep
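As a usage illustration of the devargs described above (the PCI address is a placeholder; substitute the eswitch device's actual BDF), testpmd could be launched with representors for four VFs under PF0 as:

    dpdk-testpmd -c 0xff -a <eswitch_bdf>,representor=pf0vf[0-3] -- -i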
[PATCH] windows: install sched.h header
rte_os.h includes sched.h, so install sched.h as well so that a DPDK installed to DESTDIR is usable. Signed-off-by: Tyler Retzlaff --- lib/eal/windows/include/meson.build | 1 + 1 file changed, 1 insertion(+) diff --git a/lib/eal/windows/include/meson.build b/lib/eal/windows/include/meson.build index 5fb1962..e985a77 100644 --- a/lib/eal/windows/include/meson.build +++ b/lib/eal/windows/include/meson.build @@ -6,4 +6,5 @@ includes += include_directories('.') headers += files( 'rte_os.h', 'rte_windows.h', +'sched.h', ) -- 1.8.3.1
RE: [PATCH v2 1/6] examples/l3fwd: fix lcore ID restriction
[AMD Official Use Only - General] Hi Konstantin, > -Original Message- > From: Konstantin Ananyev > Sent: Tuesday, December 19, 2023 8:40 PM > To: Tummala, Sivaprasad ; > david.h...@intel.com; anatoly.bura...@intel.com; jer...@marvell.com; > radu.nico...@intel.com; gak...@marvell.com; cristian.dumitre...@intel.com; > Yigit, > Ferruh > Cc: dev@dpdk.org; sta...@dpdk.org > Subject: RE: [PATCH v2 1/6] examples/l3fwd: fix lcore ID restriction > > Caution: This message originated from an External Source. Use proper caution > when opening attachments, clicking links, or responding. > > > Hi Sivaprasad, > > > > > Hi Konstantin, > > > > > -Original Message- > > > From: Konstantin Ananyev > > > Sent: Tuesday, December 19, 2023 6:00 PM > > > To: Konstantin Ananyev ; Tummala, > > > Sivaprasad ; david.h...@intel.com; > > > anatoly.bura...@intel.com; jer...@marvell.com; > > > radu.nico...@intel.com; gak...@marvell.com; > > > cristian.dumitre...@intel.com; Yigit, Ferruh > > > Cc: dev@dpdk.org; sta...@dpdk.org > > > Subject: RE: [PATCH v2 1/6] examples/l3fwd: fix lcore ID restriction > > > > > > Caution: This message originated from an External Source. Use proper > > > caution when opening attachments, clicking links, or responding. > > > > > > > > > > > > > > > Currently the config option allows lcore IDs up to 255, > > > > > irrespective of RTE_MAX_LCORES and needs to be fixed. > > > > > > > > > > The patch allows config options based on DPDK config. > > > > > > > > > > Fixes: af75078fece3 ("first public release") > > > > > Cc: sta...@dpdk.org > > > > > > > > > > Signed-off-by: Sivaprasad Tummala > > > > > --- > > > > > examples/l3fwd/main.c | 19 +++ > > > > > 1 file changed, 11 insertions(+), 8 deletions(-) > > > > > > > > > > diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c index > > > > > 3bf28aec0c..ed116da09c 100644 > > > > > --- a/examples/l3fwd/main.c > > > > > +++ b/examples/l3fwd/main.c > > > > > @@ -99,7 +99,7 @@ struct parm_cfg parm_config; struct > > > > > lcore_params { > > > > > uint16_t port_id; > > > > > uint8_t queue_id; > > > > > > Actually one comment: > > > As lcore_id becomes uint16_t it might be worth to do the same > > > queue_id, they usually are very much related. > > Yes, that's a valid statement for one network interface. > > With multiple interfaces, it's a combination of port/queue that maps to a > > specific > lcore. > > If there a NICs that support more than 256 queues, then it makes sense > > to change the queue_id type as well. > > AFAIK, majority of modern NICs do support more than 256 queues. > That's why in rte_ethev API queue_id is uint16_t. Thanks. Will update the queue_id type to uint16_t in next version. > > > > > Please let me know your thoughts. 
> > > > > > > > - uint8_t lcore_id; > > > > > + uint16_t lcore_id; > > > > > } __rte_cache_aligned; > > > > > > > > > > static struct lcore_params > > > > > lcore_params_array[MAX_LCORE_PARAMS]; > > > > > @@ -292,8 +292,8 @@ setup_l3fwd_lookup_tables(void) static int > > > > > check_lcore_params(void) > > > > > { > > > > > - uint8_t queue, lcore; > > > > > - uint16_t i; > > > > > + uint8_t queue; > > > > > + uint16_t i, lcore; > > > > > int socketid; > > > > > > > > > > for (i = 0; i < nb_lcore_params; ++i) { @@ -304,12 +304,12 > > > > > @@ > > > > > check_lcore_params(void) > > > > > } > > > > > lcore = lcore_params[i].lcore_id; > > > > > if (!rte_lcore_is_enabled(lcore)) { > > > > > - printf("error: lcore %hhu is not enabled in lcore > > > > > mask\n", > lcore); > > > > > + printf("error: lcore %hu is not enabled in > > > > > + lcore mask\n", lcore); > > > > > return -1; > > > > > } > > > > > if ((socketid = rte_lcore_to_socket_id(lcore) != 0) && > > > > > (numa_on == 0)) { > > > > > - printf("warning: lcore %hhu is on socket %d with > > > > > numa off > \n", > > > > > + printf("warning: lcore %hu is on socket %d > > > > > + with numa off\n", > > > > > lcore, socketid); > > > > > } > > > > > } > > > > > @@ -359,7 +359,7 @@ static int > > > > > init_lcore_rx_queues(void) > > > > > { > > > > > uint16_t i, nb_rx_queue; > > > > > - uint8_t lcore; > > > > > + uint16_t lcore; > > > > > > > > > > for (i = 0; i < nb_lcore_params; ++i) { > > > > > lcore = lcore_params[i].lcore_id; @@ -500,6 +500,8 > > > > > @@ parse_config(const char *q_arg) > > > > > char *str_fld[_NUM_FLD]; > > > > > int i; > > > > > unsigned size; > > > > > + unsigned int max_fld[_NUM_FLD] = {RTE_MAX_ETHPORTS, > > > > > + 255, RTE_MAX_LCORE}; > > > > > > > > > > nb_lcore_params = 0; > > > > > > > > > > @@ -518,7 +520,8 @@ parse_config(const char *q_arg) > > > > > for (i = 0; i < _NUM_FLD;
RE: [PATCH] net/gve: Enable stats reporting for GQ format
> -Original Message- > From: Rushil Gupta > Sent: Tuesday, December 19, 2023 10:17 > To: Guo, Junfeng ; jeroe...@google.com; > joshw...@google.com; ferruh.yi...@amd.com > Cc: dev@dpdk.org; Rushil Gupta > Subject: [PATCH] net/gve: Enable stats reporting for GQ format > > Read from shared region to retrieve imissed statistics for GQ. > Tested using `show port xstats ` in interactive mode. > This metric can be triggered by using queues > cores. > > Signed-off-by: Rushil Gupta > Reviewed-by: Joshua Washington > --- > drivers/net/gve/base/gve_adminq.h | 11 > drivers/net/gve/gve_ethdev.c | 83 > +++ > drivers/net/gve/gve_ethdev.h | 6 +++ > 3 files changed, 100 insertions(+) > > diff --git a/drivers/net/gve/base/gve_adminq.h > b/drivers/net/gve/base/gve_adminq.h > index e30b184913..f05362f85f 100644 > --- a/drivers/net/gve/base/gve_adminq.h > +++ b/drivers/net/gve/base/gve_adminq.h > @@ -314,6 +314,17 @@ struct gve_stats_report { > > GVE_CHECK_STRUCT_LEN(8, gve_stats_report); > > +/* Numbers of gve tx/rx stats in stats report. */ > +#define GVE_TX_STATS_REPORT_NUM6 > +#define GVE_RX_STATS_REPORT_NUM2 > + > +/* Interval to schedule a stats report update, 2ms. */ > +#define GVE_STATS_REPORT_TIMER_PERIOD 2 > + > +/* Numbers of NIC tx/rx stats in stats report. */ > +#define NIC_TX_STATS_REPORT_NUM0 > +#define NIC_RX_STATS_REPORT_NUM4 > + > enum gve_stat_names { > /* stats from gve */ > TX_WAKE_CNT = 1, > diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c > index ecd37ff37f..0db612f25c 100644 > --- a/drivers/net/gve/gve_ethdev.c > +++ b/drivers/net/gve/gve_ethdev.c > @@ -125,6 +125,70 @@ gve_link_update(struct rte_eth_dev *dev, > __rte_unused int wait_to_complete) > return rte_eth_linkstatus_set(dev, &link); > } > > +static int gve_alloc_stats_report(struct gve_priv *priv, Minor observation here and for below newly-added functions about the coding style. In DPDK, the function type is placed on a new line by itself preceding the function, while in the kernel, it is placed on the same line as the function. But you can keep this kernel coding style for the code under the base folder of the driver. It's always good to keep the coding style be consistent within each individual file. : ) https://doc.dpdk.org/guides-23.11/contributing/coding_style.html Regards, Junfeng > + uint16_t nb_tx_queues, uint16_t nb_rx_queues) > +{ > + int tx_stats_cnt; > + int rx_stats_cnt; > + > + tx_stats_cnt = (GVE_TX_STATS_REPORT_NUM + > NIC_TX_STATS_REPORT_NUM) * > + nb_tx_queues; > + rx_stats_cnt = (GVE_RX_STATS_REPORT_NUM + > NIC_RX_STATS_REPORT_NUM) * > + nb_rx_queues; > + priv->stats_report_len = sizeof(struct gve_stats_report) + > + sizeof(struct stats) * (tx_stats_cnt + rx_stats_cnt); > + priv->stats_report_mem = > rte_memzone_reserve_aligned("report_stats", > + priv->stats_report_len, > + rte_socket_id(), > + RTE_MEMZONE_IOVA_CONTIG, PAGE_SIZE); > + > + if (!priv->stats_report_mem) > + return -ENOMEM; > + > + /* offset by skipping stats written by gve. 
*/ > + priv->stats_start_idx = (GVE_TX_STATS_REPORT_NUM * > nb_tx_queues) + > + (GVE_RX_STATS_REPORT_NUM * nb_rx_queues); > + priv->stats_end_idx = priv->stats_start_idx + > + (NIC_TX_STATS_REPORT_NUM * nb_tx_queues) + > + (NIC_RX_STATS_REPORT_NUM * nb_rx_queues) - 1; > + > + return 0; > +} > + > +static void gve_free_stats_report(struct rte_eth_dev *dev) > +{ > + struct gve_priv *priv = dev->data->dev_private; > + rte_memzone_free(priv->stats_report_mem); > +} > + > +/* Read Rx NIC stats from shared region */ > +static void gve_get_imissed_from_nic(struct rte_eth_dev *dev) > +{ > + struct gve_stats_report *stats_report; > + struct gve_rx_queue *rxq; > + struct gve_priv *priv; > + struct stats stat; > + int queue_id; > + int stat_id; > + int i; > + > + priv = dev->data->dev_private; > + stats_report = (struct gve_stats_report *) > + priv->stats_report_mem->addr; > + > + for (i = priv->stats_start_idx; i <= priv->stats_end_idx; i++) { > + stat = stats_report->stats[i]; > + queue_id = cpu_to_be32(stat.queue_id); > + rxq = dev->data->rx_queues[queue_id]; > + if (rxq == NULL) > + continue; > + stat_id = cpu_to_be32(stat.stat_name); > + /* Update imissed. */ > + if (stat_id == RX_NO_BUFFERS_POSTED) > + rxq->stats.imissed = cpu_to_be64(stat.value); > + } > +} > + > static int > gve_start_queues(struct rte_eth_dev *dev) > { > @@ -176,6 +240,7 @@ gve_start_queues(struct rte_eth_dev *dev) > static int > gve_dev_start(struct rte_eth_dev
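For reference, the style difference pointed out above (see the DPDK coding style guide linked in the reply) boils down to where the return type is placed; the function names here are illustrative only:

    #include <stdint.h>

    /* Kernel style: return type on the same line as the function name. */
    static int example_alloc_stats(uint16_t nb_txq, uint16_t nb_rxq)
    {
            return (int)(nb_txq + nb_rxq);
    }

    /* DPDK style: return type on its own line preceding the name. */
    static int
    example_alloc_stats_dpdk(uint16_t nb_txq, uint16_t nb_rxq)
    {
            return (int)(nb_txq + nb_rxq);
    }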
RE: [PATCH] net/e1000: support launchtime feature
Hi Chuanyu, > -Original Message- > From: Chuanyu Xue > Sent: Monday, December 18, 2023 4:21 AM > To: Lu, Wenzhuo ; Zhang, Qi Z > ; Xing, Beilei > Cc: dev@dpdk.org; Chuanyu Xue > Subject: [PATCH] net/e1000: support launchtime feature > > Enable the time-based scheduled Tx of packets based on the > RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP flag. The launchtime defines > the packet transmission time based on PTP clock at MAC layer, which should > be set to the advanced transmit descriptor. > > Signed-off-by: Chuanyu Xue > --- > drivers/net/e1000/base/e1000_regs.h | 1 + > drivers/net/e1000/e1000_ethdev.h| 3 ++ > drivers/net/e1000/igb_ethdev.c | 28 ++ > drivers/net/e1000/igb_rxtx.c| 44 - > 4 files changed, 69 insertions(+), 7 deletions(-) > > diff --git a/drivers/net/e1000/base/e1000_regs.h > b/drivers/net/e1000/base/e1000_regs.h > index d44de59c29..092d9d71e6 100644 > --- a/drivers/net/e1000/base/e1000_regs.h > +++ b/drivers/net/e1000/base/e1000_regs.h > @@ -162,6 +162,7 @@ > > /* QAV Tx mode control register */ > #define E1000_I210_TQAVCTRL 0x3570 > +#define E1000_I210_LAUNCH_OS0 0x3578 What does this register mean? > > /* QAV Tx mode control register bitfields masks */ > /* QAV enable */ > diff --git a/drivers/net/e1000/e1000_ethdev.h > b/drivers/net/e1000/e1000_ethdev.h > index 718a9746ed..174f7aaf52 100644 > --- a/drivers/net/e1000/e1000_ethdev.h > +++ b/drivers/net/e1000/e1000_ethdev.h > @@ -382,6 +382,9 @@ extern struct igb_rss_filter_list igb_filter_rss_list; > TAILQ_HEAD(igb_flow_mem_list, igb_flow_mem); extern struct > igb_flow_mem_list igb_flow_list; > > +extern uint64_t igb_tx_timestamp_dynflag; extern int > +igb_tx_timestamp_dynfield_offset; > + > extern const struct rte_flow_ops igb_flow_ops; > > /* > diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c > index 8858f975f8..4d3d8ae30a 100644 > --- a/drivers/net/e1000/igb_ethdev.c > +++ b/drivers/net/e1000/igb_ethdev.c > @@ -223,6 +223,7 @@ static int igb_timesync_read_time(struct rte_eth_dev > *dev, > struct timespec *timestamp); > static int igb_timesync_write_time(struct rte_eth_dev *dev, > const struct timespec *timestamp); > +static int eth_igb_read_clock(__rte_unused struct rte_eth_dev *dev, > +uint64_t *clock); > static int eth_igb_rx_queue_intr_enable(struct rte_eth_dev *dev, > uint16_t queue_id); > static int eth_igb_rx_queue_intr_disable(struct rte_eth_dev *dev, @@ -313,6 > +314,9 @@ static const struct rte_pci_id pci_id_igbvf_map[] = { > { .vendor_id = 0, /* sentinel */ }, > }; > > +uint64_t igb_tx_timestamp_dynflag; > +int igb_tx_timestamp_dynfield_offset = -1; > + > static const struct rte_eth_desc_lim rx_desc_lim = { > .nb_max = E1000_MAX_RING_DESC, > .nb_min = E1000_MIN_RING_DESC, > @@ -389,6 +393,7 @@ static const struct eth_dev_ops eth_igb_ops = { > .timesync_adjust_time = igb_timesync_adjust_time, > .timesync_read_time = igb_timesync_read_time, > .timesync_write_time = igb_timesync_write_time, > + .read_clock = eth_igb_read_clock, > }; > > /* > @@ -1198,6 +1203,7 @@ eth_igb_start(struct rte_eth_dev *dev) > struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); > struct rte_intr_handle *intr_handle = pci_dev->intr_handle; > int ret, mask; > + uint32_t tqavctrl; > uint32_t intr_vector = 0; > uint32_t ctrl_ext; > uint32_t *speeds; > @@ -1281,6 +1287,15 @@ eth_igb_start(struct rte_eth_dev *dev) > return ret; > } > > + if (igb_tx_timestamp_dynflag > 0) { > + tqavctrl = E1000_READ_REG(hw, E1000_I210_TQAVCTRL); > + tqavctrl |= E1000_TQAVCTRL_MODE; > + tqavctrl |= E1000_TQAVCTRL_FETCH_ARB; /* Fetch 
the queue most > empty, no Round Robin*/ > + tqavctrl |= E1000_TQAVCTRL_LAUNCH_TIMER_ENABLE; /* Enable > launch time */ In kernel driver, "E1000_TQAVCTRL_DATATRANTIM (BIT(9))" and "E1000_TQAVCTRL_FETCHTIME_DELTA (0x << 16)" are set, does it have some other intention here? > + E1000_WRITE_REG(hw, E1000_I210_TQAVCTRL, tqavctrl); > + E1000_WRITE_REG(hw, E1000_I210_LAUNCH_OS0, 1ULL << 31); /* > Set launch offset to default */ > + } > + > e1000_clear_hw_cntrs_base_generic(hw); > > /* > @@ -4882,6 +4897,19 @@ igb_timesync_read_tx_timestamp(struct > rte_eth_dev *dev, > return 0; > } > > +static int > +eth_igb_read_clock(__rte_unused struct rte_eth_dev *dev, uint64_t > +*clock) { > + uint64_t systime_cycles; > + struct e1000_adapter *adapter = dev->data->dev_private; > + > + systime_cycles = igb_read_systime_cyclecounter(dev); > + uint64_t ns = rte_timecounter_update(&adapter->systime_tc, > systime_cycles); Do you also run "ptp timesync" when testing this launchtime feature? > +
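For completeness, the application side of the offload under review uses the Tx timestamp dynamic field and flag from rte_mbuf_dyn.h, with the port clock (read_clock) supplying the reference time. A hedged sketch of that usage, with error handling trimmed; the exact clock units a given driver expects are driver-specific:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>

    static int ts_off;        /* offset of the timestamp dynfield */
    static uint64_t ts_flag;  /* ol_flags bit requesting scheduled Tx */

    static int
    launchtime_init(void)
    {
            int bit;

            ts_off = rte_mbuf_dynfield_lookup(RTE_MBUF_DYNFIELD_TIMESTAMP_NAME, NULL);
            bit = rte_mbuf_dynflag_lookup(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL);
            if (ts_off < 0 || bit < 0)
                    return -1;
            ts_flag = 1ULL << bit;
            return 0;
    }

    /* Tag one mbuf with a launch time derived from the port clock
     * (e.g. via rte_eth_read_clock()) before calling rte_eth_tx_burst(). */
    static void
    set_launch_time(struct rte_mbuf *m, uint64_t when)
    {
            *RTE_MBUF_DYNFIELD(m, ts_off, rte_mbuf_timestamp_t *) = when;
            m->ol_flags |= ts_flag;
    }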
[PATCH v3 0/6] fix lcore ID restriction
With modern CPUs, it is possible to have higher CPU count thus we can have higher RTE_MAX_LCORES. In DPDK sample applications, the current config lcore options are hard limited to 255. The patchset fixes these constraints by allowing all lcore IDs up to RTE_MAX_LCORES. Sivaprasad Tummala (6): examples/l3fwd: fix lcore ID restriction examples/l3fwd-power: fix lcore ID restriction examples/l3fwd-graph: fix lcore ID restriction examples/ipsec-secgw: fix lcore ID restriction examples/qos_sched: fix lcore ID restriction examples/vm_power_manager: fix lcore ID restriction examples/ipsec-secgw/event_helper.h | 2 +- examples/ipsec-secgw/ipsec-secgw.c| 32 +-- examples/ipsec-secgw/ipsec.c | 2 +- examples/ipsec-secgw/ipsec.h | 2 +- examples/ipsec-secgw/ipsec_worker.c | 10 ++-- examples/l3fwd-graph/main.c | 31 +- examples/l3fwd-power/main.c | 57 +-- examples/l3fwd-power/main.h | 4 +- examples/l3fwd-power/perf_core.c | 10 ++-- examples/l3fwd/l3fwd.h| 2 +- examples/l3fwd/l3fwd_acl.c| 4 +- examples/l3fwd/l3fwd_em.c | 4 +- examples/l3fwd/l3fwd_event.h | 2 +- examples/l3fwd/l3fwd_fib.c| 4 +- examples/l3fwd/l3fwd_lpm.c| 5 +- examples/l3fwd/main.c | 36 ++-- examples/qos_sched/args.c | 6 +- .../guest_cli/vm_power_cli_guest.c| 4 +- 18 files changed, 109 insertions(+), 108 deletions(-) -- 2.25.1
[PATCH v3 1/6] examples/l3fwd: fix lcore ID restriction
Currently the config option allows lcore IDs up to 255, irrespective of RTE_MAX_LCORES and needs to be fixed. The patch allows config options based on DPDK config. Fixes: af75078fece3 ("first public release") Cc: sta...@dpdk.org Signed-off-by: Sivaprasad Tummala Acked-by: Konstantin Ananyev --- examples/l3fwd/l3fwd.h | 2 +- examples/l3fwd/l3fwd_acl.c | 4 ++-- examples/l3fwd/l3fwd_em.c| 4 ++-- examples/l3fwd/l3fwd_event.h | 2 +- examples/l3fwd/l3fwd_fib.c | 4 ++-- examples/l3fwd/l3fwd_lpm.c | 5 ++--- examples/l3fwd/main.c| 36 7 files changed, 30 insertions(+), 27 deletions(-) diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h index e7ae0e5834..12c264cb4c 100644 --- a/examples/l3fwd/l3fwd.h +++ b/examples/l3fwd/l3fwd.h @@ -74,7 +74,7 @@ struct mbuf_table { struct lcore_rx_queue { uint16_t port_id; - uint8_t queue_id; + uint16_t queue_id; } __rte_cache_aligned; struct lcore_conf { diff --git a/examples/l3fwd/l3fwd_acl.c b/examples/l3fwd/l3fwd_acl.c index 401692bcec..2bd63181bc 100644 --- a/examples/l3fwd/l3fwd_acl.c +++ b/examples/l3fwd/l3fwd_acl.c @@ -997,7 +997,7 @@ acl_main_loop(__rte_unused void *dummy) uint64_t prev_tsc, diff_tsc, cur_tsc; int i, nb_rx; uint16_t portid; - uint8_t queueid; + uint16_t queueid; struct lcore_conf *qconf; int socketid; const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) @@ -1020,7 +1020,7 @@ acl_main_loop(__rte_unused void *dummy) portid = qconf->rx_queue_list[i].port_id; queueid = qconf->rx_queue_list[i].queue_id; RTE_LOG(INFO, L3FWD, - " -- lcoreid=%u portid=%u rxqueueid=%hhu\n", + " -- lcoreid=%u portid=%u rxqueueid=%hu\n", lcore_id, portid, queueid); } diff --git a/examples/l3fwd/l3fwd_em.c b/examples/l3fwd/l3fwd_em.c index 40e102b38a..cd2bb4a4bb 100644 --- a/examples/l3fwd/l3fwd_em.c +++ b/examples/l3fwd/l3fwd_em.c @@ -586,7 +586,7 @@ em_main_loop(__rte_unused void *dummy) unsigned lcore_id; uint64_t prev_tsc, diff_tsc, cur_tsc; int i, nb_rx; - uint8_t queueid; + uint16_t queueid; uint16_t portid; struct lcore_conf *qconf; const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) / @@ -609,7 +609,7 @@ em_main_loop(__rte_unused void *dummy) portid = qconf->rx_queue_list[i].port_id; queueid = qconf->rx_queue_list[i].queue_id; RTE_LOG(INFO, L3FWD, - " -- lcoreid=%u portid=%u rxqueueid=%hhu\n", + " -- lcoreid=%u portid=%u rxqueueid=%hu\n", lcore_id, portid, queueid); } diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h index 9aad358003..c6a4a89127 100644 --- a/examples/l3fwd/l3fwd_event.h +++ b/examples/l3fwd/l3fwd_event.h @@ -78,8 +78,8 @@ struct l3fwd_event_resources { uint8_t deq_depth; uint8_t has_burst; uint8_t enabled; - uint8_t eth_rx_queues; uint8_t vector_enabled; + uint16_t eth_rx_queues; uint16_t vector_size; uint64_t vector_tmo_ns; }; diff --git a/examples/l3fwd/l3fwd_fib.c b/examples/l3fwd/l3fwd_fib.c index 6a21984415..7da55f707a 100644 --- a/examples/l3fwd/l3fwd_fib.c +++ b/examples/l3fwd/l3fwd_fib.c @@ -186,7 +186,7 @@ fib_main_loop(__rte_unused void *dummy) uint64_t prev_tsc, diff_tsc, cur_tsc; int i, nb_rx; uint16_t portid; - uint8_t queueid; + uint16_t queueid; struct lcore_conf *qconf; const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S * BURST_TX_DRAIN_US; @@ -208,7 +208,7 @@ fib_main_loop(__rte_unused void *dummy) portid = qconf->rx_queue_list[i].port_id; queueid = qconf->rx_queue_list[i].queue_id; RTE_LOG(INFO, L3FWD, - " -- lcoreid=%u portid=%u rxqueueid=%hhu\n", + " -- lcoreid=%u portid=%u rxqueueid=%hu\n", lcore_id, portid, queueid); } diff --git 
a/examples/l3fwd/l3fwd_lpm.c b/examples/l3fwd/l3fwd_lpm.c index a484a33089..01d38bc69c 100644 --- a/examples/l3fwd/l3fwd_lpm.c +++ b/examples/l3fwd/l3fwd_lpm.c @@ -148,8 +148,7 @@ lpm_main_loop(__rte_unused void *dummy) unsigned lcore_id; uint64_t prev_tsc, diff_tsc, cur_tsc; int i, nb_rx; - uint16_t portid; - uint8_t queueid; + uint16_t portid, queueid; struct lcore_conf *qconf; const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S * BURST_TX_DRAIN_US; @@ -171,7 +170,7 @@ lpm_main_loop(__rte_unused void *dummy) portid = qconf->rx_queue_list[i].port_id; queueid = qconf->r
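With the widened types, a --config entry can now name an lcore above 255, provided the build's RTE_MAX_LCORE and the platform actually provide such a core. A hypothetical invocation (core 300 is only an example):

    ./dpdk-l3fwd -l 0,300 -n 4 -- -p 0x1 --config="(0,0,300)"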
[PATCH v3 2/6] examples/l3fwd-power: fix lcore ID restriction
Currently the config option allows lcore IDs up to 255, irrespective of RTE_MAX_LCORES and needs to be fixed. The patch allows config options based on DPDK config. Fixes: f88e7c175a68 ("examples/l3fwd-power: add high/regular perf cores options") Cc: radu.nico...@intel.com Cc: sta...@dpdk.org Signed-off-by: Sivaprasad Tummala --- examples/l3fwd-power/main.c | 57 examples/l3fwd-power/main.h | 4 +-- examples/l3fwd-power/perf_core.c | 10 +++--- 3 files changed, 35 insertions(+), 36 deletions(-) diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c index f4adcf41b5..d0f3c332ee 100644 --- a/examples/l3fwd-power/main.c +++ b/examples/l3fwd-power/main.c @@ -214,7 +214,7 @@ enum freq_scale_hint_t struct lcore_rx_queue { uint16_t port_id; - uint8_t queue_id; + uint16_t queue_id; enum freq_scale_hint_t freq_up_hint; uint32_t zero_rx_packet_count; uint32_t idle_hint; @@ -838,7 +838,7 @@ sleep_until_rx_interrupt(int num, int lcore) struct rte_epoll_event event[num]; int n, i; uint16_t port_id; - uint8_t queue_id; + uint16_t queue_id; void *data; if (status[lcore].wakeup) { @@ -850,9 +850,9 @@ sleep_until_rx_interrupt(int num, int lcore) n = rte_epoll_wait(RTE_EPOLL_PER_THREAD, event, num, 10); for (i = 0; i < n; i++) { data = event[i].epdata.data; - port_id = ((uintptr_t)data) >> CHAR_BIT; + port_id = ((uintptr_t)data) >> (sizeof(uint16_t) * CHAR_BIT); queue_id = ((uintptr_t)data) & - RTE_LEN2MASK(CHAR_BIT, uint8_t); + RTE_LEN2MASK((sizeof(uint16_t) * CHAR_BIT), uint16_t); RTE_LOG(INFO, L3FWD_POWER, "lcore %u is waked up from rx interrupt on" " port %d queue %d\n", @@ -867,7 +867,7 @@ static void turn_on_off_intr(struct lcore_conf *qconf, bool on) { int i; struct lcore_rx_queue *rx_queue; - uint8_t queue_id; + uint16_t queue_id; uint16_t port_id; for (i = 0; i < qconf->n_rx_queue; ++i) { @@ -887,7 +887,7 @@ static void turn_on_off_intr(struct lcore_conf *qconf, bool on) static int event_register(struct lcore_conf *qconf) { struct lcore_rx_queue *rx_queue; - uint8_t queueid; + uint16_t queueid; uint16_t portid; uint32_t data; int ret; @@ -897,7 +897,7 @@ static int event_register(struct lcore_conf *qconf) rx_queue = &(qconf->rx_queue_list[i]); portid = rx_queue->port_id; queueid = rx_queue->queue_id; - data = portid << CHAR_BIT | queueid; + data = portid << (sizeof(uint16_t) * CHAR_BIT) | queueid; ret = rte_eth_dev_rx_intr_ctl_q(portid, queueid, RTE_EPOLL_PER_THREAD, @@ -917,8 +917,7 @@ static int main_intr_loop(__rte_unused void *dummy) unsigned int lcore_id; uint64_t prev_tsc, diff_tsc, cur_tsc; int i, j, nb_rx; - uint8_t queueid; - uint16_t portid; + uint16_t portid, queueid; struct lcore_conf *qconf; struct lcore_rx_queue *rx_queue; uint32_t lcore_rx_idle_count = 0; @@ -946,7 +945,7 @@ static int main_intr_loop(__rte_unused void *dummy) portid = qconf->rx_queue_list[i].port_id; queueid = qconf->rx_queue_list[i].queue_id; RTE_LOG(INFO, L3FWD_POWER, - " -- lcoreid=%u portid=%u rxqueueid=%hhu\n", + " -- lcoreid=%u portid=%u rxqueueid=%hu\n", lcore_id, portid, queueid); } @@ -1083,8 +1082,7 @@ main_telemetry_loop(__rte_unused void *dummy) unsigned int lcore_id; uint64_t prev_tsc, diff_tsc, cur_tsc, prev_tel_tsc; int i, j, nb_rx; - uint8_t queueid; - uint16_t portid; + uint16_t portid, queueid; struct lcore_conf *qconf; struct lcore_rx_queue *rx_queue; uint64_t ep_nep[2] = {0}, fp_nfp[2] = {0}; @@ -1114,7 +1112,7 @@ main_telemetry_loop(__rte_unused void *dummy) portid = qconf->rx_queue_list[i].port_id; queueid = qconf->rx_queue_list[i].queue_id; RTE_LOG(INFO, L3FWD_POWER, " -- lcoreid=%u 
portid=%u " - "rxqueueid=%hhu\n", lcore_id, portid, queueid); + "rxqueueid=%hu\n", lcore_id, portid, queueid); } while (!is_done()) { @@ -1205,8 +1203,7 @@ main_legacy_loop(__rte_unused void *dummy) uint64_t prev_tsc, diff_tsc, cur_tsc, tim_res_tsc, hz; uint64_t prev_tsc_power = 0, cur_tsc_power, diff_tsc_power; int i, j, nb_rx; - uint8_t queueid; - uint16_t portid; + uint16_t portid, queueid; struct lcore_conf *qconf;
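The least obvious part of this patch is the interrupt path: port and queue are packed into the 32-bit epoll user data, so the shift and mask had to grow with the 16-bit queue ID. A small sketch mirroring the expressions used in the diff:

    #include <limits.h>
    #include <stdint.h>
    #include <rte_common.h>

    static inline uint32_t
    pack_port_queue(uint16_t port_id, uint16_t queue_id)
    {
            return (uint32_t)port_id << (sizeof(uint16_t) * CHAR_BIT) | queue_id;
    }

    static inline void
    unpack_port_queue(uint32_t data, uint16_t *port_id, uint16_t *queue_id)
    {
            *port_id = data >> (sizeof(uint16_t) * CHAR_BIT);
            *queue_id = data & RTE_LEN2MASK(sizeof(uint16_t) * CHAR_BIT, uint16_t);
    }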
[PATCH v3 3/6] examples/l3fwd-graph: fix lcore ID restriction
Currently the config option allows lcore IDs up to 255, irrespective of RTE_MAX_LCORES and needs to be fixed. The patch allows config options based on DPDK config. Fixes: 08bd1a174461 ("examples/l3fwd-graph: add graph-based l3fwd skeleton") Cc: ndabilpu...@marvell.com Cc: sta...@dpdk.org Signed-off-by: Sivaprasad Tummala --- examples/l3fwd-graph/main.c | 31 --- 1 file changed, 16 insertions(+), 15 deletions(-) diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c index 96cb1c81ff..ffb6900fee 100644 --- a/examples/l3fwd-graph/main.c +++ b/examples/l3fwd-graph/main.c @@ -90,7 +90,7 @@ static int pcap_trace_enable; struct lcore_rx_queue { uint16_t port_id; - uint8_t queue_id; + uint16_t queue_id; char node_name[RTE_NODE_NAMESIZE]; }; @@ -110,8 +110,8 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE]; struct lcore_params { uint16_t port_id; - uint8_t queue_id; - uint8_t lcore_id; + uint16_t queue_id; + uint16_t lcore_id; } __rte_cache_aligned; static struct lcore_params lcore_params_array[MAX_LCORE_PARAMS]; @@ -205,19 +205,19 @@ check_worker_model_params(void) static int check_lcore_params(void) { - uint8_t queue, lcore; + uint16_t queue; int socketid; - uint16_t i; + uint16_t i, lcore; for (i = 0; i < nb_lcore_params; ++i) { queue = lcore_params[i].queue_id; if (queue >= MAX_RX_QUEUE_PER_PORT) { - printf("Invalid queue number: %hhu\n", queue); + printf("Invalid queue number: %hu\n", queue); return -1; } lcore = lcore_params[i].lcore_id; if (!rte_lcore_is_enabled(lcore)) { - printf("Error: lcore %hhu is not enabled in lcore mask\n", + printf("Error: lcore %hu is not enabled in lcore mask\n", lcore); return -1; } @@ -228,7 +228,7 @@ check_lcore_params(void) } socketid = rte_lcore_to_socket_id(lcore); if ((socketid != 0) && (numa_on == 0)) { - printf("Warning: lcore %hhu is on socket %d with numa off\n", + printf("Warning: lcore %hu is on socket %d with numa off\n", lcore, socketid); } } @@ -257,7 +257,7 @@ check_port_config(void) return 0; } -static uint8_t +static uint16_t get_port_n_rx_queues(const uint16_t port) { int queue = -1; @@ -275,14 +275,14 @@ get_port_n_rx_queues(const uint16_t port) } } - return (uint8_t)(++queue); + return (uint16_t)(++queue); } static int init_lcore_rx_queues(void) { uint16_t i, nb_rx_queue; - uint8_t lcore; + uint16_t lcore; for (i = 0; i < nb_lcore_params; ++i) { lcore = lcore_params[i].lcore_id; @@ -448,11 +448,11 @@ parse_config(const char *q_arg) } lcore_params_array[nb_lcore_params].port_id = - (uint8_t)int_fld[FLD_PORT]; + (uint16_t)int_fld[FLD_PORT]; lcore_params_array[nb_lcore_params].queue_id = - (uint8_t)int_fld[FLD_QUEUE]; + (uint16_t)int_fld[FLD_QUEUE]; lcore_params_array[nb_lcore_params].lcore_id = - (uint8_t)int_fld[FLD_LCORE]; + (uint16_t)int_fld[FLD_LCORE]; ++nb_lcore_params; } lcore_params = lcore_params_array; @@ -1011,7 +1011,8 @@ main(int argc, char **argv) "ethdev_tx-*", "pkt_drop", }; - uint8_t nb_rx_queue, queue, socketid; + uint8_t socketid; + uint16_t nb_rx_queue, queue; struct rte_graph_param graph_conf; struct rte_eth_dev_info dev_info; uint32_t nb_ports, nb_conf = 0; -- 2.25.1
[PATCH v3 4/6] examples/ipsec-secgw: fix lcore ID restriction
Currently the config option allows lcore IDs up to 255, irrespective of RTE_MAX_LCORES and needs to be fixed. The patch allows config options based on DPDK config. Fixes: d299106e8e31 ("examples/ipsec-secgw: add IPsec sample application") Cc: sergio.gonzalez.mon...@intel.com Cc: sta...@dpdk.org Signed-off-by: Sivaprasad Tummala Acked-by: Konstantin Ananyev --- examples/ipsec-secgw/event_helper.h | 2 +- examples/ipsec-secgw/ipsec-secgw.c | 32 ++--- examples/ipsec-secgw/ipsec.c| 2 +- examples/ipsec-secgw/ipsec.h| 2 +- examples/ipsec-secgw/ipsec_worker.c | 10 - 5 files changed, 23 insertions(+), 25 deletions(-) diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index dfb81bfcf1..9923700f03 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -102,7 +102,7 @@ struct eh_event_link_info { /**< Event port ID */ uint8_t eventq_id; /**< Event queue to be linked to the port */ - uint8_t lcore_id; + uint16_t lcore_id; /**< Lcore to be polling on this port */ }; diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index bf98d2618b..f03a93259c 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -220,8 +220,8 @@ static const char *cfgfile; struct lcore_params { uint16_t port_id; - uint8_t queue_id; - uint8_t lcore_id; + uint16_t queue_id; + uint16_t lcore_id; } __rte_cache_aligned; static struct lcore_params lcore_params_array[MAX_LCORE_PARAMS]; @@ -696,8 +696,7 @@ ipsec_poll_mode_worker(void) uint32_t lcore_id; uint64_t prev_tsc, diff_tsc, cur_tsc; int32_t i, nb_rx; - uint16_t portid; - uint8_t queueid; + uint16_t portid, queueid; struct lcore_conf *qconf; int32_t rc, socket_id; const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) @@ -789,8 +788,7 @@ int check_flow_params(uint16_t fdir_portid, uint8_t fdir_qid) { uint16_t i; - uint16_t portid; - uint8_t queueid; + uint16_t portid, queueid; for (i = 0; i < nb_lcore_params; ++i) { portid = lcore_params_array[i].port_id; @@ -810,7 +808,7 @@ check_flow_params(uint16_t fdir_portid, uint8_t fdir_qid) static int32_t check_poll_mode_params(struct eh_conf *eh_conf) { - uint8_t lcore; + uint16_t lcore; uint16_t portid; uint16_t i; int32_t socket_id; @@ -829,13 +827,13 @@ check_poll_mode_params(struct eh_conf *eh_conf) for (i = 0; i < nb_lcore_params; ++i) { lcore = lcore_params[i].lcore_id; if (!rte_lcore_is_enabled(lcore)) { - printf("error: lcore %hhu is not enabled in " + printf("error: lcore %hu is not enabled in " "lcore mask\n", lcore); return -1; } socket_id = rte_lcore_to_socket_id(lcore); if (socket_id != 0 && numa_on == 0) { - printf("warning: lcore %hhu is on socket %d " + printf("warning: lcore %hu is on socket %d " "with numa off\n", lcore, socket_id); } @@ -852,7 +850,7 @@ check_poll_mode_params(struct eh_conf *eh_conf) return 0; } -static uint8_t +static uint16_t get_port_nb_rx_queues(const uint16_t port) { int32_t queue = -1; @@ -863,14 +861,14 @@ get_port_nb_rx_queues(const uint16_t port) lcore_params[i].queue_id > queue) queue = lcore_params[i].queue_id; } - return (uint8_t)(++queue); + return (uint16_t)(++queue); } static int32_t init_lcore_rx_queues(void) { uint16_t i, nb_rx_queue; - uint8_t lcore; + uint16_t lcore; for (i = 0; i < nb_lcore_params; ++i) { lcore = lcore_params[i].lcore_id; @@ -1051,6 +1049,8 @@ parse_config(const char *q_arg) char *str_fld[_NUM_FLD]; int32_t i; uint32_t size; + uint16_t max_fld[_NUM_FLD] = {RTE_MAX_ETHPORTS, + USHRT_MAX, RTE_MAX_LCORE}; nb_lcore_params = 
0; @@ -1071,7 +1071,7 @@ parse_config(const char *q_arg) for (i = 0; i < _NUM_FLD; i++) { errno = 0; int_fld[i] = strtoul(str_fld[i], &end, 0); - if (errno != 0 || end == str_fld[i] || int_fld[i] > 255) + if (errno != 0 || end == str_fld[i] || int_fld[i] > max_fld[i]) return -1; } if (nb_lcore_params >= MAX_LCORE_PARAMS) { @@ -1080,11 +1080,11 @@ parse_config(const char *q_arg)
[PATCH v3 5/6] examples/qos_sched: fix lcore ID restriction
Currently the config option allows lcore IDs up to 255, irrespective of RTE_MAX_LCORES and needs to be fixed. The patch allows config options based on DPDK config. Fixes: de3cfa2c9823 ("sched: initial import") Cc: sta...@dpdk.org Signed-off-by: Sivaprasad Tummala --- examples/qos_sched/args.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/examples/qos_sched/args.c b/examples/qos_sched/args.c index e97273152a..22fe76eeb5 100644 --- a/examples/qos_sched/args.c +++ b/examples/qos_sched/args.c @@ -182,10 +182,10 @@ app_parse_flow_conf(const char *conf_str) pconf->rx_port = vals[0]; pconf->tx_port = vals[1]; - pconf->rx_core = (uint8_t)vals[2]; - pconf->wt_core = (uint8_t)vals[3]; + pconf->rx_core = (uint16_t)vals[2]; + pconf->wt_core = (uint16_t)vals[3]; if (ret == 5) - pconf->tx_core = (uint8_t)vals[4]; + pconf->tx_core = (uint16_t)vals[4]; else pconf->tx_core = pconf->wt_core; -- 2.25.1
[PATCH v3 6/6] examples/vm_power_manager: fix lcore ID restriction
Currently the config option allows lcore IDs up to 255, irrespective of RTE_MAX_LCORES and needs to be fixed. The patch allows config options based on DPDK config. Fixes: 0e8f47491f09 ("examples/vm_power: add command to query CPU frequency") Cc: marcinx.hajkow...@intel.com Cc: sta...@dpdk.org Signed-off-by: Sivaprasad Tummala --- examples/vm_power_manager/guest_cli/vm_power_cli_guest.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c b/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c index 94bfbbaf78..a586853a76 100644 --- a/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c +++ b/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c @@ -401,7 +401,7 @@ check_response_cmd(unsigned int lcore_id, int *result) struct cmd_set_cpu_freq_result { cmdline_fixed_string_t set_cpu_freq; - uint8_t lcore_id; + uint16_t lcore_id; cmdline_fixed_string_t cmd; }; @@ -444,7 +444,7 @@ cmdline_parse_token_string_t cmd_set_cpu_freq = set_cpu_freq, "set_cpu_freq"); cmdline_parse_token_num_t cmd_set_cpu_freq_core_num = TOKEN_NUM_INITIALIZER(struct cmd_set_cpu_freq_result, - lcore_id, RTE_UINT8); + lcore_id, RTE_UINT16); cmdline_parse_token_string_t cmd_set_cpu_freq_cmd_cmd = TOKEN_STRING_INITIALIZER(struct cmd_set_cpu_freq_result, cmd, "up#down#min#max#enable_turbo#disable_turbo"); -- 2.25.1
[PATCH] config/x86: config support for AMD EPYC processors
On x86 platforms, max lcores are limited to 128 by default. On AMD EPYC processors, this limit was adjusted for native builds in the previous patch. https://patches.dpdk.org/project/dpdk/patch/20230925151027.558546-1-sivaprasad.tumm...@amd.com/ As agreed earlier on the mailing list, this patch adjusts the limit for specific AMD EPYC target/cross builds. Signed-off-by: Sivaprasad Tummala --- config/x86/meson.build | 10 +- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/config/x86/meson.build b/config/x86/meson.build index 5355731cef..f2df4c2003 100644 --- a/config/x86/meson.build +++ b/config/x86/meson.build @@ -91,13 +91,21 @@ epyc_zen_cores = { '__znver1__':128 } -if get_option('platform') == 'native' +cpu_instruction_set = get_option('cpu_instruction_set') +if cpu_instruction_set == 'native' foreach m:epyc_zen_cores.keys() if cc.get_define(m, args: machine_args) != '' dpdk_conf.set('RTE_MAX_LCORE', epyc_zen_cores[m]) break endif endforeach +else +foreach m:epyc_zen_cores.keys() +if m.contains(cpu_instruction_set) +dpdk_conf.set('RTE_MAX_LCORE', epyc_zen_cores[m]) + break +endif +endforeach endif dpdk_conf.set('RTE_MAX_NUMA_NODES', 32) -- 2.25.1
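As a usage note, with this change a non-native build that targets a Zen ISA picks up the larger RTE_MAX_LCORE through the existing cpu_instruction_set option, for example (assuming the toolchain supports the chosen -march value):

    meson setup build -Dcpu_instruction_set=znver3
    ninja -C build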
22.11.4 patches review and test
Hi all, Here is a list of patches targeted for stable release 22.11.4. The planned date for the final release is 5th January. Please help with testing and validation of your use cases and report any issues/results with reply-all to this mail. For the final release the fixes and reported validations will be added to the release notes. A release candidate tarball can be found at: https://dpdk.org/browse/dpdk-stable/tag/?id=v22.11.4-rc3 These patches are located at branch 22.11 of dpdk-stable repo: https://dpdk.org/browse/dpdk-stable/ Thanks. Xueming Li --- Aakash Sasidharan (2): event/cnxk: fix return values for capability API test/event: fix crypto null device creation Abdullah Sevincer (3): bus/pci: add PASID control event/dlb2: disable PASID event/dlb2: fix disable PASID Akhil Goyal (2): common/cnxk: fix different size bit operations net/cnxk: fix uninitialized variable Alex Vesker (1): net/mlx5/hws: fix field copy bind Alexander Kozyrev (3): net/mlx5/hws: fix integrity bits level net/mlx5: fix MPRQ stride size check ethdev: fix ESP packet type description Amit Prakash Shukla (4): common/cnxk: fix DPI memzone name dma/cnxk: fix device state dma/cnxk: fix device reconfigure dma/cnxk: fix chunk buffer failure return code Anatoly Burakov (1): test: fix named test macro Anoob Joseph (2): cryptodev: add missing doc for security context doc: replace code blocks with includes in security guide Artemy Kovalyov (1): mem: fix deadlock with multiprocess Ashwin Sekhar T K (2): mempool/cnxk: fix alloc from non-EAL threads common/cnxk: fix aura disable handling Beilei Xing (1): net/i40e: fix FDIR queue receives broadcast packets Bing Zhao (3): net/mlx5: fix flow workspace double free in Windows net/mlx5: fix shared Rx queue list management net/mlx5: fix LACP redirection in Rx domain Brian Dooley (4): test/crypto: fix IV in some vectors test/crypto: skip some synchronous tests with CPU crypto doc: update kernel module entry in QAT guide examples/ipsec-secgw: fix partial overflow Bruce Richardson (8): crypto/ipsec_mb: add dependency check for cross build event/sw: remove obsolete comment net/i40e: fix buffer leak on Rx reconfiguration eventdev: fix device pointer for vdev-based devices eventdev: fix missing driver names in info struct ethdev: fix function name in comment event/dlb2: fix name check in self-test event/dlb2: fix missing queue ordering capability flag Chaoyong He (5): net/nfp: fix crash on close net/nfp: fix reconfigure logic in PF initialization net/nfp: fix reconfigure logic in VF initialization net/nfp: fix link status interrupt net/nfp: fix reconfigure logic of set MAC address Chengwen Feng (2): net/hns3: fix traffic management thread safety net/hns3: fix traffic management dump text alignment Christian Ehrhardt (1): config: fix RISC-V native build Ciara Power (2): crypto/qat: fix raw API null algorithm digest crypto/openssl: fix memory leaks in asym session Dariusz Sosnowski (8): net/mlx5: fix jump ipool entry size net/mlx5: fix flow thread safety flag for HWS common/mlx5: fix controller index parsing net/mlx5: fix missing flow rules for external SQ net/mlx5: fix use after free on Rx queue start net/mlx5: fix hairpin queue unbind net/mlx5: fix hairpin queue states net/mlx5: fix offset size in conntrack flow action David Christensen (1): net/tap: use MAC address parse API instead of local parser David Marchand (22): ci: fix race on container image name mempool: fix default ops for an empty mempool crypto/dpaa2_sec: fix debug prints crypto/dpaa_sec: fix debug prints eventdev: fix symbol 
export for port maintenance common/cnxk: remove dead Meson code app/bbdev: fix link with NXP LA12XX net/iavf: fix checksum offloading net/iavf: fix Tx debug net/iavf: remove log from Tx prepare function net/iavf: fix TSO with big segments net/ice: remove log from Tx prepare function net/ice: fix TSO with big segments net/mlx5: fix leak in sysfs port name translation net/bonding: fix link status callback stop bus/ifpga: fix driver header dependency net/tap: fix L4 checksum offloading net/tap: fix IPv4 checksum offloading net/iavf: fix indent in Tx path doc: remove restriction on ixgbe vector support doc: fix some ordered lists doc: remove number of commands in vDPA guide Dengdui Huang (14): net/hns3: fix VF default MAC modified when set failed net/hns3: fix error code for multicast resource net/hns3: fix flushing multicast MAC address app/testpmd: fix help string net/hns3: fix unchecked Rx free th
RE: [PATCH] config/x86: config support for AMD EPYC processors
> From: Sivaprasad Tummala [mailto:sivaprasad.tumm...@amd.com] > Sent: Wednesday, 20 December 2023 08.11 > > On x86 platforms, max lcores are limited to 128 by default. > > On AMD EPYC processors, this limit was adjusted for native > builds in the previous patch. > https://patches.dpdk.org/project/dpdk/patch/ > 20230925151027.558546-1-sivaprasad.tumm...@amd.com/ > > As agreed earlier in mailing list, this patch adjusts the limit > for specific AMD EPYC target/cross builds. > > Signed-off-by: Sivaprasad Tummala > --- [...] > +foreach m:epyc_zen_cores.keys() > +if m.contains(cpu_instruction_set) > +dpdk_conf.set('RTE_MAX_LCORE', epyc_zen_cores[m]) > + break The indentation of "break" uses a mix of tab and spaces, and should be fixed. Acked-by: Morten Brørup
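For reference, the hunk in question with consistent 4-space indentation (no tab before "break") would read:

    else
        foreach m:epyc_zen_cores.keys()
            if m.contains(cpu_instruction_set)
                dpdk_conf.set('RTE_MAX_LCORE', epyc_zen_cores[m])
                break
            endif
        endforeach
    endif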