Re: [dpdk-dev] [RFC] ethdev: improve link speed to string
> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Min Hu (Connor)
> Sent: Friday, 17 September 2021 02.44
>
> Agree with you. Thanks Andrew
>
> On 2021/9/16 16:21, Andrew Rybchenko wrote:
> > On 9/16/21 11:16 AM, Min Hu (Connor) wrote:
> >> Hi, Andrew,
> >>
> >> On 2021/9/16 14:22, Andrew Rybchenko wrote:
> >>> On 9/16/21 5:56 AM, Min Hu (Connor) wrote:
> Currently, link speed to string only supports specific speeds,
> like 10M, 100M, 1G etc.
>
> This patch expands support for any link speed which is over 1M and
> at most one decimal place will be kept for display.
>
> Signed-off-by: Min Hu (Connor)
> ---
> lib/ethdev/rte_ethdev.c | 34 +-
> 1 file changed, 17 insertions(+), 17 deletions(-)
>
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index daf5ca9242..1d3b960305 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -2750,24 +2750,24 @@ rte_eth_link_get_nowait(uint16_t port_id,
> struct rte_eth_link *eth_link)
> const char *
> rte_eth_link_speed_to_str(uint32_t link_speed)
> {
> - switch (link_speed) {
> - case ETH_SPEED_NUM_NONE: return "None";
> - case ETH_SPEED_NUM_10M: return "10 Mbps";
> - case ETH_SPEED_NUM_100M: return "100 Mbps";
> - case ETH_SPEED_NUM_1G: return "1 Gbps";
> - case ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
> - case ETH_SPEED_NUM_5G: return "5 Gbps";
> - case ETH_SPEED_NUM_10G: return "10 Gbps";
> - case ETH_SPEED_NUM_20G: return "20 Gbps";
> - case ETH_SPEED_NUM_25G: return "25 Gbps";
> - case ETH_SPEED_NUM_40G: return "40 Gbps";
> - case ETH_SPEED_NUM_50G: return "50 Gbps";
> - case ETH_SPEED_NUM_56G: return "56 Gbps";
> - case ETH_SPEED_NUM_100G: return "100 Gbps";
> - case ETH_SPEED_NUM_200G: return "200 Gbps";
> - case ETH_SPEED_NUM_UNKNOWN: return "Unknown";
> - default: return "Invalid";
> +#define SPEED_STRING_LEN 16
> + static char name[SPEED_STRING_LEN];
> >>>
> >>> NACK
> >>>
> >>> Nothing good will happen if you try to use the function to
> >>> print two different link speeds in one log message.
> >> You are right.
> >> And using malloc for "name" will result in memory leakage, which is also
> >> not a good option.
> >>
> >> BTW, do you think we need to modify the function
> >> "rte_eth_link_speed_to_str"?
> >
> > IMHO it would be more pain than gain in this case.

If the ETH_SPEED_NUM_xyz values were an enum instead of #define, the default
case could be removed from this switch, and the compiler would emit a warning
if a new ETH_SPEED_NUM_xyz was introduced without adding a case for it in this
function.

-Morten
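For reference, a caller-provided buffer avoids the shared static storage that
triggered the NACK above. The helper below is only an illustrative sketch (the
function name, buffer contract and truncation behaviour are assumptions, not
part of the patch); it formats any speed above 1 Mbps with at most one decimal
place, which is what the RFC aims for, while staying safe to call twice in one
log statement.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper (not the patch): format link_speed (in Mbps) into a
 * caller-provided buffer so two speeds can safely appear in one log message.
 */
static const char *
link_speed_to_str(uint32_t link_speed, char *buf, size_t len)
{
	if (link_speed == 0)
		return "None";          /* ETH_SPEED_NUM_NONE */
	if (link_speed == UINT32_MAX)
		return "Unknown";       /* ETH_SPEED_NUM_UNKNOWN */

	if (link_speed < 1000)
		snprintf(buf, len, "%u Mbps", link_speed);
	else if (link_speed % 1000 == 0)
		snprintf(buf, len, "%u Gbps", link_speed / 1000);
	else
		snprintf(buf, len, "%u.%u Gbps", link_speed / 1000,
			 (link_speed % 1000) / 100); /* one decimal place */
	return buf;
}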
Re: [dpdk-dev] [RFC] mempool: implement index-based per core cache
> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Honnappa Nagarahalli
> Sent: Monday, 4 October 2021 18.36
>
> > > > > Current mempool per core cache implementation is based on pointer
> > > > > For most architectures, each pointer consumes 64b Replace it with
> > > > > index-based implementation, where in each buffer is addressed by
> > > > > (pool address + index)

I like Dharmik's suggestion very much. CPU cache is a critical and limited
resource. DPDK has a tendency of using pointers where indexes could be used
instead. I suppose pointers provide the additional flexibility of mixing
entries from different memory pools, e.g. multiple mbuf pools.

> > > > I don't think it is going to work:
> > > > On 64-bit systems difference between pool address and it's elem
> > > > address could be bigger than 4GB.
> > > Are you talking about a case where the memory pool size is more than 4GB?
> >
> > That is one possible scenario.

That could be solved by making the index an element index instead of a
pointer offset: address = (pool address + index * element size).

> > Another possibility - user populates mempool himself with some external
> > memory by calling rte_mempool_populate_iova() directly.
> Is the concern that IOVA might not be contiguous for all the memory
> used by the mempool?
>
> > I suppose such situation can even occur even with normal
> > rte_mempool_create(), though it should be a really rare one.

All in all, this feature needs to be configurable during compile time.
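A minimal sketch of the element-index scheme discussed above (the function
names are illustrative, not from the RFC): with a fixed element size, a 32-bit
index can address up to 4G elements rather than 4GB of bytes, at the cost of a
multiply and divide on each conversion, and it only works while all elements
live in one virtually contiguous area.

#include <stdint.h>

/* Hypothetical index<->pointer conversion for a pool whose elements all
 * live in one contiguous area starting at 'base'.
 */
static inline void *
pool_idx_to_obj(void *base, uint32_t elt_size, uint32_t idx)
{
	return (char *)base + (uint64_t)idx * elt_size;
}

static inline uint32_t
pool_obj_to_idx(void *base, uint32_t elt_size, void *obj)
{
	return (uint32_t)(((uintptr_t)obj - (uintptr_t)base) / elt_size);
}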
[dpdk-dev] [PATCH 6/6] devbind: add Kunpeng DMA to dmadev category
add Kunpeng DMA device ID to dmadev category.

Signed-off-by: Chengwen Feng
---
 usertools/dpdk-devbind.py | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py
index bb00f43702..a74a68ed82 100755
--- a/usertools/dpdk-devbind.py
+++ b/usertools/dpdk-devbind.py
@@ -68,10 +68,14 @@ intel_ntb_icx = {'Class': '06', 'Vendor': '8086', 'Device': '347e',
              'SVendor': None, 'SDevice': None}

+hisilicon_dma = {'Class': '08', 'Vendor': '19e5', 'Device': 'a122',
+                 'SVendor': None, 'SDevice': None}
+
 network_devices = [network_class, cavium_pkx, avp_vnic, ifpga_class]
 baseband_devices = [acceleration_class]
 crypto_devices = [encryption_class, intel_processor_class]
-dma_devices = [intel_idxd_spr, intel_ioat_bdw, intel_ioat_icx, intel_ioat_skx]
+dma_devices = [intel_idxd_spr, intel_ioat_bdw, intel_ioat_icx, intel_ioat_skx,
+               hisilicon_dma]
 eventdev_devices = [cavium_sso, cavium_tim, intel_dlb, octeontx2_sso]
 mempool_devices = [cavium_fpa, octeontx2_npa]
 compress_devices = [cavium_zip]
--
2.33.0
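Usage note (the PCI address below is only an example taken from the driver
documentation, not part of this patch): once this entry is in place, the
Kunpeng DMA PF can be bound to vfio-pci and will then be grouped with the
other DMA devices in the devbind status output:

  ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:7b:00.0
  ./usertools/dpdk-devbind.py --status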
[dpdk-dev] [PATCH 5/6] dma/hisilicon: support multi-process
This patch add multi-process support for Kunpeng DMA devices. Signed-off-by: Chengwen Feng --- drivers/dma/hisilicon/hisi_dmadev.c | 21 + 1 file changed, 13 insertions(+), 8 deletions(-) diff --git a/drivers/dma/hisilicon/hisi_dmadev.c b/drivers/dma/hisilicon/hisi_dmadev.c index d03967cae3..05066b4d0e 100644 --- a/drivers/dma/hisilicon/hisi_dmadev.c +++ b/drivers/dma/hisilicon/hisi_dmadev.c @@ -392,8 +392,10 @@ hisi_dma_stop(struct rte_dma_dev *dev) static int hisi_dma_close(struct rte_dma_dev *dev) { - /* The dmadev already stopped */ - hisi_dma_free_iomem(dev->data->dev_private); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + /* The dmadev already stopped */ + hisi_dma_free_iomem(dev->data->dev_private); + } return 0; } @@ -815,11 +817,13 @@ hisi_dma_create(struct rte_pci_device *pci_dev, uint8_t queue_id, hw->cq_head_reg = hisi_dma_queue_regaddr(hw, HISI_DMA_QUEUE_CQ_HEAD_REG); - ret = hisi_dma_reset_hw(hw); - if (ret) { - HISI_DMA_LOG(ERR, "%s init device fail!", name); - (void)rte_dma_pmd_release(name); - return -EIO; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + ret = hisi_dma_reset_hw(hw); + if (ret) { + HISI_DMA_LOG(ERR, "%s init device fail!", name); + (void)rte_dma_pmd_release(name); + return -EIO; + } } dev->state = RTE_DMA_DEV_READY; @@ -872,7 +876,8 @@ hisi_dma_probe(struct rte_pci_driver *pci_drv __rte_unused, return ret; HISI_DMA_LOG(DEBUG, "%s read PCI revision: 0x%x", name, revision); - hisi_dma_init_gbl(pci_dev->mem_resource[2].addr, revision); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + hisi_dma_init_gbl(pci_dev->mem_resource[2].addr, revision); for (i = 0; i < HISI_DMA_MAX_HW_QUEUES; i++) { ret = hisi_dma_create(pci_dev, i, revision); -- 2.33.0
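The pattern used in this patch is the usual DPDK multi-process split: only the
primary process touches hardware-global state (engine reset, global registers,
iomem teardown), while secondary processes reuse the already initialised
device. A generic sketch of that gating is shown below; it is not driver code,
just the shape of the check.

#include <rte_eal.h>

static int
example_dev_init(void *dev_private)
{
	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
		/* Hardware-global setup runs exactly once, in the primary:
		 * reset the engine, program global registers, allocate
		 * shared iomem, ...
		 */
	}

	/* Per-process setup (BAR mappings, fast-path pointers) goes here. */
	(void)dev_private;
	return 0;
}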
[dpdk-dev] [PATCH 0/6] dma: add hisilicon DMA driver
This patch set adds the hisilicon DMA driver.

Chengwen Feng (6):
  dma/hisilicon: add device probe and remove functions
  dma/hisilicon: add dmadev instances create and destroy
  dma/hisilicon: add control path functions
  dma/hisilicon: add data path functions
  dma/hisilicon: support multi-process
  devbind: add Kunpeng DMA to dmadev category

 MAINTAINERS                            |   5 +
 doc/guides/dmadevs/hisilicon.rst       |  41 ++
 doc/guides/dmadevs/index.rst           |   1 +
 doc/guides/rel_notes/release_21_11.rst |   4 +
 drivers/dma/hisilicon/hisi_dmadev.c    | 925 +
 drivers/dma/hisilicon/hisi_dmadev.h    | 236 +++
 drivers/dma/hisilicon/meson.build      |   7 +
 drivers/dma/hisilicon/version.map      |   3 +
 drivers/dma/meson.build                |   1 +
 usertools/dpdk-devbind.py              |   6 +-
 10 files changed, 1228 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/dmadevs/hisilicon.rst
 create mode 100644 drivers/dma/hisilicon/hisi_dmadev.c
 create mode 100644 drivers/dma/hisilicon/hisi_dmadev.h
 create mode 100644 drivers/dma/hisilicon/meson.build
 create mode 100644 drivers/dma/hisilicon/version.map
--
2.33.0
[dpdk-dev] [PATCH 4/6] dma/hisilicon: add data path functions
This patch add data path functions for Kunpeng DMA devices. Signed-off-by: Chengwen Feng --- drivers/dma/hisilicon/hisi_dmadev.c | 206 drivers/dma/hisilicon/hisi_dmadev.h | 16 +++ 2 files changed, 222 insertions(+) diff --git a/drivers/dma/hisilicon/hisi_dmadev.c b/drivers/dma/hisilicon/hisi_dmadev.c index bcdcf4de4b..d03967cae3 100644 --- a/drivers/dma/hisilicon/hisi_dmadev.c +++ b/drivers/dma/hisilicon/hisi_dmadev.c @@ -529,6 +529,206 @@ hisi_dma_dump(const struct rte_dma_dev *dev, FILE *f) return 0; } +static int +hisi_dma_copy(void *dev_private, uint16_t vchan, +rte_iova_t src, rte_iova_t dst, +uint32_t length, uint64_t flags) +{ + struct hisi_dma_dev *hw = dev_private; + struct hisi_dma_sqe *sqe = &hw->sqe[hw->sq_tail]; + + RTE_SET_USED(vchan); + + if (((hw->sq_tail + 1) & hw->sq_depth_mask) == hw->sq_head) + return -ENOSPC; + + sqe->dw0 = rte_cpu_to_le_32(SQE_OPCODE_M2M); + sqe->dw1 = 0; + sqe->dw2 = 0; + sqe->length = rte_cpu_to_le_32(length); + sqe->src_addr = rte_cpu_to_le_64(src); + sqe->dst_addr = rte_cpu_to_le_64(dst); + hw->sq_tail = (hw->sq_tail + 1) & hw->sq_depth_mask; + hw->submitted++; + + if (flags & RTE_DMA_OP_FLAG_FENCE) + sqe->dw0 |= rte_cpu_to_le_32(SQE_FENCE_FLAG); + if (flags & RTE_DMA_OP_FLAG_SUBMIT) + rte_write32(rte_cpu_to_le_32(hw->sq_tail), hw->sq_tail_reg); + + return hw->ridx++; +} + +static int +hisi_dma_submit(void *dev_private, uint16_t vchan) +{ + struct hisi_dma_dev *hw = dev_private; + + RTE_SET_USED(vchan); + rte_write32(rte_cpu_to_le_32(hw->sq_tail), hw->sq_tail_reg); + + return 0; +} + +static inline void +hisi_dma_scan_cq(struct hisi_dma_dev *hw) +{ + volatile struct hisi_dma_cqe *cqe; + uint16_t csq_head = hw->cq_sq_head; + uint16_t cq_head = hw->cq_head; + uint16_t count = 0; + uint64_t misc; + + while (true) { + cqe = &hw->cqe[cq_head]; + misc = cqe->misc; + misc = rte_le_to_cpu_64(misc); + if (FIELD_GET(CQE_VALID_B, misc) != hw->cqe_vld) + break; + + csq_head = FIELD_GET(CQE_SQ_HEAD_MASK, misc); + if (unlikely(misc & CQE_STATUS_MASK)) + hw->status[csq_head] = FIELD_GET(CQE_STATUS_MASK, +misc); + + count++; + cq_head++; + if (cq_head == hw->cq_depth) { + hw->cqe_vld = !hw->cqe_vld; + cq_head = 0; + } + } + + if (count == 0) + return; + + hw->cq_head = cq_head; + hw->cq_sq_head = (csq_head + 1) & hw->sq_depth_mask; + hw->cqs_completed += count; + if (hw->cqs_completed >= HISI_DMA_CQ_RESERVED) { + rte_write32(rte_cpu_to_le_32(cq_head), hw->cq_head_reg); + hw->cqs_completed = 0; + } +} + +static inline uint16_t +hisi_dma_calc_cpls(struct hisi_dma_dev *hw, const uint16_t nb_cpls) +{ + uint16_t cpl_num; + + if (hw->cq_sq_head >= hw->sq_head) + cpl_num = hw->cq_sq_head - hw->sq_head; + else + cpl_num = hw->sq_depth_mask + 1 - hw->sq_head + hw->cq_sq_head; + + if (cpl_num > nb_cpls) + cpl_num = nb_cpls; + + return cpl_num; +} + +static uint16_t +hisi_dma_completed(void *dev_private, + uint16_t vchan, const uint16_t nb_cpls, + uint16_t *last_idx, bool *has_error) +{ + struct hisi_dma_dev *hw = dev_private; + uint16_t sq_head = hw->sq_head; + uint16_t cpl_num, i; + + RTE_SET_USED(vchan); + hisi_dma_scan_cq(hw); + + cpl_num = hisi_dma_calc_cpls(hw, nb_cpls); + for (i = 0; i < cpl_num; i++) { + if (hw->status[sq_head]) { + *has_error = true; + break; + } + sq_head = (sq_head + 1) & hw->sq_depth_mask; + } + if (i > 0) { + hw->cridx += i; + *last_idx = hw->cridx - 1; + hw->sq_head = sq_head; + } + hw->completed += i; + + return i; +} + +static enum rte_dma_status_code +hisi_dma_convert_status(uint16_t status) +{ + switch (status) { + case 
HISI_DMA_STATUS_SUCCESS: + return RTE_DMA_STATUS_SUCCESSFUL; + case HISI_DMA_STATUS_INVALID_OPCODE: + return RTE_DMA_STATUS_INVALID_OPCODE; + case HISI_DMA_STATUS_INVALID_LENGTH: + return RTE_DMA_STATUS_INVALID_LENGTH; + case HISI_DMA_STATUS_USER_ABORT: + return RTE_DMA_STATUS_USER_ABORT; + case HISI_DMA_STATUS_REMOTE_READ_ERROR: + case HISI_DMA_STATUS_AXI_READ_ERROR: + return RTE_DMA_STATUS_BUS_READ_ER
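The enqueue and completion logic in this patch relies on the usual
power-of-two ring arithmetic: the depth mask wraps the indices and one slot is
left unused so that a full ring can be told apart from an empty one, which is
why hisi_dma_copy() rejects a request when (tail + 1) & mask equals head. A
standalone sketch of that invariant (illustrative, not the driver code):

#include <stdbool.h>
#include <stdint.h>

struct ring_idx {
	uint16_t head;       /* next descriptor the hardware will consume */
	uint16_t tail;       /* next descriptor software will fill */
	uint16_t depth_mask; /* depth is a power of two, mask = depth - 1 */
};

static inline bool
ring_full(const struct ring_idx *r)
{
	/* Full when advancing tail would make it collide with head. */
	return ((r->tail + 1) & r->depth_mask) == r->head;
}

static inline uint16_t
ring_used(const struct ring_idx *r)
{
	/* Number of descriptors submitted but not yet completed. */
	return (r->tail - r->head) & r->depth_mask;
}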
[dpdk-dev] [PATCH 2/6] dma/hisilicon: add dmadev instances create and destroy
This patch add dmadev instances create during the PCI probe, and destroy them during the PCI remove. Internal structures and HW definitions was also included. Signed-off-by: Chengwen Feng --- doc/guides/dmadevs/hisilicon.rst| 10 ++ drivers/dma/hisilicon/hisi_dmadev.c | 212 +++- drivers/dma/hisilicon/hisi_dmadev.h | 97 + 3 files changed, 318 insertions(+), 1 deletion(-) diff --git a/doc/guides/dmadevs/hisilicon.rst b/doc/guides/dmadevs/hisilicon.rst index 4cbaac4204..65138a8365 100644 --- a/doc/guides/dmadevs/hisilicon.rst +++ b/doc/guides/dmadevs/hisilicon.rst @@ -19,3 +19,13 @@ Device Setup Kunpeng DMA devices will need to be bound to a suitable DPDK-supported user-space IO driver such as ``vfio-pci`` in order to be used by DPDK. + +Device Probing and Initialization +~ + +Once probed successfully, the device will appear as four ``dmadev`` which can be +accessed using API from the ``rte_dmadev`` library. + +The name of the ``dmadev`` created is like "B:D.F-chX", e.g. DMA :7b:00.0 +will create four ``dmadev``, the 1st ``dmadev`` name is "7b:00.0-ch0", and the +2nd ``dmadev`` name is "7b:00.0-ch1". diff --git a/drivers/dma/hisilicon/hisi_dmadev.c b/drivers/dma/hisilicon/hisi_dmadev.c index e6fb8a0fc8..b8369e7e71 100644 --- a/drivers/dma/hisilicon/hisi_dmadev.c +++ b/drivers/dma/hisilicon/hisi_dmadev.c @@ -6,7 +6,9 @@ #include #include +#include #include +#include #include #include #include @@ -30,6 +32,141 @@ RTE_LOG_REGISTER_DEFAULT(hisi_dma_logtype, INFO); #define HISI_DMA_ERR(hw, fmt, args...) \ HISI_DMA_LOG_RAW(hw, ERR, fmt, ## args) +static uint32_t +hisi_dma_queue_base(struct hisi_dma_dev *hw) +{ + if (hw->reg_layout == HISI_DMA_REG_LAYOUT_HIP08) + return HISI_DMA_HIP08_QUEUE_BASE; + else + return 0; +} + +static void +hisi_dma_write_reg(void *base, uint32_t off, uint32_t val) +{ + rte_write32(rte_cpu_to_le_32(val), + (volatile void *)((char *)base + off)); +} + +static void +hisi_dma_write_dev(struct hisi_dma_dev *hw, uint32_t off, uint32_t val) +{ + hisi_dma_write_reg(hw->io_base, off, val); +} + +static void +hisi_dma_write_queue(struct hisi_dma_dev *hw, uint32_t qoff, uint32_t val) +{ + uint32_t off = hisi_dma_queue_base(hw) + + hw->queue_id * HISI_DMA_QUEUE_REGION_SIZE + qoff; + hisi_dma_write_dev(hw, off, val); +} + +static uint32_t +hisi_dma_read_reg(void *base, uint32_t off) +{ + uint32_t val = rte_read32((volatile void *)((char *)base + off)); + return rte_le_to_cpu_32(val); +} + +static uint32_t +hisi_dma_read_dev(struct hisi_dma_dev *hw, uint32_t off) +{ + return hisi_dma_read_reg(hw->io_base, off); +} + +static uint32_t +hisi_dma_read_queue(struct hisi_dma_dev *hw, uint32_t qoff) +{ + uint32_t off = hisi_dma_queue_base(hw) + + hw->queue_id * HISI_DMA_QUEUE_REGION_SIZE + qoff; + return hisi_dma_read_dev(hw, off); +} + +static void +hisi_dma_update_bit(struct hisi_dma_dev *hw, uint32_t off, uint32_t pos, + bool set) +{ + uint32_t tmp = hisi_dma_read_dev(hw, off); + uint32_t mask = 1u << pos; + tmp = set ? tmp | mask : tmp & ~mask; + hisi_dma_write_dev(hw, off, tmp); +} + +static void +hisi_dma_update_queue_bit(struct hisi_dma_dev *hw, uint32_t qoff, uint32_t pos, + bool set) +{ + uint32_t tmp = hisi_dma_read_queue(hw, qoff); + uint32_t mask = 1u << pos; + tmp = set ? 
tmp | mask : tmp & ~mask; + hisi_dma_write_queue(hw, qoff, tmp); +} + +#define hisi_dma_poll_hw_state(hw, val, cond, sleep_us, timeout_us) ({ \ + uint32_t timeout = 0; \ + while (timeout++ <= (timeout_us)) { \ + (val) = hisi_dma_read_queue(hw, HISI_DMA_QUEUE_FSM_REG); \ + if (cond) \ + break; \ + rte_delay_us(sleep_us); \ + } \ + (cond) ? 0 : -ETIME; \ +}) + +static int +hisi_dma_reset_hw(struct hisi_dma_dev *hw) +{ +#define POLL_SLEEP_US 100 +#define POLL_TIMEOUT_US1 + + uint32_t tmp; + int ret; + + hisi_dma_update_queue_bit(hw, HISI_DMA_QUEUE_CTRL0_REG, + HISI_DMA_QUEUE_CTRL0_PAUSE_B, true); + hisi_dma_update_queue_bit(hw, HISI_DMA_QUEUE_CTRL0_REG, + HISI_DMA_QUEUE_CTRL0_EN_B, false); + + ret = hisi_dma_poll_hw_state(hw, tmp, + FIELD_GET(HISI_DMA_QUEUE_FSM_STS_M, tmp) != HISI_DMA_STATE_RUN, + POLL_SLEEP_US, POLL_TIMEOUT_US); + if (ret) { + HISI_DMA_ERR(hw, "disable dma timeout!"); + return ret; + } + + hisi_dma_update_queue_bit(hw, HISI_DMA_QUEUE_CTRL1_REG, + HISI_DMA_QUEUE_CTRL1_RESET_B, true); + hisi_dma_write_queue(hw, HISI_DMA_QUEUE_SQ_TAIL_REG, 0); +
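The hisi_dma_poll_hw_state() macro in this patch implements a common
poll-until-condition-with-timeout idiom for the queue FSM register. A
function-style sketch of the same idea is shown below (illustrative only; the
register accessor is a stand-in, not a driver symbol):

#include <errno.h>
#include <stdint.h>

#include <rte_cycles.h>

/* Poll reg_read(ctx) until (value & mask) == expected, or give up after
 * roughly timeout_us microseconds. Returns 0 on success, -ETIME on timeout.
 */
static int
poll_reg(uint32_t (*reg_read)(void *), void *ctx, uint32_t mask,
	 uint32_t expected, uint32_t sleep_us, uint32_t timeout_us)
{
	uint32_t waited = 0;

	do {
		if ((reg_read(ctx) & mask) == expected)
			return 0;
		rte_delay_us(sleep_us);
		waited += sleep_us;
	} while (waited <= timeout_us);

	return -ETIME;
}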
[dpdk-dev] [PATCH 3/6] dma/hisilicon: add control path functions
This patch add control path functions for Kunpeng DMA devices. Signed-off-by: Chengwen Feng --- doc/guides/dmadevs/hisilicon.rst| 10 + drivers/dma/hisilicon/hisi_dmadev.c | 385 drivers/dma/hisilicon/hisi_dmadev.h | 99 +++ 3 files changed, 494 insertions(+) diff --git a/doc/guides/dmadevs/hisilicon.rst b/doc/guides/dmadevs/hisilicon.rst index 65138a8365..24bae86bdc 100644 --- a/doc/guides/dmadevs/hisilicon.rst +++ b/doc/guides/dmadevs/hisilicon.rst @@ -29,3 +29,13 @@ accessed using API from the ``rte_dmadev`` library. The name of the ``dmadev`` created is like "B:D.F-chX", e.g. DMA :7b:00.0 will create four ``dmadev``, the 1st ``dmadev`` name is "7b:00.0-ch0", and the 2nd ``dmadev`` name is "7b:00.0-ch1". + +Device Configuration +~ + +Kunpeng DMA configuration requirements: + +* ``ring_size`` must be a power of two, between 32 and 8192. +* Only one ``vchan`` is supported per ``dmadev``. +* Silent mode is not supported. +* The transfer direction must be set to ``RTE_DMA_DIR_MEM_TO_MEM``. diff --git a/drivers/dma/hisilicon/hisi_dmadev.c b/drivers/dma/hisilicon/hisi_dmadev.c index b8369e7e71..bcdcf4de4b 100644 --- a/drivers/dma/hisilicon/hisi_dmadev.c +++ b/drivers/dma/hisilicon/hisi_dmadev.c @@ -10,6 +10,8 @@ #include #include #include +#include +#include #include #include @@ -41,6 +43,14 @@ hisi_dma_queue_base(struct hisi_dma_dev *hw) return 0; } +static volatile void * +hisi_dma_queue_regaddr(struct hisi_dma_dev *hw, uint32_t qoff) +{ + uint32_t off = hisi_dma_queue_base(hw) + + hw->queue_id * HISI_DMA_QUEUE_REGION_SIZE + qoff; + return (volatile void *)((char *)hw->io_base + off); +} + static void hisi_dma_write_reg(void *base, uint32_t off, uint32_t val) { @@ -103,6 +113,15 @@ hisi_dma_update_queue_bit(struct hisi_dma_dev *hw, uint32_t qoff, uint32_t pos, hisi_dma_write_queue(hw, qoff, tmp); } +static void +hisi_dma_update_queue_mbit(struct hisi_dma_dev *hw, uint32_t qoff, + uint32_t mask, bool set) +{ + uint32_t tmp = hisi_dma_read_queue(hw, qoff); + tmp = set ? 
tmp | mask : tmp & ~mask; + hisi_dma_write_queue(hw, qoff, tmp); +} + #define hisi_dma_poll_hw_state(hw, val, cond, sleep_us, timeout_us) ({ \ uint32_t timeout = 0; \ while (timeout++ <= (timeout_us)) { \ @@ -154,6 +173,45 @@ hisi_dma_reset_hw(struct hisi_dma_dev *hw) return 0; } +static void +hisi_dma_init_hw(struct hisi_dma_dev *hw) +{ + hisi_dma_write_queue(hw, HISI_DMA_QUEUE_SQ_BASE_L_REG, +lower_32_bits(hw->sqe_iova)); + hisi_dma_write_queue(hw, HISI_DMA_QUEUE_SQ_BASE_H_REG, +upper_32_bits(hw->sqe_iova)); + hisi_dma_write_queue(hw, HISI_DMA_QUEUE_CQ_BASE_L_REG, +lower_32_bits(hw->cqe_iova)); + hisi_dma_write_queue(hw, HISI_DMA_QUEUE_CQ_BASE_H_REG, +upper_32_bits(hw->cqe_iova)); + hisi_dma_write_queue(hw, HISI_DMA_QUEUE_SQ_DEPTH_REG, +hw->sq_depth_mask); + hisi_dma_write_queue(hw, HISI_DMA_QUEUE_CQ_DEPTH_REG, hw->cq_depth - 1); + hisi_dma_write_queue(hw, HISI_DMA_QUEUE_SQ_TAIL_REG, 0); + hisi_dma_write_queue(hw, HISI_DMA_QUEUE_CQ_HEAD_REG, 0); + hisi_dma_write_queue(hw, HISI_DMA_QUEUE_ERR_INT_NUM0_REG, 0); + hisi_dma_write_queue(hw, HISI_DMA_QUEUE_ERR_INT_NUM1_REG, 0); + hisi_dma_write_queue(hw, HISI_DMA_QUEUE_ERR_INT_NUM2_REG, 0); + + if (hw->reg_layout == HISI_DMA_REG_LAYOUT_HIP08) { + hisi_dma_write_queue(hw, HISI_DMA_HIP08_QUEUE_ERR_INT_NUM3_REG, +0); + hisi_dma_write_queue(hw, HISI_DMA_HIP08_QUEUE_ERR_INT_NUM4_REG, +0); + hisi_dma_write_queue(hw, HISI_DMA_HIP08_QUEUE_ERR_INT_NUM5_REG, +0); + hisi_dma_write_queue(hw, HISI_DMA_HIP08_QUEUE_ERR_INT_NUM6_REG, +0); + hisi_dma_update_queue_bit(hw, HISI_DMA_QUEUE_CTRL0_REG, + HISI_DMA_HIP08_QUEUE_CTRL0_ERR_ABORT_B, false); + hisi_dma_update_queue_mbit(hw, HISI_DMA_QUEUE_INT_STATUS_REG, + HISI_DMA_HIP08_QUEUE_INT_MASK_M, true); + hisi_dma_update_queue_mbit(hw, + HISI_DMA_HIP08_QUEUE_INT_MASK_REG, + HISI_DMA_HIP08_QUEUE_INT_MASK_M, true); + } +} + static void hisi_dma_init_gbl(void *pci_bar, uint8_t revision) { @@ -176,6 +234,301 @@ hisi_dma_reg_layout(uint8_t revision) return HISI_DMA_REG_LAYOUT_INVALID; } +static void +hisi_dma_zero_iomem(struct hisi_dma_dev *hw) +{ + memset(hw->iomz->addr, 0, hw->iomz_sz); +} + +static int +hisi_dma_alloc_iomem(struct hisi_dma_dev *hw, uint16
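As a usage sketch of the configuration constraints documented above (a single
vchan, a power-of-two ring size between 32 and 8192, mem-to-mem transfers
only), assuming the generic dmadev API of DPDK 21.11; error handling is
trimmed and the ring size of 1024 is just an example:

#include <rte_dmadev.h>

static int
setup_kunpeng_dma(int16_t dev_id)
{
	struct rte_dma_conf dev_conf = { .nb_vchans = 1 }; /* one vchan only */
	struct rte_dma_vchan_conf vchan_conf = {
		.direction = RTE_DMA_DIR_MEM_TO_MEM,
		.nb_desc = 1024, /* power of two in [32, 8192] */
	};

	if (rte_dma_configure(dev_id, &dev_conf) != 0)
		return -1;
	if (rte_dma_vchan_setup(dev_id, 0, &vchan_conf) != 0)
		return -1;
	return rte_dma_start(dev_id);
}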
[dpdk-dev] [PATCH 1/6] dma/hisilicon: add device probe and remove functions
Add the basic device probe and remove functions and initial documentation for new hisilicon DMA drivers. Maintainers update is also included in this patch. Signed-off-by: Chengwen Feng --- MAINTAINERS| 5 ++ doc/guides/dmadevs/hisilicon.rst | 21 + doc/guides/dmadevs/index.rst | 1 + doc/guides/rel_notes/release_21_11.rst | 4 + drivers/dma/hisilicon/hisi_dmadev.c| 119 + drivers/dma/hisilicon/hisi_dmadev.h| 24 + drivers/dma/hisilicon/meson.build | 7 ++ drivers/dma/hisilicon/version.map | 3 + drivers/dma/meson.build| 1 + 9 files changed, 185 insertions(+) create mode 100644 doc/guides/dmadevs/hisilicon.rst create mode 100644 drivers/dma/hisilicon/hisi_dmadev.c create mode 100644 drivers/dma/hisilicon/hisi_dmadev.h create mode 100644 drivers/dma/hisilicon/meson.build create mode 100644 drivers/dma/hisilicon/version.map diff --git a/MAINTAINERS b/MAINTAINERS index 0e5951f8f1..1567f7b695 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1206,6 +1206,11 @@ M: Conor Walsh F: drivers/dma/ioat/ F: doc/guides/dmadevs/ioat.rst +Hisilicon DMA +M: Chengwen Feng +F: drivers/dma/hisilicon +F: doc/guides/dmadevs/hisilicon.rst + RegEx Drivers - diff --git a/doc/guides/dmadevs/hisilicon.rst b/doc/guides/dmadevs/hisilicon.rst new file mode 100644 index 00..4cbaac4204 --- /dev/null +++ b/doc/guides/dmadevs/hisilicon.rst @@ -0,0 +1,21 @@ +.. SPDX-License-Identifier: BSD-3-Clause +Copyright(c) 2021 HiSilicon Limited. + +HISILICON Kunpeng DMA Driver + + +Kunpeng SoC has an internal DMA unit which can be used by application to +accelerate data copies. The DMA PF function supports multiple DMA channels. + + +Supported Kunpeng SoCs +-- + +* Kunpeng 920 + + +Device Setup +- + +Kunpeng DMA devices will need to be bound to a suitable DPDK-supported +user-space IO driver such as ``vfio-pci`` in order to be used by DPDK. diff --git a/doc/guides/dmadevs/index.rst b/doc/guides/dmadevs/index.rst index 20476039a5..6b04276524 100644 --- a/doc/guides/dmadevs/index.rst +++ b/doc/guides/dmadevs/index.rst @@ -13,3 +13,4 @@ an application through DMA API. idxd ioat + hisilicon diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index 502cc5ceb2..00a45475be 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -86,6 +86,10 @@ New Features driver for Intel IOAT devices such as Crystal Beach DMA (CBDMA) on Ice Lake, Skylake and Broadwell. This device driver can be used through the generic dmadev API. +* **Added hisilicon dmadev driver implementation.** + The hisilicon dmadev driver provide device drivers for the Kunpeng's DMA devices. + This device driver can be used through the generic dmadev API. + * **Added support to get all MAC addresses of a device.** Added ``rte_eth_macaddrs_get`` to allow user to retrieve all Ethernet diff --git a/drivers/dma/hisilicon/hisi_dmadev.c b/drivers/dma/hisilicon/hisi_dmadev.c new file mode 100644 index 00..e6fb8a0fc8 --- /dev/null +++ b/drivers/dma/hisilicon/hisi_dmadev.c @@ -0,0 +1,119 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 HiSilicon Limited + */ + +#include +#include + +#include +#include +#include +#include +#include + +#include "hisi_dmadev.h" + +RTE_LOG_REGISTER_DEFAULT(hisi_dma_logtype, INFO); +#define HISI_DMA_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, hisi_dma_logtype, \ + "%s(): " fmt "\n", __func__, ##args) +#define HISI_DMA_LOG_RAW(hw, level, fmt, args...) 
\ + rte_log(RTE_LOG_ ## level, hisi_dma_logtype, \ + "%s %s(): " fmt "\n", (hw)->data->dev_name, \ + __func__, ##args) +#define HISI_DMA_DEBUG(hw, fmt, args...) \ + HISI_DMA_LOG_RAW(hw, DEBUG, fmt, ## args) +#define HISI_DMA_INFO(hw, fmt, args...) \ + HISI_DMA_LOG_RAW(hw, INFO, fmt, ## args) +#define HISI_DMA_WARN(hw, fmt, args...) \ + HISI_DMA_LOG_RAW(hw, WARNING, fmt, ## args) +#define HISI_DMA_ERR(hw, fmt, args...) \ + HISI_DMA_LOG_RAW(hw, ERR, fmt, ## args) + +static uint8_t +hisi_dma_reg_layout(uint8_t revision) +{ + if (revision == HISI_DMA_REVISION_HIP08B) + return HISI_DMA_REG_LAYOUT_HIP08; + else + return HISI_DMA_REG_LAYOUT_INVALID; +} + +static void +hisi_dma_gen_pci_device_name(const struct rte_pci_device *pci_dev, +char *name, size_t size) +{ + memset(name, 0, size); + (void)snprintf(name, size, "%x:%x.%x", +pci_dev->addr.bus, pci_dev->addr.devid, +pci_dev->addr.function); +} + +static int +hisi_dma_check_revision(struct rte_pci_device *pci_dev, const char *name, + uint8_t *out_revisio
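hisi_dma_check_revision() (cut off above) presumably reads the PCI revision ID
from configuration space, matching the "read PCI revision" debug log used by
the probe path. A generic sketch of that kind of check, using the standard
config-space offset 0x08 for the revision ID (the constant name and error
handling are illustrative, not driver code):

#include <errno.h>
#include <stdint.h>

#include <rte_bus_pci.h>

#define PCI_REVISION_ID_OFFSET 0x08 /* standard PCI config header offset */

static int
read_pci_revision(struct rte_pci_device *pci_dev, uint8_t *revision)
{
	int ret = rte_pci_read_config(pci_dev, revision, sizeof(*revision),
				      PCI_REVISION_ID_OFFSET);

	return (ret == sizeof(*revision)) ? 0 : -EIO;
}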
Re: [dpdk-dev] [PATCH v1] test/crypto: fix: test vectors for zuc 256 bit key
Hi Pablo,

Tried the test vector zuc256_test_case_auth_1 and the digest did not match the
digest generated on our platform.

As per the spec, IV[i] for i = 17 to 24 are 6-bit strings occupying the 6
least significant bits of a byte. But in the vectors, the values in the IV
(bytes 17 to 24) are > 0x3f. Could you please elaborate how these bytes are
considered for generation of the digest.

Regards
Sagar

From: De Lara Guarch, Pablo
Sent: 29 October 2021 18:07
To: Vidya Sagar Velumuri ; Ankur Dwivedi ; Anoob Joseph ; Tejasree Kondoj ;
Nithin Kumar Dabilpuram ; Akhil Goyal ; Doherty, Declan
Cc: dev@dpdk.org
Subject: [EXT] RE: [dpdk-dev] [PATCH v1] test/crypto: fix: test vectors for zuc 256 bit key

External Email
--
Hi Vidya,

> -----Original Message-----
> From: dev On Behalf Of Vidya Sagar Velumuri
> Sent: Wednesday, October 27, 2021 9:41 AM
> To: adwiv...@marvell.com; ano...@marvell.com; ktejas...@marvell.com;
> ndabilpu...@marvell.com; gak...@marvell.com; Doherty, Declan
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v1] test/crypto: fix: test vectors for zuc 256 bit
> key
>
> Fix the IV and MAC in the test vectors added for zuc 256-bit key
>
> Fixes: fa5bf9345d4e (test/crypto: add ZUC cases with 256-bit keys)
>
> Signed-off-by: Vidya Sagar Velumuri

The new vectors are failing for us. Could you check if the ones we added work
for you?

Thanks,
Pablo
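For reference, the observation above can be checked mechanically: in the
ZUC-256 construction the IV bytes at indices 17 to 24 carry only 6 significant
bits each, so any vector with a value above 0x3f in those positions cannot be
a well-formed IV. A small sketch of that sanity check (the function name is
illustrative; the indices follow the numbering used in the mail):

#include <stdbool.h>
#include <stdint.h>

/* Return true if IV bytes 17..24 each fit in 6 bits (<= 0x3f). */
static bool
zuc256_iv_6bit_fields_ok(const uint8_t iv[25])
{
	int i;

	for (i = 17; i <= 24; i++)
		if (iv[i] & 0xc0)
			return false;
	return true;
}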
Re: [dpdk-dev] [PATCH] common/cnxk: add telemetry endpoints to sso
On Thu, Sep 2, 2021 at 1:23 PM wrote:
>
> From: Pavan Nikhilesh
>
> Add telemetry endpoints for sso

sso -> SSO

Please rebase

[for-main]dell[dpdk-next-eventdev] $ git pw series apply 18616
Applying: common/cnxk: add telemetry endpoints to sso
error: sha1 information is lacking or useless (drivers/common/cnxk/meson.build).
error: could not build fake ancestor
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0001 common/cnxk: add telemetry endpoints to sso
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
Re: [dpdk-dev] [PATCH 1/3] event/cnxk: fix packet Tx overflow
On Mon, Oct 4, 2021 at 2:07 PM wrote: > > From: Pavan Nikhilesh > > The transmit loop incorrectly assumes that nb_mbufs is always > a multiple of 4 when transmitting an event vector. The max > size of the vector might not be reached and pushed out early > due to timeout. > > Fixes: 761a321acf91 ("event/cnxk: support vectorized Tx event fast path") > > Signed-off-by: Pavan Nikhilesh Please rebase [for-main]dell[dpdk-next-eventdev] $ git pw series apply 19356 Applying: event/cnxk: fix packet Tx overflow Applying: event/cnxk: reduce workslot memory consumption error: sha1 information is lacking or useless (drivers/event/cnxk/cnxk_eventdev.c). error: could not build fake ancestor hint: Use 'git am --show-current-patch=diff' to see the failed patch Patch failed at 0002 event/cnxk: reduce workslot memory consumption When you have resolved this problem, run "git am --continue". If you prefer to skip this patch, run "git am --skip" instead. To restore the original branch and stop patching, run "git am --abort". > --- > Depends-on: series-18614 ("add SSO XAQ pool create and free") > > drivers/event/cnxk/cn10k_worker.h | 180 +- > 1 file changed, 77 insertions(+), 103 deletions(-) > > diff --git a/drivers/event/cnxk/cn10k_worker.h > b/drivers/event/cnxk/cn10k_worker.h > index 1255662b6c..657ab91ac8 100644 > --- a/drivers/event/cnxk/cn10k_worker.h > +++ b/drivers/event/cnxk/cn10k_worker.h > @@ -7,10 +7,10 @@ > > #include > > +#include "cn10k_cryptodev_ops.h" > #include "cnxk_ethdev.h" > #include "cnxk_eventdev.h" > #include "cnxk_worker.h" > -#include "cn10k_cryptodev_ops.h" > > #include "cn10k_ethdev.h" > #include "cn10k_rx.h" > @@ -237,18 +237,16 @@ cn10k_sso_hws_get_work(struct cn10k_sso_hws *ws, struct > rte_event *ev, > > cq_w1 = *(uint64_t *)(gw.u64[1] + 8); > > - sa_base = cnxk_nix_sa_base_get(port, > - lookup_mem); > + sa_base = > + cnxk_nix_sa_base_get(port, > lookup_mem); > sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1); > > - mbuf = > (uint64_t)nix_sec_meta_to_mbuf_sc(cq_w1, > - sa_base, (uintptr_t)&iova, > - &loff, (struct rte_mbuf > *)mbuf, > - d_off); > + mbuf = (uint64_t)nix_sec_meta_to_mbuf_sc( > + cq_w1, sa_base, (uintptr_t)&iova, > &loff, > + (struct rte_mbuf *)mbuf, d_off); > if (loff) > roc_npa_aura_op_free(m->pool->pool_id, > 0, iova); > - > } > > gw.u64[0] = CNXK_CLR_SUB_EVENT(gw.u64[0]); > @@ -396,6 +394,56 @@ cn10k_sso_hws_xtract_meta(struct rte_mbuf *m, > txq_data[m->port][rte_event_eth_tx_adapter_txq_get(m)]; > } > > +static __rte_always_inline void > +cn10k_sso_tx_one(struct rte_mbuf *m, uint64_t *cmd, uint16_t lmt_id, > +uintptr_t lmt_addr, uint8_t sched_type, uintptr_t base, > +const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT], > +const uint32_t flags) > +{ > + uint8_t lnum = 0, loff = 0, shft = 0; > + struct cn10k_eth_txq *txq; > + uintptr_t laddr; > + uint16_t segdw; > + uintptr_t pa; > + bool sec; > + > + txq = cn10k_sso_hws_xtract_meta(m, txq_data); > + cn10k_nix_tx_skeleton(txq, cmd, flags); > + /* Perform header writes before barrier > +* for TSO > +*/ > + if (flags & NIX_TX_OFFLOAD_TSO_F) > + cn10k_nix_xmit_prepare_tso(m, flags); > + > + cn10k_nix_xmit_prepare(m, cmd, flags, txq->lso_tun_fmt, &sec); > + > + laddr = lmt_addr; > + /* Prepare CPT instruction and get nixtx addr if > +* it is for CPT on same lmtline. 
> +*/ > + if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec) > + cn10k_nix_prep_sec(m, cmd, &laddr, lmt_addr, &lnum, &loff, > + &shft, txq->sa_base, flags); > + > + /* Move NIX desc to LMT/NIXTX area */ > + cn10k_nix_xmit_mv_lmt_base(laddr, cmd, flags); > + > + if (flags & NIX_TX_MULTI_SEG_F) > + segdw = cn10k_nix_prepare_mseg(m, (uint64_t *)laddr, flags); > + else > + segdw = cn10k_nix_tx_ext_subs(flags) + 2; > + > + if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec) > + pa = txq->cpt_io_addr | 3 << 4; > + else > + pa = txq->io_addr | ((segdw - 1) << 4); > + > + if (!sched_type) > + roc_sso_hws_head_wait(base + SSOW_LF_GWS_
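The bug described in the commit message is the classic unrolled-burst pitfall:
a loop that always consumes mbufs four at a time overruns the array when the
vector is flushed early on timeout and its size is not a multiple of 4. A
generic sketch of the usual fix, processing full groups of four and then the
remainder one by one (illustrative only, not the driver code):

#include <stdint.h>

static void
tx_burst_groups_of_four(void **mbufs, uint16_t nb_mbufs,
			void (*tx_one)(void *))
{
	uint16_t i;

	/* Process only as many full groups of 4 as really exist ... */
	for (i = 0; i + 4 <= nb_mbufs; i += 4) {
		tx_one(mbufs[i + 0]);
		tx_one(mbufs[i + 1]);
		tx_one(mbufs[i + 2]);
		tx_one(mbufs[i + 3]);
	}
	/* ... then the leftover 0-3 entries individually. */
	for (; i < nb_mbufs; i++)
		tx_one(mbufs[i]);
}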
Re: [dpdk-dev] [PATCH v2] vhost: remove async dma map status
Hi Maxime,

>-----Original Message-----
>From: Maxime Coquelin
>Sent: Friday, October 29, 2021 6:36 PM
>To: Ding, Xuan ; dev@dpdk.org; Xia, Chenbo
>Cc: Hu, Jiayu ; Burakov, Anatoly
>Subject: Re: [PATCH v2] vhost: remove async dma map status
>
>On 10/27/21 12:00, Xuan Ding wrote:
>> Async dma map status flag was added to prevent the unnecessary unmap
>> when DMA devices bound to kernel driver. This brings maintenance cost
>> for a lot of code. This patch removes the dma map status by using
>> rte_errno instead.
>>
>> This patch relies on the following patch to fix a partial unmap check
>> in vfio unmapping API.
>> [1] https://www.mail-archive.com/dev@dpdk.org/msg226464.html
>>
>> Cc: anatoly.bura...@intel.com
>>
>> Signed-off-by: Xuan Ding
>> ---
>> v2:
>> * Fix a typo in commit log.
>> ---
>>  lib/vhost/vhost.h      |  3 --
>>  lib/vhost/vhost_user.c | 70 --
>>  2 files changed, 13 insertions(+), 60 deletions(-)
>>
>
>Applied to dpdk-next-virtio/main with title fixed.
>Please run check-git-log script next time.

Thanks for your fix. I ran the script first but got no warning...
I will be more careful to check the format next time.

BTW, should the CC be removed in the commit log?

Regards,
Xuan

>
>Thanks,
>Maxime
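For context, the approach the patch takes is the standard rte_errno pattern:
instead of tracking a per-device "was this mapped" flag, the unmap path simply
attempts the VFIO unmap and inspects rte_errno to tell "nothing was mapped /
no VFIO device" apart from a real failure. A generic sketch of that pattern
(the errno values treated as benign below are placeholders, not taken from the
vhost patch):

#include <errno.h>
#include <stdint.h>

#include <rte_errno.h>
#include <rte_vfio.h>

static int
try_dma_unmap(int container_fd, uint64_t vaddr, uint64_t iova, uint64_t len)
{
	if (rte_vfio_container_dma_unmap(container_fd, vaddr, iova, len) == 0)
		return 0;

	/* Some failures only mean "nothing was mapped here", e.g. the device
	 * is bound to a kernel driver; this caller can ignore those.
	 */
	if (rte_errno == ENODEV || rte_errno == ENOTSUP)
		return 0;

	return -rte_errno;
}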
Re: [dpdk-dev] [PATCH 1/3] eventdev: allow for event devices requiring maintenance
On 2021-10-29 17:17, Jerin Jacob wrote:
> On Fri, Oct 29, 2021 at 8:33 PM Mattias Rönnblom
> wrote:
>> On 2021-10-29 16:38, Jerin Jacob wrote:
>>> On Tue, Oct 26, 2021 at 11:02 PM Mattias Rönnblom
>>> wrote:

Extend Eventdev API to allow for event devices which require various
forms of internal processing to happen, even when events are not
enqueued to or dequeued from a port.

PATCH v1:
- Adapt to the move of fastpath function pointers out of rte_eventdev struct
- Attempt to clarify how often the application is expected to call
  rte_event_maintain()
- Add trace point

RFC v2:
- Change rte_event_maintain() return type to be consistent with the
  documentation.
- Remove unused typedef from eventdev_pmd.h.

Signed-off-by: Mattias Rönnblom
Tested-by: Richard Eklycke
Tested-by: Liron Himi
---
+/**
+ * Maintain an event device.
+ *
+ * This function is only relevant for event devices which has the
+ * RTE_EVENT_DEV_CAP_REQUIRES_MAINT flag set. Such devices require the
+ * application to call rte_event_maintain() on a port during periods
+ * which it is neither enqueuing nor dequeuing events from that
+ * port.

>>> # We need to add "by the same core". Right? As other core such as
>>> service core can not call rte_event_maintain()
>>
>> Do you mean by the same lcore thread that "owns" (dequeues and enqueues
>> to) the port? Yes. I thought that was implicit, since eventdev port are
>> not MT safe. I'll try to figure out some wording that makes that more clear.
> OK.
>
>>
>>> # Also, Incase of Adapters enqueue() happens, right? If so, either
>>> above text is not correct.
>>> # @Erik Gabriel Carrillo @Jayatheerthan, Jay @Gujjar, Abhinandan S
>>> Please review 3/3 patch on adapter change.
>>> Let me know you folks are OK with change or not or need more time to
>>> analyze.
>>>
>>> If it need only for the adapter subsystem then can we make it an
>>> internal API between DSW and adapters?
>>
>> No, it's needed for any producer-only eventdev ports, including any such
>> ports used by the application.
>
> In that case, the code path in testeventdev, eventdev_pipeline, etc needs
> to be updated. I am worried about the performance impact for the drivers they
> don't have such limitations.

Applications that are using some other event device today, and don't care
about DSW or potential future event devices requiring
RTE_EVENT_DEV_CAP_REQUIRES_MAINT, won't be affected at all, except the ops
struct will be 8 bytes larger.

A rte_event_maintain() call on a device which doesn't need maintenance is
just an inlined NULL compare on the ops struct field, which is frequently
used and should be in a cache close to the core. In my benchmarks, I've been
unable to measure any additional cost at all.

I reviewed the test and example applications last time I attempted to
upstream this patch set, and from what I remember there was nothing to
update. Things might have changed and I might misremember, so I'll have a
look again.

What's important to keep in mind is that applications (DPDK tests, examples,
user applications etc.) that have producer-only ports or otherwise
potentially leave eventdev ports "unattended" don't work with DSW today,
unless they take the measures described in the DSW documentation (which for
example the eventdev adapters do not). So rte_event_maintain() will not break
anything that's not already broken.

> Why not have an additional config option in port_config which says
> it is a producer-only port by an application and takes care of the driver.
>
> In the current adapters code, you are calling maintain() when enqueue
> returns zero.

rte_event_maintain() is called when no interaction with the event device has
been done, during that service function call. That's the overall intention.

In the RX adapter, zero flushed events can also mean that the RX adapter had
buffered events it wanted to flush, but the event device didn't accept new
events (i.e., back pressure). In that case, the rte_event_maintain() call is
redundant, but harmless (both because it's very low overhead on DSW, and
near-zero overhead on any other current event device). Plus, if you are
back-pressured by the pipeline, RX is not the bottleneck, so a tiny bit of
extra overhead is not an issue.

> In such a case, if the port is configured as producer and then
> internally it can call maintain.

To be able to perform maintenance (flushing, migration etc.), it needs cycles
from the thread that "owns" the port. If the thread neither does enqueue
(because it doesn't have anything to enqueue), nor dequeue, the driver will
never get the chance to run.

If DPDK had a delayed work mechanism that somehow could be tied to the
"owning" port, then you could use that. But it doesn't.

> Thoughts from other eventdev maintainers?
> Cc+ @Van Haaren, Harry @Ric
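To make the producer-only case concrete, below is a rough sketch of the loop
shape being discussed, assuming a maintain call of the form
rte_event_maintain(dev_id, port_id) as proposed in this patch series (the
final signature is still under review here, and the event-producing callback
is an application placeholder):

#include <rte_eventdev.h>

/* Producer-only lcore loop: the port is never dequeued from, so on
 * iterations where nothing is enqueued the application must still give
 * devices with RTE_EVENT_DEV_CAP_REQUIRES_MAINT a chance to run their
 * internal processing.
 */
static void
producer_loop(uint8_t dev_id, uint8_t port_id, struct rte_event *ev,
	      uint16_t (*make_events)(struct rte_event *, uint16_t))
{
	for (;;) {
		uint16_t n = make_events(ev, 32); /* app-specific source */

		if (n > 0) {
			rte_event_enqueue_burst(dev_id, port_id, ev, n);
		} else {
			/* No enqueue or dequeue happened this iteration. */
			rte_event_maintain(dev_id, port_id);
		}
	}
}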
[dpdk-dev] [v9] crypto/cnxk: add telemetry endpoints to cryptodev
Add telemetry endpoints to cnxk secure cryptodev capabilities. Signed-off-by: Gowrishankar Muthukrishnan --- v9: - moved rte_security_capability into rte_security lib telemetry. --- .../crypto/cnxk/cnxk_cryptodev_telemetry.c| 81 +++ drivers/crypto/cnxk/meson.build | 1 + 2 files changed, 82 insertions(+) create mode 100644 drivers/crypto/cnxk/cnxk_cryptodev_telemetry.c diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_telemetry.c b/drivers/crypto/cnxk/cnxk_cryptodev_telemetry.c new file mode 100644 index 00..43cde55bfc --- /dev/null +++ b/drivers/crypto/cnxk/cnxk_cryptodev_telemetry.c @@ -0,0 +1,81 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#include +#include +#include + +#include + +#include "cnxk_cryptodev.h" + +#define CRYPTO_CAPS_SZ\ + (RTE_ALIGN_CEIL(sizeof(struct rte_cryptodev_capabilities), \ + sizeof(uint64_t)) /\ +sizeof(uint64_t)) + +static int +crypto_caps_array(struct rte_tel_data *d, + struct rte_cryptodev_capabilities *capabilities) +{ + const struct rte_cryptodev_capabilities *dev_caps; + uint64_t caps_val[CRYPTO_CAPS_SZ]; + unsigned int i = 0, j; + + rte_tel_data_start_array(d, RTE_TEL_U64_VAL); + + while ((dev_caps = &capabilities[i++])->op != + RTE_CRYPTO_OP_TYPE_UNDEFINED) { + memset(&caps_val, 0, CRYPTO_CAPS_SZ * sizeof(caps_val[0])); + rte_memcpy(caps_val, dev_caps, sizeof(capabilities[0])); + for (j = 0; j < CRYPTO_CAPS_SZ; j++) + rte_tel_data_add_array_u64(d, caps_val[j]); + } + + return i; +} + +static int +cryptodev_tel_handle_sec_caps(const char *cmd __rte_unused, const char *params, + struct rte_tel_data *d) +{ + struct rte_tel_data *sec_crypto_caps; + struct rte_cryptodev *dev; + struct cnxk_cpt_vf *vf; + int sec_crypto_caps_n; + int dev_id; + + if (!params || strlen(params) == 0 || !isdigit(*params)) + return -EINVAL; + + dev_id = strtol(params, NULL, 10); + if (!rte_cryptodev_is_valid_dev(dev_id)) + return -EINVAL; + + dev = rte_cryptodev_pmd_get_dev(dev_id); + if (!dev) { + plt_err("No cryptodev for id %d available", dev_id); + return -EINVAL; + } + + vf = dev->data->dev_private; + rte_tel_data_start_dict(d); + + /* Secure Crypto capabilities */ + sec_crypto_caps = rte_tel_data_alloc(); + sec_crypto_caps_n = crypto_caps_array(sec_crypto_caps, + vf->sec_crypto_caps); + rte_tel_data_add_dict_container(d, "sec_crypto_caps", + sec_crypto_caps, 0); + rte_tel_data_add_dict_int(d, "sec_crypto_caps_n", sec_crypto_caps_n); + + return 0; +} + +RTE_INIT(cnxk_cryptodev_init_telemetry) +{ + rte_telemetry_register_cmd("/cnxk/cryptodev/sec_caps", + cryptodev_tel_handle_sec_caps, + "Returns cryptodev capabilities. Parameters: int dev_id"); +} diff --git a/drivers/crypto/cnxk/meson.build b/drivers/crypto/cnxk/meson.build index 024109f7e9..2d78757bba 100644 --- a/drivers/crypto/cnxk/meson.build +++ b/drivers/crypto/cnxk/meson.build @@ -20,6 +20,7 @@ sources = files( 'cnxk_cryptodev_devargs.c', 'cnxk_cryptodev_ops.c', 'cnxk_cryptodev_sec.c', +'cnxk_cryptodev_telemetry.c', ) deps += ['bus_pci', 'common_cnxk', 'security', 'eventdev'] -- 2.25.1
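Usage note (device ID 0 is an example and the output is abridged): with the
interactive telemetry client shipped in usertools, the new endpoint can be
queried as shown below; the reply keys match the dictionary built in
cryptodev_tel_handle_sec_caps() above.

  ./usertools/dpdk-telemetry.py
  --> /cnxk/cryptodev/sec_caps,0
  {"/cnxk/cryptodev/sec_caps": {"sec_crypto_caps": [...], "sec_crypto_caps_n": <n>}}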
[dpdk-dev] [v1] security: add telemetry endpoint for cryptodev security capabilities
Add telemetry endpoint for cryptodev security capabilities. Signed-off-by: Gowrishankar Muthukrishnan --- v1: - forked from patch 20009 "crypto/cnxk: add telemetry endpoints to cryptodev" to integrate changes in lib/rte_security itself. --- lib/security/rte_security.c | 98 + 1 file changed, 98 insertions(+) diff --git a/lib/security/rte_security.c b/lib/security/rte_security.c index fe81ed3e4c..068d855d9b 100644 --- a/lib/security/rte_security.c +++ b/lib/security/rte_security.c @@ -4,8 +4,10 @@ * Copyright (c) 2020 Samsung Electronics Co., Ltd All Rights Reserved */ +#include #include #include +#include #include "rte_compat.h" #include "rte_security.h" #include "rte_security_driver.h" @@ -203,3 +205,99 @@ rte_security_capability_get(struct rte_security_ctx *instance, return NULL; } + +static int +cryptodev_handle_dev_list(const char *cmd __rte_unused, + const char *params __rte_unused, + struct rte_tel_data *d) +{ + int dev_id; + + if (rte_cryptodev_count() < 1) + return -1; + + rte_tel_data_start_array(d, RTE_TEL_INT_VAL); + for (dev_id = 0; dev_id < RTE_CRYPTO_MAX_DEVS; dev_id++) + if (rte_cryptodev_is_valid_dev(dev_id) && + rte_cryptodev_get_sec_ctx(dev_id)) + rte_tel_data_add_array_int(d, dev_id); + + return 0; +} + +#define SEC_CAPS_SZ\ + (RTE_ALIGN_CEIL(sizeof(struct rte_security_capability), \ + sizeof(uint64_t)) / sizeof(uint64_t)) + +static int +sec_caps_array(struct rte_tel_data *d, + const struct rte_security_capability *capabilities) +{ + const struct rte_security_capability *dev_caps; + uint64_t caps_val[SEC_CAPS_SZ]; + unsigned int i = 0, j; + + rte_tel_data_start_array(d, RTE_TEL_U64_VAL); + + while ((dev_caps = &capabilities[i++])->action != + RTE_SECURITY_ACTION_TYPE_NONE) { + memset(&caps_val, 0, SEC_CAPS_SZ * sizeof(caps_val[0])); + rte_memcpy(caps_val, dev_caps, sizeof(capabilities[0])); + for (j = 0; j < SEC_CAPS_SZ; j++) + rte_tel_data_add_array_u64(d, caps_val[j]); + } + + return i; +} + +static int +security_handle_dev_caps(const char *cmd __rte_unused, const char *params, +struct rte_tel_data *d) +{ + const struct rte_security_capability *capabilities; + struct rte_security_ctx *sec_ctx; + struct rte_tel_data *sec_caps; + int sec_caps_n; + char *end_param; + int dev_id; + + if (!params || strlen(params) == 0 || !isdigit(*params)) + return -EINVAL; + + dev_id = strtoul(params, &end_param, 0); + if (*end_param != '\0') + CDEV_LOG_ERR("Extra parameters passed to command, ignoring"); + + if (!rte_cryptodev_is_valid_dev(dev_id)) + return -EINVAL; + + rte_tel_data_start_dict(d); + sec_caps = rte_tel_data_alloc(); + if (!sec_caps) + return -ENOMEM; + + sec_ctx = (struct rte_security_ctx *)rte_cryptodev_get_sec_ctx(dev_id); + if (!sec_ctx) + return -EINVAL; + + capabilities = rte_security_capabilities_get(sec_ctx); + if (!capabilities) + return -EINVAL; + + sec_caps_n = sec_caps_array(sec_caps, capabilities); + rte_tel_data_add_dict_container(d, "sec_caps", sec_caps, 0); + rte_tel_data_add_dict_int(d, "sec_caps_n", sec_caps_n); + + return 0; +} + +RTE_INIT(security_init_telemetry) +{ + rte_telemetry_register_cmd("/security/list", + cryptodev_handle_dev_list, + "Returns list of available crypto devices by IDs. No parameters."); + + rte_telemetry_register_cmd("/security/caps", + security_handle_dev_caps, + "Returns security capabilities for a cryptodev. Parameters: int dev_id"); +} -- 2.25.1
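The /security/caps endpoint above serialises each rte_security_capability as a
run of SEC_CAPS_SZ raw 64-bit words. A small sketch of what a consumer on the
other end of the telemetry socket would do to get structs back (illustrative
only: a real client parses the JSON array first, must be built against the
same DPDK headers for the layout to match, and pointer fields inside the
struct are meaningless outside the server process):

#include <stdint.h>
#include <string.h>

#include <rte_common.h>
#include <rte_security.h>

#define SEC_CAPS_SZ \
	(RTE_ALIGN_CEIL(sizeof(struct rte_security_capability), \
			sizeof(uint64_t)) / sizeof(uint64_t))

/* Rebuild capability i from the flat u64 array returned by /security/caps. */
static void
sec_cap_from_u64(const uint64_t *words, unsigned int i,
		 struct rte_security_capability *cap)
{
	memcpy(cap, &words[i * SEC_CAPS_SZ],
	       sizeof(struct rte_security_capability));
}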
Re: [dpdk-dev] [PATCH v13 4/7] net/iavf: add iAVF IPsec inline crypto support
On Thu, Oct 28, 2021 at 6:21 PM Radu Nicolau wrote: > +static const struct rte_cryptodev_symmetric_capability * > +get_capability(struct iavf_security_ctx *iavf_sctx, > + uint32_t algo, uint32_t type) > +{ > + const struct rte_cryptodev_capabilities *capability; > + int i = 0; > + > + capability = &iavf_sctx->crypto_capabilities[i]; > + > + while (capability->op != RTE_CRYPTO_OP_TYPE_UNDEFINED) { > + if (capability->op == RTE_CRYPTO_OP_TYPE_SYMMETRIC && > + capability->sym.xform_type == type && > + capability->sym.cipher.algo == algo) > + return &capability->sym; > + /** try next capability */ > + capability = &iavf_crypto_capabilities[i++]; > + } > + > + return NULL; > +} As of cc13af13c8e6 ("net/ngbe: support Tx done cleanup"), next-net build is still KO for Windows: http://mails.dpdk.org/archives/test-report/2021-October/236938.html FAILED: drivers/libtmp_rte_net_iavf.a.p/net_iavf_iavf_ipsec_crypto.c.obj "clang" "-Idrivers\libtmp_rte_net_iavf.a.p" "-Idrivers" "-I..\drivers" "-Idrivers\net\iavf" "-I..\drivers\net\iavf" "-Idrivers\common\iavf" "-I..\drivers\common\iavf" "-Ilib\ethdev" "-I..\lib\ethdev" "-I." "-I.." "-Iconfig" "-I..\config" "-Ilib\eal\include" "-I..\lib\eal\include" "-Ilib\eal\windows\include" "-I..\lib\eal\windows\include" "-Ilib\eal\x86\include" "-I..\lib\eal\x86\include" "-Ilib\eal\common" "-I..\lib\eal\common" "-Ilib\eal" "-I..\lib\eal" "-Ilib\kvargs" "-I..\lib\kvargs" "-Ilib\net" "-I..\lib\net" "-Ilib\mbuf" "-I..\lib\mbuf" "-Ilib\mempool" "-I..\lib\mempool" "-Ilib\ring" "-I..\lib\ring" "-Ilib\metrics" "-I..\lib\metrics" "-Ilib\telemetry" "-I..\lib\telemetry" "-Ilib\meter" "-I..\lib\meter" "-Idrivers\bus\pci" "-I..\drivers\bus\pci" "-I..\drivers\bus\pci\windows" "-Ilib\pci" "-I..\lib\pci" "-Idrivers\bus\vdev" "-I..\drivers\bus\vdev" "-Ilib\security" "-I..\lib\security" "-Ilib\cryptodev" "-I..\lib\cryptodev" "-Ilib\rcu" "-I..\lib\rcu" "-Xclang" "-fcolor-diagnostics" "-pipe" "-D_FILE_OFFSET_BITS=64" "-Wall" "-Winvalid-pch" "-Werror" "-O3" "-include" "rte_config.h" "-Wextra" "-Wcast-qual" "-Wdeprecated" "-Wformat" "-Wformat-nonliteral" "-Wformat-security" "-Wmissing-declarations" "-Wmissing-prototypes" "-Wnested-externs" "-Wold-style-definition" "-Wpointer-arith" "-Wsign-compare" "-Wstrict-prototypes" "-Wundef" "-Wwrite-strings" "-Wno-address-of-packed-member" "-Wno-missing-field-initializers" "-D_GNU_SOURCE" "-D_WIN32_WINNT=0x0A00" "-D_CRT_SECURE_NO_WARNINGS" "-march=native" "-DALLOW_EXPERIMENTAL_API" "-DALLOW_INTERNAL_API" "-Wno-strict-aliasing" "-DCC_AVX2_SUPPORT" "-DCC_AVX512_SUPPORT" "-DRTE_LOG_DEFAULT_LOGTYPE=pmd.net.iavf" -MD -MQ drivers/libtmp_rte_net_iavf.a.p/net_iavf_iavf_ipsec_crypto.c.obj -MF "drivers\libtmp_rte_net_iavf.a.p\net_iavf_iavf_ipsec_crypto.c.obj.d" -o drivers/libtmp_rte_net_iavf.a.p/net_iavf_iavf_ipsec_crypto.c.obj "-c" ../drivers/net/iavf/iavf_ipsec_crypto.c ../drivers/net/iavf/iavf_ipsec_crypto.c:111:31: error: comparison of integers of different signs: 'const enum rte_crypto_sym_xform_type' and 'uint32_t' (aka 'unsigned int') [-Werror,-Wsign-compare] capability->sym.xform_type == type && ~~ ^ ../drivers/net/iavf/iavf_ipsec_crypto.c:112:32: error: comparison of integers of different signs: 'const enum rte_crypto_cipher_algorithm' and 'uint32_t' (aka 'unsigned int') [-Werror,-Wsign-compare] capability->sym.cipher.algo == algo) ~~~ ^ 2 errors generated. -- David Marchand
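One possible fix for the -Wsign-compare errors above (a sketch, not
necessarily what the iavf maintainers chose) is to type the parameters with
the enums they are compared against instead of uint32_t, so every comparison
is between identical types:

/* Hypothetical signature change for the function quoted above. */
static const struct rte_cryptodev_symmetric_capability *
get_capability(struct iavf_security_ctx *iavf_sctx,
	       enum rte_crypto_cipher_algorithm algo,
	       enum rte_crypto_sym_xform_type type)
{
	const struct rte_cryptodev_capabilities *capability;
	int i = 0;

	capability = &iavf_sctx->crypto_capabilities[i];

	while (capability->op != RTE_CRYPTO_OP_TYPE_UNDEFINED) {
		if (capability->op == RTE_CRYPTO_OP_TYPE_SYMMETRIC &&
		    capability->sym.xform_type == type &&
		    capability->sym.cipher.algo == algo)
			return &capability->sym;
		/* try next capability */
		capability = &iavf_crypto_capabilities[i++];
	}

	return NULL;
}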
[dpdk-dev] [v2] security: add telemetry endpoint for cryptodev security capabilities
Add telemetry endpoint for cryptodev security capabilities. Signed-off-by: Gowrishankar Muthukrishnan --- v2: - updated doc and release notes --- doc/guides/prog_guide/rte_security.rst | 22 ++ doc/guides/rel_notes/release_21_11.rst | 5 ++ lib/security/rte_security.c| 98 ++ 3 files changed, 125 insertions(+) diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst index 46c9b51d1b..dbc6ef0783 100644 --- a/doc/guides/prog_guide/rte_security.rst +++ b/doc/guides/prog_guide/rte_security.rst @@ -728,3 +728,25 @@ it is only valid to have a single flow to map to that security session. +---++++-+ | Eth | -> ... -> | ESP | -> | END | +---++++-+ + + +Telemetry support +- + +The Security library has support for displaying Crypto device information +with respect to its Security capabilities. Telemetry commands that can be used +are shown below. + +#. Get the list of available Crypto devices by ID, that supports Security features:: + + --> /security/list + {"/security/list": [0, 1, 2, 3]} + +#. Get the security capabilities of a Crypto device:: + + --> /security/caps,0 +{"/security/caps": {"sec_caps": [], "sec_caps_n": }} + +For more information on how to use the Telemetry interface, see +the :doc:`../howto/telemetry`. diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index 47cd67131e..88834d91d8 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -197,6 +197,11 @@ New Features * Added port representors support on SN1000 SmartNICs * Added flow API transfer proxy support +* **Added Telemetry callback to Security library.** + + Added Telemetry callback function to query security capabilities of + Crypto device. + * **Updated Marvell cnxk crypto PMD.** * Added AES-CBC SHA1-HMAC support in lookaside protocol (IPsec) for CN10K. 
diff --git a/lib/security/rte_security.c b/lib/security/rte_security.c index fe81ed3e4c..068d855d9b 100644 --- a/lib/security/rte_security.c +++ b/lib/security/rte_security.c @@ -4,8 +4,10 @@ * Copyright (c) 2020 Samsung Electronics Co., Ltd All Rights Reserved */ +#include #include #include +#include #include "rte_compat.h" #include "rte_security.h" #include "rte_security_driver.h" @@ -203,3 +205,99 @@ rte_security_capability_get(struct rte_security_ctx *instance, return NULL; } + +static int +cryptodev_handle_dev_list(const char *cmd __rte_unused, + const char *params __rte_unused, + struct rte_tel_data *d) +{ + int dev_id; + + if (rte_cryptodev_count() < 1) + return -1; + + rte_tel_data_start_array(d, RTE_TEL_INT_VAL); + for (dev_id = 0; dev_id < RTE_CRYPTO_MAX_DEVS; dev_id++) + if (rte_cryptodev_is_valid_dev(dev_id) && + rte_cryptodev_get_sec_ctx(dev_id)) + rte_tel_data_add_array_int(d, dev_id); + + return 0; +} + +#define SEC_CAPS_SZ\ + (RTE_ALIGN_CEIL(sizeof(struct rte_security_capability), \ + sizeof(uint64_t)) / sizeof(uint64_t)) + +static int +sec_caps_array(struct rte_tel_data *d, + const struct rte_security_capability *capabilities) +{ + const struct rte_security_capability *dev_caps; + uint64_t caps_val[SEC_CAPS_SZ]; + unsigned int i = 0, j; + + rte_tel_data_start_array(d, RTE_TEL_U64_VAL); + + while ((dev_caps = &capabilities[i++])->action != + RTE_SECURITY_ACTION_TYPE_NONE) { + memset(&caps_val, 0, SEC_CAPS_SZ * sizeof(caps_val[0])); + rte_memcpy(caps_val, dev_caps, sizeof(capabilities[0])); + for (j = 0; j < SEC_CAPS_SZ; j++) + rte_tel_data_add_array_u64(d, caps_val[j]); + } + + return i; +} + +static int +security_handle_dev_caps(const char *cmd __rte_unused, const char *params, +struct rte_tel_data *d) +{ + const struct rte_security_capability *capabilities; + struct rte_security_ctx *sec_ctx; + struct rte_tel_data *sec_caps; + int sec_caps_n; + char *end_param; + int dev_id; + + if (!params || strlen(params) == 0 || !isdigit(*params)) + return -EINVAL; + + dev_id = strtoul(params, &end_param, 0); + if (*end_param != '\0') + CDEV_LOG_ERR("Extra parameters passed to command, ignoring"); + + if (!rte_cryptodev_is_valid_dev(dev_id)) + return -EINVAL; + + rte_tel_data_start_dict(d); + sec_caps = rte_tel_data_alloc(); + if (!sec_caps) + return -ENOMEM; + + sec_ctx = (struct rte_security_ctx *)rte_cryptodev_get_sec_ctx(dev_id); +