Re: [dpdk-dev] [PATCH v2] bus/pci: fix TOCTOU issue
On Tue, Apr 2, 2019 at 8:51 PM Stephen Hemminger wrote:
> Using access followed by open causes a static analysis warning
> about Time of check versus Time of use. Also, access() and
> open() have different UID permission checks.
>
> This is not a serious problem; but easy to fix by using errno instead.
>
> Coverity issue: 300870
> Fixes: 4a928ef9f611 ("bus/pci: enable write combining during mapping")
> Signed-off-by: Stephen Hemminger
> ---
> v2 - add more CC to original mail, and rebase
>
> drivers/bus/pci/linux/pci_uio.c | 11 +--
> 1 file changed, 5 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/bus/pci/linux/pci_uio.c b/drivers/bus/pci/linux/pci_uio.c
> index 09ecbb7aad25..0d1b9aa347ba 100644
> --- a/drivers/bus/pci/linux/pci_uio.c
> +++ b/drivers/bus/pci/linux/pci_uio.c
> @@ -314,12 +314,11 @@ pci_uio_map_resource_by_index(struct rte_pci_device *dev, int res_idx,
> 			loc->domain, loc->bus, loc->devid,
> 			loc->function, res_idx);
>
> -		if (access(devname, R_OK|W_OK) != -1) {
> -			fd = open(devname, O_RDWR);
> -			if (fd < 0)
> -				RTE_LOG(INFO, EAL, "%s cannot be mapped. "
> -					"Fall-back to non prefetchable mode.\n",
> -					devname);
> +		fd = open(devname, O_RDWR);
> +		if (fd < 0 && errno != ENOENT) {
> +			RTE_LOG(INFO, EAL, "%s cannot be mapped. "
> +				"Fall-back to non prefetchable mode.\n",
> +				devname);
> 		}
> 	}
>

Reviewed-by: David Marchand

-- 
David Marchand
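Reduced to a standalone sketch, the pattern being fixed looks as follows. The
helper name and the use of fprintf() instead of RTE_LOG() are illustrative only,
not the actual pci_uio code: opening the file directly and checking errno avoids
both the time-of-check/time-of-use race and the access()/open() UID mismatch.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>

/* Open the prefetchable resource directly; a missing file (ENOENT) is
 * silent, any other failure is reported and the caller falls back to the
 * non prefetchable resource.
 */
static int
open_prefetchable(const char *devname)
{
	int fd = open(devname, O_RDWR);

	if (fd < 0 && errno != ENOENT)
		fprintf(stderr, "%s cannot be mapped. Fall-back to non prefetchable mode.\n",
			devname);

	return fd;
}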
Re: [dpdk-dev] [PATCH] vfio: fix expanding DMA area in ppc64le
Before submitting further revisions, please check the documentation at http://doc.dpdk.org/guides/contributing/patches.html You are supposed to version your patches and prune old superseded patches in patchwork. Thanks. -- David Marchand On Thu, Jun 13, 2019 at 4:23 AM Takeshi Yoshimura wrote: > In ppc64le, expanding DMA areas always fail because we cannot remove > a DMA window. As a result, we cannot allocate more than one memseg in > ppc64le. This is because vfio_spapr_dma_mem_map() doesn't unmap all > the mapped DMA before removing the window. This patch fixes this > incorrect behavior. > > I added a global variable to track current window size since we do > not have better ways to get exact size of it than doing so. sPAPR > IOMMU seems not to provide any ways to get window size with ioctl > interfaces. rte_memseg_walk*() is currently used to calculate window > size, but it walks memsegs that are marked as used, not mapped. So, > we need to determine if a given memseg is mapped or not, otherwise > the ioctl reports errors due to attempting to unregister memory > addresses that are not registered. The global variable is excluded > in non-ppc64le binaries. > > Similar problems happen in user maps. We need to avoid attempting to > unmap the address that is given as the function's parameter. The > compaction of user maps prevents us from passing correct length for > unmapping DMA at the window recreation. So, I removed it in ppc64le. > > I also fixed the order of ioctl for unregister and unmap. The ioctl > for unregister sometimes report device busy errors due to the > existence of mapped area. > > Signed-off-by: Takeshi Yoshimura > --- > lib/librte_eal/linux/eal/eal_vfio.c | 154 +++- > 1 file changed, 103 insertions(+), 51 deletions(-) > > diff --git a/lib/librte_eal/linux/eal/eal_vfio.c > b/lib/librte_eal/linux/eal/eal_vfio.c > index f16c5c3c0..c1b275b56 100644 > --- a/lib/librte_eal/linux/eal/eal_vfio.c > +++ b/lib/librte_eal/linux/eal/eal_vfio.c > @@ -93,6 +93,7 @@ is_null_map(const struct user_mem_map *map) > return map->addr == 0 && map->iova == 0 && map->len == 0; > } > > +#ifndef RTE_ARCH_PPC_64 > /* we may need to merge user mem maps together in case of user > mapping/unmapping > * chunks of memory, so we'll need a comparator function to sort segments. > */ > @@ -126,6 +127,7 @@ user_mem_map_cmp(const void *a, const void *b) > > return 0; > } > +#endif > > /* adjust user map entry. this may result in shortening of existing map, > or in > * splitting existing map in two pieces. > @@ -162,6 +164,7 @@ adjust_map(struct user_mem_map *src, struct > user_mem_map *end, > } > } > > +#ifndef RTE_ARCH_PPC_64 > /* try merging two maps into one, return 1 if succeeded */ > static int > merge_map(struct user_mem_map *left, struct user_mem_map *right) > @@ -177,6 +180,7 @@ merge_map(struct user_mem_map *left, struct > user_mem_map *right) > > return 1; > } > +#endif > > static struct user_mem_map * > find_user_mem_map(struct user_mem_maps *user_mem_maps, uint64_t addr, > @@ -211,6 +215,16 @@ find_user_mem_map(struct user_mem_maps > *user_mem_maps, uint64_t addr, > return NULL; > } > > +#ifdef RTE_ARCH_PPC_64 > +/* Recreation of DMA window requires unregistering DMA memory. > + * Compaction confuses the logic and causes false reports in the > recreation. > + * For now, we do not compact user maps in ppc64le. 
> + */ > +static void > +compact_user_maps(__rte_unused struct user_mem_maps *user_mem_maps) > +{ > +} > +#else > /* this will sort all user maps, and merge/compact any adjacent maps */ > static void > compact_user_maps(struct user_mem_maps *user_mem_maps) > @@ -256,6 +270,7 @@ compact_user_maps(struct user_mem_maps *user_mem_maps) > user_mem_maps->n_maps = cur_idx; > } > } > +#endif > > static int > vfio_open_group_fd(int iommu_group_num) > @@ -1306,6 +1321,7 @@ vfio_type1_dma_map(int vfio_container_fd) > return rte_memseg_walk(type1_map, &vfio_container_fd); > } > > +#ifdef RTE_ARCH_PPC_64 > static int > vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t > iova, > uint64_t len, int do_map) > @@ -1357,14 +1373,6 @@ vfio_spapr_dma_do_map(int vfio_container_fd, > uint64_t vaddr, uint64_t iova, > } > > } else { > - ret = ioctl(vfio_container_fd, > - VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY, ®); > - if (ret) { > - RTE_LOG(ERR, EAL, " cannot unregister vaddr for > IOMMU, error %i (%s)\n", > - errno, strerror(errno)); > - return -1; > - } > - > memset(&dma_unmap, 0, sizeof(dma_unmap)); > dma_unmap.argsz = sizeof(struct > vfio_iommu_type1_dma_unmap); > dma_unmap.size = len; > @@ -1377,24 +1385,50 @@ vfio_spapr_dma_do_map(int vfio_cont
Re: [dpdk-dev] [PATCH v2] bus/pci: fix TOCTOU issue
14/06/2019 16:16, David Marchand:
> On Tue, Apr 2, 2019 at 8:51 PM Stephen Hemminger wrote:
> > Using access followed by open causes a static analysis warning
> > about Time of check versus Time of use. Also, access() and
> > open() have different UID permission checks.
> >
> > This is not a serious problem; but easy to fix by using errno instead.
> >
> > Coverity issue: 300870
> > Fixes: 4a928ef9f611 ("bus/pci: enable write combining during mapping")

Cc: sta...@dpdk.org

> > Signed-off-by: Stephen Hemminger
>
> Reviewed-by: David Marchand

Applied, thanks
Re: [dpdk-dev] [PATCH] eal: resort symbols in EXPERIMENTAL section
06/04/2019 05:30, Stephen Hemminger:
> The symbols in the EXPERIMENTAL were close to alphabetic
> order but running sort showed several mistakes.
>
> This has no impact on code, API, ABI or otherwise.
> Purely for humans.
>
> Signed-off-by: Stephen Hemminger

I don't think it's worth adding a layer of git history for this sort.
I would prefer to leave it as is.
Re: [dpdk-dev] [PATCH] eal: resort symbols in EXPERIMENTAL section
On Fri, Jun 14, 2019 at 9:39 AM Thomas Monjalon wrote:
> 06/04/2019 05:30, Stephen Hemminger:
> > The symbols in the EXPERIMENTAL were close to alphabetic
> > order but running sort showed several mistakes.
> >
> > This has no impact on code, API, ABI or otherwise.
> > Purely for humans.
> >
> > Signed-off-by: Stephen Hemminger
>
> I don't think it's worth adding a layer of git history for this sort.
> I would prefer to leave it as is.

If this is about preference, I would prefer that we have those symbols
sorted by the version that introduced them ;-). Much easier to check and
see whether they are candidates for entering the stable ABI.

-- 
David Marchand
[dpdk-dev] [PATCH v2] vfio: fix expanding DMA area in ppc64le
In ppc64le, expanding DMA areas always fail because we cannot remove a DMA window. As a result, we cannot allocate more than one memseg in ppc64le. This is because vfio_spapr_dma_mem_map() doesn't unmap all the mapped DMA before removing the window. This patch fixes this incorrect behavior. I added a global variable to track current window size since we do not have better ways to get exact size of it than doing so. sPAPR IOMMU seems not to provide any ways to get window size with ioctl interfaces. rte_memseg_walk*() is currently used to calculate window size, but it walks memsegs that are marked as used, not mapped. So, we need to determine if a given memseg is mapped or not, otherwise the ioctl reports errors due to attempting to unregister memory addresses that are not registered. The global variable is excluded in non-ppc64le binaries. Similar problems happen in user maps. We need to avoid attempting to unmap the address that is given as the function's parameter. The compaction of user maps prevents us from passing correct length for unmapping DMA at the window recreation. So, I removed it in ppc64le. I also fixed the order of ioctl for unregister and unmap. The ioctl for unregister sometimes report device busy errors due to the existence of mapped area. Signed-off-by: Takeshi Yoshimura --- lib/librte_eal/linux/eal/eal_vfio.c | 154 +++- 1 file changed, 103 insertions(+), 51 deletions(-) diff --git a/lib/librte_eal/linux/eal/eal_vfio.c b/lib/librte_eal/linux/eal/eal_vfio.c index 6892a2c14..5587854b8 100644 --- a/lib/librte_eal/linux/eal/eal_vfio.c +++ b/lib/librte_eal/linux/eal/eal_vfio.c @@ -93,6 +93,7 @@ is_null_map(const struct user_mem_map *map) return map->addr == 0 && map->iova == 0 && map->len == 0; } +#ifndef RTE_ARCH_PPC_64 /* we may need to merge user mem maps together in case of user mapping/unmapping * chunks of memory, so we'll need a comparator function to sort segments. */ @@ -126,6 +127,7 @@ user_mem_map_cmp(const void *a, const void *b) return 0; } +#endif /* adjust user map entry. this may result in shortening of existing map, or in * splitting existing map in two pieces. @@ -162,6 +164,7 @@ adjust_map(struct user_mem_map *src, struct user_mem_map *end, } } +#ifndef RTE_ARCH_PPC_64 /* try merging two maps into one, return 1 if succeeded */ static int merge_map(struct user_mem_map *left, struct user_mem_map *right) @@ -177,6 +180,7 @@ merge_map(struct user_mem_map *left, struct user_mem_map *right) return 1; } +#endif static struct user_mem_map * find_user_mem_map(struct user_mem_maps *user_mem_maps, uint64_t addr, @@ -211,6 +215,16 @@ find_user_mem_map(struct user_mem_maps *user_mem_maps, uint64_t addr, return NULL; } +#ifdef RTE_ARCH_PPC_64 +/* Recreation of DMA window requires unregistering DMA memory. + * Compaction confuses the logic and causes false reports in the recreation. + * For now, we do not compact user maps in ppc64le. 
+ */ +static void +compact_user_maps(__rte_unused struct user_mem_maps *user_mem_maps) +{ +} +#else /* this will sort all user maps, and merge/compact any adjacent maps */ static void compact_user_maps(struct user_mem_maps *user_mem_maps) @@ -256,6 +270,7 @@ compact_user_maps(struct user_mem_maps *user_mem_maps) user_mem_maps->n_maps = cur_idx; } } +#endif static int vfio_open_group_fd(int iommu_group_num) @@ -1306,6 +1321,7 @@ vfio_type1_dma_map(int vfio_container_fd) return rte_memseg_walk(type1_map, &vfio_container_fd); } +#ifdef RTE_ARCH_PPC_64 static int vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, uint64_t len, int do_map) @@ -1357,14 +1373,6 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, } } else { - ret = ioctl(vfio_container_fd, - VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY, ®); - if (ret) { - RTE_LOG(ERR, EAL, " cannot unregister vaddr for IOMMU, error %i (%s)\n", - errno, strerror(errno)); - return -1; - } - memset(&dma_unmap, 0, sizeof(dma_unmap)); dma_unmap.argsz = sizeof(struct vfio_iommu_type1_dma_unmap); dma_unmap.size = len; @@ -1377,24 +1385,50 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, errno, strerror(errno)); return -1; } + + ret = ioctl(vfio_container_fd, + VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY, ®); + if (ret) { + RTE_LOG(ERR, EAL, " cannot unregister vaddr for IOMMU, error %i (%s)\n", + errno, strerror(errno)); + return -1; +
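The ioctl reordering mentioned in the last paragraph of the commit log can be
summarised with the condensed sketch below. Structure and ioctl names follow the
hunks above, but the function wrapper is illustrative and the surrounding
window-recreation logic is omitted; this is not the literal v2 code. The point is
that the DMA area has to be unmapped before the backing memory is unregistered,
otherwise the unregister ioctl can report the device-busy errors described above.

#include <errno.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>
#include <rte_log.h>

/* Unmap the DMA area first, then unregister the memory with the sPAPR
 * IOMMU; doing it in the opposite order is what triggered the busy errors.
 */
static int
spapr_unmap_then_unregister(int vfio_container_fd,
		struct vfio_iommu_type1_dma_unmap *dma_unmap,
		struct vfio_iommu_spapr_register_memory *reg)
{
	if (ioctl(vfio_container_fd, VFIO_IOMMU_UNMAP_DMA, dma_unmap) != 0) {
		RTE_LOG(ERR, EAL, "  cannot unmap DMA, error %i (%s)\n",
			errno, strerror(errno));
		return -1;
	}

	if (ioctl(vfio_container_fd, VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY,
			reg) != 0) {
		RTE_LOG(ERR, EAL, "  cannot unregister vaddr for IOMMU, error %i (%s)\n",
			errno, strerror(errno));
		return -1;
	}

	return 0;
}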
Re: [dpdk-dev] [PATCH] examples/multi_process - fix crash in mp_client with sparse ports
On Tue, Jun 4, 2019 at 2:05 AM Stephen Hemminger wrote:
> From: Stephen Hemminger
>
> The mp_client crashes if run on Azure or any system where ethdev
> ports are owned. In that case, the tx_buffer and tx_stats for the
> real port were initialized correctly, but the wrong port was used.
>
> For example if the server has Ports 3 and 5. Then calling
> rte_eth_tx_buffer_flush on any other buffer will dereference null
> because the tx buffer for that port was not allocated.
>
> Fixes: e2366e74e029 ("examples: use buffered Tx")
> Signed-off-by: Stephen Hemminger
> ---
> examples/multi_process/client_server_mp/mp_client/client.c | 7 ---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/examples/multi_process/client_server_mp/mp_client/client.c b/examples/multi_process/client_server_mp/mp_client/client.c
> index c23dd3f378f7..c1d2d975b717 100644
> --- a/examples/multi_process/client_server_mp/mp_client/client.c
> +++ b/examples/multi_process/client_server_mp/mp_client/client.c
> @@ -246,15 +246,16 @@ main(int argc, char *argv[])
>
> 	for (;;) {
> 		uint16_t i, rx_pkts;
> -		uint16_t port;
>
> 		rx_pkts = rte_ring_dequeue_burst(rx_ring, pkts,
> 				PKT_READ_SIZE, NULL);
>
> 		if (unlikely(rx_pkts == 0)){
> 			if (need_flush)
> -				for (port = 0; port < ports->num_ports; port++) {
> -					sent = rte_eth_tx_buffer_flush(ports->id[port], client_id,
> +				for (i = 0; i < ports->num_ports; i++) {
> +					uint16_t port = ports->id[i]'

Syntax error.

> +
> +					sent = rte_eth_tx_buffer_flush(ports, client_id,

Not sure passing ports is intended.

> 							tx_buffer[port]);
> 					if (unlikely(sent))
> 						tx_stats->tx[port] += sent;
>

-- 
David Marchand
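Putting the two review comments together, the flush loop presumably needs to read
roughly as below once the stray quote is fixed and the real port id (rather than
the ports structure) is passed to the flush call. This is an illustration of the
intent based on the snippet above, not the submitted patch:

	for (i = 0; i < ports->num_ports; i++) {
		uint16_t port = ports->id[i];

		sent = rte_eth_tx_buffer_flush(port, client_id,
				tx_buffer[port]);
		if (unlikely(sent))
			tx_stats->tx[port] += sent;
	}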
Re: [dpdk-dev] eal/pci: Improve automatic selection of IOVA mode
On Mon, Jun 3, 2019 at 6:44 PM Walker, Benjamin wrote: > On Mon, 2019-06-03 at 12:48 +0200, David Marchand wrote: > > Hello, > > > > On Thu, May 30, 2019 at 7:48 PM Ben Walker > wrote: > > > In SPDK, not all drivers are registered with DPDK at start up time. > > > Previously, that meant DPDK always chose to set itself up in IOVA_PA > > > mode. Instead, when the correct iova choice is unclear based on the > > > devices and drivers known to DPDK at start up time, use other > heuristics > > > (such as whether /proc/self/pagemap is accessible) to make a better > > > choice. > > > > > > This enables SPDK to run as an unprivileged user again without > requiring > > > users to explicitly set the iova mode on the command line. > > > > > > > Interesting, I got a bz on something similar the day you sent this > patchset ;- > > ) > > > > > > - When a dpdk process is started, either it has access to physical > addresses > > or not, and this won't change for the rest of its life. > > Your fix on defaulting to VA based on a rte_eal_using_phys_addrs() check > makes > > sense to me. > > It is the most encountered situation when running ovs as non root on > recent > > kernels. > > > > > > - However, I fail to see the need for all of this detection code wrt > drivers > > and devices. > > > > On one side of the equation, when dpdk starts, it checks physical address > > availability. > > On the other side of the equation, we have the drivers that will be > invoked > > when probing devices (either at dpdk init, or when hotplugging a device). > > > > At this point, the probing call should check the driver requirement wrt > to the > > kernel driver the device is attached to. > > If this requirement is not fulfilled, then the probing fails. > > > > > > - This leaves the --iova-va forcing option. > > Why do we need it? > > If we don't have access to physical addresses, no choice but run in VA > mode. > > If we have access to physical addresses, the only case would be that you > want > > to downgrade from PA to VA. > > But well, your process can still access it, not sure what the benefit is. > > All of the complexity here, at least as far as I understand it, stems from > supporting hot insert of devices. This is very important to SPDK because > storage > devices get hot inserted all the time, so we very much appreciate that > DPDK has > put in so much effort in this area and continues to accept our patches to > improve it. I know hot insert is not nearly as important for network > devices. > > When DPDK starts up, it needs to select whether to use virtual addresses or > physical addresses in its memory maps. It can do that by answering the > following > questions: > > 1. Does the system only have buses that support an IOMMU? > 2. Is the IOMMU sufficiently fast for the use case? > 3. Will all of the devices that will be used with DPDK throughout the > application's lifetime work with an IOMMU? > > If these three things are true, then the best choice is to use virtual > addresses > in the memory translations. However, if any of the above are not true it > needs > to fall back to physical addresses. > > #1 is checked by simply asking all of the buses, which are known up front. > #2 is > just assumed to be true. But #3 is not possible to check fully because of > hot > insert. > > The code currently approximates the #3 check by looking at the devices > present > at initialization time. 
If a device exists that's bound to vfio-pci, and no > other devices exist that are bound to a uio driver, and DPDK has a > registered > driver that's actually going to load against the vfio-pci devices, then it > will > elect to use virtual addresses. This is purely a heuristic - it's not a > definitive answer because the user could later hot insert a device that > gets > bound to uio. > > The user, of course, knows the answer to which addressing scheme to use > typically. For example, these checks assume #2 is true, but there may be > hardware implementations where it is not and the user wants to force > physical > addresses. Or the user may know that they are going to hot insert a device > at > run time that doesn't work with the IOMMU. That's why it's important to > maintain > the ability for the user to override the default heuristic's decision via > the > command line. > > My patch series is simply improving the heuristic in a few ways. First, > previously each bus when queried would return either virtual or physical > addresses as its choice. However, often the bus just does not have enough > information to formulate any preference at all (and PCI was defaulting to > physical addresses in this case). Instead, I made it so that the bus can > return > that it doesn't care, which pushes the decision up to a higher level. That > higher level then makes the decision by checking whether it can access > /proc/self/pagemap. Second, I narrowed the uio check such that physical > addresses will only be selected if a device bound to uio exists and there > is
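Condensed, the heuristic argued for in this thread amounts to the decision flow
below. The names follow the v2 series posted further down in this digest; the
command-line value lives in internal_config.iova_mode and RTE_IOVA_DC means
"don't care". This is a summary sketch of the logic, not a verbatim excerpt:

	enum rte_iova_mode iova_mode = internal_config.iova_mode; /* --iova-mode= */

	if (iova_mode == RTE_IOVA_DC) {
		/* Buses answer PA, VA or DC depending on devices/drivers bound. */
		iova_mode = rte_bus_get_iommu_class();

		if (iova_mode == RTE_IOVA_DC) {
			/* No preference expressed: fall back to physical address
			 * availability (e.g. is /proc/self/pagemap readable?).
			 */
			iova_mode = rte_eal_using_phys_addrs() ?
					RTE_IOVA_PA : RTE_IOVA_VA;
		}
	}
	rte_eal_get_configuration()->iova_mode = iova_mode;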
Re: [dpdk-dev] [EXT] Re: [PATCH 00/39] adding eventmode helper library
Hi Mattias, > A more extensive description of the purpose of the eventmode helper > library would be helpful. > > Is this supposed to be a generic framework for real-world > applications, or only something to simplify DPDK the implementation of > DPDK example programs and similar? This is intended as a generic framework, but the initial targets would be limited to DPDK example applications. For any application to use an event device for dynamic load balancing, it has to configure the event device and the adapters. Configuring the adapters would involve providing various parameters based on which the dynamic scheduling should happen. But requiring the application to do all this configuration would make the application complicated as well as the same code has to be repeated for a new application. Event mode helper tries to solve that. All the complex configuration would be implemented by the helper library and the helper library would provide a default conf as well. These patches facilitate event mode configuration in a easy to use manner. My idea is that, for a poll mode DPDK example to operate in event mode, a couple of helper functions and a lean worker thread should suffice. So even complex DPDK examples and real world applications will benefit from this helper library. We plan to propose a change to ipsec-secgw to operate in event mode once this proposal is merged. I'll update the cover-letter to add above details when sending v2. Thanks, Anoob > -Original Message- > From: dev On Behalf Of Mattias Rönnblom > Sent: Tuesday, June 11, 2019 4:14 PM > To: Jerin Jacob Kollanukkaran ; Anoob Joseph > ; Nikhil Rao ; Erik Gabriel > Carrillo ; Abhinandan Gujjar > ; Bruce Richardson > ; Pablo de Lara > > Cc: Narayana Prasad Raju Athreya ; dev@dpdk.org; > Lukas Bartosik ; Pavan Nikhilesh Bhagavatula > ; Hemant Agrawal > ; Nipun Gupta ; Harry > van Haaren ; Liang Ma > > Subject: [EXT] Re: [dpdk-dev] [PATCH 00/39] adding eventmode helper > library > > External Email > > -- > On 2019-06-07 11:48, Jerin Jacob Kollanukkaran wrote: > >> -Original Message- > >> From: Anoob Joseph > >> Sent: Monday, June 3, 2019 11:02 PM > >> To: Jerin Jacob Kollanukkaran ; Nikhil Rao > >> ; Erik Gabriel Carrillo > >> ; Abhinandan Gujjar > >> ; Bruce Richardson > >> ; Pablo de Lara > >> > >> Cc: Anoob Joseph ; Narayana Prasad Raju Athreya > >> ; dev@dpdk.org; Lukas Bartosik > >> ; Pavan Nikhilesh Bhagavatula > >> ; Hemant Agrawal > ; > >> Nipun Gupta ; Harry van Haaren > >> ; Mattias Rönnblom > >> ; Liang Ma > >> Subject: [PATCH 00/39] adding eventmode helper library > >> > >> This series adds support for eventmode helper library and l2fwd-event > >> application. > >> > >> First 13 patches creates a new l2fwd application (l2fwd-event). Minor > >> code reorganization is done to faciliate seamless integration of > eventmode. > >> > >> Next 22 patches adds eventmode helper library. This library abstracts > >> the configuration of event device & Rx-Tx event adapters. The library > >> can be extended to allow users to control all the configuration > >> exposed by adapters and eth device. > >> > >> Last 4 patches implements eventmode in l2fwd-event application. With > >> event device and adapters, fine tuned threads (based on dev > >> capabilities) can be drafted to maximize performance. Eventmode > >> library facilitates this and l2fwd-event demonstrates this usage. 
> >> > >> With the introduction of eventmode helper library, any poll mode > >> application can be converted to an eventmode application with simple > >> steps, enabling multi-core scaling and dynamic load balancing to > >> various example applications. > > > > > > Anyone planning to review this changes? > > I will spend time to review this. Requesting the review from other > eventdev stake holders. > > > > A more extensive description of the purpose of the eventmode helper > library would be helpful. > > Is this supposed to be a generic framework for real-world applications, or > only something to simplify DPDK the implementation of DPDK example > programs and similar?
[dpdk-dev] [PATCH v2 1/3] kni: refuse to initialise when IOVA is not PA
If a forced iova-mode has been passed at init, kni is not supposed to work. Fixes: 075b182b54ce ("eal: force IOVA to a particular mode") Cc: sta...@dpdk.org Signed-off-by: David Marchand --- lib/librte_kni/rte_kni.c | 5 + 1 file changed, 5 insertions(+) diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c index a0f1e37..a6bf323 100644 --- a/lib/librte_kni/rte_kni.c +++ b/lib/librte_kni/rte_kni.c @@ -97,6 +97,11 @@ enum kni_ops_status { int rte_kni_init(unsigned int max_kni_ifaces __rte_unused) { + if (rte_eal_iova_mode() != RTE_IOVA_PA) { + RTE_LOG(ERR, KNI, "KNI requires IOVA as PA\n"); + return -1; + } + /* Check FD and open */ if (kni_fd < 0) { kni_fd = open("/dev/" KNI_DEVICE, O_RDWR); -- 1.8.3.1
[dpdk-dev] [PATCH v2 0/3] Improve automatic selection of IOVA mode
In SPDK, not all drivers are registered with DPDK at start up time. Previously, that meant DPDK always chose to set itself up in IOVA_PA mode. Instead, when the correct iova choice is unclear based on the devices and drivers known to DPDK at start up time, use other heuristics (such as whether /proc/self/pagemap is accessible) to make a better choice. This enables SPDK to run as an unprivileged user again without requiring users to explicitly set the iova mode on the command line. Changelog since v1: - I took over the series following experiments and discussions with Ben and others, squashed Ben patches as two patches focusing on the main issues, - introduced a fix on KNI, - on the EAL bits, - added log on which IOVA mode has been selected, - updated BSD EAL, - in Linux EAL, moved KNI special case after IOVA selection, - in Linux EAL, added check on forced mode wrt physical addresses availability, - on the PCI bus driver bits, - enforced the checks in the common code of the PCI bus, - added debug logs to track why a iova mode has been chosen per device, - added BSD part, - in Linux part, checked that VFIO is enabled, - in Linux part, defaulted to DC if a driver supports both PA and VA, -- David Marchand Ben Walker (2): eal: compute IOVA mode based on PA availability bus/pci: only consider usable devices to select IOVA mode David Marchand (1): kni: refuse to initialise when IOVA is not PA drivers/bus/pci/bsd/pci.c | 9 +- drivers/bus/pci/linux/pci.c | 191 +--- drivers/bus/pci/pci_common.c| 65 +++ drivers/bus/pci/private.h | 8 ++ lib/librte_eal/common/eal_common_bus.c | 4 - lib/librte_eal/common/include/rte_bus.h | 2 +- lib/librte_eal/freebsd/eal/eal.c| 10 +- lib/librte_eal/linux/eal/eal.c | 38 +-- lib/librte_eal/linux/eal/eal_memory.c | 46 ++-- lib/librte_kni/rte_kni.c| 5 + 10 files changed, 187 insertions(+), 191 deletions(-) -- 1.8.3.1
[dpdk-dev] [PATCH v2 3/3] bus/pci: only consider usable devices to select IOVA mode
From: Ben Walker When selecting the preferred IOVA mode of the pci bus, the current heuristic ("are devices bound?", "are devices bound to UIO?", "are pmd drivers supporting IOVA as VA?" etc..) should honor the device white/blacklist so that an unwanted device does not impact the decision. There is no reason to consider a device which has no driver available. This applies to all OS, so implements this in common code then call a OS specific callback. On Linux side: - the VFIO special considerations should be evaluated only if VFIO support is built, - there is no strong requirement on using VA rather than PA if a driver supports VA, so defaulting to DC in such a case. Signed-off-by: Ben Walker Signed-off-by: David Marchand --- drivers/bus/pci/bsd/pci.c| 9 +- drivers/bus/pci/linux/pci.c | 191 --- drivers/bus/pci/pci_common.c | 65 +++ drivers/bus/pci/private.h| 8 ++ 4 files changed, 131 insertions(+), 142 deletions(-) diff --git a/drivers/bus/pci/bsd/pci.c b/drivers/bus/pci/bsd/pci.c index c7b90cb..a2de709 100644 --- a/drivers/bus/pci/bsd/pci.c +++ b/drivers/bus/pci/bsd/pci.c @@ -376,13 +376,14 @@ return -1; } -/* - * Get iommu class of PCI devices on the bus. - */ enum rte_iova_mode -rte_pci_get_iommu_class(void) +pci_device_iova_mode(const struct rte_pci_driver *pdrv __rte_unused, +const struct rte_pci_device *pdev) { /* Supports only RTE_KDRV_NIC_UIO */ + if (pdev->kdrv != RTE_KDRV_NIC_UIO) + RTE_LOG(DEBUG, EAL, "Unsupported kernel driver? Defaulting to IOVA as 'PA'\n"); + return RTE_IOVA_PA; } diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c index b931cf9..33c8ea7 100644 --- a/drivers/bus/pci/linux/pci.c +++ b/drivers/bus/pci/linux/pci.c @@ -500,95 +500,14 @@ return -1; } -/* - * Is pci device bound to any kdrv - */ -static inline int -pci_one_device_is_bound(void) -{ - struct rte_pci_device *dev = NULL; - int ret = 0; - - FOREACH_DEVICE_ON_PCIBUS(dev) { - if (dev->kdrv == RTE_KDRV_UNKNOWN || - dev->kdrv == RTE_KDRV_NONE) { - continue; - } else { - ret = 1; - break; - } - } - return ret; -} - -/* - * Any one of the device bound to uio - */ -static inline int -pci_one_device_bound_uio(void) -{ - struct rte_pci_device *dev = NULL; - struct rte_devargs *devargs; - int need_check; - - FOREACH_DEVICE_ON_PCIBUS(dev) { - devargs = dev->device.devargs; - - need_check = 0; - switch (rte_pci_bus.bus.conf.scan_mode) { - case RTE_BUS_SCAN_WHITELIST: - if (devargs && devargs->policy == RTE_DEV_WHITELISTED) - need_check = 1; - break; - case RTE_BUS_SCAN_UNDEFINED: - case RTE_BUS_SCAN_BLACKLIST: - if (devargs == NULL || - devargs->policy != RTE_DEV_BLACKLISTED) - need_check = 1; - break; - } - - if (!need_check) - continue; - - if (dev->kdrv == RTE_KDRV_IGB_UIO || - dev->kdrv == RTE_KDRV_UIO_GENERIC) { - return 1; - } - } - return 0; -} - -/* - * Any one of the device has iova as va - */ -static inline int -pci_one_device_has_iova_va(void) -{ - struct rte_pci_device *dev = NULL; - struct rte_pci_driver *drv = NULL; - - FOREACH_DRIVER_ON_PCIBUS(drv) { - if (drv && drv->drv_flags & RTE_PCI_DRV_IOVA_AS_VA) { - FOREACH_DEVICE_ON_PCIBUS(dev) { - if ((dev->kdrv == RTE_KDRV_VFIO || -dev->kdrv == RTE_KDRV_NIC_MLX) && - rte_pci_match(drv, dev)) - return 1; - } - } - } - return 0; -} - #if defined(RTE_ARCH_X86) static bool -pci_one_device_iommu_support_va(struct rte_pci_device *dev) +pci_one_device_iommu_support_va(const struct rte_pci_device *dev) { #define VTD_CAP_MGAW_SHIFT 16 #define VTD_CAP_MGAW_MASK (0x3fULL << VTD_CAP_MGAW_SHIFT) #define X86_VA_WIDTH 47 /* From 
Documentation/x86/x86_64/mm.txt */ - struct rte_pci_addr *addr = &dev->addr; + const struct rte_pci_addr *addr = &dev->addr; char filename[PATH_MAX]; FILE *fp; uint64_t mgaw, vtd_cap_reg = 0; @@ -632,80 +551,76 @@ } #elif defined(RTE_ARCH_PPC_64) static bool -pci_one_device_iommu_support_va(__rte_unused struct rte_pci_device *dev) +pci_one_device_iommu_support_va(__rte_unused const struct rte_pci_device *dev) { return false; }
[dpdk-dev] [PATCH v2 2/3] eal: compute IOVA mode based on PA availability
From: Ben Walker Currently, if the bus selects IOVA as PA, the memory init can fail when lacking access to physical addresses. This can be quite hard for normal users to understand what is wrong since this is the default behavior. Catch this situation earlier in eal init by validating physical addresses availability, or select IOVA when no clear preferrence had been expressed. The bus code is changed so that it reports when it does not care about the IOVA mode and let the eal init decide. In Linux implementation, rework rte_eal_using_phys_addrs() so that it can be called earlier but still avoid a circular dependency with rte_mem_virt2phys(). In FreeBSD implementation, rte_eal_using_phys_addrs() always returns false, so the detection part is left as is. If librte_kni is compiled in and the KNI kmod is loaded, - if the buses requested VA, force to PA if physical addresses are available as it was done before, - else, keep iova as VA, KNI init will fail later. Signed-off-by: Ben Walker Signed-off-by: David Marchand --- lib/librte_eal/common/eal_common_bus.c | 4 --- lib/librte_eal/common/include/rte_bus.h | 2 +- lib/librte_eal/freebsd/eal/eal.c| 10 +-- lib/librte_eal/linux/eal/eal.c | 38 +-- lib/librte_eal/linux/eal/eal_memory.c | 46 + 5 files changed, 51 insertions(+), 49 deletions(-) diff --git a/lib/librte_eal/common/eal_common_bus.c b/lib/librte_eal/common/eal_common_bus.c index c8f1901..77f1be1 100644 --- a/lib/librte_eal/common/eal_common_bus.c +++ b/lib/librte_eal/common/eal_common_bus.c @@ -237,10 +237,6 @@ enum rte_iova_mode mode |= bus->get_iommu_class(); } - if (mode != RTE_IOVA_VA) { - /* Use default IOVA mode */ - mode = RTE_IOVA_PA; - } return mode; } diff --git a/lib/librte_eal/common/include/rte_bus.h b/lib/librte_eal/common/include/rte_bus.h index 4faf2d2..90fe4e9 100644 --- a/lib/librte_eal/common/include/rte_bus.h +++ b/lib/librte_eal/common/include/rte_bus.h @@ -392,7 +392,7 @@ struct rte_bus *rte_bus_find(const struct rte_bus *start, rte_bus_cmp_t cmp, /** * Get the common iommu class of devices bound on to buses available in the - * system. The default mode is PA. + * system. RTE_IOVA_DC means that no preferrence has been expressed. * * @return * enum rte_iova_mode value. diff --git a/lib/librte_eal/freebsd/eal/eal.c b/lib/librte_eal/freebsd/eal/eal.c index 4eaa531..231f1dc 100644 --- a/lib/librte_eal/freebsd/eal/eal.c +++ b/lib/librte_eal/freebsd/eal/eal.c @@ -689,13 +689,19 @@ static void rte_eal_init_alert(const char *msg) /* if no EAL option "--iova-mode=", use bus IOVA scheme */ if (internal_config.iova_mode == RTE_IOVA_DC) { /* autodetect the IOVA mapping mode (default is RTE_IOVA_PA) */ - rte_eal_get_configuration()->iova_mode = - rte_bus_get_iommu_class(); + enum rte_iova_mode iova_mode = rte_bus_get_iommu_class(); + + if (iova_mode == RTE_IOVA_DC) + iova_mode = RTE_IOVA_PA; + rte_eal_get_configuration()->iova_mode = iova_mode; } else { rte_eal_get_configuration()->iova_mode = internal_config.iova_mode; } + RTE_LOG(INFO, EAL, "Selected IOVA mode '%s'\n", + rte_eal_iova_mode() == RTE_IOVA_PA ? "PA" : "VA"); + if (internal_config.no_hugetlbfs == 0) { /* rte_config isn't initialized yet */ ret = internal_config.process_type == RTE_PROC_PRIMARY ? 
diff --git a/lib/librte_eal/linux/eal/eal.c b/lib/librte_eal/linux/eal/eal.c index 3e1d6eb..785ed2b 100644 --- a/lib/librte_eal/linux/eal/eal.c +++ b/lib/librte_eal/linux/eal/eal.c @@ -948,6 +948,7 @@ static void rte_eal_init_alert(const char *msg) static char logid[PATH_MAX]; char cpuset[RTE_CPU_AFFINITY_STR_LEN]; char thread_name[RTE_MAX_THREAD_NAME_LEN]; + bool phys_addrs; /* checks if the machine is adequate */ if (!rte_cpu_is_supported()) { @@ -1035,25 +1036,46 @@ static void rte_eal_init_alert(const char *msg) return -1; } + phys_addrs = rte_eal_using_phys_addrs() != 0; + /* if no EAL option "--iova-mode=", use bus IOVA scheme */ if (internal_config.iova_mode == RTE_IOVA_DC) { - /* autodetect the IOVA mapping mode (default is RTE_IOVA_PA) */ - rte_eal_get_configuration()->iova_mode = - rte_bus_get_iommu_class(); + /* autodetect the IOVA mapping mode */ + enum rte_iova_mode iova_mode = rte_bus_get_iommu_class(); + if (iova_mode == RTE_IOVA_DC) { + iova_mode = phys_addrs ? RTE_IOVA_PA : RTE_IOVA_VA; + RTE_LOG(DEBUG, EAL, + "Buses did no
Re: [dpdk-dev] [PATCH v2 1/3] net/ice: enable switch filter
Hi, xiao > -Original Message- > From: Wang, Xiao W > Sent: Thursday, June 13, 2019 4:24 PM > To: Yang, Qiming ; dev@dpdk.org > Cc: Zhao1, Wei > Subject: RE: [dpdk-dev] [PATCH v2 1/3] net/ice: enable switch filter > > Hi, > > > -Original Message- > > From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Qiming Yang > > Sent: Wednesday, June 12, 2019 3:50 PM > > To: dev@dpdk.org > > Cc: Zhao1, Wei > > Subject: [dpdk-dev] [PATCH v2 1/3] net/ice: enable switch filter > > > > From: wei zhao > > > > The patch enables the backend of rte_flow. It transfers rte_flow_xxx > > to device specific data structure and configures packet process > > engine's binary classifier > > (switch) properly. > > > > Signed-off-by: Wei Zhao > > --- > > drivers/net/ice/Makefile| 1 + > > drivers/net/ice/ice_ethdev.h| 6 + > > drivers/net/ice/ice_switch_filter.c | 502 > > > > drivers/net/ice/ice_switch_filter.h | 28 ++ > > drivers/net/ice/meson.build | 3 +- > > 5 files changed, 539 insertions(+), 1 deletion(-) create mode 100644 > > drivers/net/ice/ice_switch_filter.c > > create mode 100644 drivers/net/ice/ice_switch_filter.h > > > > diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile index > > 0e5c55e..b10d826 100644 > > --- a/drivers/net/ice/Makefile > > +++ b/drivers/net/ice/Makefile > > @@ -60,6 +60,7 @@ ifeq ($(CONFIG_RTE_ARCH_X86), y) > > SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_rxtx_vec_sse.c endif > > > > +SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_switch_filter.c > > ifeq ($(findstring > > RTE_MACHINE_CPUFLAG_AVX2,$(CFLAGS)),RTE_MACHINE_CPUFLAG_AVX2) > > CC_AVX2_SUPPORT=1 > > else > > diff --git a/drivers/net/ice/ice_ethdev.h > > b/drivers/net/ice/ice_ethdev.h index 1385afa..67a358a 100644 > > --- a/drivers/net/ice/ice_ethdev.h > > +++ b/drivers/net/ice/ice_ethdev.h > > @@ -234,6 +234,12 @@ struct ice_vsi { > > bool offset_loaded; > > }; > > > > +/* Struct to store flow created. */ > > +struct rte_flow { > > + TAILQ_ENTRY(rte_flow) node; > > +void *rule; > > +}; > > + > > struct ice_pf { > > struct ice_adapter *adapter; /* The adapter this PF associate to */ > > struct ice_vsi *main_vsi; /* pointer to main VSI structure */ diff > > --git a/drivers/net/ice/ice_switch_filter.c > > b/drivers/net/ice/ice_switch_filter.c > > new file mode 100644 > > index 000..e679675 > > --- /dev/null > > +++ b/drivers/net/ice/ice_switch_filter.c > > @@ -0,0 +1,502 @@ > > SPDX-License-Identifier missing. 
Ok, Updated in v3 > > > +#include > > +#include > > +#include > > +#include > > +#include > > +#include > > +#include > > + > > +#include > > +#include > > +#include > > +#include > > +#include > > +#include > > +#include > > +#include > > + > > +#include "ice_logs.h" > > +#include "base/ice_type.h" > > +#include "ice_switch_filter.h" > > + > > +static int > > +ice_parse_switch_filter( > > + const struct rte_flow_item pattern[], > > + const struct rte_flow_action actions[], > > + struct rte_flow_error *error, > > + struct ice_adv_rule_info *rule_info, > > + struct ice_adv_lkup_elem **lkup_list, > > + uint16_t *lkups_num) > > +{ > > + const struct rte_flow_item *item = pattern; > > + enum rte_flow_item_type item_type; > > + const struct rte_flow_item_eth *eth_spec, *eth_mask; > > + const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask; > > + const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask; > > + const struct rte_flow_item_tcp *tcp_spec, *tcp_mask; > > + const struct rte_flow_item_udp *udp_spec, *udp_mask; > > + const struct rte_flow_item_sctp *sctp_spec, *sctp_mask; > > + const struct rte_flow_item_nvgre *nvgre_spec, *nvgre_mask; > > + const struct rte_flow_item_vxlan *vxlan_spec, *vxlan_mask; > > + struct ice_adv_lkup_elem *list; > > + uint16_t i, j, t = 0; > > + uint16_t item_num = 0; > > + enum ice_sw_tunnel_type tun_type = ICE_NON_TUN; > > + > > + for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { > > + if (item->type == RTE_FLOW_ITEM_TYPE_ETH || > > + item->type == RTE_FLOW_ITEM_TYPE_IPV4 || > > + item->type == RTE_FLOW_ITEM_TYPE_IPV6 || > > + item->type == RTE_FLOW_ITEM_TYPE_UDP || > > + item->type == RTE_FLOW_ITEM_TYPE_TCP || > > + item->type == RTE_FLOW_ITEM_TYPE_SCTP || > > + item->type == RTE_FLOW_ITEM_TYPE_VXLAN || > > + item->type == RTE_FLOW_ITEM_TYPE_NVGRE) > > + item_num++; > > + } > > + > > + list = rte_zmalloc(NULL, item_num * sizeof(*list), 0); > > + if (!list) { > > + rte_flow_error_set(error, EINVAL, > > + RTE_FLOW_ERROR_TYPE_ITEM, actions, > > + "no memory malloc"); > > {RTE_FLOW_ERROR_TYPE_ITEM_NUM,
Re: [dpdk-dev] [EXT] [PATCH] cryptodev: extend api of asymmetric crypto by sessionless
Hi Shally, Thanks for your feedback. > -Original Message- > From: Shally Verma [mailto:shal...@marvell.com] > Sent: Wednesday, June 5, 2019 2:17 PM > To: Kusztal, ArkadiuszX ; dev@dpdk.org > Cc: akhil.go...@nxp.com; Trahe, Fiona ; > shally.ve...@caviumnetworks.com > Subject: RE: [EXT] [PATCH] cryptodev: extend api of asymmetric crypto by > sessionless > > > > > -Original Message- > > From: Arek Kusztal > > Sent: Tuesday, June 4, 2019 1:14 AM > > To: dev@dpdk.org > > Cc: akhil.go...@nxp.com; fiona.tr...@intel.com; > > shally.ve...@caviumnetworks.com; Arek Kusztal > > > > Subject: [EXT] [PATCH] cryptodev: extend api of asymmetric crypto by > > sessionless > > > > External Email > > > > -- > > Asymmetric cryptography algorithms may more likely use sessionless API > > so there is need to extend API. > > > > Signed-off-by: Arek Kusztal > > --- > > lib/librte_cryptodev/rte_crypto_asym.h | 2 ++ > > 1 file changed, 2 insertions(+) > > > > diff --git a/lib/librte_cryptodev/rte_crypto_asym.h > > b/lib/librte_cryptodev/rte_crypto_asym.h > > index 8672f21..5d69692 100644 > > --- a/lib/librte_cryptodev/rte_crypto_asym.h > > +++ b/lib/librte_cryptodev/rte_crypto_asym.h > > @@ -503,6 +503,8 @@ struct rte_crypto_dsa_op_param { struct > > rte_crypto_asym_op { > > struct rte_cryptodev_asym_session *session; > > /**< Handle for the initialised session context */ > > + struct rte_crypto_asym_xform *xform; > > + /**< Session-less API crypto operation parameters */ > > [Shally] Ack to this change. But is this all that is needed to support > sessionless? Do you have working poc with sessionless? > [AK] xform holds to get working. Crypto_op holds sess_type >From our side for now we not intend to store any user information in session >at all. For sure not private keys, any other information is small enough comparing to asymmetric crypto computation time that it has no gain to allocate session for it. > Thanks > Shally > > > > > __extension__ > > union { > > -- > > 2.7.4
Re: [dpdk-dev] [EXT] [PATCH] cryptodev: extend api of asymmetric crypto by sessionless
> -Original Message- > From: Kusztal, ArkadiuszX > Sent: Friday, June 14, 2019 12:21 PM > To: 'Shally Verma' ; dev@dpdk.org > Cc: akhil.go...@nxp.com; Trahe, Fiona ; > shally.ve...@caviumnetworks.com > Subject: RE: [EXT] [PATCH] cryptodev: extend api of asymmetric crypto by > sessionless > > Hi Shally, > > Thanks for your feedback. > > > -Original Message- > > From: Shally Verma [mailto:shal...@marvell.com] > > Sent: Wednesday, June 5, 2019 2:17 PM > > To: Kusztal, ArkadiuszX ; dev@dpdk.org > > Cc: akhil.go...@nxp.com; Trahe, Fiona ; > > shally.ve...@caviumnetworks.com > > Subject: RE: [EXT] [PATCH] cryptodev: extend api of asymmetric crypto > > by sessionless > > > > > > > > > -Original Message- > > > From: Arek Kusztal > > > Sent: Tuesday, June 4, 2019 1:14 AM > > > To: dev@dpdk.org > > > Cc: akhil.go...@nxp.com; fiona.tr...@intel.com; > > > shally.ve...@caviumnetworks.com; Arek Kusztal > > > > > > Subject: [EXT] [PATCH] cryptodev: extend api of asymmetric crypto by > > > sessionless > > > > > > External Email > > > > > > > > > -- Asymmetric cryptography algorithms may more likely use > > > sessionless API so there is need to extend API. > > > > > > Signed-off-by: Arek Kusztal > > > --- > > > lib/librte_cryptodev/rte_crypto_asym.h | 2 ++ > > > 1 file changed, 2 insertions(+) > > > > > > diff --git a/lib/librte_cryptodev/rte_crypto_asym.h > > > b/lib/librte_cryptodev/rte_crypto_asym.h > > > index 8672f21..5d69692 100644 > > > --- a/lib/librte_cryptodev/rte_crypto_asym.h > > > +++ b/lib/librte_cryptodev/rte_crypto_asym.h > > > @@ -503,6 +503,8 @@ struct rte_crypto_dsa_op_param { struct > > > rte_crypto_asym_op { > > > struct rte_cryptodev_asym_session *session; > > > /**< Handle for the initialised session context */ > > > + struct rte_crypto_asym_xform *xform; > > > + /**< Session-less API crypto operation parameters */ > > > > [Shally] Ack to this change. But is this all that is needed to support > > sessionless? Do you have working poc with sessionless? > > > > [AK] > xform holds to get working. Crypto_op holds sess_type From our side for > now we not intend to store any user information in session at all. > For sure not private keys, any other information is small enough comparing > to asymmetric crypto computation time that it has no gain to allocate session > for it. > [AK] Sorry, I had to fix bad writing. rte_crypto_asym_xform holds enough information, and rte_crypto_op holds sess_type. > > > Thanks > > Shally > > > > > > > > __extension__ > > > union { > > > -- > > > 2.7.4
Re: [dpdk-dev] [EXT] [PATCH] cryptodev: extend api of asymmetric crypto by sessionless
> -Original Message- > From: Kusztal, ArkadiuszX > Sent: Friday, June 14, 2019 3:55 PM > To: Shally Verma ; dev@dpdk.org > Cc: akhil.go...@nxp.com; Trahe, Fiona ; > shally.ve...@caviumnetworks.com > Subject: RE: [EXT] [PATCH] cryptodev: extend api of asymmetric crypto by > sessionless > ... > > > [Shally] Ack to this change. But is this all that is needed to > > > support sessionless? Do you have working poc with sessionless? > > > > > > > [AK] > > xform holds to get working. Crypto_op holds sess_type From our side > > for now we not intend to store any user information in session at all. > > For sure not private keys, any other information is small enough > > comparing to asymmetric crypto computation time that it has no gain to > > allocate session for it. > > > [AK] Sorry, I had to fix bad writing. > rte_crypto_asym_xform holds enough information, and rte_crypto_op holds > sess_type. Can you submit example app working on sessionless ? > > > > > Thanks > > > Shally > > > > > > > > > > > __extension__ > > > > union { > > > > -- > > > > 2.7.4
[dpdk-dev] [PATCH v2] eventdev: change Rx adapter callback and stats structure
Replace the mbuf pointer array in the event eth Rx adapter callback with an event array. Using an event array allows the application to change attributes of the events enqueued by the SW adapter. The callback can drop packets and populate a callback argument with the number of dropped packets. Add a Rx adapter stats field to keep track of the total number of dropped packets. Signed-off-by: Nikhil Rao --- lib/librte_eventdev/rte_event_eth_rx_adapter.h | 82 +- lib/librte_eventdev/rte_event_eth_rx_adapter.c | 39 +++- MAINTAINERS| 2 +- doc/guides/rel_notes/release_19_08.rst | 13 +++- lib/librte_eventdev/Makefile | 2 +- lib/librte_eventdev/rte_eventdev_version.map | 4 +- 6 files changed, 80 insertions(+), 62 deletions(-) v1: * add implementation to RFC v2: * Bump library version * Combine patch 1 & 2 into a single patch (single library version bump) * Mention API change in release notes * Remove __rte_experimental tag * Remove EXPERIMENTAL status for eventdev diff --git a/lib/librte_eventdev/rte_event_eth_rx_adapter.h b/lib/librte_eventdev/rte_event_eth_rx_adapter.h index beab870..99b55a8 100644 --- a/lib/librte_eventdev/rte_event_eth_rx_adapter.h +++ b/lib/librte_eventdev/rte_event_eth_rx_adapter.h @@ -66,16 +66,17 @@ * For SW based packet transfers, i.e., when the * RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT is not set in the adapter's * capabilities flags for a particular ethernet device, the service function - * temporarily enqueues mbufs to an event buffer before batch enqueuing these + * temporarily enqueues events to an event buffer before batch enqueuing these * to the event device. If the buffer fills up, the service function stops * dequeuing packets from the ethernet device. The application may want to * monitor the buffer fill level and instruct the service function to - * selectively buffer packets. The application may also use some other + * selectively buffer events. The application may also use some other * criteria to decide which packets should enter the event device even when - * the event buffer fill level is low. The - * rte_event_eth_rx_adapter_cb_register() function allows the - * application to register a callback that selects which packets to enqueue - * to the event device. + * the event buffer fill level is low or may want to enqueue packets to an + * internal event port. The rte_event_eth_rx_adapter_cb_register() function + * allows the application to register a callback that selects which packets are + * enqueued to the event device by the SW adapter. The callback interface is + * event based so the callback can also modify the event data if it needs to. */ #ifdef __cplusplus @@ -173,9 +174,6 @@ struct rte_event_eth_rx_adapter_queue_conf { }; /** - * @warning - * @b EXPERIMENTAL: this API may change without prior notice - * * A structure used to retrieve statistics for an eth rx adapter instance. */ struct rte_event_eth_rx_adapter_stats { @@ -187,6 +185,8 @@ struct rte_event_eth_rx_adapter_stats { /**< Eventdev enqueue count */ uint64_t rx_enq_retry; /**< Eventdev enqueue retry count */ + uint64_t rx_dropped; + /**< Received packet dropped count */ uint64_t rx_enq_start_ts; /**< Rx enqueue start timestamp */ uint64_t rx_enq_block_cycles; @@ -204,16 +204,25 @@ struct rte_event_eth_rx_adapter_stats { }; /** - * @warning - * @b EXPERIMENTAL: this API may change without prior notice * * Callback function invoked by the SW adapter before it continues - * to process packets. The callback is passed the size of the enqueue + * to process events. 
The callback is passed the size of the enqueue * buffer in the SW adapter and the occupancy of the buffer. The - * callback can use these values to decide which mbufs should be - * enqueued to the event device. If the return value of the callback - * is less than nb_mbuf then the SW adapter uses the return value to - * enqueue enq_mbuf[] to the event device. + * callback can use these values to decide which events are + * enqueued to the event device by the SW adapter. The callback may + * also enqueue events internally using its own event port. The SW + * adapter populates the event information based on the Rx queue + * configuration in the adapter. The callback can modify the this event + * information for the events to be enqueued by the SW adapter. + * + * The callback return value is the number of events from the + * beginning of the event array that are to be enqueued by + * the SW adapter. It is the callback's responsibility to arrange + * these events at the beginning of the array, if these events are + * not contiguous in the original array. The *nb_dropped* parameter is + * a pointer to the number of events dropped by the callback, this + * number is used by the adapter to indicate the number of dropped packets + * as part of its st
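As an illustration of the semantics described above, a callback that keeps only
IPv4 packets could look roughly like the sketch below. The exact prototype is in
the part of the header change that is cut off here, so the parameter list is an
assumption based on the description (event array, buffer size and occupancy, user
argument, and the nb_dropped out-parameter), not the committed API:

static uint16_t
drop_non_ipv4_cb(uint16_t eth_dev_id, uint16_t queue_id,
		uint32_t enqueue_buf_size, uint32_t enqueue_buf_count,
		struct rte_event *ev, uint16_t nb_event,
		void *cb_arg, uint16_t *nb_dropped)
{
	uint16_t kept = 0, i;

	RTE_SET_USED(eth_dev_id);
	RTE_SET_USED(queue_id);
	RTE_SET_USED(enqueue_buf_size);
	RTE_SET_USED(enqueue_buf_count);
	RTE_SET_USED(cb_arg);

	/* Keep IPv4 packets and compact them to the front of the array;
	 * the adapter enqueues ev[0..kept-1] and accounts for the rest
	 * in the new rx_dropped counter.
	 */
	for (i = 0; i < nb_event; i++) {
		struct rte_mbuf *m = ev[i].mbuf;

		if (m->packet_type & RTE_PTYPE_L3_IPV4)
			ev[kept++] = ev[i];
		else
			rte_pktmbuf_free(m);
	}

	*nb_dropped = nb_event - kept;
	return kept;
}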
Re: [dpdk-dev] [EXT] [PATCH] cryptodev: extend api of asymmetric crypto by sessionless
> -Original Message- > From: Shally Verma [mailto:shal...@marvell.com] > Sent: Friday, June 14, 2019 1:23 PM > To: Kusztal, ArkadiuszX ; dev@dpdk.org > Cc: akhil.go...@nxp.com; Trahe, Fiona ; > shally.ve...@caviumnetworks.com > Subject: RE: [EXT] [PATCH] cryptodev: extend api of asymmetric crypto by > sessionless > > > > > -Original Message- > > From: Kusztal, ArkadiuszX > > Sent: Friday, June 14, 2019 3:55 PM > > To: Shally Verma ; dev@dpdk.org > > Cc: akhil.go...@nxp.com; Trahe, Fiona ; > > shally.ve...@caviumnetworks.com > > Subject: RE: [EXT] [PATCH] cryptodev: extend api of asymmetric crypto > > by sessionless > > > ... > > > > [Shally] Ack to this change. But is this all that is needed to > > > > support sessionless? Do you have working poc with sessionless? > > > > > > > > > > [AK] > > > xform holds to get working. Crypto_op holds sess_type From our side > > > for now we not intend to store any user information in session at all. > > > For sure not private keys, any other information is small enough > > > comparing to asymmetric crypto computation time that it has no gain > > > to allocate session for it. > > > > > [AK] Sorry, I had to fix bad writing. > > rte_crypto_asym_xform holds enough information, and rte_crypto_op > > holds sess_type. > > Can you submit example app working on sessionless ? [AK] Sure, I will. > > > > > > > > Thanks > > > > Shally > > > > > > > > > > > > > > __extension__ > > > > > union { > > > > > -- > > > > > 2.7.4
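Until that example application is posted, a rough sketch of what a sessionless
asymmetric operation could look like with the proposed field is given below.
Modular exponentiation is chosen only for illustration; the op pool, device
configuration and the mod_data/exp_data buffers are caller-provided placeholders,
and this is not code from the patch itself:

#include <string.h>
#include <rte_crypto.h>
#include <rte_crypto_asym.h>

/* Build a sessionless modexp operation: no session is attached, the xform
 * chain is referenced directly from the op (the field added by this patch).
 */
static struct rte_crypto_op *
build_sessionless_modex_op(struct rte_mempool *op_pool,
		struct rte_crypto_asym_xform *xform,
		uint8_t *mod_data, size_t mod_len,
		uint8_t *exp_data, size_t exp_len)
{
	struct rte_crypto_op *op =
		rte_crypto_op_alloc(op_pool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);

	if (op == NULL)
		return NULL;

	memset(xform, 0, sizeof(*xform));
	xform->xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX;
	xform->modex.modulus.data = mod_data;
	xform->modex.modulus.length = mod_len;
	xform->modex.exponent.data = exp_data;
	xform->modex.exponent.length = exp_len;

	op->sess_type = RTE_CRYPTO_OP_SESSIONLESS;	/* no session attached */
	op->asym->xform = xform;			/* field proposed by this patch */

	/* op->asym->modex.base and .result are then filled in as usual
	 * before rte_cryptodev_enqueue_burst().
	 */
	return op;
}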
[dpdk-dev] [PATCH] test/eal: add ut cases for in-memory and single-file-segment
Added unit test case for eal command line 'in-memory' option which will cover below functions. get_seg_memfd() test_memfd_create() pagesz_flags() Added unit test case for eal command line 'single-file-segments' option which will cover resize_hugefile(). Signed-off-by: Pallantla Poornima --- app/test/test_eal_flags.c | 69 +++ 1 file changed, 69 insertions(+) diff --git a/app/test/test_eal_flags.c b/app/test/test_eal_flags.c index 9112c96d0..2b2cccaec 100644 --- a/app/test/test_eal_flags.c +++ b/app/test/test_eal_flags.c @@ -978,6 +978,7 @@ test_file_prefix(void) *mem mode */ char prefix[PATH_MAX] = ""; + char tmp[PATH_MAX]; #ifdef RTE_EXEC_ENV_FREEBSD return 0; @@ -1010,6 +1011,26 @@ test_file_prefix(void) const char *argv4[] = {prgname, "-c", "1", "-n", "2", "-m", DEFAULT_MEM_SIZE, "--file-prefix=" memtest2 }; + /* primary process with inmemory mode */ + const char * const argv5[] = {prgname, "-c", "1", "-n", "2", "-m", + DEFAULT_MEM_SIZE, "--in-memory" }; + + /* primary process with memtest1 and inmemory mode */ + const char * const argv6[] = {prgname, "-c", "1", "-n", "2", "-m", + DEFAULT_MEM_SIZE, "--in-memory", + "--file-prefix=" memtest1 }; + + snprintf(tmp, PATH_MAX, "--file-prefix=%s", prefix); + + /* primary process with parent file-prefix and inmemory mode */ + const char * const argv7[] = {prgname, "-c", "1", "-n", "2", "-m", + DEFAULT_MEM_SIZE, "--in-memory", tmp}; + + /* primary process with memtest1 and single-file-segments mode */ + const char * const argv8[] = {prgname, "-c", "1", "-n", "2", "-m", + DEFAULT_MEM_SIZE, "--single-file-segments", + "--file-prefix=" memtest1 }; + /* check if files for current prefix are present */ if (process_hugefiles(prefix, HUGEPAGE_CHECK_EXISTS) != 1) { printf("Error - hugepage files for %s were not created!\n", prefix); @@ -1130,6 +1151,54 @@ test_file_prefix(void) return -1; } + /* this process will run in in-memory mode, so it should not leave any +* hugepage files behind. +*/ + + /* test case to check eal-options with --in-memory mode */ + if (launch_proc(argv5) != 0) { + printf("Error - failed to run with in-memory mode\n"); + return -1; + } + + /*test case to check eal-options with --in-memory mode with +* custom file-prefix. +*/ + if (launch_proc(argv6) != 0) { + printf("Error - failed to run with in-memory mode\n"); + return -1; + } + + /* check if hugefiles for memtest1 are present */ + if (process_hugefiles(memtest1, HUGEPAGE_CHECK_EXISTS) != 0) { + printf("Error - hugepage files for %s were created and not deleted!\n", + memtest1); + return -1; + } + + /* test case to check eal-options with --in-memory mode with +* parent file-prefix. +*/ + if (launch_proc(argv7) != 0) { + printf("Error - failed to run with --file-prefix=%s\n", prefix); + return -1; + } + + /* this process will run in single file mode, so it should not leave any +* hugepage files behind. +*/ + if (launch_proc(argv8) != 0) { + printf("Error - failed to run with single-file-segments mode\n"); + return -1; + } + + /* check if hugefiles for memtest1 are present */ + if (process_hugefiles(memtest1, HUGEPAGE_CHECK_EXISTS) != 0) { + printf("Error - hugepage files for %s were not deleted!\n", + memtest1); + return -1; + } + return 0; } -- 2.17.2
[dpdk-dev] device reset handling with igb_uio
Hi,

I have a question on igb_uio.

From the function call traces below, the vfio-pci module frees/allocates the
MSI-X vector table as part of interrupt disable/enable, whereas the igb_uio
module only masks/unmasks the MSI-X interrupt. Does this mean that, when using
igb_uio, the device cannot undergo a reset which clears the MSI-X vector table?
How should device reset be handled with igb_uio?

igb_uio:
rte_intr_disable->uio_intr_disable->igbuio_pci_irqcontrol->pci_msi_mask_irq
rte_intr_enable->uio_intr_enable->igbuio_pci_irqcontrol->pci_msi_unmask_irq
igbuio_pci_open->igbuio_pci_enable_interrupts->pci_alloc_irq_vectors/request_irq
igbuio_pci_release->igbuio_pci_disable_interrupts->free_irq->pci_free_irq_vectors

vfio-pci:
rte_intr_disable->vfio_disable_msix->vfio_pci_ioctl->vfio_msi_disable->pci_free_irq_vectors
rte_intr_enable->vfio_enable_msix->vfio_pci_ioctl->vfio_msi_enable->pci_alloc_irq_vectors/vfio_msi_set_vector_signal->request_irq

Regards,
Santosh
[dpdk-dev] [PATCH] examples: modify error message for ip pipeline
From: Agalya Babu RadhaKrishnan Added help command in error message for ip pipeline commands. Signed-off-by: Agalya Babu RadhaKrishnan --- examples/ip_pipeline/cli.c | 12 ++-- 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/examples/ip_pipeline/cli.c b/examples/ip_pipeline/cli.c index 309b2936e..8a651bbbc 100644 --- a/examples/ip_pipeline/cli.c +++ b/examples/ip_pipeline/cli.c @@ -30,12 +30,12 @@ #define MSG_OUT_OF_MEMORY "Not enough memory.\n" #define MSG_CMD_UNKNOWN "Unknown command \"%s\".\n" -#define MSG_CMD_UNIMPLEM"Command \"%s\" not implemented.\n" -#define MSG_ARG_NOT_ENOUGH "Not enough arguments for command \"%s\".\n" -#define MSG_ARG_TOO_MANY"Too many arguments for command \"%s\".\n" -#define MSG_ARG_MISMATCH"Wrong number of arguments for command \"%s\".\n" -#define MSG_ARG_NOT_FOUND "Argument \"%s\" not found.\n" -#define MSG_ARG_INVALID "Invalid value for argument \"%s\".\n" +#define MSG_CMD_UNIMPLEM"Command \"%s\" not implemented. Try help \n" +#define MSG_ARG_NOT_ENOUGH "Not enough arguments for command \"%s\". Try help \n" +#define MSG_ARG_TOO_MANY"Too many arguments for command \"%s\". Try help \n" +#define MSG_ARG_MISMATCH"Wrong number of arguments for command \"%s\". Try help \n" +#define MSG_ARG_NOT_FOUND "Argument \"%s\" not found. Try help \n" +#define MSG_ARG_INVALID "Invalid value for argument \"%s\". Try help \n" #define MSG_FILE_ERR"Error in file \"%s\" at line %u.\n" #define MSG_FILE_NOT_ENOUGH "Not enough rules in file \"%s\".\n" #define MSG_CMD_FAIL"Command \"%s\" failed.\n" -- 2.14.1
[dpdk-dev] [PATCH 19.08 v3 1/2] net/pcap: use a struct to pass user options
The argument lists on some of the device creation functions are quite large. Using a struct to hold the user options parsed in 'pmd_pcap_probe' will allow for cleaner function calls and definitions. Adding user options will also be easier. Signed-off-by: Cian Ferriter --- drivers/net/pcap/rte_eth_pcap.c | 51 ++--- 1 file changed, 35 insertions(+), 16 deletions(-) diff --git a/drivers/net/pcap/rte_eth_pcap.c b/drivers/net/pcap/rte_eth_pcap.c index 10277b9b6..c35f501cf 100644 --- a/drivers/net/pcap/rte_eth_pcap.c +++ b/drivers/net/pcap/rte_eth_pcap.c @@ -101,6 +101,14 @@ struct pmd_devargs { int phy_mac; }; +struct pmd_devargs_all { + struct pmd_devargs rx_queues; + struct pmd_devargs tx_queues; + int single_iface; + unsigned int is_tx_pcap; + unsigned int is_tx_iface; +}; + static const char *valid_arguments[] = { ETH_PCAP_RX_PCAP_ARG, ETH_PCAP_TX_PCAP_ARG, @@ -1061,11 +1069,14 @@ eth_pcap_update_mac(const char *if_name, struct rte_eth_dev *eth_dev, static int eth_from_pcaps_common(struct rte_vdev_device *vdev, - struct pmd_devargs *rx_queues, const unsigned int nb_rx_queues, - struct pmd_devargs *tx_queues, const unsigned int nb_tx_queues, + struct pmd_devargs_all *devargs_all, struct pmd_internals **internals, struct rte_eth_dev **eth_dev) { struct pmd_process_private *pp; + struct pmd_devargs *rx_queues = &devargs_all->rx_queues; + struct pmd_devargs *tx_queues = &devargs_all->tx_queues; + const unsigned int nb_rx_queues = rx_queues->num_of_queue; + const unsigned int nb_tx_queues = tx_queues->num_of_queue; unsigned int i; /* do some parameter checking */ @@ -1103,16 +1114,15 @@ eth_from_pcaps_common(struct rte_vdev_device *vdev, static int eth_from_pcaps(struct rte_vdev_device *vdev, - struct pmd_devargs *rx_queues, const unsigned int nb_rx_queues, - struct pmd_devargs *tx_queues, const unsigned int nb_tx_queues, - int single_iface, unsigned int using_dumpers) + struct pmd_devargs_all *devargs_all) { struct pmd_internals *internals = NULL; struct rte_eth_dev *eth_dev = NULL; + struct pmd_devargs *rx_queues = &devargs_all->rx_queues; + int single_iface = devargs_all->single_iface; int ret; - ret = eth_from_pcaps_common(vdev, rx_queues, nb_rx_queues, - tx_queues, nb_tx_queues, &internals, ð_dev); + ret = eth_from_pcaps_common(vdev, devargs_all, &internals, ð_dev); if (ret < 0) return ret; @@ -1134,7 +1144,8 @@ eth_from_pcaps(struct rte_vdev_device *vdev, eth_dev->rx_pkt_burst = eth_pcap_rx; - if (using_dumpers) + /* Assign tx ops. 
*/ + if (devargs_all->is_tx_pcap) eth_dev->tx_pkt_burst = eth_pcap_tx_dumper; else eth_dev->tx_pkt_burst = eth_pcap_tx; @@ -1147,15 +1158,20 @@ static int pmd_pcap_probe(struct rte_vdev_device *dev) { const char *name; - unsigned int is_rx_pcap = 0, is_tx_pcap = 0; + unsigned int is_rx_pcap = 0; struct rte_kvargs *kvlist; struct pmd_devargs pcaps = {0}; struct pmd_devargs dumpers = {0}; struct rte_eth_dev *eth_dev = NULL; struct pmd_internals *internal; - int single_iface = 0; int ret; + struct pmd_devargs_all devargs_all = { + .single_iface = 0, + .is_tx_pcap = 0, + .is_tx_iface = 0, + }; + name = rte_vdev_device_name(dev); PMD_LOG(INFO, "Initializing pmd_pcap for %s", name); @@ -1202,7 +1218,7 @@ pmd_pcap_probe(struct rte_vdev_device *dev) dumpers.phy_mac = pcaps.phy_mac; - single_iface = 1; + devargs_all.single_iface = 1; pcaps.num_of_queue = 1; dumpers.num_of_queue = 1; @@ -1231,10 +1247,11 @@ pmd_pcap_probe(struct rte_vdev_device *dev) * We check whether we want to open a TX stream to a real NIC or a * pcap file */ - is_tx_pcap = rte_kvargs_count(kvlist, ETH_PCAP_TX_PCAP_ARG) ? 1 : 0; + devargs_all.is_tx_pcap = + rte_kvargs_count(kvlist, ETH_PCAP_TX_PCAP_ARG) ? 1 : 0; dumpers.num_of_queue = 0; - if (is_tx_pcap) + if (devargs_all.is_tx_pcap) ret = rte_kvargs_process(kvlist, ETH_PCAP_TX_PCAP_ARG, &open_tx_pcap, &dumpers); else @@ -1276,7 +1293,7 @@ pmd_pcap_probe(struct rte_vdev_device *dev) eth_dev->process_private = pp; eth_dev->rx_pkt_burst = eth_pcap_rx; - if (is_tx_pcap) + if (devargs_all.is_tx_pcap) eth_dev->tx_pkt_burst = eth_pcap_tx_dumper; else eth_dev->tx_pkt_burst =
[dpdk-dev] [PATCH 19.08 v3 2/2] net/pcap: enable infinitely rxing a pcap file
It can be useful to use pcap files for some rudimental performance testing. This patch enables this functionality in the pcap driver. At a high level, this works by creating a ring of sufficient size to store the packets in the pcap file passed to the application. When the rx function for this mode is called, packets are dequeued from the ring for use by the application and also enqueued back on to the ring to be "received" again. A tx_drop mode is also added since transmitting to a tx_pcap file isn't desirable at a high traffic rate. Jumbo frames are not supported in this mode. When filling the ring at rx queue setup time, the presence of multi segment mbufs is checked for. The PMD will exit on detection of these multi segment mbufs. Signed-off-by: Cian Ferriter --- v3 changes: * Update PCAP docs: * State that 'infinite_rx' should only be provided once per device. * Drop use of tx_drop and mention its use through not providing a txq. * Remove the tx_drop option and related args processing. * Notify user when tx_drop is being used. * Change infinite_rx from an 'int' to 'unsigned int' in pmd_internals struct. * Add pmd_devargs_all struct to pass args from pmd_probe to ethdev creation. * Improve args parsing of infinite_rx so -1 doesn't enable feature. * Change order of tx ops assignment so tx_drop is default case. * Move args parsing of infinite_rx inside 'is_rx_pcap' check. * Only notify user of 'infinite_rx' state if arg has been passed on cmdline. * Call eth_dev_close for 'infinite_rx' cleanup rather than duplicating code. doc/guides/nics/pcap_ring.rst | 20 +++ drivers/net/pcap/rte_eth_pcap.c | 234 ++-- 2 files changed, 246 insertions(+), 8 deletions(-) diff --git a/doc/guides/nics/pcap_ring.rst b/doc/guides/nics/pcap_ring.rst index c1ef9196b..a2e9523b3 100644 --- a/doc/guides/nics/pcap_ring.rst +++ b/doc/guides/nics/pcap_ring.rst @@ -106,6 +106,26 @@ Runtime Config Options --vdev 'net_pcap0,iface=eth0,phy_mac=1' +- Use the RX PCAP file to infinitely receive packets + + In case ``rx_pcap=`` configuration is set, user may want to use the selected PCAP file for rudimental + performance testing. This can be done with a ``devarg`` ``infinite_rx``, for example:: + + --vdev 'net_pcap0,rx_pcap=file_rx.pcap,infinite_rx=1,tx_drop=1' + + When this mode is used, it is recommended to drop all packets on transmit by not providing a tx_pcap or tx_iface. + + This option is device wide, so all queues on a device will either have this enabled or disabled. + This option should only be provided once per device. + +- Drop all packets on transmit + + The user may want to drop all packets on tx for a device. This can be done by not providing a tx_pcap or tx_iface, for example:: + + --vdev 'net_pcap0,rx_pcap=file_rx.pcap' + + In this case, one tx drop queue is created for each rxq on that device.
+ Examples of Usage ^ diff --git a/drivers/net/pcap/rte_eth_pcap.c b/drivers/net/pcap/rte_eth_pcap.c index c35f501cf..b5bd800f0 100644 --- a/drivers/net/pcap/rte_eth_pcap.c +++ b/drivers/net/pcap/rte_eth_pcap.c @@ -39,6 +39,7 @@ #define ETH_PCAP_TX_IFACE_ARG "tx_iface" #define ETH_PCAP_IFACE_ARG"iface" #define ETH_PCAP_PHY_MAC_ARG "phy_mac" +#define ETH_PCAP_INFINITE_RX_ARG "infinite_rx" #define ETH_PCAP_ARG_MAXLEN64 @@ -64,6 +65,9 @@ struct pcap_rx_queue { struct queue_stat rx_stat; char name[PATH_MAX]; char type[ETH_PCAP_ARG_MAXLEN]; + + /* Contains pre-generated packets to be looped through */ + struct rte_ring *pkts; }; struct pcap_tx_queue { @@ -82,6 +86,7 @@ struct pmd_internals { int if_index; int single_iface; int phy_mac; + unsigned int infinite_rx; }; struct pmd_process_private { @@ -107,6 +112,7 @@ struct pmd_devargs_all { int single_iface; unsigned int is_tx_pcap; unsigned int is_tx_iface; + unsigned int infinite_rx; }; static const char *valid_arguments[] = { @@ -117,6 +123,7 @@ static const char *valid_arguments[] = { ETH_PCAP_TX_IFACE_ARG, ETH_PCAP_IFACE_ARG, ETH_PCAP_PHY_MAC_ARG, + ETH_PCAP_INFINITE_RX_ARG, NULL }; @@ -186,6 +193,43 @@ eth_pcap_gather_data(unsigned char *data, struct rte_mbuf *mbuf) } } +static uint16_t +eth_pcap_rx_infinite(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) +{ + int i; + struct pcap_rx_queue *pcap_q = queue; + uint32_t rx_bytes = 0; + + if (unlikely(nb_pkts == 0)) + return 0; + + if (rte_pktmbuf_alloc_bulk(pcap_q->mb_pool, bufs, nb_pkts) != 0) + return 0; + + for (i = 0; i < nb_pkts; i++) { + struct rte_mbuf *pcap_buf; + int err = rte_ring_dequeue(pcap_q->pkts, (void **)&pcap_buf); + if (err) + return i; + + rte_memcpy(rte_pktmbuf_mtod(bufs[i], void *), + rte_pktmbuf_mtod(pcap_buf, void *), +
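For readers following the receive path described in the commit message above, a simplified, self-contained sketch of the dequeue/copy/re-enqueue idea is shown below. It is only an illustration of the technique, not the actual pcap PMD code: the real implementation also maintains per-queue statistics, and the re-enqueue of the template mbuf is taken from the description in the commit message rather than from the (truncated) diff.

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_ring.h>
#include <rte_memcpy.h>

/* Sketch: a ring holds pre-parsed "template" mbufs from the pcap file;
 * each poll copies a template into a freshly allocated mbuf and puts the
 * template back so the file is replayed forever.
 */
static uint16_t
infinite_rx_sketch(struct rte_mempool *pool, struct rte_ring *templates,
		   struct rte_mbuf **bufs, uint16_t nb_pkts)
{
	uint16_t i;

	if (rte_pktmbuf_alloc_bulk(pool, bufs, nb_pkts) != 0)
		return 0;

	for (i = 0; i < nb_pkts; i++) {
		struct rte_mbuf *tmpl;

		if (rte_ring_dequeue(templates, (void **)&tmpl) != 0)
			break;

		/* copy the stored packet into the mbuf handed to the app */
		rte_memcpy(rte_pktmbuf_mtod(bufs[i], void *),
			   rte_pktmbuf_mtod(tmpl, void *),
			   tmpl->data_len);
		bufs[i]->data_len = tmpl->data_len;
		bufs[i]->pkt_len = tmpl->pkt_len;

		/* put the template back so it can be "received" again */
		rte_ring_enqueue(templates, tmpl);
	}

	/* free any mbufs left unused if the ring ran dry */
	while (i < nb_pkts)
		rte_pktmbuf_free(bufs[--nb_pkts]);

	return i;
}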
Re: [dpdk-dev] [PATCH 19.08 v2] net/pcap: enable infinitely rxing a pcap file
Hi Ferruh, Thanks for the review. I've send a v3 and responded to your comments below. Thanks, Cian > -Original Message- > From: Yigit, Ferruh > Sent: 12 June 2019 15:10 > To: Ferriter, Cian ; Richardson, Bruce > ; Mcnamara, John > ; Kovacevic, Marko > > Cc: dev@dpdk.org > Subject: Re: [PATCH 19.08 v2] net/pcap: enable infinitely rxing a pcap file > > On 6/5/2019 1:46 PM, Ferriter, Cian wrote: > > Adding in my changelog at the top of this, since I forgot to add it in the > original mail: > > > > v2: > > * Rework the method of filling the ring to infinitely rx from > > * Avoids potential huge allocation of mbufs > > * Removes double allocation of mbufs used during queue setup > > * rename count_packets_in_pcaps to count_packets_in_pcap > > * initialize pcap_pkt_count in count_packets_in_pcap > > * use RTE_PMD_REGISTER_PARAM_STRING <0|1> rather than > > * replace calls to rte_panic with proper error returning > > * count rx and tx stat bytes in pcap_rx_infinite and tx_drop > > * make internals->infinite_rx = infinite_rx assignment unconditional > > * add cleanup for infinite_rx in eth_dev_close and pmd_pcap_remove > > * add cleanup when multi seg mbufs are found > > * add some clarifications to the documentation update > > > >> -Original Message- > >> From: Ferriter, Cian > >> Sent: 05 June 2019 12:56 > >> To: Richardson, Bruce ; Yigit, Ferruh > >> ; Mcnamara, John ; > >> Kovacevic, Marko > >> Cc: dev@dpdk.org; Ferriter, Cian > >> Subject: [PATCH 19.08 v2] net/pcap: enable infinitely rxing a pcap > >> file > >> > >> It can be useful to use pcap files for some rudimental performance > testing. > >> This patch enables this functionality in the pcap driver. > >> > >> At a high level, this works by creaing a ring of sufficient size to > >> store the packets in the pcap file passed to the application. When > >> the rx function for this mode is called, packets are dequeued from > >> the ring for use by the application and also enqueued back on to the ring > to be "received" again. > >> > >> A tx_drop mode is also added since transmitting to a tx_pcap file > >> isn't desirable at a high traffic rate. > >> > >> Jumbo frames are not supported in this mode. When filling the ring at > >> rx queue setup time, the presence of multi segment mbufs is checked > for. > >> The PMD will exit on detection of these multi segment mbufs. > >> > >> Signed-off-by: Cian Ferriter > >> --- > >> doc/guides/nics/pcap_ring.rst | 19 +++ > >> drivers/net/pcap/rte_eth_pcap.c | 268 > >> ++-- > >> 2 files changed, 277 insertions(+), 10 deletions(-) > >> > >> diff --git a/doc/guides/nics/pcap_ring.rst > >> b/doc/guides/nics/pcap_ring.rst index c1ef9196b..b272e6fe3 100644 > >> --- a/doc/guides/nics/pcap_ring.rst > >> +++ b/doc/guides/nics/pcap_ring.rst > >> @@ -106,6 +106,25 @@ Runtime Config Options > >> > >> --vdev 'net_pcap0,iface=eth0,phy_mac=1' > >> > >> +- Use the RX PCAP file to infinitely receive packets > >> + > >> + In case ``rx_pcap=`` configuration is set, user may want to use the > >> + selected PCAP file for rudimental performance testing. This can be > >> + done > >> with a ``devarg`` ``infinite_rx``, for example:: > >> + > >> + --vdev 'net_pcap0,rx_pcap=file_rx.pcap,infinite_rx=1,tx_drop=1' > > Can be good to highlight that this flag is not per queue, but should be > provided once (explictly once since code checks it) per Rx. > Added to the docs in the next version. > >> + > >> + When this mode is used, it is recommended to use the ``tx_drop`` > >> ``devarg``. 
> >> + > >> + This option is device wide, so all queues on a device will either > >> + have this > >> enabled or disabled. > >> + > >> +- Drop all packets on transmit > >> + > >> + The user may want to drop all packets on tx for a device. This can > >> + be done > >> with the ``tx_drop`` ``devarg``, for example:: > >> + > >> + --vdev 'net_pcap0,rx_pcap=file_rx.pcap,tx_drop=1' > >> + > >> + One tx drop queue is created for each rxq on that device. > > Can we drop the ``tx_drop`` completely? > > What happens when no 'tx_pcap' or 'tx_iface' provided at all, to imply the > tx_drop? > This sound like nice default behavior to have. I've updated the latest version to implement this and I've removed the tx_drop args parsing and related doc section. > <...> > > >> @@ -1105,7 +1290,8 @@ static int > >> eth_from_pcaps(struct rte_vdev_device *vdev, > >>struct pmd_devargs *rx_queues, const unsigned int > nb_rx_queues, > >>struct pmd_devargs *tx_queues, const unsigned int > nb_tx_queues, > >> - int single_iface, unsigned int using_dumpers) > >> + int single_iface, unsigned int using_dumpers, > >> + unsigned int infinite_rx, unsigned int tx_drop) > > > The argument list is keep increasing. What happens is 'pmd_pcap_probe()' > processes the user input (devargs) and passes the processed output to this > function to create ethdev. > What do you think g
Re: [dpdk-dev] [PATCH] eal: resort symbols in EXPERIMENTAL section
On 6/14/2019 8:44 AM, David Marchand wrote: > On Fri, Jun 14, 2019 at 9:39 AM Thomas Monjalon wrote: > >> 06/04/2019 05:30, Stephen Hemminger: >>> The symbols in the EXPERIMENTAL were close to alphabetic >>> order but running sort showed several mistakes. >>> >>> This has no impact on code, API, ABI or otherwise. >>> Purely for humans. >>> >>> Signed-off-by: Stephen Hemminger >> >> I don't think it's worth adding a layer of git history for this sort. >> I would prefer to leave it as is. >> >> > If this is about preference, I would prefer we have those symbols sorted > per versions that introduced them ;-). > Much easier to check and see if they are candidates for entering stable ABI. > Not a bad idea, +1 from my side :-)
Re: [dpdk-dev] [PATCH v5 0/8] bnxt patchset
On 6/4/2019 5:07 PM, Ferruh Yigit wrote: > On 5/29/2019 10:02 PM, Lance Richardson wrote: >> This patchset adds the following: >> 1) Support for vector mode TX and RX. >> 2) HWRM API update (split into multiple patches). >> 3) Fixes for RSS reta update and query. >> >> It also updates the release notes. >> >> v2: >> * Squashed patches 3 and 4 from v1 patchset. >> * Added Meson build support for vector mode PMD. >> * Dropped two unnecessary code style changes from patch 3. >> >> v3: >> * Squashed three RSS RETA fix patches into one. >> * Eliminated RTE_LIBRTE_BNXT_INC_VECTOR configuration flag. >> * Addressed commit log issues from check-git-log.sh. >> * Made subject line consistent for HWRM API update patches. >> * Separated release notes updates to keep update with associated >> patch (except for the first update for which the associated patch >> has already been applied). >> >> v4: >> * The change squashed from patch 4 into patch 3 from the v1 >> patchset should have been squashed into patch 2; fixed in v4. >> >> v5: >> * Corrected commit IDs and added Fixes: tags to patch 01. >> >> Ajit Khaparde (5): >> net/bnxt: update release notes for bnxt >> net/bnxt: fix RSS RETA indirection table ops >> net/bnxt: update HWRM API to version 1.10.0.19 >> net/bnxt: update HWRM API to version 1.10.0.48 >> net/bnxt: update HWRM API to version 1.10.0.74 >> >> Lance Richardson (3): >> net/bnxt: move Tx bd checking to header file >> net/bnxt: compute and store scattered Rx status >> net/bnxt: implement vector mode driver > > Series applied to dpdk-next-net/master, thanks. > There were some checkpatch typo warnings, fixed them in next-net before merged into the master, FYI. Please double check the final commits, changes: net/bnxt: update HWRM API to version 1.10.0.19 WARNING:TYPO_SPELLING: 'preceed' may be misspelled - perhaps 'precede'? WARNING:TYPO_SPELLING: 'occured' may be misspelled - perhaps 'occurred'? WARNING:TYPO_SPELLING: 'auxillary' may be misspelled - perhaps 'auxiliary'? net/bnxt: update HWRM API to version 1.10.0.48 WARNING:TYPO_SPELLING: 'occured' may be misspelled - perhaps 'occurred'?
Re: [dpdk-dev] [PATCH v4 07/11] net/hinic/base: add various headers
On 6/12/2019 3:24 PM, Xuanziyang (William, Chip Application Design Logic and Hardware Development Dept IT_Products & Solutions) wrote: > >> On 6/6/2019 12:06 PM, Ziyang Xuan wrote: >>> Add various headers that define mgmt commands, cmdq commands, rx >> data >>> structures, tx data structures and basic defines for use in the code. >>> >>> Signed-off-by: Ziyang Xuan >> >> <...> >> >>> +#define PMD_DRV_LOG(level, fmt, args...) \ >>> + rte_log(RTE_LOG_ ## level, hinic_logtype, \ >>> + HINIC_DRIVER_NAME": " fmt "\n", ##args) >>> + >>> +#define HINIC_ASSERT_EN >>> + >>> +#ifdef HINIC_ASSERT_EN >>> +#define HINIC_ASSERT(exp) \ >>> + do {\ >>> + if (!(exp)) { \ >>> + rte_panic("line%d\tassert \"" #exp "\" failed\n", \ >>> + __LINE__);\ >>> + } \ >>> + } while (0) >>> +#else >>> +#define HINIC_ASSERT(exp) do {} while (0) >>> +#endif >> >> So you are enabling asserting by default? Which can cause "rte_panic()" ? >> >> Please make sure asserting is disabled by default, and please tie this to the >> "CONFIG_RTE_ENABLE_ASSERT" config option. So it that option is disabled >> hinic also should disable the assertions. > > I checked the places where use rte_panic, most of them can use code logic to > guarantee correctness. And I have referenced other PMDs like mlx5, they use > rte_panic directly but use custom encapsulation, so I delete custom > encapsulation above and the most rte_panic usage, and use directly like mlx5. > > Is it OK? > Also does it make enable 'HINIC_ASSERT' when global 'CONFIG_RTE_ENABLE_ASSERT' config enabled?
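As a reference for the request above, one possible way to tie a driver-local assert to the EAL-wide switch is sketched below; it defers to the helpers in rte_debug.h, so the assert compiles away unless CONFIG_RTE_ENABLE_ASSERT (which defines RTE_ENABLE_ASSERT) is enabled. This is an illustrative sketch, not the hinic code:

#include <rte_debug.h>

/* Illustrative sketch: gate the driver assert on the EAL-wide option.
 * RTE_VERIFY() calls rte_panic() on failure; with RTE_ENABLE_ASSERT
 * undefined the macro expands to nothing, so release builds cannot
 * panic from an assert. Reusing RTE_ASSERT(exp) directly is equivalent.
 */
#ifdef RTE_ENABLE_ASSERT
#define HINIC_ASSERT(exp) RTE_VERIFY(exp)
#else
#define HINIC_ASSERT(exp) do { } while (0)
#endif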
Re: [dpdk-dev] [dpdk-stable] [PATCH] app/testpmd: fix offloads overwrite by default configuration
On 6/12/2019 2:17 AM, Zhao1, Wei wrote: > > >> -Original Message- >> From: Yigit, Ferruh >> Sent: Tuesday, June 11, 2019 10:37 PM >> To: Zhao1, Wei ; dev@dpdk.org >> Cc: sta...@dpdk.org; Peng, Yuan ; Lu, Wenzhuo >> ; Kevin Traynor >> Subject: Re: [dpdk-stable] [PATCH] app/testpmd: fix offloads overwrite by >> default configuration >> >> On 5/9/2019 8:20 AM, Wei Zhao wrote: >>> There is an error in function rxtx_port_config(), which may overwrite >>> offloads configuration get from function launch_args_parse() when run >>> testpmd app. So rxtx_port_config() should do "or" for port offloads. >>> >>> Fixes: d44f8a485f5d ("app/testpmd: enable per queue configure") >>> cc: sta...@dpdk.org >>> >>> Signed-off-by: Wei Zhao >>> --- >>> app/test-pmd/testpmd.c | 5 + >>> 1 file changed, 5 insertions(+) >>> >>> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index >>> 6fbfd29..f0061d9 100644 >>> --- a/app/test-pmd/testpmd.c >>> +++ b/app/test-pmd/testpmd.c >>> @@ -2809,9 +2809,12 @@ static void >>> rxtx_port_config(struct rte_port *port) { >>> uint16_t qid; >>> + uint64_t offloads; >>> >>> for (qid = 0; qid < nb_rxq; qid++) { >>> + offloads = port->rx_conf[qid].offloads; >>> port->rx_conf[qid] = port->dev_info.default_rxconf; >>> + port->rx_conf[qid].offloads |= offloads; >> >> While talking with Kevin, he pointed out the error in this code. >> >> We are updating queue level offloads, with whatever in the 'offloads' and it >> can >> be non-queue level offloads in it, next time ethdev API called these values >> are >> caught by the API checks and causing an error. >> >> It looks like port level offload flags needs to be masted out before writing >> to >> queue level 'offloads' variable. > > > By the way, this error in not introduced in this patch, it seems has exist > long before this patch. > This patch is just fix for overwrite problem. I disagree, writing 'offloads' to "rx_conf[].offloads" without checking if they queue offloads or not causing this problem. And that write introduced in this patch. > > > >> >>> >>> /* Check if any Rx parameters have been passed */ >>> if (rx_pthresh != RTE_PMD_PARAM_UNSET) @@ -2833,7 >> +2836,9 @@ >>> rxtx_port_config(struct rte_port *port) >>> } >>> >>> for (qid = 0; qid < nb_txq; qid++) { >>> + offloads = port->tx_conf[qid].offloads; >>> port->tx_conf[qid] = port->dev_info.default_txconf; >>> + port->tx_conf[qid].offloads |= offloads; >>> >>> /* Check if any Tx parameters have been passed */ >>> if (tx_pthresh != RTE_PMD_PARAM_UNSET) >>> >
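For illustration, the masking described above could be expressed as in the sketch below: only the flags the device reports as per-queue configurable are restored into the queue configuration, so port-level-only offloads cannot trip the ethdev validation. The helper name is hypothetical; this is a sketch of the idea, not the committed testpmd fix.

#include <stdint.h>
#include <rte_ethdev.h>

/* Hypothetical helper: given the offloads previously stored for a queue
 * and the device capabilities, keep only the bits that are valid at
 * queue level (rx_queue_offload_capa), dropping port-level-only flags.
 */
static uint64_t
queue_level_rx_offloads(uint64_t saved_offloads,
			const struct rte_eth_dev_info *dev_info)
{
	return saved_offloads & dev_info->rx_queue_offload_capa;
}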
[dpdk-dev] [PATCH v6] baseband/fpga_lte_fec: adding driver for FEC on FPGA
Update v6: Update of one copyright date in documentation file. Update v5: Update date and version from dpdk review. Rebased to latest. Update v4: Fix warning for the DEBUG configuration. Update v3: Squashing 3 previous patches into one as recommended. This is adding a new PMD driver for BBDEV device based on FPGA implementation on PAC N3000 HW to provide FEC 4G acceleration. v1 was shared earlier on this patchwork : https://patches.dpdk.org/patch/53409/ The existing BBDEV test framework is still supported. Main updates based on feedback during the last v1 were: - correct the documentation which was arguably misleading in hinting that ICC was required. There is no dependency whatsoever. - add meson build support Note that a number of other BBDEV patches are in v1 here and will be resubmitted for 19.08 rc1 https://patches.dpdk.org/project/dpdk/list/?series=4657 Nicolas Chautru (1): baseband/fpga_lte_fec: adding driver for FEC on FPGA config/common_base |6 + doc/guides/bbdevs/fpga_lte_fec.rst | 318 +++ doc/guides/bbdevs/index.rst|1 + drivers/baseband/Makefile |2 + drivers/baseband/fpga_lte_fec/Makefile | 29 + drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 2674 drivers/baseband/fpga_lte_fec/fpga_lte_fec.h | 73 + drivers/baseband/fpga_lte_fec/meson.build |7 + .../rte_pmd_bbdev_fpga_lte_fec_version.map |3 + drivers/baseband/meson.build |2 +- mk/rte.app.mk |1 + 11 files changed, 3115 insertions(+), 1 deletion(-) create mode 100644 doc/guides/bbdevs/fpga_lte_fec.rst create mode 100644 drivers/baseband/fpga_lte_fec/Makefile create mode 100644 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c create mode 100644 drivers/baseband/fpga_lte_fec/fpga_lte_fec.h create mode 100644 drivers/baseband/fpga_lte_fec/meson.build create mode 100644 drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map -- 1.8.3.1
[dpdk-dev] [PATCH v5] baseband/fpga_lte_fec: adding driver for FEC on FPGA
Supports for FEC 4G PMD Driver on FPGA card PAC N3000 Signed-off-by: Nicolas Chautru --- config/common_base |6 + doc/guides/bbdevs/fpga_lte_fec.rst | 318 +++ doc/guides/bbdevs/index.rst|1 + drivers/baseband/Makefile |2 + drivers/baseband/fpga_lte_fec/Makefile | 29 + drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 2674 drivers/baseband/fpga_lte_fec/fpga_lte_fec.h | 73 + drivers/baseband/fpga_lte_fec/meson.build |7 + .../rte_pmd_bbdev_fpga_lte_fec_version.map |3 + drivers/baseband/meson.build |2 +- mk/rte.app.mk |1 + 11 files changed, 3115 insertions(+), 1 deletion(-) create mode 100644 doc/guides/bbdevs/fpga_lte_fec.rst create mode 100644 drivers/baseband/fpga_lte_fec/Makefile create mode 100644 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c create mode 100644 drivers/baseband/fpga_lte_fec/fpga_lte_fec.h create mode 100644 drivers/baseband/fpga_lte_fec/meson.build create mode 100644 drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map diff --git a/config/common_base b/config/common_base index 6f19ad5..9632a07 100644 --- a/config/common_base +++ b/config/common_base @@ -521,6 +521,7 @@ CONFIG_RTE_PMD_PACKET_PREFETCH=y # EXPERIMENTAL: API may change without prior notice # CONFIG_RTE_LIBRTE_BBDEV=y +CONFIG_RTE_LIBRTE_BBDEV_DEBUG=n CONFIG_RTE_BBDEV_MAX_DEVS=128 CONFIG_RTE_BBDEV_OFFLOAD_COST=y @@ -535,6 +536,11 @@ CONFIG_RTE_LIBRTE_PMD_BBDEV_NULL=y CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW=n # +# Compile PMD for Intel FPGA LTE FEC bbdev device +# +CONFIG_RTE_LIBRTE_PMD_FPGA_LTE_FEC=y + +# # Compile generic crypto device library # CONFIG_RTE_LIBRTE_CRYPTODEV=y diff --git a/doc/guides/bbdevs/fpga_lte_fec.rst b/doc/guides/bbdevs/fpga_lte_fec.rst new file mode 100644 index 000..71b058c --- /dev/null +++ b/doc/guides/bbdevs/fpga_lte_fec.rst @@ -0,0 +1,318 @@ +.. SPDX-License-Identifier: BSD-3-Clause +Copyright(c) 2018 Intel Corporation + +Intel(R) FPGA LTE FEC Poll Mode Driver +== + +The BBDEV FPGA LTE FEC poll mode driver (PMD) supports an FPGA implementation of a VRAN +Turbo Encode / Decode LTE wireless acceleration function, using Intel's PCI-e and FPGA +based Vista Creek device. + +Features + + +FPGA LTE FEC PMD supports the following features: + +- Turbo Encode in the DL with total throughput of 4.5 Gbits/s +- Turbo Decode in the UL with total throughput of 1.5 Gbits/s assuming 8 decoder iterations +- 8 VFs per PF (physical device) +- Maximum of 32 UL queues per VF +- Maximum of 32 DL queues per VF +- PCIe Gen-3 x8 Interface +- MSI-X +- SR-IOV + + +FPGA LTE FEC PMD supports the following BBDEV capabilities: + +* For the turbo encode operation: + - ``RTE_BBDEV_TURBO_CRC_24B_ATTACH`` : set to attach CRC24B to CB(s) + - ``RTE_BBDEV_TURBO_RATE_MATCH`` : if set then do not do Rate Match bypass + - ``RTE_BBDEV_TURBO_ENC_INTERRUPTS`` : set for encoder dequeue interrupts + + +* For the turbo decode operation: + - ``RTE_BBDEV_TURBO_CRC_TYPE_24B`` : check CRC24B from CB(s) + - ``RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE`` : perform subblock de-interleave + - ``RTE_BBDEV_TURBO_DEC_INTERRUPTS`` : set for decoder dequeue interrupts + - ``RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN`` : set if negative LLR encoder i/p is supported + - ``RTE_BBDEV_TURBO_DEC_TB_CRC_24B_KEEP`` : keep CRC24B bits appended while decoding + + +Limitations +--- + +FPGA LTE FEC does not support the following: + +- Scatter-Gather function + + +Installation +-- + +Section 3 of the DPDK manual provides instuctions on installing and compiling DPDK. 
The +default set of bbdev compile flags may be found in config/common_base, where for example +the flag to build the FPGA LTE FEC device, ``CONFIG_RTE_LIBRTE_PMD_FPGA_LTE_FEC``, is already +set. It is assumed DPDK has been compiled using for instance: + +.. code-block:: console + + make install T=x86_64-native-linuxapp-gcc + + +DPDK requires hugepages to be configured as detailed in section 2 of the DPDK manual. +The bbdev test application has been tested with a configuration 40 x 1GB hugepages. The +hugepage configuration of a server may be examined using: + +.. code-block:: console + + grep Huge* /proc/meminfo + + +Initialization +-- + +When the device first powers up, its PCI Physical Functions (PF) can be listed through this command: + +.. code-block:: console + + sudo lspci -vd1172:5052 + +The physical and virtual functions are compatible with Linux UIO drivers: +``vfio`` and ``igb_uio``. However, in order to work the FPGA LTE FEC device firstly needs +to be bound to one of these linux drivers through DPDK. + + +Bind PF UIO driver(s) +~ + +Install the DPDK igb_uio driver, bind it with th
[dpdk-dev] [PATCH v6] baseband/fpga_lte_fec: adding driver for FEC on FPGA
Supports for FEC 4G PMD Driver on FPGA card PAC N3000 Signed-off-by: Nicolas Chautru --- config/common_base |6 + doc/guides/bbdevs/fpga_lte_fec.rst | 318 +++ doc/guides/bbdevs/index.rst|1 + drivers/baseband/Makefile |2 + drivers/baseband/fpga_lte_fec/Makefile | 29 + drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 2674 drivers/baseband/fpga_lte_fec/fpga_lte_fec.h | 73 + drivers/baseband/fpga_lte_fec/meson.build |7 + .../rte_pmd_bbdev_fpga_lte_fec_version.map |3 + drivers/baseband/meson.build |2 +- mk/rte.app.mk |1 + 11 files changed, 3115 insertions(+), 1 deletion(-) create mode 100644 doc/guides/bbdevs/fpga_lte_fec.rst create mode 100644 drivers/baseband/fpga_lte_fec/Makefile create mode 100644 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c create mode 100644 drivers/baseband/fpga_lte_fec/fpga_lte_fec.h create mode 100644 drivers/baseband/fpga_lte_fec/meson.build create mode 100644 drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map diff --git a/config/common_base b/config/common_base index 6f19ad5..9632a07 100644 --- a/config/common_base +++ b/config/common_base @@ -521,6 +521,7 @@ CONFIG_RTE_PMD_PACKET_PREFETCH=y # EXPERIMENTAL: API may change without prior notice # CONFIG_RTE_LIBRTE_BBDEV=y +CONFIG_RTE_LIBRTE_BBDEV_DEBUG=n CONFIG_RTE_BBDEV_MAX_DEVS=128 CONFIG_RTE_BBDEV_OFFLOAD_COST=y @@ -535,6 +536,11 @@ CONFIG_RTE_LIBRTE_PMD_BBDEV_NULL=y CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW=n # +# Compile PMD for Intel FPGA LTE FEC bbdev device +# +CONFIG_RTE_LIBRTE_PMD_FPGA_LTE_FEC=y + +# # Compile generic crypto device library # CONFIG_RTE_LIBRTE_CRYPTODEV=y diff --git a/doc/guides/bbdevs/fpga_lte_fec.rst b/doc/guides/bbdevs/fpga_lte_fec.rst new file mode 100644 index 000..bdfbd75 --- /dev/null +++ b/doc/guides/bbdevs/fpga_lte_fec.rst @@ -0,0 +1,318 @@ +.. SPDX-License-Identifier: BSD-3-Clause +Copyright(c) 2019 Intel Corporation + +Intel(R) FPGA LTE FEC Poll Mode Driver +== + +The BBDEV FPGA LTE FEC poll mode driver (PMD) supports an FPGA implementation of a VRAN +Turbo Encode / Decode LTE wireless acceleration function, using Intel's PCI-e and FPGA +based Vista Creek device. + +Features + + +FPGA LTE FEC PMD supports the following features: + +- Turbo Encode in the DL with total throughput of 4.5 Gbits/s +- Turbo Decode in the UL with total throughput of 1.5 Gbits/s assuming 8 decoder iterations +- 8 VFs per PF (physical device) +- Maximum of 32 UL queues per VF +- Maximum of 32 DL queues per VF +- PCIe Gen-3 x8 Interface +- MSI-X +- SR-IOV + + +FPGA LTE FEC PMD supports the following BBDEV capabilities: + +* For the turbo encode operation: + - ``RTE_BBDEV_TURBO_CRC_24B_ATTACH`` : set to attach CRC24B to CB(s) + - ``RTE_BBDEV_TURBO_RATE_MATCH`` : if set then do not do Rate Match bypass + - ``RTE_BBDEV_TURBO_ENC_INTERRUPTS`` : set for encoder dequeue interrupts + + +* For the turbo decode operation: + - ``RTE_BBDEV_TURBO_CRC_TYPE_24B`` : check CRC24B from CB(s) + - ``RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE`` : perform subblock de-interleave + - ``RTE_BBDEV_TURBO_DEC_INTERRUPTS`` : set for decoder dequeue interrupts + - ``RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN`` : set if negative LLR encoder i/p is supported + - ``RTE_BBDEV_TURBO_DEC_TB_CRC_24B_KEEP`` : keep CRC24B bits appended while decoding + + +Limitations +--- + +FPGA LTE FEC does not support the following: + +- Scatter-Gather function + + +Installation +-- + +Section 3 of the DPDK manual provides instuctions on installing and compiling DPDK. 
The +default set of bbdev compile flags may be found in config/common_base, where for example +the flag to build the FPGA LTE FEC device, ``CONFIG_RTE_LIBRTE_PMD_FPGA_LTE_FEC``, is already +set. It is assumed DPDK has been compiled using for instance: + +.. code-block:: console + + make install T=x86_64-native-linuxapp-gcc + + +DPDK requires hugepages to be configured as detailed in section 2 of the DPDK manual. +The bbdev test application has been tested with a configuration 40 x 1GB hugepages. The +hugepage configuration of a server may be examined using: + +.. code-block:: console + + grep Huge* /proc/meminfo + + +Initialization +-- + +When the device first powers up, its PCI Physical Functions (PF) can be listed through this command: + +.. code-block:: console + + sudo lspci -vd1172:5052 + +The physical and virtual functions are compatible with Linux UIO drivers: +``vfio`` and ``igb_uio``. However, in order to work the FPGA LTE FEC device firstly needs +to be bound to one of these linux drivers through DPDK. + + +Bind PF UIO driver(s) +~ + +Install the DPDK igb_uio driver, bind it with th
Re: [dpdk-dev] [PATCH 1/2] lib/librte_ethdev: add in default value of rte_eth_dev_info
On 6/14/2019 6:31 AM, Mo, YufengX wrote: > Hi, Sunil Kumar Kori > > This series' patches have been merged on dpdk/master. They cause testpmd core > dumped on intel nics. Right, since they can provide values as "nb_seg_max = 0, nb_mtu_seg_max = 0", I am sending a patch now. > > ./usertools/dpdk-devbind.py -b igb_uio :xx:00.0 :xx:00.1 > ./x86_64-native-linuxapp-gcc/app/testpmd -v -c 0x3f -n 4 -- -i > > Running environment as the following: > > * OS: > fedora 20/22/27/30 > 3.16.4/4.4.14/5.1.0 > > * Compiler: > gcc version 5.3.1 > gcc version 7.3.1 > gcc version 4.8.3 > > * Hardware platform: > Broadwell-EP Xeon E5-2600 > Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz > Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz > Intel(R) Xeon(R) Platinum 8160 CPU @ 2.10GHz > > * NIC hardware: > fortville_spirit > Ethernet Controller XL710 for 40GbE QSFP+ 1583 > version: 1.5.16 > firmware-version: 6.01 0x800034a4 1.1747.0 > > fortville(25G 2 ports nic) > Ethernet Controller XXV710 for 25GbE SFP28 158b > driver: i40e > version: 2.1.14-k > firmware-version: 6.01 0x80003554 1.1747.0 > > fortville(10G 2 ports nic) > Ethernet Controller X710 for 10GbE SFP+ 1572 > driver: i40e > version: 2.1.14-k > firmware-version: 6.01 0x800035b0 1.1747.0 > > niantic > Device_str: 82599ES 10-Gigabit SFI/SFP+ Network Connection > firmware: 0x61bf0001 > ixgbe: 4.3.13 > ixgbevf: 2.12.1-k > > >> -Original Message- >> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Sunil Kumar Kori >> Sent: Monday, May 27, 2019 4:35 PM >> To: tho...@monjalon.net; Yigit, Ferruh ; >> arybche...@solarflare.com; Lu, Wenzhuo ; >> Wu, Jingjing ; Iremonger, Bernard >> >> Cc: dev@dpdk.org; Sunil Kumar Kori >> Subject: [dpdk-dev] [PATCH 1/2] lib/librte_ethdev: add in default value of >> rte_eth_dev_info >> >> rte_eth_dev_info structure exposes, nb_seg_max & nb_mtu_seg_max >> to provide maximum number of supported segments for a given platform. >> >> Defining UINT16_MAX as default value of above mentioned variables to >> expose support of infinite/maximum segments. >> >> Based on above values, application can decide best size for buffers >> while creating mbuf pool. >> >> Signed-off-by: Sunil Kumar Kori >> --- >> lib/librte_ethdev/rte_ethdev.c | 2 ++ >> lib/librte_ethdev/rte_ethdev.h | 2 ++ >> 2 files changed, 4 insertions(+) >> >> diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c >> index d7cfa3d..6933757 100644 >> --- a/lib/librte_ethdev/rte_ethdev.c >> +++ b/lib/librte_ethdev/rte_ethdev.c >> @@ -2543,6 +2543,8 @@ struct rte_eth_dev * >> .nb_max = UINT16_MAX, >> .nb_min = 0, >> .nb_align = 1, >> +.nb_seg_max = UINT16_MAX, >> +.nb_mtu_seg_max = UINT16_MAX, >> }; >> >> RTE_ETH_VALID_PORTID_OR_RET(port_id); >> diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h >> index 1f35e1d..6bd30b1 100644 >> --- a/lib/librte_ethdev/rte_ethdev.h >> +++ b/lib/librte_ethdev/rte_ethdev.h >> @@ -2333,6 +2333,8 @@ int rte_eth_dev_set_rx_queue_stats_mapping(uint16_t >> port_id, >> * .nb_max = UINT16_MAX, >> * .nb_min = 0, >> * .nb_align = 1, >> + * .nb_seg_max = UINT16_MAX, >> + * .nb_mtu_seg_max = UINT16_MAX, >> * }; >> * >> * device = dev->device >> -- >> 1.8.3.1 >
[dpdk-dev] [PATCH] eal/stack: fix 'pointer-sign' warning
clang raises 'pointer-sign' warnings in __atomic_compare_exchange: passing 'uint64_t *' to a parameter of type 'int64_t *' converts between pointers to integer types with different sign. Fixes: 7e6e609939a8 ("stack: add C11 atomic implementation") Signed-off-by: Phil Yang Reviewed-by: Honnappa Nagarahalli --- lib/librte_stack/rte_stack_lf_c11.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/librte_stack/rte_stack_lf_c11.h b/lib/librte_stack/rte_stack_lf_c11.h index a316e9a..e3b9eff 100644 --- a/lib/librte_stack/rte_stack_lf_c11.h +++ b/lib/librte_stack/rte_stack_lf_c11.h @@ -97,7 +97,7 @@ __rte_stack_lf_pop_elems(struct rte_stack_lf_list *list, return NULL; #else struct rte_stack_lf_head old_head; - uint64_t len; + int64_t len; int success; /* Reserve num elements, if available */ -- 2.7.4
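A minimal standalone reproduction of the warning class addressed by this patch (not DPDK code) is shown below; clang only complains because the signedness of the expected-value pointer differs from that of the atomic object:

#include <stdint.h>

int
main(void)
{
	int64_t obj = 0;
	uint64_t expected = 0;	/* mismatched signedness */
	int64_t desired = 1;

	/* clang: passing 'uint64_t *' to parameter of type 'int64_t *'
	 * converts between pointers to integer types with different sign
	 */
	__atomic_compare_exchange(&obj, &expected, &desired, 0,
				  __ATOMIC_RELAXED, __ATOMIC_RELAXED);
	return (int)obj;
}

Declaring the expected value with the matching signed type, as the patch does, silences the warning.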
[dpdk-dev] [PATCH] app/testpmd: fix crash
Testpmd tries to calculate the mbuf size based on "max Rx packet size" and "max MTU segment number". When a driver sets "nb_mtu_seg_max" to zero, this causes a division-by-zero segmentation fault in testpmd. If the PMD sets "nb_mtu_seg_max" to zero, testpmd shouldn't try to calculate the mbuf size. Fixes: 33f9630fc23d ("app/testpmd: create mbuf based on max supported segments") Signed-off-by: Ferruh Yigit --- Cc: Sunil Kumar Kori Cc: YufengX Mo --- app/test-pmd/testpmd.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index 0f2fffec3..4e958bc44 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -1174,7 +1174,8 @@ init_config(void) /* Check for maximum number of segments per MTU. Accordingly * update the mbuf data size. */ - if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX) { + if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX && + port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) { data_size = rx_mode.max_rx_pkt_len / port->dev_info.rx_desc_lim.nb_mtu_seg_max; -- 2.21.0
Re: [dpdk-dev] [PATCH 1/2] lib/librte_ethdev: add in default value of rte_eth_dev_info
On 6/14/2019 5:51 PM, Ferruh Yigit wrote: > On 6/14/2019 6:31 AM, Mo, YufengX wrote: >> Hi, Sunil Kumar Kori >> >> This series' patches have been merged on dpdk/master. They cause testpmd >> core dumped on intel nics. > > Right, since they can provide values as "nb_seg_max = 0, nb_mtu_seg_max = 0", > I am sending a patch now. @Yufeng, can you please try with patch https://patches.dpdk.org/patch/54811/? @Thomas, would it be possible to merge the fix to master? Or I can merge it if you want?
[dpdk-dev] kernel crash with DPDK 18.11
Hi All, I am using DPDK 18.11 with Linux kernel 4.19.28. I have KNI devices created and I can see a crash in the kernel with IPv6/IPv4 fragmented packets. Has anyone seen this already, or are there any fixes/patches available for it? Snippet of the kernel panic: Message from syslogd@vsbc1 at Jun 14 04:52:05 ... kernel:[ 6815.672497] rte_kni: kni_net_rx_normal kernel:[ 6815.672497] skbuff: skb_over_panic: text:24f44e9b len:3024 put:1518 head:2a1b576a data:cec907b4 tail:0xc12 end:0x980 dev: Message from syslogd@vsbc1 at Jun 14 04:52:05 ... kernel:[ 6815.672497] skbuff: skb_over_panic: text:24f44e9b len:3024 put:1518 head:2a1b576a data:cec907b4 tail:0xc12 end:0x980 dev: Message from syslogd@vsbc1 at Jun 14 04:52:06 ... kernel:[ 6815.854134] Kernel panic - not syncing: Fatal exception Message from syslogd@vsbc1 at Jun 14 04:52:06 ... kernel:[ 6815.854134] Kernel panic - not syncing: Fatal exception -- Regards, Souvik
Re: [dpdk-dev] [PATCH] kni: fix possible kernel crash with va2pa
Was there any update to this patch , I am also seeing kernel crash in kni_net_rx_normal dueing skb_put which is happening for chained mbufs. -- Regards, Souvik From: dev On Behalf Of Ferruh Yigit Sent: Wednesday, March 6, 2019 12:31 PM To: Yangchao Zhou ; dev@dpdk.org Subject: Re: [dpdk-dev] [PATCH] kni: fix possible kernel crash with va2pa NOTICE: This email was received from an EXTERNAL sender On 2/28/2019 7:30 AM, Yangchao Zhou wrote: > va2pa depends on the physical address and virtual address offset of > current mbuf. It may get the wrong physical address of next mbuf which > allocated in another hugepage segment. Hi Yangchao, The problem you described seems valid, when current mbuf and the mbuf pointed bu next pointer from different (huge)pages, address calculation will be wrong. Can you able to reproduce the issue, or recognized the problem theoretically? > > Signed-off-by: Yangchao Zhou mailto:zhouya...@gmail.com>> > --- > kernel/linux/kni/kni_net.c | 16 ++-- > .../eal/include/exec-env/rte_kni_common.h | 4 > lib/librte_kni/rte_kni.c | 15 ++- > 3 files changed, 20 insertions(+), 15 deletions(-) > > diff --git a/kernel/linux/kni/kni_net.c b/kernel/linux/kni/kni_net.c > index 7371b6d58..caef8754f 100644 > --- a/kernel/linux/kni/kni_net.c > +++ b/kernel/linux/kni/kni_net.c > @@ -61,18 +61,6 @@ kva2data_kva(struct rte_kni_mbuf *m) > return phys_to_virt(m->buf_physaddr + m->data_off); > } > > -/* virtual address to physical address */ > -static void * > -va2pa(void *va, struct rte_kni_mbuf *m) > -{ > - void *pa; > - > - pa = (void *)((unsigned long)va - > - ((unsigned long)m->buf_addr - > - (unsigned long)m->buf_physaddr)); > - return pa; > -} > - > /* > * It can be called to process the request. > */ > @@ -363,7 +351,7 @@ kni_net_rx_normal(struct kni_dev *kni) > if (!kva->next) > break; > > - kva = pa2kva(va2pa(kva->next, kva)); > + kva = pa2kva(kva->next_pa); > data_kva = kva2data_kva(kva); > } > } > @@ -545,7 +533,7 @@ kni_net_rx_lo_fifo_skb(struct kni_dev *kni) > if (!kva->next) > break; > > - kva = pa2kva(va2pa(kva->next, kva)); > + kva = pa2kva(kva->next_pa); > data_kva = kva2data_kva(kva); > } > } > diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h > b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h > index 5afa08713..608f5c13f 100644 > --- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h > +++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h > @@ -87,6 +87,10 @@ struct rte_kni_mbuf { > char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_MIN_SIZE))); > void *pool; > void *next; > + union { > + uint64_t tx_offload; > + void *next_pa; /**< Physical address of next mbuf. */ > + }; This will cause overwrite the 'tx_offload' via 'next_pa', we don't use tx_offload in KNI but not sure about removing potential use for future. What do you think about converting 'm->next' to physical address before putting them into 'rx_q', and in kernel side after data copied to skb convert 'm->next' back to virtual address before putting it into 'free_q' ? I think both address conversion can be possible to do, a little tricky because address conversion should be calculated in next mbuf and previous mbuf->next in the chain should be updated. 
> }; > > /* > diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c > index 73aef..1aaebcfa1 100644 > --- a/lib/librte_kni/rte_kni.c > +++ b/lib/librte_kni/rte_kni.c > @@ -353,6 +353,17 @@ va2pa(struct rte_mbuf *m) > (unsigned long)m->buf_iova)); > } > > +static void * > +va2pa_all(struct rte_mbuf *m) > +{ > + struct rte_kni_mbuf *mbuf = (struct rte_kni_mbuf *)m; > + while (mbuf->next) { > + mbuf->next_pa = va2pa(mbuf->next); > + mbuf = mbuf->next; > + } > + return va2pa(m); > +} > + > static void > obj_free(struct rte_mempool *mp __rte_unused, void *opaque, void *obj, > unsigned obj_idx __rte_unused) > @@ -550,7 +561,7 @@ rte_kni_tx_burst(struct rte_kni *kni, struct rte_mbuf > **mbufs, unsigned num) > unsigned int i; > > for (i = 0; i < num; i++) > - phy_mbufs[i] = va2pa(mbufs[i]); > + phy_mbufs[i] = va2pa_all(mbufs[i]); > > ret = kni_fifo_put(kni->rx_q, phy_mbufs, num); > > @@ -607,6 +618,8 @@ kni_allocate_mbufs(struct rte_kni *kni) > offsetof(struct rte_kni_mbuf, pkt_len)); > RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) != > offsetof(struct rte_kni_mbuf, ol_flags)); > + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) != > + offsetof(struct rte_kni_mbuf, tx_offload)); > > /* Check if pktmbuf pool has been configured */ > if (kni->pktmbuf_pool == NULL) { > --- Notice: This e-mail together with any attachments may contain information of Ribbon Communications Inc. that is confidential and/or proprietary for the sole use of the intended recipient. Any review, d
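To make the suggestion above more concrete, a rough userspace-side sketch of converting the 'next' pointers in place before the chain is handed to the kernel could look as follows. The function name is hypothetical, the kernel side would have to perform the inverse conversion before the mbufs go back on the free queue, and this is not the final fix; it only reuses the same address arithmetic as va2pa() in the quoted patch, applied to each next segment's own buf_addr/buf_iova pair.

#include <rte_mbuf.h>

/* Hypothetical sketch: walk a chained mbuf and overwrite each virtual
 * 'next' pointer with the physical address of that next segment,
 * computed from the *next* segment's own offset (the original bug reused
 * the current mbuf's offset). Returns the physical address of the head
 * so it can be pushed to the rx_q fifo.
 */
static void *
chain_va2pa(struct rte_mbuf *m)
{
	struct rte_mbuf *cur = m;

	while (cur->next != NULL) {
		struct rte_mbuf *next = cur->next;
		void *pa = (void *)((unsigned long)next -
				((unsigned long)next->buf_addr -
				 (unsigned long)next->buf_iova));

		cur->next = (struct rte_mbuf *)pa;
		cur = next;
	}

	return (void *)((unsigned long)m -
			((unsigned long)m->buf_addr -
			 (unsigned long)m->buf_iova));
}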
Re: [dpdk-dev] [PATCH] eal/stack: fix 'pointer-sign' warning
> clang raise 'pointer-sign' warnings in __atomic_compare_exchange when > passing 'uint64_t *' to parameter of type 'int64_t *' converts between > pointers to integer types with different sign. > > Fixes: 7e6e609939a8 ("stack: add C11 atomic implementation") > > Signed-off-by: Phil Yang > Reviewed-by: Honnappa Nagarahalli > > --- > lib/librte_stack/rte_stack_lf_c11.h | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/lib/librte_stack/rte_stack_lf_c11.h > b/lib/librte_stack/rte_stack_lf_c11.h > index a316e9a..e3b9eff 100644 > --- a/lib/librte_stack/rte_stack_lf_c11.h > +++ b/lib/librte_stack/rte_stack_lf_c11.h > @@ -97,7 +97,7 @@ __rte_stack_lf_pop_elems(struct rte_stack_lf_list *list, > return NULL; > #else > struct rte_stack_lf_head old_head; > - uint64_t len; > + int64_t len; This works, but I'd prefer to keep 'len' unsigned. How about changing the definition of 'len' in struct rte_stack_lf_list to uint64_t, and in rte_stack_lf_generic.h casting it to rte_atomic64_t* when its address is passed to the rte_atomic64_...() functions?
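A small sketch of that suggestion, purely illustrative (the struct name below is hypothetical; the real field lives in the lock-free stack headers): keep the length unsigned in the struct and cast only at the call sites of the legacy rte_atomic64_*() helpers in the generic implementation.

#include <stdint.h>
#include <rte_atomic.h>

/* Hypothetical stand-in for the lock-free list; 'len' stays uint64_t. */
struct lf_list_sketch {
	uint64_t len;
};

static inline void
lf_list_len_add(struct lf_list_sketch *list, unsigned int n)
{
	/* cast to the legacy atomic type only where the helper requires it */
	rte_atomic64_add((rte_atomic64_t *)&list->len, n);
}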
[dpdk-dev] [PATCH] app/crypto-perf: fix return status detection
Currently, there's no return status check on the lcores' jobs. In case of failure, the crypto-perf tool returns success anyway. This patch adds such detection and returns the proper status at the end. Fixes: ce8af1a4398d ("app/crypto-perf: wait for cores launched by app") Cc: sta...@dpdk.org Signed-off-by: Tomasz Jozwiak --- app/test-crypto-perf/main.c | 10 -- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c index 4247f6a..ad72d7e 100644 --- a/app/test-crypto-perf/main.c +++ b/app/test-crypto-perf/main.c @@ -664,9 +664,12 @@ main(int argc, char **argv) if (i == total_nb_qps) break; - rte_eal_wait_lcore(lcore_id); + ret |= rte_eal_wait_lcore(lcore_id); i++; } + + if (ret != EXIT_SUCCESS) + goto err; } else { /* Get next size from range or list */ @@ -691,10 +694,13 @@ main(int argc, char **argv) if (i == total_nb_qps) break; - rte_eal_wait_lcore(lcore_id); + ret |= rte_eal_wait_lcore(lcore_id); i++; } + if (ret != EXIT_SUCCESS) + goto err; + /* Get next size from range or list */ if (opts.inc_buffer_size != 0) opts.test_buffer_size += opts.inc_buffer_size; -- 2.7.4
[dpdk-dev] [PATCH] app/crypto-perf: fix display once detection
This patch changes 'only_once' variable to 'display_once', which should be atomic type due to fact, that all runner functions can be executed in paraller way on different lcores. Fixes: df52cb3b6e13 ("app/crypto-perf: move verify as single test type") Cc: sta...@dpdk.org Signed-off-by: Tomasz Jozwiak --- app/test-crypto-perf/cperf_test_latency.c| 5 ++--- app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 8 +++- app/test-crypto-perf/cperf_test_throughput.c | 8 +++- app/test-crypto-perf/cperf_test_verify.c | 8 +++- 4 files changed, 11 insertions(+), 18 deletions(-) diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c index 0fc3a66..62478a2 100644 --- a/app/test-crypto-perf/cperf_test_latency.c +++ b/app/test-crypto-perf/cperf_test_latency.c @@ -129,7 +129,7 @@ cperf_latency_test_runner(void *arg) uint8_t burst_size_idx = 0; uint32_t imix_idx = 0; - static int only_once; + static rte_atomic16_t display_once = RTE_ATOMIC16_INIT(0); if (ctx == NULL) return 0; @@ -311,7 +311,7 @@ cperf_latency_test_runner(void *arg) time_min = tunit*(double)(tsc_min) / tsc_hz; if (ctx->options->csv) { - if (!only_once) + if (rte_atomic16_test_and_set(&display_once)) printf("\n# lcore, Buffer Size, Burst Size, Pakt Seq #, " "Packet Size, cycles, time (us)"); @@ -326,7 +326,6 @@ cperf_latency_test_runner(void *arg) / tsc_hz); } - only_once = 1; } else { printf("\n# Device %d on lcore %u\n", ctx->dev_id, ctx->lcore_id); diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c index 92af5ec..70ffd6b 100644 --- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c +++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c @@ -391,7 +391,7 @@ cperf_pmd_cyclecount_test_runner(void *test_ctx) state.lcore = rte_lcore_id(); state.linearize = 0; - static int only_once; + static rte_atomic16_t display_once = RTE_ATOMIC16_INIT(0); static bool warmup = true; /* @@ -437,13 +437,12 @@ cperf_pmd_cyclecount_test_runner(void *test_ctx) } if (!opts->csv) { - if (!only_once) + if (rte_atomic16_test_and_set(&display_once)) printf(PRETTY_HDR_FMT, "lcore id", "Buf Size", "Burst Size", "Enqueued", "Dequeued", "Enq Retries", "Deq Retries", "Cycles/Op", "Cycles/Enq", "Cycles/Deq"); - only_once = 1; printf(PRETTY_LINE_FMT, state.ctx->lcore_id, opts->test_buffer_size, test_burst_size, @@ -454,13 +453,12 @@ cperf_pmd_cyclecount_test_runner(void *test_ctx) state.cycles_per_enq, state.cycles_per_deq); } else { - if (!only_once) + if (rte_atomic16_test_and_set(&display_once)) printf(CSV_HDR_FMT, "# lcore id", "Buf Size", "Burst Size", "Enqueued", "Dequeued", "Enq Retries", "Deq Retries", "Cycles/Op", "Cycles/Enq", "Cycles/Deq"); - only_once = 1; printf(CSV_LINE_FMT, state.ctx->lcore_id, opts->test_buffer_size, test_burst_size, diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c index 2767f4e..972d520 100644 --- a/app/test-crypto-perf/cperf_test_throughput.c +++ b/app/test-crypto-perf/cperf_test_throughput.c @@ -95,7 +95,7 @@ cperf_throughput_test_runner(void *test_ctx) uint8_t burst_size_idx = 0; uint32_t imix_idx = 0; - static int only_once; + static rte_atomic16_t display_once = RTE_ATOMIC16_INIT(0); struct rte_crypto_op *ops[ctx->options->max_burst_size]; struct rte_crypto_op *ops_processed[ctx->options->max_burst_size]; @@ -262,13 +262,12 @@ cperf_throughput_test_runner(void *test_ctx) ctx->options->total_ops); if (!ctx->options->csv) { - if (!only_once) + if (rte_atomic16_test_
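The print-once idiom introduced by this patch can be shown in isolation with the short sketch below: rte_atomic16_test_and_set() succeeds only for the single caller that flips the flag from 0 to 1, so the header is printed exactly once even when several lcores run the same runner function concurrently.

#include <stdio.h>
#include <rte_atomic.h>

static rte_atomic16_t display_once = RTE_ATOMIC16_INIT(0);

/* Safe to call from every lcore; only the first caller prints. */
static void
print_header_once(void)
{
	if (rte_atomic16_test_and_set(&display_once))
		printf("# lcore id, Buf Size, Burst Size, ...\n");
}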
[dpdk-dev] [PATCH v2 00/15] Unit tests fixes for CI
This is a joint effort to make the unit tests ready for CI. The first patches are fixes that I had accumulated. Then the second part of the series focuses on skipping tests when some requirements are not fulfilled so that we can start them in a restrained environment like Travis virtual machines that gives us two cores and does not have specific hw devices. We are still not ready for enabling those tests in Travis. At least, the following issues remain: - some fixes on librte_acl have not been merged yet [1], - the tests on --file-prefix are still ko, and have been isolated in a test that we could disable while waiting for the fixes, - rwlock_autotest and hash_readwrite_lf_autotest are taking a little more than 10s, - librte_table unit test crashes on ipv6 [2], - the "perf" tests are taking way too long for my taste, - the shared build unit tests all fail when depending on mempool since the mempool drivers are not loaded, 1: http://patchwork.dpdk.org/project/dpdk/list/?series=4242 2: https://bugs.dpdk.org/show_bug.cgi?id=285 Changelog since v1: - removed limit on 128 cores in rcu tests, - reworked Michael patch on eal test and started splitting big tests into subtests: when a subtest fails, it does not impact the other subtests; plus, subtests are shorter to run, so easier to make them fit in 10s, Comments/reviews welcome! -- David Marchand David Marchand (13): test/bonding: add missing sources for link bonding RSS test/crypto: move tests to the driver specific list test/eventdev: move tests to the driver specific list test/hash: fix off-by-one check on core count test/hash: clean remaining trace of scaling autotest test/latencystats: fix stack smashing test/rcu: remove arbitrary limit on max core count test/stack: fix lock-free test name test/eal: set memory channel config only in dedicated test test/eal: set core mask/list config only in dedicated test test: split into shorter subtests for CI test: do not start tests in parallel test: skip tests when missing requirements Dharmik Thakkar (1): test/hash: rectify slaveid to point to valid cores Michael Santana (1): test/eal: check number of cores before running subtests app/test/autotest.py| 2 +- app/test/autotest_data.py | 4 +- app/test/meson.build| 84 ++-- app/test/test.c | 24 ++-- app/test/test_compressdev.c | 4 +- app/test/test_cryptodev.c | 4 +- app/test/test_distributor.c | 4 +- app/test/test_distributor_perf.c| 4 +- app/test/test_eal_flags.c | 265 app/test/test_event_timer_adapter.c | 5 +- app/test/test_eventdev.c| 2 + app/test/test_func_reentrancy.c | 6 +- app/test/test_hash_multiwriter.c| 7 +- app/test/test_hash_readwrite.c | 7 +- app/test/test_hash_readwrite_lf.c | 34 +++-- app/test/test_ipsec.c | 4 +- app/test/test_latencystats.c| 18 --- app/test/test_mbuf.c| 13 +- app/test/test_rcu_qsbr.c| 139 +-- app/test/test_rcu_qsbr_perf.c | 73 -- app/test/test_rwlock.c | 6 + app/test/test_service_cores.c | 14 ++ app/test/test_stack.c | 8 +- app/test/test_timer.c | 10 +- app/test/test_timer_secondary.c | 10 +- 25 files changed, 381 insertions(+), 370 deletions(-) -- 1.8.3.1
[dpdk-dev] [PATCH v2 02/15] test/crypto: move tests to the driver specific list
For consistency, put all specific crypto driver tests in the dedicated list (in alphabetic order). While at it: - remove dead reference to cryptodev_sw_mrvl_autotest (renamed as cryptodev_sw_mvsam_autotest), - call the crypto scheduler test only when built, Fixes: 9eabcb682493 ("test: update autotest list") Fixes: 3d20ffe6ddb1 ("test: reorder test cases in meson") Cc: sta...@dpdk.org Signed-off-by: David Marchand Acked-by: Aaron Conole --- app/test/autotest_data.py | 4 ++-- app/test/meson.build | 28 +++- 2 files changed, 17 insertions(+), 15 deletions(-) diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py index 0f2c9a7..6cf7eca 100644 --- a/app/test/autotest_data.py +++ b/app/test/autotest_data.py @@ -393,8 +393,8 @@ "Report": None, }, { -"Name":"Cryptodev sw mrvl autotest", -"Command": "cryptodev_sw_mrvl_autotest", +"Name":"Cryptodev sw mvsam autotest", +"Command": "cryptodev_sw_mvsam_autotest", "Func":default_autotest, "Report": None, }, diff --git a/app/test/meson.build b/app/test/meson.build index a51b50a..cb3de71 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -200,10 +200,7 @@ fast_parallel_test_names = [ # All test cases in fast_non_parallel_test_names list are non-parallel fast_non_parallel_test_names = [ 'bitratestats_autotest', -'cryptodev_sw_armv8_autotest', 'crc_autotest', -'cryptodev_openssl_asym_autotest', -'cryptodev_sw_mvsam_autotest', 'delay_us_sleep_autotest', 'distributor_autotest', 'eventdev_common_autotest', @@ -259,21 +256,22 @@ perf_test_names = [ # All test cases in driver_test_names list are non-parallel driver_test_names = [ -'link_bonding_autotest', -'link_bonding_mode4_autotest', -'link_bonding_rssconf_autotest', -'cryptodev_sw_mrvl_autotest', -'cryptodev_dpaa2_sec_autotest', -'cryptodev_dpaa_sec_autotest', -'cryptodev_qat_autotest', 'cryptodev_aesni_mb_autotest', -'cryptodev_openssl_autotest', -'cryptodev_scheduler_autotest', 'cryptodev_aesni_gcm_autotest', +'cryptodev_dpaa_sec_autotest', +'cryptodev_dpaa2_sec_autotest', 'cryptodev_null_autotest', -'cryptodev_sw_snow3g_autotest', +'cryptodev_openssl_autotest', +'cryptodev_openssl_asym_autotest', +'cryptodev_qat_autotest', +'cryptodev_sw_armv8_autotest', 'cryptodev_sw_kasumi_autotest', +'cryptodev_sw_mvsam_autotest', +'cryptodev_sw_snow3g_autotest', 'cryptodev_sw_zuc_autotest', +'link_bonding_autotest', +'link_bonding_mode4_autotest', +'link_bonding_rssconf_autotest', ] # All test cases in dump_test_names list are non-parallel @@ -329,6 +327,10 @@ if dpdk_conf.has('RTE_LIBRTE_COMPRESSDEV') endif endif +if dpdk_conf.has('RTE_LIBRTE_PMD_CRYPTO_SCHEDULER') + driver_test_names += 'cryptodev_scheduler_autotest' +endif + foreach d:test_deps def_lib = get_option('default_library') test_dep_objs += get_variable(def_lib + '_rte_' + d) -- 1.8.3.1
[dpdk-dev] [PATCH v2 06/15] test/hash: clean remaining trace of scaling autotest
Fixes: 3c518ca41ffa ("test/hash: remove hash scaling unit test") Cc: sta...@dpdk.org Signed-off-by: David Marchand Acked-by: Aaron Conole --- app/test/meson.build | 1 - 1 file changed, 1 deletion(-) diff --git a/app/test/meson.build b/app/test/meson.build index ad46515..44cf561 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -207,7 +207,6 @@ fast_non_parallel_test_names = [ 'fbarray_autotest', 'hash_readwrite_autotest', 'hash_readwrite_lf_autotest', -'hash_scaling_autotest', 'ipsec_autotest', 'kni_autotest', 'kvargs_autotest', -- 1.8.3.1
[dpdk-dev] [PATCH v2 03/15] test/eventdev: move tests to the driver specific list
Same treatment as the crypto tests: move the eventdev driver tests to the driver list. While at it: - eventdev_octeontx_autotest has been renamed to eventdev_selftest_octeontx, - eventdev_sw_autotest has been renamed to eventdev_selftest_sw, Fixes: 50fb749a3972 ("event/octeontx: move test to driver") Fixes: 85fb515b7318 ("event/sw: move test to driver") Fixes: 123d67c73b06 ("test/event: register selftests") Cc: sta...@dpdk.org Signed-off-by: David Marchand Acked-by: Aaron Conole --- app/test/meson.build | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/app/test/meson.build b/app/test/meson.build index cb3de71..ad46515 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -204,8 +204,6 @@ fast_non_parallel_test_names = [ 'delay_us_sleep_autotest', 'distributor_autotest', 'eventdev_common_autotest', -'eventdev_octeontx_autotest', -'eventdev_sw_autotest', 'fbarray_autotest', 'hash_readwrite_autotest', 'hash_readwrite_lf_autotest', @@ -241,7 +239,6 @@ perf_test_names = [ 'timer_racecond_autotest', 'efd_autotest', 'hash_functions_autotest', -'eventdev_selftest_sw', 'member_perf_autotest', 'efd_perf_autotest', 'lpm6_perf_autotest', @@ -269,6 +266,8 @@ driver_test_names = [ 'cryptodev_sw_mvsam_autotest', 'cryptodev_sw_snow3g_autotest', 'cryptodev_sw_zuc_autotest', +'eventdev_selftest_octeontx', +'eventdev_selftest_sw', 'link_bonding_autotest', 'link_bonding_mode4_autotest', 'link_bonding_rssconf_autotest', -- 1.8.3.1
[dpdk-dev] [PATCH v2 05/15] test/hash: rectify slaveid to point to valid cores
From: Dharmik Thakkar This patch rectifies slave_id to point to valid core indexes rather than core ranks in read-write lock-free concurrency test. It also replaces a 'for' loop with RTE_LCORE_FOREACH API. Fixes: c7eb0972e74b ("test/hash: add lock-free r/w concurrency") Cc: sta...@dpdk.org Signed-off-by: Dharmik Thakkar Reviewed-by: Ruifeng Wang Signed-off-by: David Marchand Acked-by: Yipeng Wang --- app/test/test_hash_readwrite_lf.c | 24 +++- 1 file changed, 11 insertions(+), 13 deletions(-) diff --git a/app/test/test_hash_readwrite_lf.c b/app/test/test_hash_readwrite_lf.c index f9f233a..5644361 100644 --- a/app/test/test_hash_readwrite_lf.c +++ b/app/test/test_hash_readwrite_lf.c @@ -126,11 +126,9 @@ struct { uint32_t i = 0; uint16_t core_id; uint32_t max_cores = rte_lcore_count(); - for (core_id = 0; core_id < RTE_MAX_LCORE && i < max_cores; core_id++) { - if (rte_lcore_is_enabled(core_id)) { - enabled_core_ids[i] = core_id; - i++; - } + RTE_LCORE_FOREACH(core_id) { + enabled_core_ids[i] = core_id; + i++; } if (i != max_cores) { @@ -738,7 +736,7 @@ struct { enabled_core_ids[i]); for (i = 1; i <= rwc_core_cnt[n]; i++) - if (rte_eal_wait_lcore(i) < 0) + if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0) goto err; unsigned long long cycles_per_lookup = @@ -810,7 +808,7 @@ struct { if (ret < 0) goto err; for (i = 1; i <= rwc_core_cnt[n]; i++) - if (rte_eal_wait_lcore(i) < 0) + if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0) goto err; unsigned long long cycles_per_lookup = @@ -886,7 +884,7 @@ struct { if (ret < 0) goto err; for (i = 1; i <= rwc_core_cnt[n]; i++) - if (rte_eal_wait_lcore(i) < 0) + if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0) goto err; unsigned long long cycles_per_lookup = @@ -962,7 +960,7 @@ struct { if (ret < 0) goto err; for (i = 1; i <= rwc_core_cnt[n]; i++) - if (rte_eal_wait_lcore(i) < 0) + if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0) goto err; unsigned long long cycles_per_lookup = @@ -1037,7 +1035,7 @@ struct { if (ret < 0) goto err; for (i = 1; i <= rwc_core_cnt[n]; i++) - if (rte_eal_wait_lcore(i) < 0) + if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0) goto err; unsigned long long cycles_per_lookup = @@ -1132,12 +1130,12 @@ struct { for (i = rwc_core_cnt[n] + 1; i <= rwc_core_cnt[m] + rwc_core_cnt[n]; i++) - rte_eal_wait_lcore(i); + rte_eal_wait_lcore(enabled_core_ids[i]); writer_done = 1; for (i = 1; i <= rwc_core_cnt[n]; i++) - if (rte_eal_wait_lcore(i) < 0) + if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0) goto err; unsigned long long cycles_per_lookup = @@ -1221,7 +1219,7 @@ struct { writer_done = 1; for (i = 1; i <= rwc_core_cnt[n]; i++) - if (rte_eal_wait_lcore(i) < 0) + if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0) goto err; unsigned long long cycles_per_lookup = -- 1.8.3.1
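For readers less familiar with the lcore helpers used in the fix above, here is a minimal standalone sketch (not part of the patch; the worker function name is made up) showing why waiting on raw indexes is wrong: with EAL started as "-l 0,2,4,6" the enabled lcore ids are 0, 2, 4 and 6, so rte_eal_wait_lcore(1) would target an lcore that was never launched. Recording the enabled ids first and indexing into that array, as the patch does, avoids the problem.

#include <stdio.h>
#include <stdint.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_launch.h>

static uint16_t enabled_core_ids[RTE_MAX_LCORE];

static int
dummy_worker(void *arg)
{
	(void)arg;
	printf("running on lcore %u\n", rte_lcore_id());
	return 0;
}

int
main(int argc, char **argv)
{
	unsigned int nb_workers = 0, i;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* record the ids of the enabled worker lcores (master excluded) */
	RTE_LCORE_FOREACH_SLAVE(i)
		enabled_core_ids[nb_workers++] = i;

	for (i = 0; i < nb_workers; i++)
		rte_eal_remote_launch(dummy_worker, NULL, enabled_core_ids[i]);

	/* wait on the recorded lcore ids, not on 0..nb_workers-1 */
	for (i = 0; i < nb_workers; i++)
		rte_eal_wait_lcore(enabled_core_ids[i]);

	return 0;
}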
[dpdk-dev] [PATCH v2 07/15] test/latencystats: fix stack smashing
Caught in one Travis run: + --- + + Test Suite : Latency Stats Unit Test Suite + --- + + TestCase [ 0] : test_latency_init succeeded + TestCase [ 1] : test_latency_update succeeded [snip] + TestCase [1601724781] : test_latencystats_get_names succeeded [snip] + Tests Failed : 1601790830 htonl(1601724781) -> "m", "a", "x", "_" htonl(1601790830) -> "n", "c", "y", "_" Looks like someone went too far. The test passes a bigger size than the array it passes along. Fixes: 1e3676a06e4c ("test/latency: add unit tests for latencystats library") Cc: sta...@dpdk.org Signed-off-by: David Marchand Acked-by: Aaron Conole --- app/test/test_latencystats.c | 18 -- 1 file changed, 18 deletions(-) diff --git a/app/test/test_latencystats.c b/app/test/test_latencystats.c index 039c508..8dd794b 100644 --- a/app/test/test_latencystats.c +++ b/app/test/test_latencystats.c @@ -69,13 +69,10 @@ static int test_latencystats_get_names(void) int ret = 0, i = 0; int size = 0; struct rte_metric_name names[NUM_STATS]; - struct rte_metric_name wrongnames[NUM_STATS - 2]; size_t m_size = sizeof(struct rte_metric_name); for (i = 0; i < NUM_STATS; i++) memset(&names[i], 0, m_size); - for (i = 0; i < NUM_STATS - 2; i++) - memset(&wrongnames[i], 0, m_size); /* Success Test: Valid names and size */ size = NUM_STATS; @@ -99,10 +96,6 @@ static int test_latencystats_get_names(void) TEST_ASSERT((ret == NUM_STATS), "Test Failed to get the metrics count," "Actual: %d Expected: %d", ret, NUM_STATS); - /* Failure Test: Invalid names (array size lesser than size) */ - size = NUM_STATS + 1; - ret = rte_latencystats_get_names(wrongnames, size); - TEST_ASSERT((ret == NUM_STATS), "Test Failed to get metrics names"); return TEST_SUCCESS; } @@ -112,13 +105,10 @@ static int test_latencystats_get(void) int ret = 0, i = 0; int size = 0; struct rte_metric_value values[NUM_STATS]; - struct rte_metric_value wrongvalues[NUM_STATS - 2]; size_t v_size = sizeof(struct rte_metric_value); for (i = 0; i < NUM_STATS; i++) memset(&values[i], 0, v_size); - for (i = 0; i < NUM_STATS - 2; i++) - memset(&wrongvalues[i], 0, v_size); /* Success Test: Valid values and valid size */ size = NUM_STATS; @@ -137,14 +127,6 @@ static int test_latencystats_get(void) TEST_ASSERT((ret == NUM_STATS), "Test Failed to get the stats count," "Actual: %d Expected: %d", ret, NUM_STATS); - /* Failure Test: Invalid values(array size lesser than size) -* and invalid size -*/ - size = NUM_STATS + 2; - ret = rte_latencystats_get(wrongvalues, size); - TEST_ASSERT(ret == NUM_STATS, "Test Failed to get latency metrics" - " values"); - return TEST_SUCCESS; } -- 1.8.3.1
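For context, the removed "failure" checks passed a size larger than the array they handed to the library, which is exactly the pattern sketched below. This is an illustrative, non-DPDK example (all names are made up) of the bug class the fix removes:

#include <string.h>

struct metric {
	char name[64];
};

/* the callee trusts 'size' and cannot know how big 'out' really is */
static int
get_names(struct metric *out, int size)
{
	int i;

	for (i = 0; i < size; i++)
		strcpy(out[i].name, "latency_metric");
	return size;
}

int
main(void)
{
	struct metric names[4];

	get_names(names, 4);	/* fine: size matches the array length */
	get_names(names, 6);	/* writes past names[3]: stack smashing */
	return 0;
}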
[dpdk-dev] [PATCH v2 04/15] test/hash: fix off-by-one check on core count
This subtest wants to start rwc_core_cnt[n] reader threads, while the master core is waiting for them to report. Fixes: c7eb0972e74b ("test/hash: add lock-free r/w concurrency") Cc: sta...@dpdk.org Signed-off-by: David Marchand Acked-by: Aaron Conole Acked-by: Yipeng Wang --- app/test/test_hash_readwrite_lf.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/app/test/test_hash_readwrite_lf.c b/app/test/test_hash_readwrite_lf.c index 343a338..f9f233a 100644 --- a/app/test/test_hash_readwrite_lf.c +++ b/app/test/test_hash_readwrite_lf.c @@ -939,7 +939,7 @@ struct { } for (n = 0; n < NUM_TEST; n++) { unsigned int tot_lcore = rte_lcore_count(); - if (tot_lcore < rwc_core_cnt[n]) + if (tot_lcore < rwc_core_cnt[n] + 1) goto finish; printf("\nNumber of readers: %u\n", rwc_core_cnt[n]); -- 1.8.3.1
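A short sketch of the arithmetic behind the one-liner (the helper name and core counts below are illustrative, not taken from the test): rwc_core_cnt[n] reader lcores are launched while the master lcore stays in the test body to wait for them, so rwc_core_cnt[n] + 1 lcores are needed in total.

#include <rte_lcore.h>

static const unsigned int rwc_core_cnt[] = {1, 2, 4, 8, 16, 32};

static int
have_enough_lcores(unsigned int n)
{
	/* readers plus the master lcore that collects the results */
	return rte_lcore_count() >= rwc_core_cnt[n] + 1;
}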
[dpdk-dev] [PATCH v2 01/15] test/bonding: add missing sources for link bonding RSS
Fixes: 3d20ffe6ddb1 ("test: reorder test cases in meson") Cc: sta...@dpdk.org Signed-off-by: David Marchand Acked-by: Aaron Conole --- app/test/meson.build | 1 + 1 file changed, 1 insertion(+) diff --git a/app/test/meson.build b/app/test/meson.build index 4de856f..a51b50a 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -62,6 +62,7 @@ test_sources = files('commands.c', 'test_latencystats.c', 'test_link_bonding.c', 'test_link_bonding_mode4.c', + 'test_link_bonding_rssconf.c', 'test_logs.c', 'test_lpm.c', 'test_lpm6.c', -- 1.8.3.1
[dpdk-dev] [PATCH v2 08/15] test/rcu: remove arbitrary limit on max core count
We can have up to RTE_MAX_LCORE in a dpdk application. Remove the limit on 128 cores and tests that are now always false. Fixes: b87089b0bb19 ("test/rcu: add API and functional tests") Cc: sta...@dpdk.org Signed-off-by: David Marchand --- Changelog since v1: - new patch added to remove the local TEST_RCU_MAX_LCORE limit - changed some integer types to accomodate with the change --- app/test/test_rcu_qsbr.c | 133 ++ app/test/test_rcu_qsbr_perf.c | 63 +++- 2 files changed, 79 insertions(+), 117 deletions(-) diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c index 5f0b138..97af087 100644 --- a/app/test/test_rcu_qsbr.c +++ b/app/test/test_rcu_qsbr.c @@ -26,19 +26,18 @@ /* Make sure that this has the same value as __RTE_QSBR_CNT_INIT */ #define TEST_RCU_QSBR_CNT_INIT 1 -#define TEST_RCU_MAX_LCORE 128 -uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE]; +uint16_t enabled_core_ids[RTE_MAX_LCORE]; uint8_t num_cores; static uint32_t *keys; #define TOTAL_ENTRY (1024 * 8) #define COUNTER_VALUE 4096 -static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY]; +static uint32_t *hash_data[RTE_MAX_LCORE][TOTAL_ENTRY]; static uint8_t writer_done; -static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE]; -struct rte_hash *h[TEST_RCU_MAX_LCORE]; -char hash_name[TEST_RCU_MAX_LCORE][8]; +static struct rte_rcu_qsbr *t[RTE_MAX_LCORE]; +struct rte_hash *h[RTE_MAX_LCORE]; +char hash_name[RTE_MAX_LCORE][8]; struct test_rcu_thread_info { /* Index in RCU array */ @@ -48,28 +47,7 @@ struct test_rcu_thread_info { /* lcore IDs registered on the RCU variable */ uint16_t r_core_ids[2]; }; -struct test_rcu_thread_info thread_info[TEST_RCU_MAX_LCORE/4]; - -static inline int -get_enabled_cores_mask(void) -{ - uint16_t core_id; - uint32_t max_cores = rte_lcore_count(); - - if (max_cores > TEST_RCU_MAX_LCORE) { - printf("Number of cores exceed %d\n", TEST_RCU_MAX_LCORE); - return -1; - } - - core_id = 0; - num_cores = 0; - RTE_LCORE_FOREACH_SLAVE(core_id) { - enabled_core_ids[num_cores] = core_id; - num_cores++; - } - - return 0; -} +struct test_rcu_thread_info thread_info[RTE_MAX_LCORE/4]; static int alloc_rcu(void) @@ -77,9 +55,9 @@ struct test_rcu_thread_info { int i; uint32_t sz; - sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE); + sz = rte_rcu_qsbr_get_memsize(RTE_MAX_LCORE); - for (i = 0; i < TEST_RCU_MAX_LCORE; i++) + for (i = 0; i < RTE_MAX_LCORE; i++) t[i] = (struct rte_rcu_qsbr *)rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE); @@ -91,7 +69,7 @@ struct test_rcu_thread_info { { int i; - for (i = 0; i < TEST_RCU_MAX_LCORE; i++) + for (i = 0; i < RTE_MAX_LCORE; i++) rte_free(t[i]); return 0; @@ -111,7 +89,7 @@ struct test_rcu_thread_info { sz = rte_rcu_qsbr_get_memsize(0); TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1), "Get Memsize for 0 threads"); - sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE); + sz = rte_rcu_qsbr_get_memsize(RTE_MAX_LCORE); /* For 128 threads, * for machines with cache line size of 64B - 8384 * for machines with cache line size of 128 - 16768 @@ -132,7 +110,7 @@ struct test_rcu_thread_info { printf("\nTest rte_rcu_qsbr_init()\n"); - r = rte_rcu_qsbr_init(NULL, TEST_RCU_MAX_LCORE); + r = rte_rcu_qsbr_init(NULL, RTE_MAX_LCORE); TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "NULL variable"); return 0; @@ -156,7 +134,7 @@ struct test_rcu_thread_info { TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable, invalid thread id"); - rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE); + rte_rcu_qsbr_init(t[0], RTE_MAX_LCORE); /* Register valid thread id */ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]); 
@@ -168,7 +146,7 @@ struct test_rcu_thread_info { "Already registered thread id"); /* Register valid thread id - max allowed thread id */ - ret = rte_rcu_qsbr_thread_register(t[0], TEST_RCU_MAX_LCORE - 1); + ret = rte_rcu_qsbr_thread_register(t[0], RTE_MAX_LCORE - 1); TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Max thread id"); ret = rte_rcu_qsbr_thread_register(t[0], 10); @@ -185,9 +163,10 @@ struct test_rcu_thread_info { static int test_rcu_qsbr_thread_unregister(void) { - int i, j, ret; + unsigned int num_threads[3] = {1, RTE_MAX_LCORE, 1}; + unsigned int i, j; uint64_t token; - uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1}; + int ret; printf("\nTest rte_rcu_qsbr_thread_unregister()\n"); @@ -198,7 +177,7 @@ struct test_rcu_thread_info { TEST_RCU_QSBR_RETURN_IF_ERROR((r
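A side note on the pattern this patch applies, as a minimal sketch (the array and function names are made up, not taken from the test): with the private 128-core cap gone, per-lcore state is simply dimensioned with the build-time RTE_MAX_LCORE bound and indexed by rte_lcore_id(), which is always below that bound for an enabled lcore.

#include <stdint.h>
#include <rte_lcore.h>

static uint64_t checks_done[RTE_MAX_LCORE];

static int
per_lcore_check(void *arg)
{
	(void)arg;
	/* safe for any enabled lcore, whatever RTE_MAX_LCORE was set to */
	checks_done[rte_lcore_id()]++;
	return 0;
}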
[dpdk-dev] [PATCH v2 09/15] test/stack: fix lock-free test name
Fixes: 0420378bbfc4 ("test/stack: check lock-free implementation") Cc: sta...@dpdk.org Signed-off-by: David Marchand Acked-by: Aaron Conole --- app/test/meson.build | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/app/test/meson.build b/app/test/meson.build index 44cf561..7628ed9 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -188,7 +188,7 @@ fast_parallel_test_names = [ 'sched_autotest', 'spinlock_autotest', 'stack_autotest', -'stack_nb_autotest', +'stack_lf_autotest', 'string_autotest', 'table_autotest', 'tailq_autotest', @@ -247,7 +247,7 @@ perf_test_names = [ 'ring_pmd_perf_autotest', 'pmd_perf_autotest', 'stack_perf_autotest', -'stack_nb_perf_autotest', +'stack_lf_perf_autotest', ] # All test cases in driver_test_names list are non-parallel -- 1.8.3.1
[dpdk-dev] [PATCH v2 11/15] test/eal: set core mask/list config only in dedicated test
Setting a coremask was mandatory a long time ago but has been optional for a while. The checks on PCI whitelist/blacklist, vdev, memory rank, memory channel, HPET, memory size and other miscs options have no requirement wrt cores. Let's remove those coremasks so that we only care about it in the dedicated checks. Signed-off-by: David Marchand Acked-by: Aaron Conole --- app/test/test_eal_flags.c | 141 -- 1 file changed, 73 insertions(+), 68 deletions(-) diff --git a/app/test/test_eal_flags.c b/app/test/test_eal_flags.c index e82e56a..5e11b9f 100644 --- a/app/test/test_eal_flags.c +++ b/app/test/test_eal_flags.c @@ -245,25 +245,25 @@ enum hugepage_action { #endif const char *wlinval[][11] = { - {prgname, prefix, mp_flag, "-c", "1", + {prgname, prefix, mp_flag, pci_whitelist, "error", "", ""}, - {prgname, prefix, mp_flag, "-c", "1", + {prgname, prefix, mp_flag, pci_whitelist, "0:0:0", "", ""}, - {prgname, prefix, mp_flag, "-c", "1", + {prgname, prefix, mp_flag, pci_whitelist, "0:error:0.1", "", ""}, - {prgname, prefix, mp_flag, "-c", "1", + {prgname, prefix, mp_flag, pci_whitelist, "0:0:0.1error", "", ""}, - {prgname, prefix, mp_flag, "-c", "1", + {prgname, prefix, mp_flag, pci_whitelist, "error0:0:0.1", "", ""}, - {prgname, prefix, mp_flag, "-c", "1", + {prgname, prefix, mp_flag, pci_whitelist, "0:0:0.1.2", "", ""}, }; /* Test with valid whitelist option */ - const char *wlval1[] = {prgname, prefix, mp_flag, "-c", "1", + const char *wlval1[] = {prgname, prefix, mp_flag, pci_whitelist, "00FF:09:0B.3"}; - const char *wlval2[] = {prgname, prefix, mp_flag, "-c", "1", + const char *wlval2[] = {prgname, prefix, mp_flag, pci_whitelist, "09:0B.3", pci_whitelist, "0a:0b.1"}; - const char *wlval3[] = {prgname, prefix, mp_flag, "-c", "1", + const char *wlval3[] = {prgname, prefix, mp_flag, pci_whitelist, "09:0B.3,type=test", pci_whitelist, "08:00.1,type=normal", }; @@ -311,15 +311,15 @@ enum hugepage_action { #endif const char *blinval[][9] = { - {prgname, prefix, mp_flag, "-c", "1", "-b", "error"}, - {prgname, prefix, mp_flag, "-c", "1", "-b", "0:0:0"}, - {prgname, prefix, mp_flag, "-c", "1", "-b", "0:error:0.1"}, - {prgname, prefix, mp_flag, "-c", "1", "-b", "0:0:0.1error"}, - {prgname, prefix, mp_flag, "-c", "1", "-b", "error0:0:0.1"}, - {prgname, prefix, mp_flag, "-c", "1", "-b", "0:0:0.1.2"}, + {prgname, prefix, mp_flag, "-b", "error"}, + {prgname, prefix, mp_flag, "-b", "0:0:0"}, + {prgname, prefix, mp_flag, "-b", "0:error:0.1"}, + {prgname, prefix, mp_flag, "-b", "0:0:0.1error"}, + {prgname, prefix, mp_flag, "-b", "error0:0:0.1"}, + {prgname, prefix, mp_flag, "-b", "0:0:0.1.2"}, }; /* Test with valid blacklist option */ - const char *blval[] = {prgname, prefix, mp_flag, "-c", "1", + const char *blval[] = {prgname, prefix, mp_flag, "-b", "FF:09:0B.3"}; int i; @@ -356,17 +356,17 @@ enum hugepage_action { /* Test with invalid vdev option */ const char *vdevinval[] = {prgname, prefix, no_huge, - "-c", "1", vdev, "eth_dummy"}; + vdev, "eth_dummy"}; /* Test with valid vdev option */ const char *vdevval1[] = {prgname, prefix, no_huge, - "-c", "1", vdev, "net_ring0"}; + vdev, "net_ring0"}; const char *vdevval2[] = {prgname, prefix, no_huge, - "-c", "1", vdev, "net_ring0,args=test"}; + vdev, "net_ring0,args=test"}; const char *vdevval3[] = {prgname, prefix, no_huge, - "-c", "1", vdev, "net_ring0,nodeaction=r1:0:CREATE"}; + vdev, "net_ring0,nodeaction=r1:0:CREATE"}; if (launch_proc(vdevinval) == 0) { printf("Error - process did run ok with invalid " @@ -413,13 +413,13 @@ enum hugepage_action { #endif 
const char *rinval[][9] = { - {prgname, prefix, mp_flag, "-c", "1", "-r", "error"}, - {prgname, prefix, mp_flag, "-c", "1", "-r", "0"}, - {prgname, prefix, mp_flag, "-c", "1", "-r", "-1"}, - {prgname, prefix, mp_flag, "-c", "1", "-r", "17"}, + {prgname, prefix, mp_flag, "-r", "error"}, + {prgname, pre
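To illustrate the point of the commit message, a minimal standalone sketch (not part of the patch; the command line in the comment is only a suggested example): EAL starts fine without any -c/-l option and simply enables the detected cores, which is why the -b/-w/--vdev/-r checks above no longer need "-c 1".

#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>

int
main(int argc, char **argv)
{
	/* e.g. started as: ./coremask-optional --no-huge -m 64 */
	if (rte_eal_init(argc, argv) < 0) {
		printf("EAL init failed\n");
		return -1;
	}

	printf("EAL enabled %u lcore(s) without an explicit coremask\n",
			rte_lcore_count());
	return 0;
}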
[dpdk-dev] [PATCH v2 13/15] test: split into shorter subtests for CI
Based on Michael's initial idea of separating the file-prefix subtest in the eal flags test, let's split the biggest tests into their subparts. It is then easier to have them fit in the 10s timeout we have configured in Travis. We also get a better idea of which part fails in the previously big tests we had. Those new subtests are called from the meson testsuite. The autotest tool is left untouched. Note: we still have an issue with test_hash_readwrite_lf.c; any help from the original authors would be appreciated. Signed-off-by: David Marchand --- Changelog since v1: - new patch added, --- app/test/meson.build | 18 -- app/test/test_eal_flags.c | 20 ++-- app/test/test_rwlock.c| 6 ++ 3 files changed, 40 insertions(+), 4 deletions(-) diff --git a/app/test/meson.build b/app/test/meson.build index a9dddf8..f799c8e 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -159,7 +159,18 @@ fast_parallel_test_names = [ 'cpuflags_autotest', 'cycles_autotest', 'debug_autotest', -'eal_flags_autotest', +'eal_flags_c_opt_autotest', +'eal_flags_master_opt_autotest', +'eal_flags_n_opt_autotest', +'eal_flags_hpet_autotest', +'eal_flags_no_huge_autotest', +'eal_flags_w_opt_autotest', +'eal_flags_b_opt_autotest', +'eal_flags_vdev_opt_autotest', +'eal_flags_r_opt_autotest', +'eal_flags_mem_autotest', +'eal_flags_file_prefix_autotest', +'eal_flags_misc_autotest', 'eal_fs_autotest', 'errno_autotest', 'event_ring_autotest', @@ -184,7 +195,10 @@ fast_parallel_test_names = [ 'red_autotest', 'ring_autotest', 'ring_pmd_autotest', -'rwlock_autotest', +'rwlock_test1_autotest', +'rwlock_rda_autotest', +'rwlock_rds_wrm_autotest', +'rwlock_rde_wro_autotest', 'sched_autotest', 'spinlock_autotest', 'stack_autotest', diff --git a/app/test/test_eal_flags.c b/app/test/test_eal_flags.c index cfa8a61..9985ee9 100644 --- a/app/test/test_eal_flags.c +++ b/app/test/test_eal_flags.c @@ -342,10 +342,10 @@ enum hugepage_action { * Test that the app doesn't run with invalid vdev option. * Final test ensures it does run with valid options as sanity check */ -#ifdef RTE_LIBRTE_PMD_RING static int test_invalid_vdev_flag(void) { +#ifdef RTE_LIBRTE_PMD_RING #ifdef RTE_EXEC_ENV_FREEBSD /* BSD target doesn't support prefixes at this point, and we also need to * run another primary process here */ @@ -391,8 +391,10 @@ enum hugepage_action { return -1; } return 0; -} +#else + return TEST_SKIPPED; #endif +} /* * Test that the app doesn't run with invalid -r option. 
@@ -1413,3 +1415,17 @@ enum hugepage_action { } REGISTER_TEST_COMMAND(eal_flags_autotest, test_eal_flags); + +/* subtests used in meson for CI */ +REGISTER_TEST_COMMAND(eal_flags_c_opt_autotest, test_missing_c_flag); +REGISTER_TEST_COMMAND(eal_flags_master_opt_autotest, test_master_lcore_flag); +REGISTER_TEST_COMMAND(eal_flags_n_opt_autotest, test_invalid_n_flag); +REGISTER_TEST_COMMAND(eal_flags_hpet_autotest, test_no_hpet_flag); +REGISTER_TEST_COMMAND(eal_flags_no_huge_autotest, test_no_huge_flag); +REGISTER_TEST_COMMAND(eal_flags_w_opt_autotest, test_whitelist_flag); +REGISTER_TEST_COMMAND(eal_flags_b_opt_autotest, test_invalid_b_flag); +REGISTER_TEST_COMMAND(eal_flags_vdev_opt_autotest, test_invalid_vdev_flag); +REGISTER_TEST_COMMAND(eal_flags_r_opt_autotest, test_invalid_r_flag); +REGISTER_TEST_COMMAND(eal_flags_mem_autotest, test_memory_flags); +REGISTER_TEST_COMMAND(eal_flags_file_prefix_autotest, test_file_prefix); +REGISTER_TEST_COMMAND(eal_flags_misc_autotest, test_misc_flags); diff --git a/app/test/test_rwlock.c b/app/test/test_rwlock.c index c3d656a..40f9175 100644 --- a/app/test/test_rwlock.c +++ b/app/test/test_rwlock.c @@ -547,3 +547,9 @@ struct try_rwlock_lcore { } REGISTER_TEST_COMMAND(rwlock_autotest, test_rwlock); + +/* subtests used in meson for CI */ +REGISTER_TEST_COMMAND(rwlock_test1_autotest, rwlock_test1); +REGISTER_TEST_COMMAND(rwlock_rda_autotest, try_rwlock_test_rda); +REGISTER_TEST_COMMAND(rwlock_rds_wrm_autotest, try_rwlock_test_rds_wrm); +REGISTER_TEST_COMMAND(rwlock_rde_wro_autotest, try_rwlock_test_rde_wro); -- 1.8.3.1
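The mechanism behind the split: the dpdk-test binary runs the command named in the DPDK_TEST environment variable (the meson env lines shown above), and REGISTER_TEST_COMMAND makes each subpart callable on its own. A minimal sketch of the pattern, assuming the app/test framework header test.h (the test names and functions below are made up):

#include "test.h"

static int
test_part_one(void)
{
	/* ... first half of the checks ... */
	return TEST_SUCCESS;
}

static int
test_part_two(void)
{
	/* ... second half of the checks ... */
	return TEST_SUCCESS;
}

/* the original big test keeps working as before */
static int
test_whole(void)
{
	if (test_part_one() != TEST_SUCCESS)
		return TEST_FAILED;
	if (test_part_two() != TEST_SUCCESS)
		return TEST_FAILED;
	return TEST_SUCCESS;
}

REGISTER_TEST_COMMAND(big_autotest, test_whole);
/* subtests used in meson for CI */
REGISTER_TEST_COMMAND(big_part1_autotest, test_part_one);
REGISTER_TEST_COMMAND(big_part2_autotest, test_part_two);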
[dpdk-dev] [PATCH v2 10/15] test/eal: set memory channel config only in dedicated test
The -n option is an optimisation configuration option that defaults to 0. Such a default value makes the mempool library distributes objects as if there was 4 memory channels, so -n 4 is the same as the default behavior. This parameter was mandatory a long time ago, but has been optional for a while. We check that setting this value works fine in its own test. Remove it everywhere else. Signed-off-by: David Marchand Acked-by: Aaron Conole --- Changelog since v1: - rebased on master --- app/test/autotest.py | 2 +- app/test/meson.build | 2 +- app/test/test_eal_flags.c | 191 -- 3 files changed, 101 insertions(+), 94 deletions(-) diff --git a/app/test/autotest.py b/app/test/autotest.py index 46c469e..b42f488 100644 --- a/app/test/autotest.py +++ b/app/test/autotest.py @@ -32,7 +32,7 @@ def usage(): else: test_whitelist = testlist -cmdline = "%s -c f -n 4" % (sys.argv[1]) +cmdline = "%s -c f" % (sys.argv[1]) print(cmdline) diff --git a/app/test/meson.build b/app/test/meson.build index 7628ed9..a9dddf8 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -368,7 +368,7 @@ endif num_cores_arg = '-l ' + num_cores -test_args = [num_cores_arg, '-n 4'] +test_args = [num_cores_arg] foreach arg : fast_parallel_test_names if host_machine.system() == 'linux' test(arg, dpdk_test, diff --git a/app/test/test_eal_flags.c b/app/test/test_eal_flags.c index 9112c96..e82e56a 100644 --- a/app/test/test_eal_flags.c +++ b/app/test/test_eal_flags.c @@ -245,25 +245,25 @@ enum hugepage_action { #endif const char *wlinval[][11] = { - {prgname, prefix, mp_flag, "-n", "1", "-c", "1", + {prgname, prefix, mp_flag, "-c", "1", pci_whitelist, "error", "", ""}, - {prgname, prefix, mp_flag, "-n", "1", "-c", "1", + {prgname, prefix, mp_flag, "-c", "1", pci_whitelist, "0:0:0", "", ""}, - {prgname, prefix, mp_flag, "-n", "1", "-c", "1", + {prgname, prefix, mp_flag, "-c", "1", pci_whitelist, "0:error:0.1", "", ""}, - {prgname, prefix, mp_flag, "-n", "1", "-c", "1", + {prgname, prefix, mp_flag, "-c", "1", pci_whitelist, "0:0:0.1error", "", ""}, - {prgname, prefix, mp_flag, "-n", "1", "-c", "1", + {prgname, prefix, mp_flag, "-c", "1", pci_whitelist, "error0:0:0.1", "", ""}, - {prgname, prefix, mp_flag, "-n", "1", "-c", "1", + {prgname, prefix, mp_flag, "-c", "1", pci_whitelist, "0:0:0.1.2", "", ""}, }; /* Test with valid whitelist option */ - const char *wlval1[] = {prgname, prefix, mp_flag, "-n", "1", "-c", "1", + const char *wlval1[] = {prgname, prefix, mp_flag, "-c", "1", pci_whitelist, "00FF:09:0B.3"}; - const char *wlval2[] = {prgname, prefix, mp_flag, "-n", "1", "-c", "1", + const char *wlval2[] = {prgname, prefix, mp_flag, "-c", "1", pci_whitelist, "09:0B.3", pci_whitelist, "0a:0b.1"}; - const char *wlval3[] = {prgname, prefix, mp_flag, "-n", "1", "-c", "1", + const char *wlval3[] = {prgname, prefix, mp_flag, "-c", "1", pci_whitelist, "09:0B.3,type=test", pci_whitelist, "08:00.1,type=normal", }; @@ -311,15 +311,16 @@ enum hugepage_action { #endif const char *blinval[][9] = { - {prgname, prefix, mp_flag, "-n", "1", "-c", "1", "-b", "error"}, - {prgname, prefix, mp_flag, "-n", "1", "-c", "1", "-b", "0:0:0"}, - {prgname, prefix, mp_flag, "-n", "1", "-c", "1", "-b", "0:error:0.1"}, - {prgname, prefix, mp_flag, "-n", "1", "-c", "1", "-b", "0:0:0.1error"}, - {prgname, prefix, mp_flag, "-n", "1", "-c", "1", "-b", "error0:0:0.1"}, - {prgname, prefix, mp_flag, "-n", "1", "-c", "1", "-b", "0:0:0.1.2"}, + {prgname, prefix, mp_flag, "-c", "1", "-b", "error"}, + {prgname, prefix, mp_flag, "-c", "1", "-b", "0:0:0"}, + {prgname, 
prefix, mp_flag, "-c", "1", "-b", "0:error:0.1"}, + {prgname, prefix, mp_flag, "-c", "1", "-b", "0:0:0.1error"}, + {prgname, prefix, mp_flag, "-c", "1", "-b", "error0:0:0.1"}, + {prgname, prefix, mp_flag, "-c", "1", "-b", "0:0:0.1.2"}, }; /* Test with valid blacklist option */ - const char *blval[] = {prgname, prefix, mp_flag, "-n", "1", "-c", "1", "-b", "FF:09:0B.3"}; + const char *blval[] = {prgname, prefix, mp_flag, "-c", "1", + "-b", "FF:09:0B.3"}; int i; @@ -354,17 +355,17 @@ enum hugepage_action { #endif /* Test with invalid
[dpdk-dev] [PATCH v2 12/15] test/eal: check number of cores before running subtests
From: Michael Santana The eal flags unit test assumes that a certain number of cores are available (4 and 8 cores); however, this may not always be the case. Individual developers may run the unit test on their local desktops, which typically have 2 to 4 cores; in that case the test is bound to fail for lack of 4 or 8 cores. Additionally, as we push forward introducing CI into DPDK we are limited to the hardware specification of CI services (e.g. Travis CI) that only have 2 cores on their servers, in which case the test would fail. To fix this we check available cores before running a subtest. This applies to subtests that are dedicated to testing that the -l and --lcore flags work correctly. If not enough cores are available the subtest is simply skipped, otherwise the subtest is run. Signed-off-by: Michael Santana Signed-off-by: David Marchand Acked-by: Aaron Conole --- app/test/test_eal_flags.c | 15 --- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/app/test/test_eal_flags.c b/app/test/test_eal_flags.c index 5e11b9f..cfa8a61 100644 --- a/app/test/test_eal_flags.c +++ b/app/test/test_eal_flags.c @@ -19,7 +19,7 @@ #include #include -#include +#include #include #include @@ -560,7 +560,9 @@ enum hugepage_action { "process ran without error with invalid -l flag\n"); return -1; } - if (launch_proc(argv15) != 0) { + if (rte_lcore_is_enabled(0) && rte_lcore_is_enabled(1) && + rte_lcore_is_enabled(2) && rte_lcore_is_enabled(3) && + launch_proc(argv15) != 0) { printf("Error - " "process did not run ok with valid corelist value\n"); return -1; @@ -579,7 +581,11 @@ enum hugepage_action { return -1; } - if (launch_proc(argv29) != 0) { + if (rte_lcore_is_enabled(0) && rte_lcore_is_enabled(1) && + rte_lcore_is_enabled(2) && rte_lcore_is_enabled(3) && + rte_lcore_is_enabled(4) && rte_lcore_is_enabled(5) && + rte_lcore_is_enabled(6) && rte_lcore_is_enabled(7) && + launch_proc(argv29) != 0) { printf("Error - " "process did not run ok with valid corelist value\n"); return -1; @@ -606,6 +612,9 @@ enum hugepage_action { snprintf(prefix, sizeof(prefix), "--file-prefix=%s", tmp); #endif + if (!rte_lcore_is_enabled(0) || !rte_lcore_is_enabled(1)) + return TEST_SKIPPED; + /* --master-lcore flag but no value */ const char *argv1[] = { prgname, prefix, mp_flag, "-c", "3", "--master-lcore"}; -- 1.8.3.1
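The repeated rte_lcore_is_enabled() checks could also be factored out; a possible helper, sketched below (not part of the patch, all names are made up), skips a subtest when any lcore id that a canned command line relies on is not enabled:

#include <rte_lcore.h>
#include "test.h"	/* TEST_SUCCESS / TEST_SKIPPED, from app/test */

static int
cores_available(const unsigned int *ids, unsigned int nb_ids)
{
	unsigned int i;

	for (i = 0; i < nb_ids; i++)
		if (!rte_lcore_is_enabled(ids[i]))
			return 0;
	return 1;
}

/* usage in a subtest whose canned command line uses "-l 0,1,2,3" */
static int
check_corelist_subtest(void)
{
	static const unsigned int needed[] = {0, 1, 2, 3};

	if (!cores_available(needed, 4))
		return TEST_SKIPPED;
	/* ... run the canned command line via launch_proc() ... */
	return TEST_SUCCESS;
}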
[dpdk-dev] [PATCH v2 14/15] test: do not start tests in parallel
Running the tests in parallel has two drawbacks: - the tests race on the hugepage allocations, - the tests share the cores to run their checks, which results in non-deterministic execution time. This results in random failures. For better reproducibility in CI, start them all in a serialised way. Signed-off-by: David Marchand --- Changelog since v1: - rebased on master --- app/test/meson.build | 25 + 1 file changed, 5 insertions(+), 20 deletions(-) diff --git a/app/test/meson.build b/app/test/meson.build index f799c8e..f1db02f 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -148,8 +148,7 @@ test_deps = ['acl', 'timer' ] -# All test cases in fast_parallel_test_names list are parallel -fast_parallel_test_names = [ +fast_test_names = [ 'acl_autotest', 'alarm_autotest', 'atomic_autotest', @@ -209,10 +208,6 @@ fast_parallel_test_names = [ 'timer_autotest', 'user_delay_us', 'version_autotest', -] - -# All test cases in fast_non_parallel_test_names list are non-parallel -fast_non_parallel_test_names = [ 'bitratestats_autotest', 'crc_autotest', 'delay_us_sleep_autotest', @@ -236,7 +231,6 @@ fast_non_parallel_test_names = [ 'thash_autotest', ] -# All test cases in perf_test_names list are non-parallel perf_test_names = [ 'ring_perf_autotest', 'mempool_perf_autotest', @@ -264,7 +258,6 @@ perf_test_names = [ 'stack_lf_perf_autotest', ] -# All test cases in driver_test_names list are non-parallel driver_test_names = [ 'cryptodev_aesni_mb_autotest', 'cryptodev_aesni_gcm_autotest', @@ -286,7 +279,6 @@ driver_test_names = [ 'link_bonding_rssconf_autotest', ] -# All test cases in dump_test_names list are non-parallel dump_test_names = [ 'dump_struct_sizes', 'dump_mempool', @@ -335,7 +327,7 @@ if dpdk_conf.has('RTE_LIBRTE_COMPRESSDEV') test_dep_objs += compress_test_dep test_sources += 'test_compressdev.c' test_deps += 'compressdev' - fast_non_parallel_test_names += 'compressdev_autotest' + fast_test_names += 'compressdev_autotest' endif endif @@ -383,30 +375,23 @@ endif num_cores_arg = '-l ' + num_cores test_args = [num_cores_arg] -foreach arg : fast_parallel_test_names +foreach arg : fast_test_names if host_machine.system() == 'linux' test(arg, dpdk_test, env : ['DPDK_TEST=' + arg], args : test_args + ['--file-prefix=@0@'.format(arg)], timeout : timeout_seconds_fast, + is_parallel : false, suite : 'fast-tests') else test(arg, dpdk_test, env : ['DPDK_TEST=' + arg], args : test_args, timeout : timeout_seconds_fast, - suite : 'fast-tests') - endif -endforeach - -foreach arg : fast_non_parallel_test_names - test(arg, dpdk_test, - env : ['DPDK_TEST=' + arg], - args : test_args, - timeout : timeout_seconds_fast, is_parallel : false, suite : 'fast-tests') + endif endforeach foreach arg : perf_test_names -- 1.8.3.1
[dpdk-dev] [PATCH v2 15/15] test: skip tests when missing requirements
Let's mark as skipped the tests when they are missing some requirements like a number of used cores or specific hardware availability, like compress, crypto or eventdev devices. Signed-off-by: David Marchand --- Changelog since v1: - adapted rcu parts with changes from newly added patch 8 --- app/test/test.c | 24 app/test/test_compressdev.c | 4 ++-- app/test/test_cryptodev.c | 4 ++-- app/test/test_distributor.c | 4 ++-- app/test/test_distributor_perf.c| 4 ++-- app/test/test_event_timer_adapter.c | 5 +++-- app/test/test_eventdev.c| 2 ++ app/test/test_func_reentrancy.c | 6 +++--- app/test/test_hash_multiwriter.c| 7 +++ app/test/test_hash_readwrite.c | 7 +++ app/test/test_hash_readwrite_lf.c | 8 app/test/test_ipsec.c | 4 ++-- app/test/test_mbuf.c| 13 ++--- app/test/test_rcu_qsbr.c| 10 +- app/test/test_rcu_qsbr_perf.c | 10 +- app/test/test_service_cores.c | 14 ++ app/test/test_stack.c | 8 +--- app/test/test_timer.c | 10 +- app/test/test_timer_secondary.c | 10 ++ 19 files changed, 90 insertions(+), 64 deletions(-) diff --git a/app/test/test.c b/app/test/test.c index ea1e98f..194a92a 100644 --- a/app/test/test.c +++ b/app/test/test.c @@ -208,14 +208,16 @@ printf(" + Test Suite : %s\n", suite->suite_name); } - if (suite->setup) - if (suite->setup() != 0) { + if (suite->setup) { + test_success = suite->setup(); + if (test_success != 0) { /* -* setup failed, so count all enabled tests and mark -* them as failed +* setup did not pass, so count all enabled tests and +* mark them as failed/skipped */ while (suite->unit_test_cases[total].testcase) { - if (!suite->unit_test_cases[total].enabled) + if (!suite->unit_test_cases[total].enabled || + test_success == TEST_SKIPPED) skipped++; else failed++; @@ -223,6 +225,7 @@ } goto suite_summary; } + } printf(" + --- +\n"); @@ -246,6 +249,8 @@ test_success = suite->unit_test_cases[total].testcase(); if (test_success == TEST_SUCCESS) succeeded++; + else if (test_success == TEST_SKIPPED) + skipped++; else if (test_success == -ENOTSUP) unsupported++; else @@ -262,6 +267,8 @@ if (test_success == TEST_SUCCESS) status = "succeeded"; + else if (test_success == TEST_SKIPPED) + status = "skipped"; else if (test_success == -ENOTSUP) status = "unsupported"; else @@ -293,7 +300,8 @@ last_test_result = failed; if (failed) - return -1; - - return 0; + return TEST_FAILED; + if (total == skipped) + return TEST_SKIPPED; + return TEST_SUCCESS; } diff --git a/app/test/test_compressdev.c b/app/test/test_compressdev.c index 1b1983e..cf78775 100644 --- a/app/test/test_compressdev.c +++ b/app/test/test_compressdev.c @@ -134,8 +134,8 @@ struct test_data_params { unsigned int i; if (rte_compressdev_count() == 0) { - RTE_LOG(ERR, USER1, "Need at least one compress device\n"); - return TEST_FAILED; + RTE_LOG(WARNING, USER1, "Need at least one compress device\n"); + return TEST_SKIPPED; } RTE_LOG(NOTICE, USER1, "Running tests on device %s\n", diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c index eca6d3d..0509af7 100644 --- a/app/test/test_cryptodev.c +++ b/app/test/test_cryptodev.c @@ -408,8 +408,8 @@ struct crypto_unittest_params { nb_devs = rte_cryptodev_count(); if (nb_devs < 1) { - RTE_LOG(ERR, USER1, "No crypto devices found?\n"); - return TEST_FAILED; + RTE_LOG(WARNING, USER1, "No crypto devices found?\n"); + return TEST_SKIPPED; } /* Create list of valid crypto devs */ diff --git a/app/test/test_distributor.c b/app/test/test_distributor.c index da3348f..8084c07 100644 --- a/app/test/test_distributor.c +++ b/app
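The same TEST_SKIPPED convention also works at suite level, which is what the test.c hunk above accounts for. A minimal sketch of a setup hook following that convention (assumes app/test's test.h; the function name and the exact requirements are only examples):

#include <rte_log.h>
#include <rte_lcore.h>
#include <rte_cryptodev.h>
#include "test.h"

static int
example_suite_setup(void)
{
	/* no device: skip the whole suite instead of failing it */
	if (rte_cryptodev_count() == 0) {
		RTE_LOG(WARNING, USER1, "Need at least one crypto device\n");
		return TEST_SKIPPED;
	}
	/* not enough cores for the concurrency checks */
	if (rte_lcore_count() < 2) {
		RTE_LOG(WARNING, USER1, "Need at least two lcores\n");
		return TEST_SKIPPED;
	}
	return TEST_SUCCESS;
}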