Re: [PATCH v3] eal: add build-time option to omit trace
On Fri, Sep 20, 2024 at 2:38 PM Morten Brørup wrote: > > Some applications want to omit the trace feature. > Either to reduce the memory footprint, to reduce the exposed attack > surface, or for other reasons. > > This patch adds an option in rte_config.h to include or omit trace in the > build. Trace is included by default. > > Omitting trace works by omitting all trace points. > For API and ABI compatibility, the trace feature itself remains. > > Signed-off-by: Morten Brørup > --- > v3: > * Simpler version with much fewer ifdefs. (Jerin Jacob) > v2: > * Added/modified macros required for building applications with trace > omitted. > --- > app/test/test_trace.c | 11 ++- > config/rte_config.h | 1 + > lib/eal/include/rte_trace_point.h | 21 + > 3 files changed, 32 insertions(+), 1 deletion(-) > > diff --git a/app/test/test_trace.c b/app/test/test_trace.c > index 00809f433b..7918cc865d 100644 > --- a/app/test/test_trace.c > +++ b/app/test/test_trace.c > @@ -12,7 +12,16 @@ > > int app_dpdk_test_tp_count; > > -#ifdef RTE_EXEC_ENV_WINDOWS > +#if !defined(RTE_TRACE) Now that we are not disabling any symbols, this conditional compilation can be removed. Use __rte_trace_point_generic_is_enabled() in the other leg instead. 
> + > +static int > +test_trace(void) > +{ > + printf("trace omitted at build-time, skipping test\n"); > + return TEST_SKIPPED; > +} > + > +#elif defined(RTE_EXEC_ENV_WINDOWS) > > static int > test_trace(void) > diff --git a/config/rte_config.h b/config/rte_config.h > index dd7bb0d35b..fd6f8a2f1a 100644 > --- a/config/rte_config.h > +++ b/config/rte_config.h > @@ -49,6 +49,7 @@ > #define RTE_MAX_TAILQ 32 > #define RTE_LOG_DP_LEVEL RTE_LOG_INFO > #define RTE_MAX_VFIO_CONTAINERS 64 > +#define RTE_TRACE 1 > > /* bsd module defines */ > #define RTE_CONTIGMEM_MAX_NUM_BUFS 64 > diff --git a/lib/eal/include/rte_trace_point.h > b/lib/eal/include/rte_trace_point.h > index 41e2a7f99e..b80688ce89 100644 > --- a/lib/eal/include/rte_trace_point.h > +++ b/lib/eal/include/rte_trace_point.h > @@ -212,6 +212,25 @@ bool rte_trace_point_is_enabled(rte_trace_point_t *tp); > __rte_experimental > rte_trace_point_t *rte_trace_point_lookup(const char *name); > > +/** > + * @internal > + * > + * Test if the tracepoint compile-time option is enabled. > + * > + * @return > + * true if tracepoint enabled, false otherwise. > + */ > +__rte_experimental > +static __rte_always_inline bool > +__rte_trace_point_generic_is_enabled(void) Add to version.map file > +{ > +#ifdef RTE_TRACE > + return true; > +#else > + return false; > +#endif > +} > + > /** > * @internal > * > @@ -359,6 +378,8 @@ __rte_trace_point_emit_ev_header(void *mem, uint64_t in) > #define __rte_trace_point_emit_header_generic(t) \ > void *mem; \ > do { \ > + if (!__rte_trace_point_generic_is_enabled()) \ > + return; \ To be super safe, Add similar check in RTE_INIT(trace##_init) > const uint64_t val = rte_atomic_load_explicit(t, > rte_memory_order_acquire); \ > if (likely(!(val & __RTE_TRACE_FIELD_ENABLE_MASK))) \ > return; \ > -- > 2.43.0 >
Re: [PATCH v1] ethdev: fix int overflow in descriptor count logic
On 9/23/24 12:26, Niall Meade wrote: Addressed a specific overflow issue in the eth_dev_adjust_nb_desc() function where the uint16_t variable nb_desc would overflow when its value was greater than (2^16 - nb_align). This overflow caused nb_desc to incorrectly wrap around between 0 and nb_align-1, leading to the function setting nb_desc to nb_min instead of the expected nb_max. The resolution involves upcasting nb_desc to a uint32_t before the RTE_ALIGN_CEIL macro is applied. This change ensures that the subsequent call to RTE_ALIGN_FLOOR(nb_desc + (nb_align - 1), nb_align) does not result in an overflow, as it would when nb_desc is a uint16_t. By using a uint32_t for these operations, the correct behavior is maintained without the risk of overflow. Fixes: 0f67fc3baeb9 ("ethdev: add function to adjust number of descriptors") Signed-off-by: Niall Meade Thanks a lot for the fix. Reviewed-by: Andrew Rybchenko
Re: [PATCH v2 1/3] eventdev: introduce event pre-scheduling
On 2024-09-19 15:13, Pavan Nikhilesh Bhagavatula wrote: From: pbhagavat...@marvell.com Sent: Tuesday, September 17, 2024 3:11 AM To: jer...@marvell.com; sthot...@marvell.com; Sevincer, Abdullah ; hemant.agra...@nxp.com; sachin.sax...@oss.nxp.com; Van Haaren, Harry ; mattias.ronnb...@ericsson.com; lian...@liangbit.com; Mccarthy, Peter Cc: dev@dpdk.org; Pavan Nikhilesh Subject: [PATCH v2 1/3] eventdev: introduce event pre-scheduling From: Pavan Nikhilesh Event pre-scheduling improves scheduling performance by assigning events to event ports in advance when dequeues are issued. The dequeue operation initiates the pre-schedule operation, which completes in parallel without affecting the dequeued event flow contexts and dequeue latency. Is the prescheduling done to get the event more quickly in the next dequeue? The first dequeue executes pre-schedule to make events available for the next dequeue. Is this how it is supposed to work? Yes, that is correct. "improves scheduling performance" may be a bit misleading, in that case. I suggest "reduces scheduling overhead" instead. You can argue it likely reduces scheduling performance, in certain scenarios. "reduces scheduling overhead, at the cost of load balancing performance." It seems to me that this should be a simple hint-type API, where the hint is used by the event device to decide if pre-scheduling should be used or not (assuming pre-scheduling on/off is even an option). The hint would just be a way for the application to express whether or not it want the scheduler to prioritize load balancing agility and port-to-port wall-time latency, or scheduling overhead, which in turn could potentially be rephrased as the app being throughput or latency/RT-oriented. It could also be useful for the event device to know which priority levels are to be considered latency-sensitive, and which are throughput-oriented - maybe in the form of a threshold. 
Event devices can indicate pre-scheduling capabilities using `RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE` and `RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE` via the event device info function `info.event_dev_cap`. Applications can select the pre-schedule type and configure it through `rte_event_dev_config.preschedule_type` during `rte_event_dev_configure`. The supported pre-schedule types are: * `RTE_EVENT_DEV_PRESCHEDULE_NONE` - No pre-scheduling. * `RTE_EVENT_DEV_PRESCHEDULE` - Always issue a pre-schedule on dequeue. * `RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE` - Delay issuing pre-schedule until there are no forward progress constraints with the held flow contexts. Signed-off-by: Pavan Nikhilesh --- app/test/test_eventdev.c| 63 + doc/guides/prog_guide/eventdev/eventdev.rst | 22 +++ lib/eventdev/rte_eventdev.h | 48 3 files changed, 133 insertions(+) diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c index e4e234dc98..cf496ee88d 100644 --- a/app/test/test_eventdev.c +++ b/app/test/test_eventdev.c @@ -1250,6 +1250,67 @@ test_eventdev_profile_switch(void) return TEST_SUCCESS; } +static int +preschedule_test(rte_event_dev_preschedule_type_t preschedule_type, +const char *preschedule_name) { +#define NB_EVENTS 1024 + uint64_t start, total; + struct rte_event ev; + int rc, cnt; + + ev.event_type = RTE_EVENT_TYPE_CPU; + ev.queue_id = 0; + ev.op = RTE_EVENT_OP_NEW; + ev.u64 = 0xBADF00D0; + + for (cnt = 0; cnt < NB_EVENTS; cnt++) { + ev.flow_id = cnt; + rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1); + TEST_ASSERT(rc == 1, "Failed to enqueue event"); + } + + RTE_SET_USED(preschedule_type); + total = 0; + while (cnt) { + start = rte_rdtsc_precise(); + rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0); + if (rc) { + total += rte_rdtsc_precise() - start; + cnt--; + } + } + printf("Preschedule type : %s, avg cycles %" PRIu64 "\n", preschedule_name, + total / NB_EVENTS); + + return TEST_SUCCESS; +} + +static int +test_eventdev_preschedule_configure(void) +{ + 
struct rte_event_dev_config dev_conf; + struct rte_event_dev_info info; + int rc; + + rte_event_dev_info_get(TEST_DEV_ID, &info); + + if ((info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE) == 0) + return TEST_SKIPPED; + + devconf_set_default_sane_values(&dev_conf, &info); + dev_conf.preschedule_type = RTE_EVENT_DEV_PRESCHEDULE; + rc = rte_event_dev_configure(TEST_DEV_ID, &dev_conf); + TEST_ASSERT_SUCCESS(rc, "Failed to configure eventdev"); + + rc = preschedule_test(RTE_EVENT_DEV_PRESCHEDULE_NONE, "RTE_EVENT_DEV_PRESCHEDULE_NONE"); + rc |= preschedule_test(RTE_EVENT_DEV_PRESCHEDULE, "RT
Minutes of DPDK Technical Board Meeting, 2024-09-18
Members Attending: 11/11
- Aaron Conole
- Bruce Richardson
- Hemant Agrawal
- Honnappa Nagarahalli
- Jerin Jacob
- Kevin Traynor
- Konstantin Ananyev
- Maxime Coquelin
- Morten Brørup
- Stephen Hemminger
- Thomas Monjalon (Chair)

NOTE: The Technical Board meetings take place every second Wednesday at 3 pm UTC on https://zoom-lfx.platform.linuxfoundation.org/meeting/96459488340?password=d808f1f6-0a28-4165-929e-5a5bcae7efeb
Meetings are public, and DPDK community members are welcome to attend. Agenda and minutes can be found at http://core.dpdk.org/techboard/minutes

1/ DPDK summit remote speakers
The techboard prefers talks happening on stage in general. One week before the summit, it appears that many speakers won't be able to travel because of visa issues. It has been decided to allow these speakers to present remotely, preferably with the help of someone physically on stage. The same decision applies in case a speaker has another issue preventing travel to Canada. Remote speakers are asked to provide a link to a pre-recorded video backup.

2/ dpdk-next-dts tree maintainer
Juraj Linkeš won't be able to continue his involvement in DTS. He will be replaced in the role of git tree maintainer for DTS by Paul Szczepanek (Arm) and Patrick Robb (UNH). Paul agreed to be the primary maintainer.

3/ Linux uAPI headers import
Maxime asked whether his proposal to import some Linux headers into the DPDK repository was accepted. The goal is to be able to use a recent Linux API without having to consider whether the build machine knows it or not. As always, if a call is not supported in the running kernel, an error will be returned by the kernel. The board confirmed it is the right direction.

4/ DPDK + AI
Nathan raised a recurring question about the position of DPDK regarding AI techniques, and asked how to integrate them. Some uses of AI for networking were discussed briefly. There will be more information and discussions during the summit.
Another discussion then started about using DPDK to help with AI training. There is a desire to have such protocol layers implemented with DPDK, and we are looking for volunteers.
Re: [PATCH v5 1/1] dts: add text parser for testpmd verbose output
The question is when the exception would be raised, or, in other words, what should we do when hasattr(cls, name) is False. If I understand this correctly, if it's False, then name is not among the flags, and that means testpmd returned an unsupported flag, which shouldn't happen; but if it does in the future, we would be better off throwing an exception, or at the very least logging a warning, so that we have an indication that we need to add support for a new flag. This is a good point. Realistically, if it is ever False, that would mean we have a gap in the implementation. I like the idea of flagging a loud warning over throwing an exception in this case, though, since an exception would cause all test cases that use OL flags to stop working even if they don't require the new flag. That would definitely get the problem fixed sooner, but it would also shut down automated testing until then. It's a tradeoff between risking CI being affected (if a new flag is added without also adding it to DTS) and how noticeable the warning is. I guess we can implement something in CI that will look for warnings like these? +flag |= cls[name] +return flag + +@classmethod +def make_parser(cls) -> ParserFn: +"""Makes a parser function. + +Returns: +ParserFn: A dictionary for the `dataclasses.field` metadata argument containing a +parser function that makes an instance of this flag from text. +""" +return TextParser.wrap( +TextParser.wrap(TextParser.find(r"ol_flags: ([^\n]+)"), str.split), +cls.from_str_list, +) The RSSOffloadTypesFlag does the split in its from_list_string method. Do we want to do the same here? Maybe we could create a ParsableFlag (or Creatable? Or something else) superclass that would implement these from_* methods (from_list_string, from_str) and subclass it. Flags should be subclassable if they don't contain members. The superclass would be useful so that we don't redefine the same method over and over and so that it's clear what's already available. 
I like this idea a lot. Basically all of these flags that are used in parsers are going to need something like that which is going to be basically the same so just implementing it one time would be great. I'm not sure if it fits the scope of this series though, do you think I should write it and add it here or in a separate patch? A separate patch seems better, as it touches different parts of the code. We should probably implement the same logic in this patch (without the exception or warning and with the same if condition) and then make changes in the other patch. @@ -656,6 +1147,9 @@ def stop(self, verify: bool = True) -> None: Raises: InteractiveCommandExecutionError: If `verify` is :data:`True` and the command to stop forwarding results in an error. + +Returns: +Output gathered from sending the stop command. This not just from sending the stop command, but everything else that preceded (when collecting the verbose output), right? Technically yes, but that's just due to the nature of how interactive shells aren't perfect when it comes to asynchronous output. That's why I tried to be sneaky and say that it is the "output gathered from sending the stop command" trying to imply that it is not just the output of the `stop` command, but all the output that is gathered from sending it. I can update this though. diff --git a/dts/framework/utils.py b/dts/framework/utils.py @@ -27,6 +27,12 @@ from .exception import ConfigurationError REGEX_FOR_PCI_ADDRESS: str = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/" +_REGEX_FOR_COLON_SEP_MAC: str = r"(?:[\da-fA-F]{2}:){5}[\da-fA-F]{2}" +_REGEX_FOR_HYPHEN_SEP_MAC: str = r"(?:[\da-fA-F]{2}-){5,7}[\da-fA-F]{2}" {5,7} should be just 5 repetitions. When could it be more? I added it for EUI-64 addresses, but maybe this isn't very relevant here since I just read that they are encouraged on non-ethernet devices. I can remove it if it doesn't seem worth it to capture. 
From what I gather the EUI-64 address is composed from MAC addresses, but it's a different identifier. I'd say if we ever need it we can add it as a separate regex (and look for both if we need to).
Re: [PATCH 2/9] drivers/raw: introduce cnxk rvu lf device driver
On Sun, Sep 8, 2024 at 1:28 AM Akhil Goyal wrote: > > CNXK product families can have a use case to allow PF and VF > applications to communicate using mailboxes and also get notified > of any interrupt that may occur on the device. > Hence, a new raw device driver is added for such RVU LF devices. > These devices can map to a PF or a VF which can send mailboxes to > each other. > > Signed-off-by: Akhil Goyal > +# SPDX-License-Identifier: BSD-3-Clause > +# Copyright(C) 2024 Marvell. > +# > + > +deps += ['bus_pci', 'common_cnxk', 'rawdev'] > +sources = files( > +'cnxk_rvu_lf.c', > +) > +require_iova_in_mbuf = false > diff --git a/drivers/raw/cnxk_rvu_lf/rte_pmd_rvu_lf.h > b/drivers/raw/cnxk_rvu_lf/rte_pmd_rvu_lf.h > new file mode 100644 > index 00..2d3cd032b7 > --- /dev/null > +++ b/drivers/raw/cnxk_rvu_lf/rte_pmd_rvu_lf.h > @@ -0,0 +1,39 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(C) 2024 Marvell. > + */ > + > +#ifndef _CNXK_RVU_LF_H_ > +#define _CNXK_RVU_LF_H_ > + > +#include > + > +#include > +#include > +#include > +#include > +#include Missing update to doc/api/doxy-api-index.md > + > +/** > + * @file rte_pmd_rvu_lf.h > + * > + * Marvell RVU LF raw PMD specific structures and interface > + * > + * This API allows applications to manage RVU LF device in user space along > with > + * installing interrupt handlers for low latency signal processing. > + */ > + > +#ifdef __cplusplus > +extern "C" { > +#endif > +extern int cnxk_logtype_rvu_lf; Public symbol. Please add Doxygen symbol > + > +#define CNXK_RVU_LF_LOG(level, fmt, args...) \ Public symbol. Please add Doxygen symbol > + rte_log(RTE_LOG_ ## level, cnxk_logtype_rvu_lf, \ > + "%s(): " fmt "\n", __func__, ## args) > + Do we need to make this public symbols?
Re: [PATCH v2 1/2] vhost: add logging mechanism for reconnection
On Fri, Sep 20, 2024 at 11:09 AM Maxime Coquelin wrote: > > This patch introduces a way for backend to keep track > of the needed information to be able to reconnect without > frontend cooperation. > > It will be used for VDUSE, which does not provide interface > for the backend to save and later recover local virtqueues > metadata needed to reconnect. > > Vhost-user support could also be added for improved packed > ring reconnection support. > > Signed-off-by: Maxime Coquelin vq->last_avail_idx gets updated in other places and I suspect we are missing some calls to vhost_virtqueue_reconnect_log_split/packed. I spotted: - lib/vhost/vhost.c: rte_vhost_set_vring_base() - lib/vhost/vhost_user.c: translate_ring_addresses(), vhost_user_set_vring_base(). The rest lgtm. -- David Marchand
Re: [PATCH 00/33] add Marvell cn20k SOC support for mempool and net
On Tue, Sep 10, 2024 at 2:54 PM Nithin Dabilpuram wrote: > > This series adds support for Marvell cn20k SOC for mempool and > net PMD's. > > This series also adds a few net/cnxk PMD updates to expose IPsec > features supported by HW that are very custom in nature and > some enhancements for cn10k.

# Please update the release notes for mempool cn20k driver support
# Please update the release notes for ethdev cn20k driver support
# Please split the non-cn20k driver patches into a separate series.
# Please fix the following build issue:

total: 0 errors, 0 warnings
Applying: net/cnxk: add PMD APIs to submit CPT instruction
ccache clang -Idrivers/libtmp_rte_net_cnxk.a.p -Idrivers -I../drivers -Idrivers/net/cnxk -I../drivers/net/cnxk -Ilib/ethdev -I../lib/ethdev -I. -I.. -Iconfig -I../config -Ilib/eal/include -I../lib/eal/include -Ilib/eal/linux/include -I../lib/eal/linux/include -Ilib/eal/x86/include -I../lib/eal/x86/include -Ilib/eal/common -I../lib/eal/common -Ilib/eal -I../lib/eal -Ilib/kvargs -I../lib/kvargs -Ilib/log -I../lib/log -Ilib/metrics -I../lib/metrics -Ilib/telemetry -I../lib/telemetry -Ilib/net -I../lib/net -Ilib/mbuf -I../lib/mbuf -Ilib/mempool -I../lib/mempool -Ilib/ring -I../lib/ring -Ilib/meter -I../lib/meter -Idrivers/bus/pci -I../drivers/bus/pci -I../drivers/bus/pci/linux -Ilib/pci -I../lib/pci -Idrivers/bus/vdev -I../drivers/bus/vdev -Ilib/cryptodev -I../lib/cryptodev -Ilib/rcu -I../lib/rcu -Ilib/eventdev -I../lib/eventdev -Ilib/hash -I../lib/hash -Ilib/timer -I../lib/timer -Ilib/dmadev -I../lib/dmadev -Ilib/security -I../lib/security -Idrivers/common/cnxk -I../drivers/common/cnxk -Idrivers/mempool/cnxk -I../drivers/mempool/cnxk -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -Werror -std=c11 -O2 -g -include rte_config.h -Wcast-qual -Wdeprecated -Wformat -Wformat-nonliteral -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wnested-externs -Wold-style-definition -Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef -Wwrite-strings -Wno-address-of-packed-member -Wno-missing-field-initializers -D_GNU_SOURCE -fPIC -march=native -mrtm -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -Wno-format-truncation -flax-vector-conversions -Wno-strict-aliasing -Wno-asm-operand-widths -DRTE_LOG_DEFAULT_LOGTYPE=pmd.net.cnxk -MD -MQ drivers/libtmp_rte_net_cnxk.a.p/net_cnxk_cn20k_ethdev.c.o -MF drivers/libtmp_rte_net_cnxk.a.p/net_cnxk_cn20k_ethdev.c.o.d -o drivers/libtmp_rte_net_cnxk.a.p/net_cnxk_cn20k_ethdev.c.o -c ../drivers/net/cnxk/cn20k_ethdev.c
In file included from ../drivers/net/cnxk/cn20k_ethdev.c:4:
In file included from ../drivers/net/cnxk/cn20k_ethdev.h:8:
../drivers/net/cnxk/cnxk_ethdev.h:516:8: error: address argument to atomic operation must be a pointer to _Atomic type ('int32_t *' (aka 'int *') invalid)
  516 |   val = rte_atomic_fetch_sub_explicit(fc_sw, nb_inst, __ATOMIC_RELAXED) - nb_inst;
../lib/eal/include/rte_stdatomic.h:95:2: note: expanded from macro 'rte_atomic_fetch_sub_explicit'
   95 |   atomic_fetch_sub_explicit(ptr, val, memorder)
/usr/lib/clang/18/include/stdatomic.h:154:35: note: expanded from macro 'atomic_fetch_sub_explicit'
  154 | #define atomic_fetch_sub_explicit __c11_atomic_fetch_sub
In file included from ../drivers/net/cnxk/cn20k_ethdev.c:4:
In file included from ../drivers/net/cnxk/cn20k_ethdev.h:8:
../drivers/net/cnxk/cnxk_ethdev.h:520:30: error: address argument to atomic operation must be a pointer to _Atomic type ('uint64_t *' (aka 'unsigned long *') invalid)
  520 |   newval = (int64_t)nb_desc - rte_atomic_load_explicit(fc, __ATOMIC_RELAXED);
../lib/eal/include/rte_stdatomic.h:73:2: note: expanded from macro 'rte_atomic_load_explicit'
   73 |   atomic_load_explicit(ptr, memorder)
/usr/lib/clang/18/include/stdatomic.h:139:30: note: expanded from macro 'atomic_load_explicit'
  139 | #define atomic_load_explicit __c11_atomic_load
In file included from ../drivers/net/cnxk/cn20k_ethdev.c:4:
In file included from ../drivers/net/cnxk/cn20k_ethdev.h:8:
../drivers/net/cnxk/cnxk_ethdev.h:523:7: error: address argument to atomic operation must be a pointer to _Atomic type ('int32_t *' (aka 'int *') invalid)
  523 |   if (!rte_atomic_compare_exchange_strong_explicit(fc_sw, &val, newval, __ATOMIC_RELEASE,
../lib/eal/include/rte_stdatomic.h:83:2: note: expanded from macro 'rte_atomic_compare_exchange_strong_explicit'
   83 |   atomic_compare_exchange_strong_explicit(ptr, expected, desired, \
/usr/lib/clang/18/include/stdatomic.h:145:49: note: expanded from macro 'atomic_compare_e
[PATCH v1] ethdev: fix int overflow in descriptor count logic
Addressed a specific overflow issue in the eth_dev_adjust_nb_desc() function where the uint16_t variable nb_desc would overflow when its value was greater than (2^16 - nb_align). This overflow caused nb_desc to incorrectly wrap around between 0 and nb_align-1, leading to the function setting nb_desc to nb_min instead of the expected nb_max. The resolution involves upcasting nb_desc to a uint32_t before the RTE_ALIGN_CEIL macro is applied. This change ensures that the subsequent call to RTE_ALIGN_FLOOR(nb_desc + (nb_align - 1), nb_align) does not result in an overflow, as it would when nb_desc is a uint16_t. By using a uint32_t for these operations, the correct behavior is maintained without the risk of overflow. Fixes: 0f67fc3baeb9 ("ethdev: add function to adjust number of descriptors") Signed-off-by: Niall Meade --- .mailmap| 1 + lib/ethdev/rte_ethdev.c | 12 +--- 2 files changed, 10 insertions(+), 3 deletions(-) diff --git a/.mailmap b/.mailmap index 4a508bafad..c1941e78bb 100644 --- a/.mailmap +++ b/.mailmap @@ -1053,6 +1053,7 @@ Nelson Escobar Nemanja Marjanovic Netanel Belgazal Netanel Gonen +Niall Meade Niall Power Nicholas Pratte Nick Connolly diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index f1c658f49e..f978283edf 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -6577,13 +6577,19 @@ static void eth_dev_adjust_nb_desc(uint16_t *nb_desc, const struct rte_eth_desc_lim *desc_lim) { + /* Upcast to uint32 to avoid potential overflow with RTE_ALIGN_CEIL(). */ + uint32_t nb_desc_32 = *nb_desc; + if (desc_lim->nb_align != 0) - *nb_desc = RTE_ALIGN_CEIL(*nb_desc, desc_lim->nb_align); + nb_desc_32 = RTE_ALIGN_CEIL(nb_desc_32, desc_lim->nb_align); if (desc_lim->nb_max != 0) - *nb_desc = RTE_MIN(*nb_desc, desc_lim->nb_max); + nb_desc_32 = RTE_MIN(nb_desc_32, desc_lim->nb_max); + + nb_desc_32 = RTE_MAX(nb_desc_32, desc_lim->nb_min); - *nb_desc = RTE_MAX(*nb_desc, desc_lim->nb_min); + /* Assign clipped u32 back to u16. 
*/ + *nb_desc = nb_desc_32; } int -- 2.34.1
Re: [PATCH v3] ethdev: optimize the activation of fast-path tracepoints
Thanks to all of you. Much appreciated! On Sat, Sep 21, 2024 at 9:07 PM Ferruh Yigit wrote: > On 9/19/2024 5:37 PM, Jerin Jacob wrote: > > On Thu, Sep 19, 2024 at 12:16 AM Adel Belkhiri > wrote: > >> > >> From: Adel Belkhiri > >> > >> Split the tracepoints rte_ethdev_trace_rx_burst and > >> rte_eth_trace_call_rx_callbacks into two separate ones > >> for empty and non-empty calls to avoid saturating > >> quickly the trace buffer. > >> > >> Signed-off-by: Adel Belkhiri > > > > Acked-by: Jerin Jacob > > > > Applied to dpdk-next-net/main, thanks. >
Re: [PATCH v2] common/cnxk: unregister IRQ before reconfigure
On Thu, Sep 19, 2024 at 11:33 PM wrote: > > From: Pavan Nikhilesh > > Unregister SSO device and NPA IRQs before resizing > them. > > Signed-off-by: Pavan Nikhilesh # Add fixes: tag # Change commit message to common/cnxk: fix... > --- > v2 Changes: > - Reorder npa interrupt un-registration. > > drivers/common/cnxk/roc_dev.c | 16 +++- > drivers/common/cnxk/roc_dev_priv.h | 2 ++ > drivers/common/cnxk/roc_sso.c | 7 +++ > 3 files changed, 16 insertions(+), 9 deletions(-) > > diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c > index daf7684d8e..4a5787e7f9 100644 > --- a/drivers/common/cnxk/roc_dev.c > +++ b/drivers/common/cnxk/roc_dev.c > @@ -1017,8 +1017,8 @@ mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, > struct dev *dev) >RVU_VF_INT_VEC_MBOX); > } > > -static void > -mbox_unregister_irq(struct plt_pci_device *pci_dev, struct dev *dev) > +void > +dev_mbox_unregister_irq(struct plt_pci_device *pci_dev, struct dev *dev) > { > if (dev_is_vf(dev)) > mbox_unregister_vf_irq(pci_dev, dev); > @@ -1096,8 +1096,8 @@ roc_pf_vf_flr_irq(void *param) > } > } > > -static int > -vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev) > +void > +dev_vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev) > { > struct plt_intr_handle *intr_handle = pci_dev->intr_handle; > int i; > @@ -1113,8 +1113,6 @@ vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, > struct dev *dev) > > dev_irq_unregister(intr_handle, roc_pf_vf_flr_irq, dev, >RVU_PF_INT_VEC_VFFLR1); > - > - return 0; > } > > int > @@ -1600,7 +1598,7 @@ dev_init(struct dev *dev, struct plt_pci_device > *pci_dev) > iounmap: > dev_vf_mbase_put(pci_dev, vf_mbase); > mbox_unregister: > - mbox_unregister_irq(pci_dev, dev); > + dev_mbox_unregister_irq(pci_dev, dev); > if (dev->ops) > plt_free(dev->ops); > mbox_fini: > @@ -1636,10 +1634,10 @@ dev_fini(struct dev *dev, struct plt_pci_device > *pci_dev) > if (dev->lmt_mz) > plt_memzone_free(dev->lmt_mz); > > - 
mbox_unregister_irq(pci_dev, dev); > + dev_mbox_unregister_irq(pci_dev, dev); > > if (!dev_is_vf(dev)) > - vf_flr_unregister_irqs(pci_dev, dev); > + dev_vf_flr_unregister_irqs(pci_dev, dev); > /* Release PF - VF */ > mbox = &dev->mbox_vfpf; > if (mbox->hwbase && mbox->dev) > diff --git a/drivers/common/cnxk/roc_dev_priv.h > b/drivers/common/cnxk/roc_dev_priv.h > index 50e12cbf17..df7dc222ed 100644 > --- a/drivers/common/cnxk/roc_dev_priv.h > +++ b/drivers/common/cnxk/roc_dev_priv.h > @@ -131,6 +131,8 @@ int dev_irqs_disable(struct plt_intr_handle *intr_handle); > int dev_irq_reconfigure(struct plt_intr_handle *intr_handle, uint16_t > max_intr); > > int dev_mbox_register_irq(struct plt_pci_device *pci_dev, struct dev *dev); > +void dev_mbox_unregister_irq(struct plt_pci_device *pci_dev, struct dev > *dev); > int dev_vf_flr_register_irqs(struct plt_pci_device *pci_dev, struct dev > *dev); > +void dev_vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev > *dev); > > #endif /* _ROC_DEV_PRIV_H */ > diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c > index 293b0c81a1..7d45b06dda 100644 > --- a/drivers/common/cnxk/roc_sso.c > +++ b/drivers/common/cnxk/roc_sso.c > @@ -842,7 +842,14 @@ sso_update_msix_vec_count(struct roc_sso *roc_sso, > uint16_t sso_vec_cnt) > return dev_irq_reconfigure(pci_dev->intr_handle, mbox_vec_cnt > + npa_vec_cnt); > } > > + /* Before re-configuring unregister irqs */ > npa_vec_cnt = (dev->npa.pci_dev == pci_dev) ? NPA_LF_INT_VEC_POISON + > 1 : 0; > + if (npa_vec_cnt) > + npa_unregister_irqs(&dev->npa); > + > + dev_mbox_unregister_irq(pci_dev, dev); > + if (!dev_is_vf(dev)) > + dev_vf_flr_unregister_irqs(pci_dev, dev); > > /* Re-configure to include SSO vectors */ > rc = dev_irq_reconfigure(pci_dev->intr_handle, mbox_vec_cnt + > npa_vec_cnt + sso_vec_cnt); > -- > 2.25.1 >
[PATCH v3 2/2] dts: add port stats checks test suite
From: Jeremy Spewock This patch adds a new test suite to DTS that validates the accuracy of the port statistics using testpmd. The functionality is tested by sending a packet of a fixed size to the SUT and verifying that the statistics for packets received, received bytes, packets sent, and sent bytes all update accordingly. Depends-on: patch-144268 ("dts: add text parser for testpmd verbose output") Depends-on: patch-144270 ("dts: add VLAN methods to testpmd shell") Signed-off-by: Jeremy Spewock --- dts/framework/config/conf_yaml_schema.json | 3 +- dts/tests/TestSuite_port_stats_checks.py | 160 + 2 files changed, 162 insertions(+), 1 deletion(-) create mode 100644 dts/tests/TestSuite_port_stats_checks.py diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json index df390e8ae2..72af3bbe36 100644 --- a/dts/framework/config/conf_yaml_schema.json +++ b/dts/framework/config/conf_yaml_schema.json @@ -187,7 +187,8 @@ "enum": [ "hello_world", "os_udp", -"pmd_buffer_scatter" +"pmd_buffer_scatter", +"port_stats_checks" ] }, "test_target": { diff --git a/dts/tests/TestSuite_port_stats_checks.py b/dts/tests/TestSuite_port_stats_checks.py new file mode 100644 index 00..c873ebe61a --- /dev/null +++ b/dts/tests/TestSuite_port_stats_checks.py @@ -0,0 +1,160 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2024 University of New Hampshire + +"""Port Statistics testing suite. + +This test suite tests the functionality of querying the statistics of a port and verifies that the +values provided in the statistics accurately reflect the traffic that has been handled on the port. +This is shown by sending a packet of a fixed size to the SUT and verifying that the number of RX +packets has increased by 1, the number of RX bytes has increased by the specified size, the number +of TX packets has also increased by 1 (since we expect the packet to be forwarded), and the number +of TX bytes has also increased by the same fixed amount. 
+""" + +from typing import ClassVar, Tuple + +from scapy.layers.inet import IP # type: ignore[import-untyped] +from scapy.layers.l2 import Ether # type: ignore[import-untyped] +from scapy.packet import Packet, Raw # type: ignore[import-untyped] + +from framework.params.testpmd import SimpleForwardingModes +from framework.remote_session.testpmd_shell import ( +RtePTypes, +TestPmdShell, +TestPmdVerbosePacket, +) +from framework.test_suite import TestSuite + + +class TestPortStatsChecks(TestSuite): +"""DPDK Port statistics testing suite. + +Support for port statistics is tested by sending a packet of a fixed size denoted by +`total_packet_len` and verifying the that TX/RX packets of the TX/RX ports updated by exactly +1 and the TX/RX bytes of the TX/RX ports updated by exactly `total_packet_len`. This is done by +finding the total amount of packets that were sent/received which did not originate from this +test suite and taking the sum of the lengths of each of these "noise" packets and subtracting +it from the total values in the port statistics so that all that is left are relevant values. +""" + +#: Port where traffic will be received on the SUT. +recv_port: ClassVar[int] = 0 +#: Port where traffic will be sent from on the SUT. +send_port: ClassVar[int] = 1 + +#: +ip_header_len: ClassVar[int] = 20 +#: +ether_header_len: ClassVar[int] = 14 + +#: Length of the packet being sent including the IP and frame headers. +total_packet_len: ClassVar[int] = 100 +#: Packet to send during testing. +send_pkt: ClassVar[Packet] = ( +Ether() / IP() / Raw("X" * (total_packet_len - ip_header_len - ether_header_len)) +) + +def extract_noise_information( +self, verbose_out: list[TestPmdVerbosePacket] +) -> Tuple[int, int, int, int]: +"""Extract information about packets that were not sent by the framework in `verbose_out`. + +Extract the number of sent/received packets that did not originate from this test suite as +well as the sum of the lengths of said "noise" packets. 
Note that received packets are only +examined on the port with the ID `self.recv_port` since these are the receive stats that +will be analyzed in this suite. Sent packets are also only examined on the port with the ID +`self.send_port`. + +Packets are considered to be "noise" when they don't match the expected structure of the +packets that are being sent by this test suite. Specifically, the source and destination +mac addresses as well as the software packet type are checked on packets received by +testpmd to ensure they match the proper addresses of the TG and SUT nodes. Packets that are +sent by testpmd however only check the source mac address and the software p
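The noise-subtraction arithmetic described above can be illustrated with a small self-contained sketch. Note that `VerbosePkt`, its field names, and the single shared `expected_src_mac` (the suite actually checks different MACs for the RX and TX directions) are simplifications for illustration, not the framework's real API:

```python
from dataclasses import dataclass

# Stand-in for the framework's TestPmdVerbosePacket; field names are illustrative.
@dataclass
class VerbosePkt:
    port_id: int
    was_received: bool
    src_mac: str
    length: int

def noise_totals(pkts: list[VerbosePkt], recv_port: int, send_port: int,
                 expected_src_mac: str) -> tuple[int, int, int, int]:
    """Return (rx_noise_count, rx_noise_bytes, tx_noise_count, tx_noise_bytes)."""
    rx_noise = [p for p in pkts
                if p.was_received and p.port_id == recv_port
                and p.src_mac != expected_src_mac]
    tx_noise = [p for p in pkts
                if not p.was_received and p.port_id == send_port
                and p.src_mac != expected_src_mac]
    return (len(rx_noise), sum(p.length for p in rx_noise),
            len(tx_noise), sum(p.length for p in tx_noise))

# Subtracting the noise from the raw port counters leaves only the traffic
# generated by the test suite:
raw_rx_packets, raw_rx_bytes = 5, 420
rx_noise_count, rx_noise_bytes, _, _ = noise_totals(
    [VerbosePkt(0, True, "aa:aa:aa:aa:aa:aa", 100),   # the test packet
     VerbosePkt(0, True, "ff:ff:ff:ff:ff:ff", 64)],   # e.g. broadcast noise
    recv_port=0, send_port=1, expected_src_mac="aa:aa:aa:aa:aa:aa")
relevant_rx_packets = raw_rx_packets - rx_noise_count
relevant_rx_bytes = raw_rx_bytes - rx_noise_bytes
```

With the sample data, one of the two received packets is noise, so the corrected counters are 4 packets and 356 bytes.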
[PATCH v3 1/2] dts: add clearing port stats to testpmd shell
From: Jeremy Spewock Methods currently exist for querying the statistics of a port in testpmd, but there were no methods for clearing the current statistics on a port. This patch adds methods that allow you to clear the statistics of a single port or all ports to account for situations where the user only wants the port statistics after a certain point and does not care about any existing prior values. This patch also modifies the show_port_stats_all method to allow it to return the raw testpmd output gathered from sending the command as well as the parsed output. This allows users to access and examine additional information provided by the raw output if needed. Depends-on: patch-144268 ("dts: add text parser for testpmd verbose output") Depends-on: patch-144270 ("dts: add VLAN methods to testpmd shell") Signed-off-by: Jeremy Spewock --- dts/framework/remote_session/testpmd_shell.py | 49 +-- 1 file changed, 45 insertions(+), 4 deletions(-) diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py index affd37ba5b..c2267a7662 100644 --- a/dts/framework/remote_session/testpmd_shell.py +++ b/dts/framework/remote_session/testpmd_shell.py @@ -20,7 +20,7 @@ from dataclasses import dataclass, field from enum import Flag, auto from pathlib import PurePath -from typing import Any, Callable, ClassVar, Concatenate, ParamSpec +from typing import Any, Callable, ClassVar, Concatenate, ParamSpec, Tuple from typing_extensions import Self, Unpack @@ -1393,11 +1393,13 @@ def _update_port(self, port: TestPmdPort) -> None: for existing_port in self._ports ] -def show_port_stats_all(self) -> list[TestPmdPortStats]: +def show_port_stats_all(self) -> Tuple[list[TestPmdPortStats], str]: """Returns the statistics of all the ports. Returns: -list[TestPmdPortStats]: A list containing all the ports stats as `TestPmdPortStats`. 
+Tuple[list[TestPmdPortStats], str]: A tuple where the first element is the stats of all +ports as `TestPmdPortStats` and the second is the raw testpmd output that was collected +from the sent command. """ output = self.send_command("show port stats all") @@ -1412,7 +1414,7 @@ def show_port_stats_all(self) -> list[TestPmdPortStats]: # # # iter = re.finditer(r"(^ #*.+#*$[^#]+)^ #*\r$", output, re.MULTILINE) -return [TestPmdPortStats.parse(block.group(1)) for block in iter] +return ([TestPmdPortStats.parse(block.group(1)) for block in iter], output) def show_port_stats(self, port_id: int) -> TestPmdPortStats: """Returns the given port statistics. @@ -1675,6 +1677,45 @@ def set_verbose(self, level: int, verify: bool = True) -> None: f"Testpmd failed to set verbose level to {level}." ) +def clear_port_stats(self, port_id: int, verify: bool = True) -> None: +"""Clear statistics of a given port. + +Args: +port_id: ID of the port to clear the statistics on. +verify: If :data:`True` the output of the command will be scanned to verify that it was +successful, otherwise failures will be ignored. Defaults to :data:`True`. + +Raises: +InteractiveCommandExecutionError: If `verify` is :data:`True` and testpmd fails to +clear the statistics of the given port. +""" +clear_output = self.send_command(f"clear port stats {port_id}") +if verify and f"NIC statistics for port {port_id} cleared" not in clear_output: +raise InteractiveCommandExecutionError( +f"Testpmd failed to clear the statistics on port {port_id}" +) + +def clear_port_stats_all(self, verify: bool = True) -> None: +"""Clear the statistics of all ports that testpmd is aware of. + +Args: +verify: If :data:`True` the output of the command will be scanned to verify that all +ports had their statistics cleared, otherwise failures will be ignored. Defaults to +:data:`True`. + +Raises: +InteractiveCommandExecutionError: If `verify` is :data:`True` and testpmd fails to +clear the statistics of any of its ports. 
+""" +clear_output = self.send_command("clear port stats all") +if verify: +if type(self._app_params.ports) is list: +for port_id in range(len(self._app_params.ports)): +if f"NIC statistics for port {port_id} cleared" not in clear_output: +raise InteractiveCommandExecutionError( +f"Test pmd failed to set clear forwarding stats on port {port_id}" +) + def _close(self) -> None:
[PATCH v3 0/2] dts: port over stats checks
From: Jeremy Spewock v3: * fix name of the first patch Jeremy Spewock (2): dts: add clearing port stats to testpmd shell dts: add port stats checks test suite dts/framework/config/conf_yaml_schema.json| 3 +- dts/framework/remote_session/testpmd_shell.py | 49 +- dts/tests/TestSuite_port_stats_checks.py | 160 ++ 3 files changed, 207 insertions(+), 5 deletions(-) create mode 100644 dts/tests/TestSuite_port_stats_checks.py -- 2.46.0
Re: [PATCH v2 1/2] dts: add clearing port stats and verbose mode to testpmd
Apologies, please disregard this patch, I sent out a v3 with a fixed subject line.
Re: [PATCH v2 2/2] vhost: add reconnection support to VDUSE
On Fri, Sep 20, 2024 at 11:09 AM Maxime Coquelin wrote: > > This patch enables VDUSE reconnection support making use of > the newly introduced reconnection mechanism in Vhost > library. > > At DPDK VDUSE device creation time, there are two > possibilities: > 1. The Kernel VDUSE device does not exist: > a. A reconnection file named after the VDUSE device name > is created in VDUSE tmpfs. > b. The file is truncated to 'struct vhost_reconnect_data' > size, and mmapped. > c. Negotiated features, Virtio status... are saved for > sanity checks at reconnect time. > 2. The Kernel VDUSE device already exists: > a. Exit with failure if no reconnect file exists for > this device. > b. Open and mmap the reconnect file. > c. Perform sanity check to ensure features are compatible. > d. Restore virtqueues' available indexes at startup time. > > Then at runtime, the virtqueues' available indexes are logged by > the Vhost reconnection mechanism. > > At DPDK VDUSE device destruction time, there are two > possibilities: > 1. The Kernel VDUSE device destruction succeeded, which > means it is no longer attached to the vDPA bus. The > reconnection file is unmapped and then removed. > 2. The Kernel VDUSE device destruction failed, meaning it > is still attached to the vDPA bus. The reconnection > file is unmapped but not removed to make later > reconnection possible. > > Signed-off-by: Maxime Coquelin Reviewed-by: David Marchand -- David Marchand
[PATCH v2 2/2] dts: add port control testing suite
From: Jeremy Spewock This patch ports over the port_control test suite from the Old DTS framework and adapts the functionality to fit with the current testing framework. The test suite provides validation of basic port control functions such as starting, stopping, and closing ports. It should be noted that this test suite is not completely 1-to-1 with the one from Old DTS: it excludes test cases that use QEMU for testing, as this is not something we are looking to add to the framework in the near future. It also excludes test cases for resetting ports, as this feature is not supported by all devices and does not expose a capability through testpmd indicating whether it is supported. Signed-off-by: Jeremy Spewock --- It should also be noted that this suite has issues running on some mlx5 NICs. This is due to the fact that ports are validated as "stopped" using their link status, which does not update upon stopping ports of an mlx5 NIC. I am not sure at this time if this is intentional or just a bug, but it doesn't seem there is a consistent way to check if a port is stopped other than link status. 
dts/framework/config/conf_yaml_schema.json | 3 +- dts/tests/TestSuite_port_control.py| 80 ++ 2 files changed, 82 insertions(+), 1 deletion(-) create mode 100644 dts/tests/TestSuite_port_control.py diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json index df390e8ae2..b0d8e1f6a6 100644 --- a/dts/framework/config/conf_yaml_schema.json +++ b/dts/framework/config/conf_yaml_schema.json @@ -187,7 +187,8 @@ "enum": [ "hello_world", "os_udp", -"pmd_buffer_scatter" +"pmd_buffer_scatter", +"port_control" ] }, "test_target": { diff --git a/dts/tests/TestSuite_port_control.py b/dts/tests/TestSuite_port_control.py new file mode 100644 index 00..9e843512ab --- /dev/null +++ b/dts/tests/TestSuite_port_control.py @@ -0,0 +1,80 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2024 University of New Hampshire +"""Port Control Testing Suite. + +This test suite serves to show that ports within testpmd support basic configuration functions. +Things such as starting a port, stopping a port, and closing a port should all be supported by the +device. Additionally, after each of these configuration steps (outside of closing the port) it +should still be possible to start the port again and verify that the port is able to forward a +large amount of packets (1000 are sent in the test cases). +""" +from scapy.layers.l2 import Ether # type: ignore[import-untyped] +from scapy.packet import Packet, Raw # type: ignore[import-untyped] + +from framework.params.testpmd import SimpleForwardingModes +from framework.remote_session.testpmd_shell import TestPmdShell +from framework.test_suite import TestSuite + + +class TestPortControl(TestSuite): +"""DPDK Port Control Testing Suite.""" + +def send_packets_and_verify(self) -> None: +"""Send 1000 packets and verify that all packets were forwarded back. + +Packets sent are identical and are all ethernet frames with a payload of 30 "X" characters. 
+This payload is used to differentiate noise on the wire from packets sent by this +framework. +""" +payload = "X" * 30 +num_pakts = 1000 +send_p = Ether() / Raw(payload) +recv_pakts: list[Packet] = [] +# The scapy sniffer can only handle a little under 200 packets per 1000 at a time, so this +# is limited to 100 per burst. +for _ in range(int(num_pakts / 100)): +recv_pakts += self.send_packets_and_capture([send_p] * 100) +recv_pakts += self.send_packets_and_capture([send_p] * (num_pakts % 100)) +recv_pakts = [ +p +for p in recv_pakts +if ( +# Remove padding from the bytes. +hasattr(p, "load") +and p.load.decode("utf-8").replace("\x00", "") == payload +) +] +self.verify( +len(recv_pakts) == num_pakts, +f"Received {len(recv_pakts)} packets when {num_pakts} were expected.", +) + +def test_start_ports(self) -> None: +"""Ensure that the port can receive traffic after explicitly being started.""" +with TestPmdShell(self.sut_node, forward_mode=SimpleForwardingModes.mac) as testpmd: +testpmd.start_all_ports() +testpmd.start() +self.send_packets_and_verify() + +def test_stop_ports(self) -> None: +"""Verify that the link goes down after stopping ports. + +This case also verifies that the port can be started again and properly forward traffic +after being stopped. +""" +with TestPmdShell(self.sut_node, forward_mode=SimpleForwardingModes.mac) as testpmd: +testpmd.stop_all_ports() +self.veri
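The burst-splitting loop in `send_packets_and_verify` above can be reduced to a self-contained sketch (the burst size of 100 comes from the comment in the diff). Note that the original loop sends an extra `num_pakts % 100` burst even when the total is an exact multiple of 100, which for 1000 packets is an empty burst; the sketch below skips the empty remainder:

```python
def burst_sizes(total: int, burst: int = 100) -> list[int]:
    """Split `total` packets into full bursts plus at most one remainder burst."""
    sizes = [burst] * (total // burst)
    if total % burst:
        sizes.append(total % burst)
    return sizes
```

The suite would then call `send_packets_and_capture` once per entry, accumulating the received packets across bursts.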
[PATCH v2 0/2] dts: port over port_control testing suite
From: Jeremy Spewock v2: * apply to next-dts Jeremy Spewock (2): dts: add method for closing ports to testpmd dts: add port control testing suite dts/framework/config/conf_yaml_schema.json| 3 +- dts/framework/remote_session/testpmd_shell.py | 18 + dts/tests/TestSuite_port_control.py | 80 +++ 3 files changed, 100 insertions(+), 1 deletion(-) create mode 100644 dts/tests/TestSuite_port_control.py -- 2.46.0
[PATCH v2 1/2] dts: add method for closing ports to testpmd
From: Jeremy Spewock Closing ports is a standard configuration feature that is available in testpmd but the framework lacks the ability to access this command through the Testpmd API. This patch adds a method that performs this action and verifies the results of sending the command to allow developers to have more control over the state of the ports that testpmd is aware of. Signed-off-by: Jeremy Spewock --- dts/framework/remote_session/testpmd_shell.py | 18 ++ 1 file changed, 18 insertions(+) diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py index 77902a468d..c7161df374 100644 --- a/dts/framework/remote_session/testpmd_shell.py +++ b/dts/framework/remote_session/testpmd_shell.py @@ -844,6 +844,24 @@ def set_ports_queues(self, number_of: int) -> None: self.send_command(f"port config all rxq {number_of}") self.send_command(f"port config all txq {number_of}") +@requires_stopped_ports +def close_all_ports(self, verify: bool = True) -> None: +"""Close all ports. + +Args: +verify: If :data:`True` the output of the close command will be scanned in an attempt +to verify that all ports were closed successfully. Defaults to :data:`True`. + +Raises: +InteractiveCommandExecutionError: If `verify` is :data:`True` and at least one port +failed to close. +""" +port_close_output = self.send_command("port close all") +if verify: +num_ports = len(self.ports) +if not all(f"Port {p_id} is closed" in port_close_output for p_id in range(num_ports)): +raise InteractiveCommandExecutionError("Ports were not closed successfully.") + def show_port_info_all(self) -> list[TestPmdPort]: """Returns the information of all the ports. -- 2.46.0
Re: [PATCH 1/9] rawdev: add API to get device from index
On Sun, Sep 8, 2024 at 1:03 AM Akhil Goyal wrote: > > Added an internal API for PMDs to get raw device pointer > from a device id. > > Signed-off-by: Akhil Goyal > --- > lib/rawdev/rte_rawdev_pmd.h | 24 # This patch needs to go through the main tree. So please send this patch separately and the rest of the patches as a separate series. # version.map update is missing for the new API in the INTERNAL section. > 1 file changed, 24 insertions(+) > > diff --git a/lib/rawdev/rte_rawdev_pmd.h b/lib/rawdev/rte_rawdev_pmd.h > index 22b406444d..8339122348 100644 > --- a/lib/rawdev/rte_rawdev_pmd.h > +++ b/lib/rawdev/rte_rawdev_pmd.h > @@ -102,6 +102,30 @@ rte_rawdev_pmd_get_named_dev(const char *name) > return NULL; > } > > +/** > + * Get the rte_rawdev structure device pointer for given device ID. > + * > + * @param dev_id > + * raw device index. > + * > + * @return > + * - The rte_rawdev structure pointer for the given device ID. > + */ > +static inline struct rte_rawdev * > +rte_rawdev_pmd_get_dev(uint8_t dev_id) > +{ > + struct rte_rawdev *dev; > + > + if (dev_id >= RTE_RAWDEV_MAX_DEVS) > + return NULL; > + > + dev = &rte_rawdevs[dev_id]; > + if (dev->attached == RTE_RAWDEV_ATTACHED) > + return dev; > + > + return NULL; > +} > + > /** > * Validate if the raw device index is a valid attached raw device. > * > -- > 2.25.1 >
[DPDK/DTS Bug 1548] Add asynchronous output collector for testpmd verbose information
https://bugs.dpdk.org/show_bug.cgi?id=1548 Bug ID: 1548 Summary: Add asynchronous output collector for testpmd verbose information Product: DPDK Version: unspecified Hardware: All OS: All Status: UNCONFIRMED Severity: normal Priority: Normal Component: DTS Assignee: dev@dpdk.org Reporter: jspew...@iol.unh.edu CC: juraj.lin...@pantheon.tech, pr...@iol.unh.edu Target Milestone: --- Verbose packets can be received at any point in time while packets are being forwarded in testpmd, but output collection in interactive shells doesn't support this behavior (or any asynchronous behavior for that matter). Currently, output collection in interactive shells works as follows: after sending a command, the framework consumes all output in the buffer since the previous command. In the case of verbose output in testpmd, this means that if you send any command while verbose packet forwarding is turned on, you will consume and discard all of the verbose information that was sent to stdout prior to that command being sent. There are a few ways to handle this. One way that was proposed was to ensure that testpmd methods always return their raw output so that it can be parsed; another was to track if verbose output is being sent and, if it is, collect it from every command that is sent so that it can be read later [1]. The second method seems preferred, but if it is done that way there should be some method of time-stamping when each blob of verbose output was collected so that developers have the ability to examine verbose output specifically between two commands, rather than all of the output gathered thus far. [1] https://inbox.dpdk.org/dev/CAKXZ7egonYtTivf-SZB=ixv+utgpww8sku9jy8xj58mgv4k...@mail.gmail.com/ -- You are receiving this mail because: You are the assignee for the bug.
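The time-stamped collector proposed in the bug could be sketched as follows. This is a minimal design illustration, not the framework's actual implementation; the class and method names are hypothetical:

```python
import time
from dataclasses import dataclass

@dataclass
class VerboseBlob:
    captured_at: float  # when the blob was consumed from the shell buffer
    text: str           # raw verbose output attached to one command

class VerboseCollector:
    """Accumulate verbose output consumed alongside each command, with timestamps."""

    def __init__(self) -> None:
        self._blobs: list[VerboseBlob] = []

    def record(self, raw_output: str) -> None:
        # Called each time a command's output is consumed while verbose mode is on.
        self._blobs.append(VerboseBlob(time.monotonic(), raw_output))

    def between(self, start: float, end: float) -> list[VerboseBlob]:
        # Lets a developer inspect only the verbose output captured between
        # two points in time, rather than everything gathered so far.
        return [b for b in self._blobs if start <= b.captured_at <= end]
```

A monotonic clock is used so that system clock adjustments cannot reorder the blobs.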
[PATCH v2 2/2] dts: add port stats checks test suite
From: Jeremy Spewock This patch adds a new test suite to DTS that validates the accuracy of the port statistics using testpmd. The functionality is tested by sending a packet of a fixed size to the SUT and verifying that the statistics for packets received, received bytes, packets sent, and sent bytes all update accordingly. Depends-on: patch-144268 ("dts: add text parser for testpmd verbose output") Depends-on: patch-144270 ("dts: add VLAN methods to testpmd shell") Signed-off-by: Jeremy Spewock --- dts/framework/config/conf_yaml_schema.json | 3 +- dts/tests/TestSuite_port_stats_checks.py | 160 + 2 files changed, 162 insertions(+), 1 deletion(-) create mode 100644 dts/tests/TestSuite_port_stats_checks.py diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json index df390e8ae2..72af3bbe36 100644 --- a/dts/framework/config/conf_yaml_schema.json +++ b/dts/framework/config/conf_yaml_schema.json @@ -187,7 +187,8 @@ "enum": [ "hello_world", "os_udp", -"pmd_buffer_scatter" +"pmd_buffer_scatter", +"port_stats_checks" ] }, "test_target": { diff --git a/dts/tests/TestSuite_port_stats_checks.py b/dts/tests/TestSuite_port_stats_checks.py new file mode 100644 index 00..c873ebe61a --- /dev/null +++ b/dts/tests/TestSuite_port_stats_checks.py @@ -0,0 +1,160 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2024 University of New Hampshire + +"""Port Statistics testing suite. + +This test suite tests the functionality of querying the statistics of a port and verifies that the +values provided in the statistics accurately reflect the traffic that has been handled on the port. +This is shown by sending a packet of a fixed size to the SUT and verifying that the number of RX +packets has increased by 1, the number of RX bytes has increased by the specified size, the number +of TX packets has also increased by 1 (since we expect the packet to be forwarded), and the number +of TX bytes has also increased by the same fixed amount. 
+""" + +from typing import ClassVar, Tuple + +from scapy.layers.inet import IP # type: ignore[import-untyped] +from scapy.layers.l2 import Ether # type: ignore[import-untyped] +from scapy.packet import Packet, Raw # type: ignore[import-untyped] + +from framework.params.testpmd import SimpleForwardingModes +from framework.remote_session.testpmd_shell import ( +RtePTypes, +TestPmdShell, +TestPmdVerbosePacket, +) +from framework.test_suite import TestSuite + + +class TestPortStatsChecks(TestSuite): +"""DPDK Port statistics testing suite. + +Support for port statistics is tested by sending a packet of a fixed size denoted by +`total_packet_len` and verifying the that TX/RX packets of the TX/RX ports updated by exactly +1 and the TX/RX bytes of the TX/RX ports updated by exactly `total_packet_len`. This is done by +finding the total amount of packets that were sent/received which did not originate from this +test suite and taking the sum of the lengths of each of these "noise" packets and subtracting +it from the total values in the port statistics so that all that is left are relevant values. +""" + +#: Port where traffic will be received on the SUT. +recv_port: ClassVar[int] = 0 +#: Port where traffic will be sent from on the SUT. +send_port: ClassVar[int] = 1 + +#: +ip_header_len: ClassVar[int] = 20 +#: +ether_header_len: ClassVar[int] = 14 + +#: Length of the packet being sent including the IP and frame headers. +total_packet_len: ClassVar[int] = 100 +#: Packet to send during testing. +send_pkt: ClassVar[Packet] = ( +Ether() / IP() / Raw("X" * (total_packet_len - ip_header_len - ether_header_len)) +) + +def extract_noise_information( +self, verbose_out: list[TestPmdVerbosePacket] +) -> Tuple[int, int, int, int]: +"""Extract information about packets that were not sent by the framework in `verbose_out`. + +Extract the number of sent/received packets that did not originate from this test suite as +well as the sum of the lengths of said "noise" packets. 
Note that received packets are only +examined on the port with the ID `self.recv_port` since these are the receive stats that +will be analyzed in this suite. Sent packets are also only examined on the port with the ID +`self.send_port`. + +Packets are considered to be "noise" when they don't match the expected structure of the +packets that are being sent by this test suite. Specifically, the source and destination +mac addresses as well as the software packet type are checked on packets received by +testpmd to ensure they match the proper addresses of the TG and SUT nodes. Packets that are +sent by testpmd however only check the source mac address and the software p
[PATCH v2 1/2] dts: add clearing port stats and verbose mode to testpmd
From: Jeremy Spewock Methods currently exist for querying the statistics of a port in testpmd, but there were no methods for clearing the current statistics on a port. This patch adds methods that allow you to clear the statistics of a single port or all ports to account for situations where the user only wants the port statistics after a certain point and does not care about any existing prior values. This patch also modifies the show_port_stats_all method to allow it to return the raw testpmd output gathered from sending the command as well as the parsed output. This allows users to access and examine additional information provided by the raw output if needed. Depends-on: patch-144268 ("dts: add text parser for testpmd verbose output") Depends-on: patch-144270 ("dts: add VLAN methods to testpmd shell") Signed-off-by: Jeremy Spewock --- dts/framework/remote_session/testpmd_shell.py | 49 +-- 1 file changed, 45 insertions(+), 4 deletions(-) diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py index affd37ba5b..c2267a7662 100644 --- a/dts/framework/remote_session/testpmd_shell.py +++ b/dts/framework/remote_session/testpmd_shell.py @@ -20,7 +20,7 @@ from dataclasses import dataclass, field from enum import Flag, auto from pathlib import PurePath -from typing import Any, Callable, ClassVar, Concatenate, ParamSpec +from typing import Any, Callable, ClassVar, Concatenate, ParamSpec, Tuple from typing_extensions import Self, Unpack @@ -1393,11 +1393,13 @@ def _update_port(self, port: TestPmdPort) -> None: for existing_port in self._ports ] -def show_port_stats_all(self) -> list[TestPmdPortStats]: +def show_port_stats_all(self) -> Tuple[list[TestPmdPortStats], str]: """Returns the statistics of all the ports. Returns: -list[TestPmdPortStats]: A list containing all the ports stats as `TestPmdPortStats`. 
+Tuple[list[TestPmdPortStats], str]: A tuple where the first element is the stats of all +ports as `TestPmdPortStats` and the second is the raw testpmd output that was collected +from the sent command. """ output = self.send_command("show port stats all") @@ -1412,7 +1414,7 @@ def show_port_stats_all(self) -> list[TestPmdPortStats]: # # # iter = re.finditer(r"(^ #*.+#*$[^#]+)^ #*\r$", output, re.MULTILINE) -return [TestPmdPortStats.parse(block.group(1)) for block in iter] +return ([TestPmdPortStats.parse(block.group(1)) for block in iter], output) def show_port_stats(self, port_id: int) -> TestPmdPortStats: """Returns the given port statistics. @@ -1675,6 +1677,45 @@ def set_verbose(self, level: int, verify: bool = True) -> None: f"Testpmd failed to set verbose level to {level}." ) +def clear_port_stats(self, port_id: int, verify: bool = True) -> None: +"""Clear statistics of a given port. + +Args: +port_id: ID of the port to clear the statistics on. +verify: If :data:`True` the output of the command will be scanned to verify that it was +successful, otherwise failures will be ignored. Defaults to :data:`True`. + +Raises: +InteractiveCommandExecutionError: If `verify` is :data:`True` and testpmd fails to +clear the statistics of the given port. +""" +clear_output = self.send_command(f"clear port stats {port_id}") +if verify and f"NIC statistics for port {port_id} cleared" not in clear_output: +raise InteractiveCommandExecutionError( +f"Testpmd failed to clear the statistics on port {port_id}" +) + +def clear_port_stats_all(self, verify: bool = True) -> None: +"""Clear the statistics of all ports that testpmd is aware of. + +Args: +verify: If :data:`True` the output of the command will be scanned to verify that all +ports had their statistics cleared, otherwise failures will be ignored. Defaults to +:data:`True`. + +Raises: +InteractiveCommandExecutionError: If `verify` is :data:`True` and testpmd fails to +clear the statistics of any of its ports. 
+""" +clear_output = self.send_command("clear port stats all") +if verify: +if type(self._app_params.ports) is list: +for port_id in range(len(self._app_params.ports)): +if f"NIC statistics for port {port_id} cleared" not in clear_output: +raise InteractiveCommandExecutionError( +f"Test pmd failed to set clear forwarding stats on port {port_id}" +) + def _close(self) -> None:
[PATCH v2 0/2] dts: port over stats checks
From: Jeremy Spewock v2: * apply on next-dts * add raw testpmd output to show_port_stats_all to make test method consistent * add dependency on VLAN suite for set_verbose method Jeremy Spewock (2): dts: add clearing port stats and verbose mode to testpmd dts: add port stats checks test suite dts/framework/config/conf_yaml_schema.json| 3 +- dts/framework/remote_session/testpmd_shell.py | 49 +- dts/tests/TestSuite_port_stats_checks.py | 160 ++ 3 files changed, 207 insertions(+), 5 deletions(-) create mode 100644 dts/tests/TestSuite_port_stats_checks.py -- 2.46.0
[PATCH v4 2/5] dts: parameterize what ports the TG sends packets to
From: Jeremy Spewock Previously in the DTS framework the helper methods in the TestSuite class designated ports as either ingress or egress ports and would wrap the methods of the traffic generator to allow packets to only flow to those designated ingress or egress ports. This is undesirable in some cases, such as when you have virtual functions on top of your port, where the TG ports can send to more than one SUT port. This patch solves this problem by creating optional parameters that allow the user to specify which port to gather the MAC addresses from when sending and receiving packets. Signed-off-by: Jeremy Spewock --- dts/framework/test_suite.py | 48 + 1 file changed, 38 insertions(+), 10 deletions(-) diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index 051509fb86..9b707ca46d 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -186,6 +186,8 @@ def send_packet_and_capture( packet: Packet, filter_config: PacketFilteringConfig = PacketFilteringConfig(), duration: float = 1, +sut_ingress: Port | None = None, +sut_egress: Port | None = None, ) -> list[Packet]: """Send and receive `packet` using the associated TG. @@ -196,14 +198,16 @@ def send_packet_and_capture( packet: The packet to send. filter_config: The filter to use when capturing packets. duration: Capture traffic for this amount of time after sending `packet`. +sut_ingress: Optional port to use as the SUT ingress port. Defaults to +`self._sut_port_ingress`. +sut_egress: Optional port to use as the SUT egress port. Defaults to +`self._sut_port_egress` Returns: A list of received packets. 
""" return self.send_packets_and_capture( -[packet], -filter_config, -duration, +[packet], filter_config, duration, sut_ingress, sut_egress ) def send_packets_and_capture( @@ -211,6 +215,8 @@ def send_packets_and_capture( packets: list[Packet], filter_config: PacketFilteringConfig = PacketFilteringConfig(), duration: float = 1, +sut_ingress: Port | None = None, +sut_egress: Port | None = None, ) -> list[Packet]: """Send and receive `packets` using the associated TG. @@ -221,11 +227,19 @@ def send_packets_and_capture( packets: The packets to send. filter_config: The filter to use when capturing packets. duration: Capture traffic for this amount of time after sending `packet`. +sut_ingress: Optional port to use as the SUT ingress port. Defaults to +`self._sut_port_ingress`. +sut_egress: Optional port to use as the SUT egress port. Defaults to +`self._sut_port_egress` Returns: A list of received packets. """ -packets = [self._adjust_addresses(packet) for packet in packets] +if sut_ingress is None: +sut_ingress = self._sut_port_ingress +if sut_egress is None: +sut_egress = self._sut_port_egress +packets = [self._adjust_addresses(packet, sut_ingress, sut_egress) for packet in packets] return self.tg_node.send_packets_and_capture( packets, self._tg_port_egress, @@ -234,18 +248,30 @@ def send_packets_and_capture( duration, ) -def get_expected_packet(self, packet: Packet) -> Packet: +def get_expected_packet( +self, packet: Packet, sut_ingress: Port | None = None, sut_egress: Port | None = None +) -> Packet: """Inject the proper L2/L3 addresses into `packet`. Args: packet: The packet to modify. +sut_ingress: Optional port to use as the SUT ingress port. Defaults to +`self._sut_port_ingress`. +sut_egress: Optional port to use as the SUT egress port. Defaults to +`self._sut_port_egress`. Returns: `packet` with injected L2/L3 addresses. 
""" -return self._adjust_addresses(packet, expected=True) - -def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet: +if sut_ingress is None: +sut_ingress = self._sut_port_ingress +if sut_egress is None: +sut_egress = self._sut_port_egress +return self._adjust_addresses(packet, sut_ingress, sut_egress, expected=True) + +def _adjust_addresses( +self, packet: Packet, sut_ingress_port: Port, sut_egress_port: Port, expected: bool = False +) -> Packet: """L2 and L3 address additions in both directions. Assumptions: @@ -255,11 +281,13 @@ def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet: packet: T
[PATCH v4 1/5] dts: allow binding only a single port to a different driver
From: Jeremy Spewock Previously the DTS framework only included methods that bind all ports that the test run was aware of to either the DPDK driver or the OS driver. There are, however, some cases, like creating virtual functions, where you would want some ports bound to the OS driver and others bound to their DPDK driver. This patch adds the ability to bind individual ports to their respective drivers to solve this problem. Depends-on: patch-144318 ("dts: add binding to different drivers to TG node") Signed-off-by: Jeremy Spewock --- dts/framework/testbed_model/node.py | 21 - dts/tests/TestSuite_os_udp.py | 4 ++-- 2 files changed, 14 insertions(+), 11 deletions(-) diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py index 6484e16a0f..106e189ce3 100644 --- a/dts/framework/testbed_model/node.py +++ b/dts/framework/testbed_model/node.py @@ -138,11 +138,11 @@ def set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> the setup steps will be taken. This is unused in this method, but subclasses that extend this method may need it. """ -self.bind_ports_to_driver() +self.bind_all_ports_to_driver() def tear_down_build_target(self) -> None: """Bind ports to their OS drivers.""" -self.bind_ports_to_driver(for_dpdk=False) +self.bind_all_ports_to_driver(for_dpdk=False) def create_session(self, name: str) -> OSSession: """Create and return a new OS-aware remote session. @@ -258,7 +258,7 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]: else: return func -def bind_ports_to_driver(self, for_dpdk: bool = True) -> None: +def bind_all_ports_to_driver(self, for_dpdk: bool = True) -> None: """Bind all ports on the node to a driver. Args: @@ -266,12 +266,15 @@ def bind_ports_to_driver(self, for_dpdk: bool = True) -> None: If :data:`False`, binds to os_driver. 
""" for port in self.ports: -driver = port.os_driver_for_dpdk if for_dpdk else port.os_driver -self.main_session.send_command( -f"{self.path_to_devbind_script} -b {driver} --force {port.pci}", -privileged=True, -verify=True, -) +self._bind_port_to_driver(port, for_dpdk) + +def _bind_port_to_driver(self, port: Port, for_dpdk: bool = True) -> None: +driver = port.os_driver_for_dpdk if for_dpdk else port.os_driver +self.main_session.send_command( +f"{self.path_to_devbind_script} -b {driver} --force {port.pci}", +privileged=True, +verify=True, +) def create_session(node_config: NodeConfiguration, name: str, logger: DTSLogger) -> OSSession: diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py index a78bd74139..5e9469bbac 100644 --- a/dts/tests/TestSuite_os_udp.py +++ b/dts/tests/TestSuite_os_udp.py @@ -23,7 +23,7 @@ def set_up_suite(self) -> None: Bind the SUT ports to the OS driver, configure the ports and configure the SUT to route traffic from if1 to if2. """ -self.sut_node.bind_ports_to_driver(for_dpdk=False) +self.sut_node.bind_all_ports_to_driver(for_dpdk=False) self.configure_testbed_ipv4() def test_os_udp(self) -> None: @@ -50,4 +50,4 @@ def tear_down_suite(self) -> None: """ self.configure_testbed_ipv4(restore=True) # Assume other suites will likely need dpdk driver -self.sut_node.bind_ports_to_driver(for_dpdk=True) +self.sut_node.bind_all_ports_to_driver(for_dpdk=True) -- 2.46.0
[PATCH v4 0/5] dts: add VFs to the framework
From: Jeremy Spewock v4: * apply to next-dts Jeremy Spewock (5): dts: allow binding only a single port to a different driver dts: parameterize what ports the TG sends packets to dts: add class for virtual functions dts: add OS abstractions for creating virtual functions dts: add functions for managing VFs to Node dts/framework/test_suite.py | 48 ++-- dts/framework/testbed_model/linux_session.py | 40 ++- dts/framework/testbed_model/node.py | 116 +-- dts/framework/testbed_model/os_session.py| 40 +++ dts/framework/testbed_model/port.py | 37 +- dts/tests/TestSuite_os_udp.py| 4 +- 6 files changed, 260 insertions(+), 25 deletions(-) -- 2.46.0
[PATCH v4 3/5] dts: add class for virtual functions
From: Jeremy Spewock In DPDK applications virtual functions are treated the same as ports, but within the framework there are benefits to differentiating the two in order to add more metadata to VFs about where they originate from. For this reason this patch adds a new class for handling virtual functions that extends the Port class with some additional information about the VF. Bugzilla ID: 1500 Signed-off-by: Jeremy Spewock --- dts/framework/testbed_model/port.py | 37 - 1 file changed, 36 insertions(+), 1 deletion(-) diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py index 817405bea4..c1d85fec2b 100644 --- a/dts/framework/testbed_model/port.py +++ b/dts/framework/testbed_model/port.py @@ -27,7 +27,7 @@ class PortIdentifier: pci: str -@dataclass(slots=True) +@dataclass class Port: """Physical port on a node. @@ -80,6 +80,41 @@ def pci(self) -> str: return self.identifier.pci +@dataclass +class VirtualFunction(Port): +"""Virtual Function (VF) on a port. + +DPDK applications often treat VFs the same as they do the physical ports (PFs) on the host. +For this reason VFs are represented in the framework as a type of port with some additional +metadata that allows the framework to more easily identify which device the VF belongs to as +well as where the VF originated from. + +Attributes: +created_by_framework: :data:`True` if this VF represents one that the DTS framework created +on the node, :data:`False` otherwise. +pf_port: The PF that this VF was created on/gathered from. +""" + +created_by_framework: bool = False +pf_port: Port | None = None + +def __init__( +self, node_name: str, config: PortConfig, created_by_framework: bool, pf_port: Port +) -> None: +"""Extends :meth:`Port.__init__` with VF specific metadata. + +Args: +node_name: The name of the node the VF resides on. +config: Configuration information about the VF. 
+created_by_framework: :data:`True` if DTS created this VF, otherwise :data:`False` if +this class represents a VF that was preexisting on the node. +pf_port: The PF that this VF was created on/gathered from. +""" +super().__init__(node_name, config) +self.created_by_framework = created_by_framework +self.pf_port = pf_port + + @dataclass(slots=True, frozen=True) class PortLink: """The physical, cabled connection between the ports. -- 2.46.0
[PATCH v4 5/5] dts: add functions for managing VFs to Node
From: Jeremy Spewock In order for test suites to create virtual functions there have to be functions in the API that developers can use. This patch adds the ability to create virtual functions to the Node API so that they are reachable within test suites. Bugzilla ID: 1500 Depends-on: patch-144318 ("dts: add binding to different drivers to TG node") Signed-off-by: Jeremy Spewock --- dts/framework/testbed_model/node.py | 97 - 1 file changed, 94 insertions(+), 3 deletions(-) diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py index 106e189ce3..d9b6ee040b 100644 --- a/dts/framework/testbed_model/node.py +++ b/dts/framework/testbed_model/node.py @@ -13,7 +13,7 @@ The :func:`~Node.skip_setup` decorator can be used without subclassing. """ - +import re from abc import ABC, abstractmethod from ipaddress import IPv4Interface, IPv6Interface from pathlib import PurePath @@ -23,9 +23,10 @@ OS, BuildTargetConfiguration, NodeConfiguration, +PortConfig, TestRunConfiguration, ) -from framework.exception import ConfigurationError +from framework.exception import ConfigurationError, InternalError from framework.logger import DTSLogger, get_dts_logger from framework.settings import SETTINGS @@ -38,7 +39,7 @@ ) from .linux_session import LinuxSession from .os_session import OSSession -from .port import Port +from .port import Port, VirtualFunction class Node(ABC): @@ -276,6 +277,96 @@ def _bind_port_to_driver(self, port: Port, for_dpdk: bool = True) -> None: verify=True, ) +def create_virtual_functions( +self, num: int, pf_port: Port, dpdk_driver: str | None = None +) -> list[VirtualFunction]: +"""Create virtual functions (VFs) from a given physical function (PF) on the node. + +Virtual functions will be created if there are not any currently configured on `pf_port`. +If there are greater than or equal to `num` VFs already configured on `pf_port`, those will +be used instead of creating more.
In order to create VFs, the PF must be bound to its +kernel driver. This method will handle binding `pf_port` and any other ports in the test +run that reside on the same device back to their OS drivers if this was not done already. +VFs gathered in this method will be bound to `dpdk_driver` if one is provided, or the DPDK +driver for `pf_port` and then added to `self.ports`. + +Args: +num: The number of VFs to create. Must be greater than 0. +pf_port: The PF to create the VFs on. +dpdk_driver: Optional driver to bind the VFs to after they are created. Defaults to the +DPDK driver of `pf_port`. + +Raises: +InternalError: If `num` is less than or equal to 0. +""" +if num <= 0: +raise InternalError( +"Method for creating virtual functions received a non-positive value." +) +if not dpdk_driver: +dpdk_driver = pf_port.os_driver_for_dpdk +# Get any other port that is on the same device which DTS is aware of +all_device_ports = [ +p for p in self.ports if p.pci.split(".")[0] == pf_port.pci.split(".")[0] +] +# Ports must be bound to the kernel driver in order to create VFs from them +for port in all_device_ports: +self._bind_port_to_driver(port, False) +# Some PMDs require the interface being up in order to make VFs +self.configure_port_state(port) +created_vfs = self.main_session.set_num_virtual_functions(num, pf_port) +# We don't need more than `num` VFs from the list +vf_pcis = self.main_session.get_pci_addr_of_vfs(pf_port)[:num] +devbind_info = self.main_session.send_command( +f"{self.path_to_devbind_script} -s", privileged=True +).stdout + +ret = [] + +for pci in vf_pcis: +original_driver = re.search(f"{pci}.*drv=([\\d\\w-]*)", devbind_info) +os_driver = original_driver[1] if original_driver else pf_port.os_driver +vf_config = PortConfig( +self.name, pci, dpdk_driver, os_driver, pf_port.peer.node, pf_port.peer.pci +) +vf_port = VirtualFunction(self.name, vf_config, created_vfs, pf_port) +self.main_session.update_ports([vf_port]) +self._bind_port_to_driver(vf_port)
+self.ports.append(vf_port) +ret.append(vf_port) +return ret + +def get_vfs_on_port(self, pf_port: Port) -> list[VirtualFunction]: +"""Get all virtual functions (VFs) that DTS is aware of on `pf_port`. + +Args: +pf_port: The port to search for the VFs on. + +Returns: +A list of VFs in the framework that were created/gathered from `pf_port`. +""" +ret
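The driver-recovery step in this diff scans `dpdk-devbind.py -s` output with a regex to learn which kernel driver each VF is currently bound to. A standalone sketch of that parsing — the sample output lines are illustrative, not captured from a real system:

```python
import re

# Sample `dpdk-devbind.py -s` style lines (illustrative only):
devbind_info = """\
0000:17:01.0 'Ethernet Virtual Function' drv=iavf unused=vfio-pci
0000:17:01.1 'Ethernet Virtual Function' drv=vfio-pci unused=iavf
"""


def current_driver(pci: str, fallback: str) -> str:
    """Recover a device's bound driver from devbind status output.

    Falls back to `fallback` (the PF's OS driver in the patch) when the
    device does not appear in the output at all.
    """
    match = re.search(f"{pci}.*drv=([\\d\\w-]*)", devbind_info)
    return match[1] if match else fallback


assert current_driver("0000:17:01.0", "iavf") == "iavf"
assert current_driver("0000:17:01.1", "iavf") == "vfio-pci"
assert current_driver("0000:17:09.0", "iavf") == "iavf"  # unknown device -> fallback
```

Because `.` does not match newlines, the greedy `.*` stays on the matched device's own line, so only that device's `drv=` field is captured.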
[PATCH v4 4/5] dts: add OS abstractions for creating virtual functions
From: Jeremy Spewock Virtual functions in the framework are created using SR-IOV. The process for doing this can vary depending on the operating system, so the commands to create VFs have to be abstracted into different classes based on the operating system. This patch adds the stubs for methods that create VFs and get the PCI addresses of all VFs on a port to the abstract class as well as a linux implementation for the methods. Bugzilla ID: 1500 Signed-off-by: Jeremy Spewock --- dts/framework/testbed_model/linux_session.py | 40 +++- dts/framework/testbed_model/os_session.py| 40 2 files changed, 79 insertions(+), 1 deletion(-) diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py index 544a665b83..2615411470 100644 --- a/dts/framework/testbed_model/linux_session.py +++ b/dts/framework/testbed_model/linux_session.py @@ -15,7 +15,11 @@ from typing_extensions import NotRequired -from framework.exception import ConfigurationError, RemoteCommandExecutionError +from framework.exception import ( +ConfigurationError, +InternalError, +RemoteCommandExecutionError, +) from framework.utils import expand_range from .cpu import LogicalCore @@ -210,3 +214,37 @@ def configure_ipv4_forwarding(self, enable: bool) -> None: """Overrides :meth:`~.os_session.OSSession.configure_ipv4_forwarding`.""" state = 1 if enable else 0 self.send_command(f"sysctl -w net.ipv4.ip_forward={state}", privileged=True) + +def set_num_virtual_functions(self, num: int, pf_port: Port) -> bool: +"""Overrides :meth:`~.os_session.OSSession.set_num_virtual_functions`.""" +sys_bus_path = f"/sys/bus/pci/devices/{pf_port.pci}/sriov_numvfs".replace(":", "\\:") +curr_num_vfs = int(self.send_command(f"cat {sys_bus_path}").stdout) +if num > 0 and curr_num_vfs >= num: +self._logger.info( +f"{curr_num_vfs} VFs already configured on port {pf_port.identifier.pci} on node " +f"{pf_port.identifier.node}." 
+) +return False +elif num > 0 and curr_num_vfs > 0: +self._logger.error( +f"Not enough VFs configured on port {pf_port.identifier.pci} on node " +f"{pf_port.identifier.node}. Need {num} but only {curr_num_vfs} are configured. " +"DTS is unable to modify number of preexisting VFs." +) +raise InternalError("Failed to create VFs on port.") +self.send_command(f"echo {num} > {sys_bus_path}", privileged=True, verify=True) +return True + +def get_pci_addr_of_vfs(self, pf_port: Port) -> list[str]: +"""Overrides :meth:`~.os_session.OSSession.get_pci_addr_of_vfs`.""" +sys_bus_path = f"/sys/bus/pci/devices/{pf_port.pci}".replace(":", "\\:") +curr_num_vfs = int(self.send_command(f"cat {sys_bus_path}/sriov_numvfs").stdout) +if curr_num_vfs > 0: +pci_addrs = self.send_command( +'awk -F "PCI_SLOT_NAME=" "/PCI_SLOT_NAME=/ {print \\$2}" ' ++ f"{sys_bus_path}/virtfn*/uevent", +privileged=True, +) +return pci_addrs.stdout.splitlines() +else: +return [] diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py index 79f56b289b..191fc3c0c8 100644 --- a/dts/framework/testbed_model/os_session.py +++ b/dts/framework/testbed_model/os_session.py @@ -395,3 +395,43 @@ def configure_ipv4_forwarding(self, enable: bool) -> None: Args: enable: If :data:`True`, enable the forwarding, otherwise disable it. """ + +@abstractmethod +def set_num_virtual_functions(self, num: int, pf_port: Port) -> bool: +"""Update the number of virtual functions (VFs) on a port. + +It should be noted that, due to the nature of VFs, if there are already VFs that exist on +the physical function (PF) prior to calling this function, additional ones cannot be added. +The only way to add more VFs is to remove the existing and then set the desired amount. For +this reason, this method will handle creation in the following order: + +1. Use existing VFs on the PF if the number of existing VFs is greater than or equal to +`num` +2. 
Throw an exception noting that VFs cannot be created if the PF has some VFs already set +on it, but the total VFs that it has are less than `num`. +3. Create `num` VFs on the PF if there are none on it already + +Args: +num: The number of VFs to set on the port. +pf_port: The port to add the VFs to. + +Raises: +InternalError: If `pf_port` has less than `num` VFs configured on it +already. + +Returns: +:data:`True` if this method successfully create
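The three-way decision order described in this docstring can be isolated from the sysfs plumbing. A sketch with the `sriov_numvfs` read/write stubbed out — only the control flow from the patch is kept, and the function takes the current VF count directly instead of reading it from sysfs:

```python
class InternalError(Exception):
    pass


def set_num_virtual_functions(num: int, curr_num_vfs: int) -> bool:
    """Return False to reuse existing VFs, True after creating new ones."""
    if num > 0 and curr_num_vfs >= num:
        return False  # 1. enough VFs already exist: reuse them
    if num > 0 and curr_num_vfs > 0:
        # 2. some VFs exist but fewer than requested; sriov_numvfs cannot be
        #    raised without first resetting it to 0, so this is an error
        raise InternalError("Failed to create VFs on port.")
    # 3. no VFs exist yet: write `num` to sriov_numvfs (stubbed out here)
    return True


assert set_num_virtual_functions(2, 4) is False  # reuse the 4 existing VFs
assert set_num_virtual_functions(3, 0) is True   # create 3 new VFs
```

The asymmetry comes from the kernel's SR-IOV interface: `sriov_numvfs` only accepts a new nonzero value when it is currently zero, which is why the partially-configured case is treated as unrecoverable.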
Re: [PATCH v2 1/2] vhost: add logging mechanism for reconnection
> On Sep 20, 2024, at 23:09, Maxime Coquelin wrote: > > External email: Use caution opening links or attachments > > > This patch introduces a way for backend to keep track > of the needed information to be able to reconnect without > frontend cooperation. > > It will be used for VDUSE, which does not provide interface > for the backend to save and later recover local virtqueues > metadata needed to reconnect. > > Vhost-user support could also be added for improved packed > ring reconnection support. > > Signed-off-by: Maxime Coquelin > --- > lib/vhost/vhost.h | 41 ++--- > lib/vhost/virtio_net.c | 8 > lib/vhost/virtio_net_ctrl.c | 2 ++ > 3 files changed, 48 insertions(+), 3 deletions(-) > > diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h > index cd3fa55f1b..1f4192f5d1 100644 > --- a/lib/vhost/vhost.h > +++ b/lib/vhost/vhost.h > @@ -269,6 +269,24 @@ struct vhost_async { >}; > }; > > +#define VHOST_RECONNECT_VERSION0x0 > +#define VHOST_MAX_VRING0x100 > +#define VHOST_MAX_QUEUE_PAIRS 0x80 > + > +struct __rte_cache_aligned vhost_reconnect_vring { > + uint16_t last_avail_idx; > + bool avail_wrap_counter; > +}; > + > +struct vhost_reconnect_data { > + uint32_t version; > + uint64_t features; > + uint8_t status; > + struct virtio_net_config config; > + uint32_t nr_vrings; > + struct vhost_reconnect_vring vring[VHOST_MAX_VRING]; > +}; > + > /** > * Structure contains variables relevant to RX/TX virtqueues. 
> */ > @@ -351,6 +369,7 @@ struct __rte_cache_aligned vhost_virtqueue { >struct virtqueue_stats stats; > >RTE_ATOMIC(bool) irq_pending; > + struct vhost_reconnect_vring *reconnect_log; > }; > > /* Virtio device status as per Virtio specification */ > @@ -362,9 +381,6 @@ struct __rte_cache_aligned vhost_virtqueue { > #define VIRTIO_DEVICE_STATUS_DEV_NEED_RESET0x40 > #define VIRTIO_DEVICE_STATUS_FAILED0x80 > > -#define VHOST_MAX_VRING0x100 > -#define VHOST_MAX_QUEUE_PAIRS 0x80 > - > /* Declare IOMMU related bits for older kernels */ > #ifndef VIRTIO_F_IOMMU_PLATFORM > > @@ -538,8 +554,26 @@ struct __rte_cache_aligned virtio_net { >struct rte_vhost_user_extern_ops extern_ops; > >struct vhost_backend_ops *backend_ops; > + > + struct vhost_reconnect_data *reconnect_log; > }; > > +static __rte_always_inline void > +vhost_virtqueue_reconnect_log_split(struct vhost_virtqueue *vq) > +{ > + if (vq->reconnect_log != NULL) > + vq->reconnect_log->last_avail_idx = vq->last_avail_idx; > +} > + > +static __rte_always_inline void > +vhost_virtqueue_reconnect_log_packed(struct vhost_virtqueue *vq) > +{ > + if (vq->reconnect_log != NULL) { > + vq->reconnect_log->last_avail_idx = vq->last_avail_idx; > + vq->reconnect_log->avail_wrap_counter = > vq->avail_wrap_counter; > + } > +} > + > static inline void > vq_assert_lock__(struct virtio_net *dev, struct vhost_virtqueue *vq, const > char *func) >__rte_assert_exclusive_lock(&vq->access_lock) > @@ -584,6 +618,7 @@ vq_inc_last_avail_packed(struct vhost_virtqueue *vq, > uint16_t num) >vq->avail_wrap_counter ^= 1; >vq->last_avail_idx -= vq->size; >} > + vhost_virtqueue_reconnect_log_packed(vq); > } > > void __vhost_log_cache_write(struct virtio_net *dev, > diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c > index 370402d849..f66a0c82f8 100644 > --- a/lib/vhost/virtio_net.c > +++ b/lib/vhost/virtio_net.c > @@ -1445,6 +1445,7 @@ virtio_dev_rx_split(struct virtio_net *dev, struct > vhost_virtqueue *vq, >} > >vq->last_avail_idx += 
num_buffers; > + vhost_virtqueue_reconnect_log_split(vq); >} > >do_data_copy_enqueue(dev, vq); > @@ -1857,6 +1858,7 @@ virtio_dev_rx_async_submit_split(struct virtio_net > *dev, struct vhost_virtqueue >pkts_info[slot_idx].mbuf = pkts[pkt_idx]; > >vq->last_avail_idx += num_buffers; > + vhost_virtqueue_reconnect_log_split(vq); >} > >if (unlikely(pkt_idx == 0)) > @@ -1885,6 +1887,7 @@ virtio_dev_rx_async_submit_split(struct virtio_net > *dev, struct vhost_virtqueue >/* recover shadow used ring and available ring */ >vq->shadow_used_idx -= num_descs; >vq->last_avail_idx -= num_descs; > + vhost_virtqueue_reconnect_log_split(vq); >} > >/* keep used descriptors */ > @@ -2100,6 +2103,7 @@ dma_error_handler_packed(struct vhost_virtqueue *vq, > uint16_t slot_idx, >vq->last_avail_idx = vq->last_avail_idx + vq->size - descs_err; >vq->avail_wrap_counter ^= 1; >} > + vhost_virtqueue_reconnect_log_packed(vq); > >if (as
Re: [PATCH v2 2/2] vhost: add reconnection support to VDUSE
> On Sep 20, 2024, at 23:09, Maxime Coquelin wrote: > > External email: Use caution opening links or attachments > > > This patch enables VDUSE reconnection support making use of > the newly introduced reconnection mechanism in Vhost > library. > > At DPDK VDUSE device creation time, there are two > possibilities: > 1. The Kernel VDUSE device does not exist: > a. A reconnection file named after the VDUSE device name > is created in VDUSE tmpfs. > b. The file is truncated to 'struct vhost_reconnect_data' > size, and mmapped. > c. Negotiated features, Virtio status... are saved for > sanity checks at reconnect time. > 2. The Kernel VDUSE device already exists: > a. Exit with failure if no reconnect file exists for > this device. > b. Open and mmap the reconnect file. > c. Perform sanity check to ensure features are compatible. > d. Restore virtqueues' available indexes at startup time. > > Then at runtime, the virtqueues' available indexes are logged by > the Vhost reconnection mechanism. > > At DPDK VDUSE device destruction time, there are two > possibilities: > 1. The Kernel VDUSE device destruction succeeded, which >means it is no longer attached to the vDPA bus. The >reconnection file is unmapped and then removed. > 2. The Kernel VDUSE device destruction failed, meaning it >is still attached to the vDPA bus. The reconnection >file is unmapped but not removed to make later >reconnection possible.
> > Signed-off-by: Maxime Coquelin > --- > lib/vhost/vduse.c | 308 -- > 1 file changed, 268 insertions(+), 40 deletions(-) > > diff --git a/lib/vhost/vduse.c b/lib/vhost/vduse.c > index c66602905c..f9ac317438 100644 > --- a/lib/vhost/vduse.c > +++ b/lib/vhost/vduse.c > @@ -136,7 +136,7 @@ vduse_control_queue_event(int fd, void *arg, int *remove > __rte_unused) > } > > static void > -vduse_vring_setup(struct virtio_net *dev, unsigned int index) > +vduse_vring_setup(struct virtio_net *dev, unsigned int index, bool reconnect) > { >struct vhost_virtqueue *vq = dev->virtqueue[index]; >struct vhost_vring_addr *ra = &vq->ring_addrs; > @@ -152,6 +152,19 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int > index) >return; >} > > + if (reconnect) { > + vq->last_avail_idx = vq->reconnect_log->last_avail_idx; > + vq->last_used_idx = vq->reconnect_log->last_avail_idx; > + } else { > + vq->last_avail_idx = vq_info.split.avail_index; > + vq->last_used_idx = vq_info.split.avail_index; > + } > + vq->size = vq_info.num; > + vq->ready = true; > + vq->enabled = vq_info.ready; > + ra->desc_user_addr = vq_info.desc_addr; > + ra->avail_user_addr = vq_info.driver_addr; > + ra->used_user_addr = vq_info.device_addr; >VHOST_CONFIG_LOG(dev->ifname, INFO, "VQ %u info:", index); >VHOST_CONFIG_LOG(dev->ifname, INFO, "\tnum: %u", vq_info.num); >VHOST_CONFIG_LOG(dev->ifname, INFO, "\tdesc_addr: %llx", > @@ -160,17 +173,9 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int > index) >(unsigned long long)vq_info.driver_addr); >VHOST_CONFIG_LOG(dev->ifname, INFO, "\tdevice_addr: %llx", >(unsigned long long)vq_info.device_addr); > - VHOST_CONFIG_LOG(dev->ifname, INFO, "\tavail_idx: %u", > vq_info.split.avail_index); > + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tavail_idx: %u", > vq->last_avail_idx); > + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tused_idx: %u", > vq->last_used_idx); >VHOST_CONFIG_LOG(dev->ifname, INFO, "\tready: %u", vq_info.ready); > - > - vq->last_avail_idx = 
vq_info.split.avail_index; > - vq->size = vq_info.num; > - vq->ready = true; > - vq->enabled = vq_info.ready; > - ra->desc_user_addr = vq_info.desc_addr; > - ra->avail_user_addr = vq_info.driver_addr; > - ra->used_user_addr = vq_info.device_addr; > - >vq->kickfd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); >if (vq->kickfd < 0) { >VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to init kickfd for > VQ %u: %s", > @@ -267,7 +272,7 @@ vduse_vring_cleanup(struct virtio_net *dev, unsigned int > index) > } > > static void > -vduse_device_start(struct virtio_net *dev) > +vduse_device_start(struct virtio_net *dev, bool reconnect) > { >unsigned int i, ret; > > @@ -287,6 +292,15 @@ vduse_device_start(struct virtio_net *dev) >return; >} > > + if (reconnect && dev->features != dev->reconnect_log->features) { > + VHOST_CONFIG_LOG(dev->ifname, ERR, > + "Mismatch between reconnect file features > 0x%" PRIx64 " & device features 0x%" PRIx64, > + dev->reconnect_log->features, dev->features); > + return; > + } > + > + dev->reconnect
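The reconnect-file lifecycle described in the commit message — create, truncate to the struct size, mmap, log indexes through the mapping, then reopen and restore on reconnect — can be modeled with Python's stdlib `mmap`. The struct layout below is illustrative only, not the real `struct vhost_reconnect_data`:

```python
import mmap
import os
import struct
import tempfile

# Illustrative layout: version, features, status, padding, last_avail_idx.
FMT = "<IQB3xH"
SIZE = struct.calcsize(FMT)

path = os.path.join(tempfile.mkdtemp(), "vduse-net0")

# First start: create the file, size it to the struct, and log state
# through the mapping (writes survive the process via the backing file).
fd = os.open(path, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)
with mmap.mmap(fd, SIZE) as m:
    m[:SIZE] = struct.pack(FMT, 0x0, 0x1234, 0x0F, 42)
os.close(fd)

# Reconnect: the file still exists, so reopen it, sanity-check the saved
# fields, and restore the virtqueue's available index.
fd = os.open(path, os.O_RDWR)
with mmap.mmap(fd, SIZE) as m:
    version, features, status, last_avail_idx = struct.unpack(FMT, m[:SIZE])
os.close(fd)
print(last_avail_idx)  # 42
```

In the real patch the mapping lives in the VDUSE tmpfs and the hot path only touches `last_avail_idx`/`avail_wrap_counter`, which is why the logging helpers shown in patch 1/2 are cheap enough to call per batch.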
[PATCH v4 02/11] dts: add test case decorators
Add decorators for functional and performance test cases. These decorators add attributes to the decorated test cases. With the addition of decorators, we change the test case discovery mechanism from looking at test case names according to a regex to simply checking an attribute of the function added with one of the decorators. The decorators allow us to add further variables to test cases. Also move the test case filtering to TestSuite while changing the mechanism to separate the logic in a more sensible manner. Bugzilla ID: 1460 Signed-off-by: Juraj Linkeš Reviewed-by: Dean Marx Reviewed-by: Nicholas Pratte --- dts/framework/runner.py | 93 dts/framework/test_result.py | 5 +- dts/framework/test_suite.py | 125 +- dts/tests/TestSuite_hello_world.py| 8 +- dts/tests/TestSuite_os_udp.py | 3 +- dts/tests/TestSuite_pmd_buffer_scatter.py | 3 +- dts/tests/TestSuite_smoke_tests.py| 6 +- 7 files changed, 160 insertions(+), 83 deletions(-) diff --git a/dts/framework/runner.py b/dts/framework/runner.py index ab98de8353..68482dc9af 100644 --- a/dts/framework/runner.py +++ b/dts/framework/runner.py @@ -21,11 +21,10 @@ import inspect import os import random -import re import sys from pathlib import Path -from types import FunctionType -from typing import Iterable, Sequence +from types import MethodType +from typing import Iterable from framework.testbed_model.sut_node import SutNode from framework.testbed_model.tg_node import TGNode @@ -54,7 +53,7 @@ TestSuiteResult, TestSuiteWithCases, ) -from .test_suite import TestSuite +from .test_suite import TestCase, TestSuite class DTSRunner: @@ -234,9 +233,9 @@ def _get_test_suites_with_cases( for test_suite_config in test_suite_configs: test_suite_class = self._get_test_suite_class(test_suite_config.test_suite) -test_cases = [] -func_test_cases, perf_test_cases = self._filter_test_cases( -test_suite_class, test_suite_config.test_cases +test_cases: list[type[TestCase]] = [] +func_test_cases, perf_test_cases = 
test_suite_class.get_test_cases( +test_suite_config.test_cases ) if func: test_cases.extend(func_test_cases) @@ -311,57 +310,6 @@ def is_test_suite(object) -> bool: f"Couldn't find any valid test suites in {test_suite_module.__name__}." ) -def _filter_test_cases( -self, test_suite_class: type[TestSuite], test_cases_to_run: Sequence[str] -) -> tuple[list[FunctionType], list[FunctionType]]: -"""Filter `test_cases_to_run` from `test_suite_class`. - -There are two rounds of filtering if `test_cases_to_run` is not empty. -The first filters `test_cases_to_run` from all methods of `test_suite_class`. -Then the methods are separated into functional and performance test cases. -If a method matches neither the functional nor performance name prefix, it's an error. - -Args: -test_suite_class: The class of the test suite. -test_cases_to_run: Test case names to filter from `test_suite_class`. -If empty, return all matching test cases. - -Returns: -A list of test case methods that should be executed. - -Raises: -ConfigurationError: If a test case from `test_cases_to_run` is not found -or it doesn't match either the functional nor performance name prefix. -""" -func_test_cases = [] -perf_test_cases = [] -name_method_tuples = inspect.getmembers(test_suite_class, inspect.isfunction) -if test_cases_to_run: -name_method_tuples = [ -(name, method) for name, method in name_method_tuples if name in test_cases_to_run -] -if len(name_method_tuples) < len(test_cases_to_run): -missing_test_cases = set(test_cases_to_run) - { -name for name, _ in name_method_tuples -} -raise ConfigurationError( -f"Test cases {missing_test_cases} not found among methods " -f"of {test_suite_class.__name__}." 
-) - -for test_case_name, test_case_method in name_method_tuples: -if re.match(self._func_test_case_regex, test_case_name): -func_test_cases.append(test_case_method) -elif re.match(self._perf_test_case_regex, test_case_name): -perf_test_cases.append(test_case_method) -elif test_cases_to_run: -raise ConfigurationError( -f"Method '{test_case_name}' matches neither " -f"a functional nor a performance test case name." -) - -return func_test_cases, perf_test_c
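The shift from regex-based to attribute-based discovery removed above can be sketched in a few lines: a decorator stamps each test method with an attribute, and discovery filters on that attribute instead of a name prefix. The decorator names mirror the patch, but the classes here are simplified stand-ins:

```python
import inspect
from enum import Enum, auto


class TestCaseType(Enum):
    FUNCTIONAL = auto()
    PERFORMANCE = auto()


def func_test(func):
    func.test_type = TestCaseType.FUNCTIONAL  # stamp, don't wrap
    return func


def perf_test(func):
    func.test_type = TestCaseType.PERFORMANCE
    return func


class MySuite:
    @func_test
    def check_link(self):
        pass

    @perf_test
    def measure_throughput(self):
        pass

    def helper(self):  # undecorated: never discovered as a test case
        pass


def get_test_cases(suite_class, wanted_type):
    """Collect methods stamped with `wanted_type`, regardless of their names."""
    return [
        name
        for name, method in inspect.getmembers(suite_class, inspect.isfunction)
        if getattr(method, "test_type", None) is wanted_type
    ]


print(get_test_cases(MySuite, TestCaseType.FUNCTIONAL))   # ['check_link']
print(get_test_cases(MySuite, TestCaseType.PERFORMANCE))  # ['measure_throughput']
```

Because the decorator only sets an attribute and returns the original function, helpers can no longer be mistaken for test cases just because their names happen to match a prefix.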
[PATCH v4 01/11] dts: add the aenum dependency
Regular Python enumerations create only one instance for members with the same value, such as: class MyEnum(Enum): foo = 1 bar = 1 MyEnum.foo and MyEnum.bar are aliases that return the same instance. DTS needs to return different instances in the above scenario so that we can map capabilities with different names to the same function that retrieves the capabilities. Signed-off-by: Juraj Linkeš Reviewed-by: Jeremy Spewock Reviewed-by: Dean Marx Reviewed-by: Nicholas Pratte --- dts/poetry.lock| 14 +- dts/pyproject.toml | 1 + 2 files changed, 14 insertions(+), 1 deletion(-) diff --git a/dts/poetry.lock b/dts/poetry.lock index 2dd8bad498..cf5f6569c6 100644 --- a/dts/poetry.lock +++ b/dts/poetry.lock @@ -1,5 +1,17 @@ # This file is automatically @generated by Poetry 1.8.2 and should not be changed by hand. +[[package]] +name = "aenum" +version = "3.1.15" +description = "Advanced Enumerations (compatible with Python's stdlib Enum), NamedTuples, and NamedConstants" +optional = false +python-versions = "*" +files = [ +{file = "aenum-3.1.15-py2-none-any.whl", hash = "sha256:27b1710b9d084de6e2e695dab78fe9f269de924b51ae2850170ee7e1ca6288a5"}, +{file = "aenum-3.1.15-py3-none-any.whl", hash = "sha256:e0dfaeea4c2bd362144b87377e2c61d91958c5ed0b4daf89cb6f45ae23af6288"}, +{file = "aenum-3.1.15.tar.gz", hash = "sha256:8cbd76cd18c4f870ff39b24284d3ea028fbe8731a58df3aa581e434c575b9559"}, +] + [[package]] name = "alabaster" version = "0.7.13" @@ -1350,4 +1362,4 @@ jsonschema = ">=4,<5" [metadata] lock-version = "2.0" python-versions = "^3.10" -content-hash = "6db17f96cb31fb463b0b0a31dff9c02aa72641e0bffd8a610033fe2324006c43" +content-hash = "6f20ce05310df93fed1d392160d1653ae5de5c6f260a5865eb3c6111a7c2b394" diff --git a/dts/pyproject.toml b/dts/pyproject.toml index 91d459f573..506380ac2f 100644 --- a/dts/pyproject.toml +++ b/dts/pyproject.toml @@ -27,6 +27,7 @@ fabric = "^2.7.1" scapy = "^2.5.0" pydocstyle = "6.1.1" typing-extensions = "^4.11.0" +aenum = "^3.1.15" 
[tool.poetry.group.dev.dependencies] mypy = "^1.10.0" -- 2.43.0
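A quick demonstration of the stdlib behavior the commit message describes, using only `enum.Enum` (aenum itself is not needed to show the problem):

```python
from enum import Enum


class MyEnum(Enum):
    foo = 1
    bar = 1  # same value: the stdlib makes this an alias of `foo`


assert MyEnum.foo is MyEnum.bar        # one shared instance
assert MyEnum.bar.name == "foo"        # the alias even reports the canonical name
assert list(MyEnum) == [MyEnum.foo]    # iteration yields only canonical members
```

This is exactly what breaks mapping two differently named capabilities to the same retrieval function; aenum's NoAlias setting keeps `foo` and `bar` as two distinct member instances, which is what this new dependency provides.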
[PATCH v4 05/11] dts: add basic capability support
A test case or suite may require certain capabilities to be present in the tested environment. Add the basic infrastructure for checking the support status of capabilities: * The Capability ABC defining the common capability API * Extension of the TestProtocol with required capabilities (each test suite or case stores the capabilities it requires) * Integration with the runner which calls the new APIs to get which capabilities are supported. Signed-off-by: Juraj Linkeš Reviewed-by: Jeremy Spewock Reviewed-by: Dean Marx --- dts/framework/runner.py | 26 + dts/framework/test_result.py | 40 +++- dts/framework/test_suite.py | 1 + dts/framework/testbed_model/capability.py | 117 +- 4 files changed, 181 insertions(+), 3 deletions(-) diff --git a/dts/framework/runner.py b/dts/framework/runner.py index bf2bfa156f..fff7f1c0a1 100644 --- a/dts/framework/runner.py +++ b/dts/framework/runner.py @@ -26,6 +26,7 @@ from types import MethodType from typing import Iterable +from framework.testbed_model.capability import Capability, get_supported_capabilities from framework.testbed_model.sut_node import SutNode from framework.testbed_model.tg_node import TGNode @@ -454,6 +455,21 @@ def _run_build_target( self._logger.exception("Build target teardown failed.") build_target_result.update_teardown(Result.FAIL, e) +def _get_supported_capabilities( +self, +sut_node: SutNode, +topology_config: Topology, +test_suites_with_cases: Iterable[TestSuiteWithCases], +) -> set[Capability]: + +capabilities_to_check = set() +for test_suite_with_cases in test_suites_with_cases: + capabilities_to_check.update(test_suite_with_cases.required_capabilities) + +self._logger.debug(f"Found capabilities to check: {capabilities_to_check}") + +return get_supported_capabilities(sut_node, topology_config, capabilities_to_check) + def _run_test_suites( self, sut_node: SutNode, @@ -466,6 +482,12 @@ def _run_test_suites( The method assumes the build target we're testing has already been built on the SUT node. 
The current build target thus corresponds to the current DPDK build present on the SUT node. +Before running any suites, the method determines whether they should be skipped +by inspecting any required capabilities the test suite needs and comparing those +to capabilities supported by the tested environment. If all capabilities are supported, +the suite is run. If all test cases in a test suite would be skipped, the whole test suite +is skipped (the setup and teardown is not run). + If a blocking test suite (such as the smoke test suite) fails, the rest of the test suites in the current build target won't be executed. @@ -478,7 +500,11 @@ def _run_test_suites( """ end_build_target = False topology = Topology(sut_node.ports, tg_node.ports) +supported_capabilities = self._get_supported_capabilities( +sut_node, topology, test_suites_with_cases +) for test_suite_with_cases in test_suites_with_cases: +test_suite_with_cases.mark_skip_unsupported(supported_capabilities) test_suite_result = build_target_result.add_test_suite(test_suite_with_cases) try: if not test_suite_with_cases.skip: diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py index 50633eee4e..40f3fbb77e 100644 --- a/dts/framework/test_result.py +++ b/dts/framework/test_result.py @@ -25,10 +25,12 @@ import os.path from collections.abc import MutableSequence -from dataclasses import dataclass +from dataclasses import dataclass, field from enum import Enum, auto from typing import Union +from framework.testbed_model.capability import Capability + from .config import ( OS, Architecture, @@ -59,10 +61,18 @@ class is to hold a subset of test cases (which could be all test cases) because Attributes: test_suite_class: The test suite class. test_cases: The test case methods. +required_capabilities: The combined required capabilities of both the test suite +and the subset of test cases. 
""" test_suite_class: type[TestSuite] test_cases: list[type[TestCase]] +required_capabilities: set[Capability] = field(default_factory=set, init=False) + +def __post_init__(self): +"""Gather the required capabilities of the test suite and all test cases.""" +for test_object in [self.test_suite_class] + self.test_cases: + self.required_capabilities.update(test_object.required_capabilities) def create_config(self) -> TestSuiteConfig: """Generate a :class:`TestSuiteConfig` from the stored test suite with test cases. @@ -75,6 +85,34 @
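The `__post_init__` aggregation above can be sketched in isolation. This is a minimal, hedged reproduction of the pattern — `MockSuite`/`MockCaseA`/`MockCaseB` are invented stand-ins, not framework classes; only the `field(default_factory=set, init=False)` mechanics mirror the patch:

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for real test suite/case classes; only the
# class-level `required_capabilities` attribute matters here.
class MockSuite:
    required_capabilities = {"SCATTERED_RX"}

class MockCaseA:
    required_capabilities = {"SCATTERED_RX"}

class MockCaseB:
    required_capabilities = {"RUNTIME_RX_QUEUE_SETUP"}

@dataclass
class SuiteWithCases:
    """Mirrors TestSuiteWithCases: the union is computed, never passed in."""
    test_suite_class: type
    test_cases: list
    # init=False keeps the computed field out of __init__'s signature.
    required_capabilities: set = field(default_factory=set, init=False)

    def __post_init__(self):
        # Union of suite-level and per-case requirements, as in the patch.
        for test_object in [self.test_suite_class] + self.test_cases:
            self.required_capabilities.update(test_object.required_capabilities)

suite = SuiteWithCases(MockSuite, [MockCaseA, MockCaseB])
print(sorted(suite.required_capabilities))
# → ['RUNTIME_RX_QUEUE_SETUP', 'SCATTERED_RX']
```

The runner then only has to union these per-suite sets before probing the testbed once.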
[PATCH v4 03/11] dts: add mechanism to skip test cases or suites
If a test case is not relevant to the testing environment (such as when a NIC doesn't support a tested feature), the framework should skip it. The mechanism is a skeleton without actual logic that would set a test case or suite to be skipped. The mechanism uses a protocol to extend test suites and test cases with additional attributes that track whether the test case or suite should be skipped and the reason for skipping it. Also update the results module with the new SKIP result. Signed-off-by: Juraj Linkeš Reviewed-by: Dean Marx --- dts/framework/runner.py | 34 --- dts/framework/test_result.py | 74 ++- dts/framework/test_suite.py | 7 ++- dts/framework/testbed_model/capability.py | 28 + 4 files changed, 105 insertions(+), 38 deletions(-) create mode 100644 dts/framework/testbed_model/capability.py diff --git a/dts/framework/runner.py b/dts/framework/runner.py index 68482dc9af..57284e510e 100644 --- a/dts/framework/runner.py +++ b/dts/framework/runner.py @@ -479,7 +479,20 @@ def _run_test_suites( for test_suite_with_cases in test_suites_with_cases: test_suite_result = build_target_result.add_test_suite(test_suite_with_cases) try: -self._run_test_suite(sut_node, tg_node, test_suite_result, test_suite_with_cases) +if not test_suite_with_cases.skip: +self._run_test_suite( +sut_node, +tg_node, +test_suite_result, +test_suite_with_cases, +) +else: +self._logger.info( +f"Test suite execution SKIPPED: " +f"'{test_suite_with_cases.test_suite_class.__name__}'. Reason: " +f"{test_suite_with_cases.test_suite_class.skip_reason}" +) +test_suite_result.update_setup(Result.SKIP) except BlockingTestSuiteError as e: self._logger.exception( f"An error occurred within {test_suite_with_cases.test_suite_class.__name__}.
" @@ -578,14 +591,21 @@ def _execute_test_suite( test_case_result = test_suite_result.add_test_case(test_case_name) all_attempts = SETTINGS.re_run + 1 attempt_nr = 1 -self._run_test_case(test_suite, test_case, test_case_result) -while not test_case_result and attempt_nr < all_attempts: -attempt_nr += 1 +if not test_case.skip: +self._run_test_case(test_suite, test_case, test_case_result) +while not test_case_result and attempt_nr < all_attempts: +attempt_nr += 1 +self._logger.info( +f"Re-running FAILED test case '{test_case_name}'. " +f"Attempt number {attempt_nr} out of {all_attempts}." +) +self._run_test_case(test_suite, test_case, test_case_result) +else: self._logger.info( -f"Re-running FAILED test case '{test_case_name}'. " -f"Attempt number {attempt_nr} out of {all_attempts}." +f"Test case execution SKIPPED: {test_case_name}. Reason: " +f"{test_case.skip_reason}" ) -self._run_test_case(test_suite, test_case, test_case_result) +test_case_result.update_setup(Result.SKIP) def _run_test_case( self, diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py index b1ca584523..50633eee4e 100644 --- a/dts/framework/test_result.py +++ b/dts/framework/test_result.py @@ -75,6 +75,15 @@ def create_config(self) -> TestSuiteConfig: test_cases=[test_case.__name__ for test_case in self.test_cases], ) +@property +def skip(self) -> bool: +"""Skip the test suite if all test cases or the suite itself are to be skipped. + +Returns: +:data:`True` if the test suite should be skipped, :data:`False` otherwise. 
+""" +return all(test_case.skip for test_case in self.test_cases) or self.test_suite_class.skip + class Result(Enum): """The possible states that a setup, a teardown or a test case may end up in.""" @@ -86,12 +95,12 @@ class Result(Enum): #: ERROR = auto() #: -SKIP = auto() -#: BLOCK = auto() +#: +SKIP = auto() def __bool__(self) -> bool: -"""Only PASS is True.""" +"""Only :attr:`PASS` is True.""" return self is self.PASS @@ -169,14 +178,15 @@ def update_setup(self, result: Result, error: Exception | None = None) -> None: self.setup_result.result = result self.setup_result.error = error -if result in [Result.BLOCK, Result
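The re-run loop in `_execute_test_suite` leans on `Result.__bool__` being truthy only for `PASS`, so `while not test_case_result` re-runs anything else. A standalone sketch of that interaction (the flaky test case is invented for illustration):

```python
from enum import Enum, auto

class Result(Enum):
    PASS = auto()
    FAIL = auto()
    ERROR = auto()
    BLOCK = auto()
    SKIP = auto()

    def __bool__(self) -> bool:
        # Only PASS is truthy, so `while not result` re-runs anything else.
        return self is Result.PASS

attempts = []

def flaky_case(attempt_nr):
    # Invented: fails twice, then passes.
    attempts.append(attempt_nr)
    return Result.PASS if attempt_nr >= 3 else Result.FAIL

all_attempts = 5  # SETTINGS.re_run + 1 in the real runner
attempt_nr = 1
result = flaky_case(attempt_nr)
while not result and attempt_nr < all_attempts:
    attempt_nr += 1
    result = flaky_case(attempt_nr)

print(result.name, attempts)  # → PASS [1, 2, 3]
```

Note that `SKIP` is falsy too, which is why skipped cases bypass the loop entirely via the `if not test_case.skip` guard rather than being re-run.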
[PATCH v4 06/11] dts: add NIC capability support
Some test cases or suites may be testing a NIC feature that is not supported on all NICs, so add support for marking test cases or suites as requiring NIC capabilities. The marking is done with a decorator, which populates the internal required_capabilities attribute of TestProtocol. The NIC capability itself is a wrapper around the NicCapability defined in testpmd_shell. The reason is Enums cannot be extended and the class implements the methods of its abstract base superclass. The decorator API is designed to be simple to use. The arguments passed to it are all from the testpmd shell. Everything else (even the actual capability object creation) is done internally. Signed-off-by: Juraj Linkeš Reviewed-by: Dean Marx --- dts/framework/remote_session/testpmd_shell.py | 55 +- dts/framework/testbed_model/capability.py | 167 +- 2 files changed, 220 insertions(+), 2 deletions(-) diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py index 77902a468d..3401adcc28 100644 --- a/dts/framework/remote_session/testpmd_shell.py +++ b/dts/framework/remote_session/testpmd_shell.py @@ -17,10 +17,16 @@ import functools import re import time +from collections.abc import Callable, MutableSet from dataclasses import dataclass, field from enum import Flag, auto from pathlib import PurePath -from typing import Any, Callable, ClassVar, Concatenate, ParamSpec +from typing import TYPE_CHECKING, Any, ClassVar, Concatenate, ParamSpec, TypeAlias + +if TYPE_CHECKING: +from enum import Enum as NoAliasEnum +else: +from aenum import NoAliasEnum from typing_extensions import Self, Unpack @@ -37,6 +43,12 @@ P = ParamSpec("P") TestPmdShellMethod = Callable[Concatenate["TestPmdShell", P], Any] +TestPmdShellCapabilityMethod: TypeAlias = Callable[ +["TestPmdShell", MutableSet["NicCapability"], MutableSet["NicCapability"]], None +] + +TestPmdShellDecorator: TypeAlias = Callable[[TestPmdShellMethod], TestPmdShellMethod] + class TestPmdDevice: """The data 
of a device that testpmd can recognize. @@ -986,3 +998,44 @@ def _close(self) -> None: self.stop() self.send_command("quit", "Bye...") return super()._close() + + +class NicCapability(NoAliasEnum): +"""A mapping between capability names and the associated :class:`TestPmdShell` methods. + +The :class:`TestPmdShell` capability checking method executes the command that checks +whether the capability is supported. +A decorator may optionally be added to the method that will add and remove configuration +that's necessary to retrieve the capability support status. +The Enum members may be assigned the method itself or a tuple of +(capability_checking_method, decorator_function). + +The signature of each :class:`TestPmdShell` capability checking method must be:: + +fn(self, supported_capabilities: MutableSet, unsupported_capabilities: MutableSet) -> None + +The capability checking method must get whether a capability is supported or not +from a testpmd command. If multiple capabilities can be obtained from a testpmd command, +each should be obtained in the method. These capabilities should then +be added to `supported_capabilities` or `unsupported_capabilities` based on their support. + +The two dictionaries are shared across all capability discovery function calls in a given +test run so that we don't call the same function multiple times. +""" + +def __call__( +self, +testpmd_shell: TestPmdShell, +supported_capabilities: MutableSet[Self], +unsupported_capabilities: MutableSet[Self], +) -> None: +"""Execute the associated capability retrieval function. + +Args: +testpmd_shell: :class:`TestPmdShell` object to which the function will be bound to. +supported_capabilities: The dictionary storing the supported capabilities +of a given test run. +unsupported_capabilities: The dictionary storing the unsupported capabilities +of a given test run. 
+""" +self.value(testpmd_shell, supported_capabilities, unsupported_capabilities) diff --git a/dts/framework/testbed_model/capability.py b/dts/framework/testbed_model/capability.py index 8899f07f76..fceec4440e 100644 --- a/dts/framework/testbed_model/capability.py +++ b/dts/framework/testbed_model/capability.py @@ -5,14 +5,29 @@ This module provides a protocol that defines the common attributes of test cases and suites and support for test environment capabilities. + +Many test cases are testing features not available on all hardware. + +The module also allows developers to mark test cases or suites as requiring certain +hardware capabilities with the :func:`requires` decorator. """ from abc import ABC, abstractmethod -from collections.abc import Sequence +from collections.abc import MutableS
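The `functools.partial` indirection behind `NicCapability.__call__` can be reproduced with stdlib pieces only. A plain `Enum` stands in for aenum's `NoAliasEnum` here, and `FakeShell` is an invented stand-in for `TestPmdShell`:

```python
import functools
from enum import Enum

class FakeShell:
    """Invented stand-in for TestPmdShell."""
    def check_scatter(self, supported, unsupported):
        # A real checker would run a testpmd command and parse its output.
        supported.add(NicCap.SCATTER)

class NicCap(Enum):
    # functools.partial keeps the function as a plain value; a bare
    # function assigned here would become a method of the enum instead.
    SCATTER = functools.partial(FakeShell.check_scatter)

    def __call__(self, shell, supported, unsupported):
        # Delegate to the stored checker, as in the patch.
        self.value(shell, supported, unsupported)

supported, unsupported = set(), set()
NicCap.SCATTER(FakeShell(), supported, unsupported)
print(NicCap.SCATTER in supported)  # → True
```

The design choice is that the enum member itself is callable, so capability discovery code can iterate over required capabilities and invoke each one uniformly.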
[PATCH v4 04/11] dts: add support for simpler topologies
We currently assume there are two links between the SUT and TG nodes, but that's too strict, even for some of the already existing test cases. Add support for topologies with less than two links. For topologies with no links, dummy ports are used. The expectation is that test suites or cases that don't require any links won't be using methods that use ports. Any test suites or cases requiring links will be skipped in topologies with no links, but this feature is not implemented in this commit. Signed-off-by: Juraj Linkeš Reviewed-by: Jeremy Spewock Reviewed-by: Dean Marx --- dts/framework/runner.py | 6 +- dts/framework/test_suite.py | 32 +++ dts/framework/testbed_model/node.py | 2 +- dts/framework/testbed_model/port.py | 4 +- dts/framework/testbed_model/topology.py | 100 ++ dts/tests/TestSuite_pmd_buffer_scatter.py | 2 +- 6 files changed, 119 insertions(+), 27 deletions(-) create mode 100644 dts/framework/testbed_model/topology.py diff --git a/dts/framework/runner.py b/dts/framework/runner.py index 57284e510e..bf2bfa156f 100644 --- a/dts/framework/runner.py +++ b/dts/framework/runner.py @@ -54,6 +54,7 @@ TestSuiteWithCases, ) from .test_suite import TestCase, TestSuite +from .testbed_model.topology import Topology class DTSRunner: @@ -476,6 +477,7 @@ def _run_test_suites( test_suites_with_cases: The test suites with test cases to run. 
""" end_build_target = False +topology = Topology(sut_node.ports, tg_node.ports) for test_suite_with_cases in test_suites_with_cases: test_suite_result = build_target_result.add_test_suite(test_suite_with_cases) try: @@ -483,6 +485,7 @@ def _run_test_suites( self._run_test_suite( sut_node, tg_node, +topology, test_suite_result, test_suite_with_cases, ) @@ -508,6 +511,7 @@ def _run_test_suite( self, sut_node: SutNode, tg_node: TGNode, +topology: Topology, test_suite_result: TestSuiteResult, test_suite_with_cases: TestSuiteWithCases, ) -> None: @@ -535,7 +539,7 @@ def _run_test_suite( self._logger.set_stage( DtsStage.test_suite_setup, Path(SETTINGS.output_dir, test_suite_name) ) -test_suite = test_suite_with_cases.test_suite_class(sut_node, tg_node) +test_suite = test_suite_with_cases.test_suite_class(sut_node, tg_node, topology) try: self._logger.info(f"Starting test suite setup: {test_suite_name}") test_suite.set_up_suite() diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index 5154bb0514..367203f67e 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -25,9 +25,10 @@ from scapy.packet import Packet, Padding, raw # type: ignore[import-untyped] from framework.testbed_model.capability import TestProtocol -from framework.testbed_model.port import Port, PortLink +from framework.testbed_model.port import Port from framework.testbed_model.sut_node import SutNode from framework.testbed_model.tg_node import TGNode +from framework.testbed_model.topology import Topology, TopologyType from framework.testbed_model.traffic_generator.capturing_traffic_generator import ( PacketFilteringConfig, ) @@ -73,7 +74,7 @@ class TestSuite(TestProtocol): #: will block the execution of all subsequent test suites in the current build target. 
is_blocking: ClassVar[bool] = False _logger: DTSLogger -_port_links: list[PortLink] +_topology_type: TopologyType _sut_port_ingress: Port _sut_port_egress: Port _sut_ip_address_ingress: Union[IPv4Interface, IPv6Interface] @@ -87,6 +88,7 @@ def __init__( self, sut_node: SutNode, tg_node: TGNode, +topology: Topology, ): """Initialize the test suite testbed information and basic configuration. @@ -96,35 +98,21 @@ def __init__( Args: sut_node: The SUT node where the test suite will run. tg_node: The TG node where the test suite will run. +topology: The topology where the test suite will run. """ self.sut_node = sut_node self.tg_node = tg_node self._logger = get_dts_logger(self.__class__.__name__) -self._port_links = [] -self._process_links() -self._sut_port_ingress, self._tg_port_egress = ( -self._port_links[0].sut_port, -self._port_links[0].tg_port, -) -self._sut_port_egress, self._tg_port_ingress = ( -self._port_links[1].sut_port, -self._port_links[1].tg_port, -) +self._topology_type = topology.type +self._tg_port_egress = topology.tg_port_egress +
[PATCH v4 07/11] dts: add NIC capabilities from show rxq info
Add parsing for the show rxq info testpmd command and add support for the Scattered Rx capability. Signed-off-by: Juraj Linkeš Reviewed-by: Dean Marx --- dts/framework/remote_session/testpmd_shell.py | 137 +- dts/framework/testbed_model/capability.py | 12 ++ dts/tests/TestSuite_pmd_buffer_scatter.py | 2 + 3 files changed, 150 insertions(+), 1 deletion(-) diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py index 3401adcc28..3550734ebc 100644 --- a/dts/framework/remote_session/testpmd_shell.py +++ b/dts/framework/remote_session/testpmd_shell.py @@ -49,6 +49,10 @@ TestPmdShellDecorator: TypeAlias = Callable[[TestPmdShellMethod], TestPmdShellMethod] +TestPmdShellNicCapability = ( +TestPmdShellCapabilityMethod | tuple[TestPmdShellCapabilityMethod, TestPmdShellDecorator] +) + class TestPmdDevice: """The data of a device that testpmd can recognize. @@ -392,6 +396,81 @@ def _validate(info: str): return TextParser.wrap(TextParser.find(r"Device private info:\s+([\s\S]+)"), _validate) +class RxQueueState(StrEnum): +"""RX queue states. + +References: +DPDK lib: ``lib/ethdev/rte_ethdev.h`` +testpmd display function: ``app/test-pmd/config.c:get_queue_state_name()`` +""" + +#: +stopped = auto() +#: +started = auto() +#: +hairpin = auto() +#: +unknown = auto() + +@classmethod +def make_parser(cls) -> ParserFn: +"""Makes a parser function. + +Returns: +ParserFn: A dictionary for the `dataclasses.field` metadata argument containing a +parser function that makes an instance of this enum from text. +""" +return TextParser.wrap(TextParser.find(r"Rx queue state: ([^\r\n]+)"), cls) + + +@dataclass +class TestPmdRxqInfo(TextParser): +"""Representation of testpmd's ``show rxq info `` command.
+ +References: +testpmd command function: ``app/test-pmd/cmdline.c:cmd_showqueue()`` +testpmd display function: ``app/test-pmd/config.c:rx_queue_infos_display()`` +""" + +#: +port_id: int = field(metadata=TextParser.find_int(r"Infos for port (\d+)\b ?, RX queue \d+\b")) +#: +queue_id: int = field(metadata=TextParser.find_int(r"Infos for port \d+\b ?, RX queue (\d+)\b")) +#: Mempool used by that queue +mempool: str = field(metadata=TextParser.find(r"Mempool: ([^\r\n]+)")) +#: Ring prefetch threshold +rx_prefetch_threshold: int = field( +metadata=TextParser.find_int(r"RX prefetch threshold: (\d+)\b") +) +#: Ring host threshold +rx_host_threshold: int = field(metadata=TextParser.find_int(r"RX host threshold: (\d+)\b")) +#: Ring writeback threshold +rx_writeback_threshold: int = field( +metadata=TextParser.find_int(r"RX writeback threshold: (\d+)\b") +) +#: Drives the freeing of Rx descriptors +rx_free_threshold: int = field(metadata=TextParser.find_int(r"RX free threshold: (\d+)\b")) +#: Drop packets if no descriptors are available +rx_drop_packets: bool = field(metadata=TextParser.find(r"RX drop packets: on")) +#: Do not start queue with rte_eth_dev_start() +rx_deferred_start: bool = field(metadata=TextParser.find(r"RX deferred start: on")) +#: Scattered packets Rx enabled +rx_scattered_packets: bool = field(metadata=TextParser.find(r"RX scattered packets: on")) +#: The state of the queue +rx_queue_state: str = field(metadata=RxQueueState.make_parser()) +#: Configured number of RXDs +number_of_rxds: int = field(metadata=TextParser.find_int(r"Number of RXDs: (\d+)\b")) +#: Hardware receive buffer size +rx_buffer_size: int | None = field( +default=None, metadata=TextParser.find_int(r"RX buffer size: (\d+)\b") +) +#: Burst mode information +burst_mode: str | None = field( +default=None, metadata=TextParser.find(r"Burst mode: ([^\r\n]+)") +) + + @dataclass class TestPmdPort(TextParser): """Dataclass representing the result of testpmd's ``show port info`` command.""" @@ 
-635,6 +714,30 @@ def _wrapper(self: "TestPmdShell", *args: P.args, **kwargs: P.kwargs): return _wrapper +def add_remove_mtu(mtu: int = 1500) -> Callable[[TestPmdShellMethod], TestPmdShellMethod]: +"""Configure MTU to `mtu` on all ports, run the decorated function, then revert. + +Args: +mtu: The MTU to configure all ports on. + +Returns: +The method decorated with setting and reverting MTU. +""" + +def decorator(func: TestPmdShellMethod) -> TestPmdShellMethod: +@functools.wraps(func) +def wrapper(self: "TestPmdShell", *args: P.args, **kwargs: P.kwargs): +original_mtu = self.ports[0].mtu +self.set_port_mtu_all(mtu=mtu, verify=False) +retval = func(self, *args, **kwargs) +self.set_port_mtu_all(original_mtu if original_mtu el
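The `add_remove_mtu` decorator above is a classic set-up/run/revert wrapper. A self-contained sketch of the same shape — `FakeShell` and its MTU bookkeeping are invented; the real shell issues testpmd commands:

```python
import functools

class FakeShell:
    """Invented stand-in for TestPmdShell that records MTU changes."""
    def __init__(self):
        self.mtu = 1500
        self.history = [self.mtu]

    def set_port_mtu_all(self, mtu, verify=False):
        self.mtu = mtu
        self.history.append(mtu)

def add_remove_mtu(mtu=1500):
    """Set `mtu` on all ports, run the wrapped function, then restore."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            original_mtu = self.mtu
            self.set_port_mtu_all(mtu=mtu, verify=False)   # setup
            retval = func(self, *args, **kwargs)
            self.set_port_mtu_all(mtu=original_mtu, verify=False)  # revert
            return retval
        return wrapper
    return decorator

@add_remove_mtu(9000)
def capability_check(shell):
    return shell.mtu  # runs with the temporary MTU in place

shell = FakeShell()
print(capability_check(shell), shell.mtu)  # → 9000 1500
```

This is why `NicCapability` members may carry a `(checking_method, decorator)` tuple: some capability queries only report correctly once configuration like MTU has been temporarily applied.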
[PATCH v4 08/11] dts: add topology capability
Add support for marking test cases as requiring a certain topology. The default topology is a two link topology and the other supported topologies are one link and no link topologies. The TestProtocol of test suites and cases is extended with the topology type each test suite or case requires. Each test case starts out as requiring a two link topology and can be marked as requiring a topology directly (by decorating the test case) or through its test suite. If a test suite is decorated as requiring a certain topology, all its test cases are marked as such. If both a test suite and a test case are decorated as requiring a topology, the test case cannot require a more complex topology than the whole suite (but it can require a less complex one). If a test suite is not decorated, this has no effect on the required test case topology. Since the default topology is defined as a reference to one of the actual topologies, the NoAliasEnum from the aenum package is utilized, which removes the aliasing of Enums so that TopologyType.two_links and TopologyType.default are distinct. This is needed to distinguish between a user-passed value and the default value being used (which is used when a test suite is or isn't decorated).
Signed-off-by: Juraj Linkeš Reviewed-by: Dean Marx --- dts/framework/test_suite.py | 6 +- dts/framework/testbed_model/capability.py | 193 +- dts/framework/testbed_model/topology.py | 35 +++- dts/tests/TestSuite_hello_world.py| 2 + dts/tests/TestSuite_pmd_buffer_scatter.py | 8 +- dts/tests/TestSuite_smoke_tests.py| 2 + 6 files changed, 230 insertions(+), 16 deletions(-) diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index bdc57a6339..da16953f69 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -28,7 +28,7 @@ from framework.testbed_model.port import Port from framework.testbed_model.sut_node import SutNode from framework.testbed_model.tg_node import TGNode -from framework.testbed_model.topology import Topology, TopologyType +from framework.testbed_model.topology import Topology from framework.testbed_model.traffic_generator.capturing_traffic_generator import ( PacketFilteringConfig, ) @@ -74,7 +74,6 @@ class TestSuite(TestProtocol): #: will block the execution of all subsequent test suites in the current build target. 
is_blocking: ClassVar[bool] = False _logger: DTSLogger -_topology_type: TopologyType _sut_port_ingress: Port _sut_port_egress: Port _sut_ip_address_ingress: Union[IPv4Interface, IPv6Interface] @@ -103,7 +102,6 @@ def __init__( self.sut_node = sut_node self.tg_node = tg_node self._logger = get_dts_logger(self.__class__.__name__) -self._topology_type = topology.type self._tg_port_egress = topology.tg_port_egress self._sut_port_ingress = topology.sut_port_ingress self._sut_port_egress = topology.sut_port_egress @@ -528,6 +526,8 @@ def _decorator(func: TestSuiteMethodType) -> type[TestCase]: test_case.skip = cls.skip test_case.skip_reason = cls.skip_reason test_case.required_capabilities = set() +test_case.topology_type = cls.topology_type +test_case.topology_type.add_to_required(test_case) test_case.test_type = test_case_type return test_case diff --git a/dts/framework/testbed_model/capability.py b/dts/framework/testbed_model/capability.py index 6a7a6cdbee..2207957a7a 100644 --- a/dts/framework/testbed_model/capability.py +++ b/dts/framework/testbed_model/capability.py @@ -7,11 +7,32 @@ and support for test environment capabilities. Many test cases are testing features not available on all hardware. +On the other hand, some test cases or suites may not need the most complex topology available. + +The module allows developers to mark test cases or suites as requiring certain hardware capabilities +or a particular topology with the :func:`requires` decorator. + +There are differences between hardware and topology capabilities: + +* Hardware capabilities are assumed to not be required when not specified. +* However, some topology is always available, so each test case or suite is assigned + a default topology if no topology is specified in the decorator. The module also allows developers to mark test cases or suites as requiring certain hardware capabilities with the :func:`requires` decorator. -Example: +Examples: +..
code:: python + +from framework.test_suite import TestSuite, func_test +from framework.testbed_model.capability import TopologyType, requires +# The whole test suite (each test case within) doesn't require any links. +@requires(topology_type=TopologyType.no_link) +@func_test +class TestHelloWorld(TestSuite): +def hello_world_single_core(self): +... + ..
[PATCH v4 09/11] doc: add DTS capability doc sources
Add new files to generate DTS API documentation from. Signed-off-by: Juraj Linkeš Reviewed-by: Jeremy Spewock Reviewed-by: Dean Marx --- doc/api/dts/framework.testbed_model.capability.rst | 6 ++ doc/api/dts/framework.testbed_model.rst| 2 ++ doc/api/dts/framework.testbed_model.topology.rst | 6 ++ 3 files changed, 14 insertions(+) create mode 100644 doc/api/dts/framework.testbed_model.capability.rst create mode 100644 doc/api/dts/framework.testbed_model.topology.rst diff --git a/doc/api/dts/framework.testbed_model.capability.rst b/doc/api/dts/framework.testbed_model.capability.rst new file mode 100644 index 00..326fed9104 --- /dev/null +++ b/doc/api/dts/framework.testbed_model.capability.rst @@ -0,0 +1,6 @@ +capability - Testbed Capabilities += + +.. automodule:: framework.testbed_model.capability + :members: + :show-inheritance: diff --git a/doc/api/dts/framework.testbed_model.rst b/doc/api/dts/framework.testbed_model.rst index 4b024e47e6..e1f9543b28 100644 --- a/doc/api/dts/framework.testbed_model.rst +++ b/doc/api/dts/framework.testbed_model.rst @@ -21,6 +21,8 @@ testbed\_model - Testbed Modelling Package framework.testbed_model.node framework.testbed_model.sut_node framework.testbed_model.tg_node + framework.testbed_model.capability framework.testbed_model.cpu framework.testbed_model.port + framework.testbed_model.topology framework.testbed_model.virtual_device diff --git a/doc/api/dts/framework.testbed_model.topology.rst b/doc/api/dts/framework.testbed_model.topology.rst new file mode 100644 index 00..f3d9d1f418 --- /dev/null +++ b/doc/api/dts/framework.testbed_model.topology.rst @@ -0,0 +1,6 @@ +topology - Testbed Topology +=== + +.. automodule:: framework.testbed_model.topology + :members: + :show-inheritance: -- 2.43.0
[PATCH v4 11/11] dts: add NIC capabilities from show port info
Add the capabilities advertised by the testpmd command "show port info" so that test cases may be marked as requiring those capabilities: RUNTIME_RX_QUEUE_SETUP RUNTIME_TX_QUEUE_SETUP RXQ_SHARE FLOW_RULE_KEEP FLOW_SHARED_OBJECT_KEEP These names are copy pasted from the existing DeviceCapabilitiesFlag class. Dynamic addition of Enum members runs into problems with typing (mypy doesn't know about the members) and documentation generation (Sphinx doesn't know about the members). Signed-off-by: Juraj Linkeš Reviewed-by: Jeremy Spewock Reviewed-by: Dean Marx --- dts/framework/remote_session/testpmd_shell.py | 38 +++ 1 file changed, 38 insertions(+) diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py index e111a67663..60d017fc72 100644 --- a/dts/framework/remote_session/testpmd_shell.py +++ b/dts/framework/remote_session/testpmd_shell.py @@ -1279,6 +1279,24 @@ def get_capabilities_rxq_info( else: unsupported_capabilities.add(NicCapability.SCATTERED_RX_ENABLED) +def get_capabilities_show_port_info( +self, +supported_capabilities: MutableSet["NicCapability"], +unsupported_capabilities: MutableSet["NicCapability"], +) -> None: +"""Get all capabilities from show port info and divide them into supported and unsupported. + +Args: +supported_capabilities: Supported capabilities will be added to this set. +unsupported_capabilities: Unsupported capabilities will be added to this set. +""" +self._update_capabilities_from_flag( +supported_capabilities, +unsupported_capabilities, +DeviceCapabilitiesFlag, +self.ports[0].device_capabilities, +) + class NicCapability(NoAliasEnum): """A mapping between capability names and the associated :class:`TestPmdShell` methods. @@ -1390,6 +1408,26 @@ class NicCapability(NoAliasEnum): RX_OFFLOAD_VLAN: TestPmdShellCapabilityMethod = functools.partial( TestPmdShell.get_capabilities_rx_offload ) +#: Device supports Rx queue setup after device started. 
+RUNTIME_RX_QUEUE_SETUP: TestPmdShellCapabilityMethod = functools.partial( +TestPmdShell.get_capabilities_show_port_info +) +#: Device supports Tx queue setup after device started. +RUNTIME_TX_QUEUE_SETUP: TestPmdShellCapabilityMethod = functools.partial( +TestPmdShell.get_capabilities_show_port_info +) +#: Device supports shared Rx queue among ports within Rx domain and switch domain. +RXQ_SHARE: TestPmdShellCapabilityMethod = functools.partial( +TestPmdShell.get_capabilities_show_port_info +) +#: Device supports keeping flow rules across restart. +FLOW_RULE_KEEP: TestPmdShellCapabilityMethod = functools.partial( +TestPmdShell.get_capabilities_show_port_info +) +#: Device supports keeping shared flow objects across restart. +FLOW_SHARED_OBJECT_KEEP: TestPmdShellCapabilityMethod = functools.partial( +TestPmdShell.get_capabilities_show_port_info +) def __call__( self, -- 2.43.0
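Several members above share one checker (`get_capabilities_show_port_info`), so a single testpmd invocation decides all of them at once; the shared supported/unsupported sets then prevent repeat calls. A minimal sketch of that dedup behavior — `Shell` and the checker body are invented, and a plain `Enum` stands in for `NoAliasEnum`:

```python
import functools
from enum import Enum

class Shell:
    """Invented stand-in; `calls` counts how often testpmd would be queried."""
    calls = 0

def get_caps_show_port_info(shell, supported, unsupported):
    # One "show port info" invocation decides several capabilities at once.
    shell.calls += 1
    supported.update({Cap.RXQ_SHARE, Cap.FLOW_RULE_KEEP})
    unsupported.add(Cap.RUNTIME_RX_QUEUE_SETUP)

class Cap(Enum):
    # Each member wraps the same shared checker in its own partial.
    RXQ_SHARE = functools.partial(get_caps_show_port_info)
    FLOW_RULE_KEEP = functools.partial(get_caps_show_port_info)
    RUNTIME_RX_QUEUE_SETUP = functools.partial(get_caps_show_port_info)

supported, unsupported = set(), set()
shell = Shell()
for cap in (Cap.RXQ_SHARE, Cap.FLOW_RULE_KEEP, Cap.RUNTIME_RX_QUEUE_SETUP):
    if cap not in supported | unsupported:  # already decided: skip the command
        cap.value(shell, supported, unsupported)
print(shell.calls)  # → 1
```

The first member's call populates all three verdicts, so the remaining members are skipped and testpmd is queried once per command rather than once per capability.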
[PATCH v4 10/11] dts: add Rx offload capabilities
The scatter Rx offload capability is needed for the pmd_buffer_scatter test suite. The command that retrieves the capability is: show port rx_offload capabilities The command also retrieves a lot of other capabilities (RX_OFFLOAD_*) which are all added into a Flag. The Flag members correspond to NIC capability names so a convenience function that looks for the supported Flags in a testpmd output is also added. The NIC capability names (mentioned above) are copy-pasted from the Flag. Dynamic addition of Enum members runs into problems with typing (mypy doesn't know about the members) and documentation generation (Sphinx doesn't know about the members). Signed-off-by: Juraj Linkeš Reviewed-by: Dean Marx --- dts/framework/remote_session/testpmd_shell.py | 233 ++ dts/tests/TestSuite_pmd_buffer_scatter.py | 1 + 2 files changed, 234 insertions(+) diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py index 3550734ebc..e111a67663 100644 --- a/dts/framework/remote_session/testpmd_shell.py +++ b/dts/framework/remote_session/testpmd_shell.py @@ -672,6 +672,123 @@ class TestPmdPortStats(TextParser): tx_bps: int = field(metadata=TextParser.find_int(r"Tx-bps:\s+(\d+)")) +class RxOffloadCapability(Flag): +"""Rx offload capabilities of a device. + +The flags are taken from ``lib/ethdev/rte_ethdev.h``. +They're prefixed with ``RTE_ETH_RX_OFFLOAD`` in ``lib/ethdev/rte_ethdev.h`` +instead of ``RX_OFFLOAD``, which is what testpmd changes the prefix to. +The values are not contiguous, so the correspondence is preserved +by specifying concrete values interspersed between auto() values. + +The ``RX_OFFLOAD`` prefix has been preserved so that the same flag names can be used +in :class:`NicCapability`. The prefix is needed in :class:`NicCapability` since there's +no other qualifier which would sufficiently distinguish it from other capabilities. 
+ +References: +DPDK lib: ``lib/ethdev/rte_ethdev.h`` +testpmd display function: ``app/test-pmd/cmdline.c:print_rx_offloads()`` +""" + +#: +RX_OFFLOAD_VLAN_STRIP = auto() +#: Device supports L3 checksum offload. +RX_OFFLOAD_IPV4_CKSUM = auto() +#: Device supports L4 checksum offload. +RX_OFFLOAD_UDP_CKSUM = auto() +#: Device supports L4 checksum offload. +RX_OFFLOAD_TCP_CKSUM = auto() +#: Device supports Large Receive Offload. +RX_OFFLOAD_TCP_LRO = auto() +#: Device supports QinQ (queue in queue) offload. +RX_OFFLOAD_QINQ_STRIP = auto() +#: Device supports inner packet L3 checksum. +RX_OFFLOAD_OUTER_IPV4_CKSUM = auto() +#: Device supports MACsec. +RX_OFFLOAD_MACSEC_STRIP = auto() +#: Device supports filtering of a VLAN Tag identifier. +RX_OFFLOAD_VLAN_FILTER = 1 << 9 +#: Device supports VLAN offload. +RX_OFFLOAD_VLAN_EXTEND = auto() +#: Device supports receiving segmented mbufs. +RX_OFFLOAD_SCATTER = 1 << 13 +#: Device supports Timestamp. +RX_OFFLOAD_TIMESTAMP = auto() +#: Device supports crypto processing while packet is received in NIC. +RX_OFFLOAD_SECURITY = auto() +#: Device supports CRC stripping. +RX_OFFLOAD_KEEP_CRC = auto() +#: Device supports L4 checksum offload. +RX_OFFLOAD_SCTP_CKSUM = auto() +#: Device supports inner packet L4 checksum. +RX_OFFLOAD_OUTER_UDP_CKSUM = auto() +#: Device supports RSS hashing. +RX_OFFLOAD_RSS_HASH = auto() +#: Device supports +RX_OFFLOAD_BUFFER_SPLIT = auto() +#: Device supports all checksum capabilities. +RX_OFFLOAD_CHECKSUM = RX_OFFLOAD_IPV4_CKSUM | RX_OFFLOAD_UDP_CKSUM | RX_OFFLOAD_TCP_CKSUM +#: Device supports all VLAN capabilities. +RX_OFFLOAD_VLAN = ( +RX_OFFLOAD_VLAN_STRIP +| RX_OFFLOAD_VLAN_FILTER +| RX_OFFLOAD_VLAN_EXTEND +| RX_OFFLOAD_QINQ_STRIP +) + +@classmethod +def from_string(cls, line: str) -> Self: +"""Make an instance from a string containing the flag names separated with a space. + +Args: +line: The line to parse. + +Returns: +A new instance containing all found flags. 
+""" +flag = cls(0) +for flag_name in line.split(): +flag |= cls[f"RX_OFFLOAD_{flag_name}"] +return flag + +@classmethod +def make_parser(cls, per_port: bool) -> ParserFn: +"""Make a parser function. + +Args: +per_port: If :data:`True`, will return capabilities per port. If :data:`False`, +will return capabilities per queue. + +Returns: +ParserFn: A dictionary for the `dataclasses.field` metadata argument containing a +parser function that makes an instance of this flag from text. +""" +granularity = "Port" if per_port else "Queue" +return TextParser.wrap( +TextParser.find(rf"Per {granularity}\
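The "non-contiguous values pinned between `auto()` members" trick and the `from_string` parser can be demonstrated with a stdlib `Flag` alone. The subset of members below is for illustration; only `SCATTER = 1 << 13` and `VLAN_FILTER = 1 << 9` mirror concrete values from the patch:

```python
from enum import Flag, auto

class RxOffload(Flag):
    """Illustrative subset of the patch's RxOffloadCapability."""
    VLAN_STRIP = auto()      # 1 << 0
    IPV4_CKSUM = auto()      # 1 << 1
    # auto() continues from the highest bit so far, so gaps in the DPDK
    # bit layout can be pinned with concrete values in between.
    VLAN_FILTER = 1 << 9
    VLAN_EXTEND = auto()     # 1 << 10
    SCATTER = 1 << 13
    TIMESTAMP = auto()       # 1 << 14

    @classmethod
    def from_string(cls, line):
        """Build a combined flag from space-separated member names."""
        flag = cls(0)
        for flag_name in line.split():
            flag |= cls[flag_name]
        return flag

caps = RxOffload.from_string("VLAN_STRIP SCATTER")
print(RxOffload.SCATTER in caps, caps.value)  # → True 8193
```

Because testpmd prints capabilities as space-separated names, this one parser covers both the per-port and per-queue output, with only the regex (the `granularity` in `make_parser`) differing.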
Re: [v4] net/zxdh: Provided zxdh basic init
Hi, Ferruh Could you please provide feedback on the patch I submitted? Any suggestions for improvement would be appreciated. Thanks
RE: [PATCH] [RFC] cryptodev: replace LIST_END enumerators with APIs
Hi Morten, Apologies for the delayed response. > Maybe a combination, returning the lowest end of the two versions of the list, > would work... > > -- > Common header file (rte_common.h): > -- > > /* Add at end of enum list in the header file. */ > #define RTE_ENUM_LIST_END(name) \ > _ ## name ## _ENUM_LIST_END /**< @internal */ > > /* Add somewhere in header file, preferably after the enum list. */ > #define rte_declare_enum_list_end(name) \ > /** @internal */ \ > int _ ## name ## _enum_list_end(void); \ > \ > static int name ## _enum_list_end(void) \ > { \ > static int cached = 0; \ > \ > if (likely(cached != 0)) \ > return cached; \ > \ > return cached = RTE_MIN( \ > RTE_ENUM_LIST_END(name), \ > _ ## name ## _enum_list_end()); \ > } \ > \ > int _ ## name ## _enum_list_end(void) > > /* Add in the library/driver implementation. */ > #define rte_define_enum_list_end(name) \ > int _ ## name ## _enum_list_end(void) \ > { \ > return RTE_ENUM_LIST_END(name); \ > } \ > \ > int _ ## name ## _enum_list_end(void) > > > Library header file: > > > enum rte_crypto_asym_op_type { > RTE_CRYPTO_ASYM_OP_VERIFY, > /**< Signature Verification operation */ > RTE_ENUM_LIST_END(rte_crypto_asym_op) Will the ABI check be OK with adding anything between RTE_CRYPTO_ASYM_OP_VERIFY and RTE_ENUM_LIST_END(rte_crypto_asym_op)? Don't we need to add an exception for that if we only make it internal via a comment? The library does not actually prevent the application from using RTE_ENUM_LIST_END(rte_crypto_asym_op) directly. Also, we may need to expose the .c file's internal function as experimental in version.map. > } > > rte_declare_enum_list_end(rte_crypto_asym_op); > > --- > Library C file: > --- > > rte_define_enum_list_end(rte_crypto_asym_op); If we want to make it a generic thing in rte_common.h, will the below change be OK?
-- Common header file (rte_common.h): -- #define rte_define_enum_list_end(name, last_value) \ static inline int name ## _enum_list_end(void) \ { \ return last_value + 1; \ } Lib header file: // After the enum definition, define the list end as below rte_define_enum_list_end(rte_crypto_asym_op, RTE_CRYPTO_ASYM_OP_VERIFY); And wherever the list end is needed, use rte_crypto_asym_op_enum_list_end()? With this change, the ABI check will not complain about any new addition at the end of the enum, and we do not need to expose any internal API in version.map.
[PATCH v3 1/2] vhost: add logging mechanism for reconnection
This patch introduces a way for the backend to keep track of the information needed to be able to reconnect without frontend cooperation. It will be used for VDUSE, which does not provide an interface for the backend to save and later recover local virtqueues metadata needed to reconnect. Vhost-user support could also be added for improved packed ring reconnection support. Signed-off-by: Maxime Coquelin Reviewed-by: Chenbo Xia Reviewed-by: David Marchand --- lib/vhost/vhost.c | 2 ++ lib/vhost/vhost.h | 41 ++--- lib/vhost/vhost_user.c | 4 lib/vhost/virtio_net.c | 8 lib/vhost/virtio_net_ctrl.c | 2 ++ 5 files changed, 54 insertions(+), 3 deletions(-) diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index ac71d17784..5a50a06f8d 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -1712,9 +1712,11 @@ rte_vhost_set_vring_base(int vid, uint16_t queue_id, vq->avail_wrap_counter = !!(last_avail_idx & (1 << 15)); vq->last_used_idx = last_used_idx & 0x7fff; vq->used_wrap_counter = !!(last_used_idx & (1 << 15)); + vhost_virtqueue_reconnect_log_packed(vq); } else { vq->last_avail_idx = last_avail_idx; vq->last_used_idx = last_used_idx; + vhost_virtqueue_reconnect_log_split(vq); } return 0; diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index cd3fa55f1b..1f4192f5d1 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -269,6 +269,24 @@ struct vhost_async { }; }; +#define VHOST_RECONNECT_VERSION 0x0 +#define VHOST_MAX_VRING 0x100 +#define VHOST_MAX_QUEUE_PAIRS 0x80 + +struct __rte_cache_aligned vhost_reconnect_vring { + uint16_t last_avail_idx; + bool avail_wrap_counter; +}; + +struct vhost_reconnect_data { + uint32_t version; + uint64_t features; + uint8_t status; + struct virtio_net_config config; + uint32_t nr_vrings; + struct vhost_reconnect_vring vring[VHOST_MAX_VRING]; +}; + /** * Structure contains variables relevant to RX/TX virtqueues.
*/ @@ -351,6 +369,7 @@ struct __rte_cache_aligned vhost_virtqueue { struct virtqueue_stats stats; RTE_ATOMIC(bool) irq_pending; + struct vhost_reconnect_vring *reconnect_log; }; /* Virtio device status as per Virtio specification */ @@ -362,9 +381,6 @@ struct __rte_cache_aligned vhost_virtqueue { #define VIRTIO_DEVICE_STATUS_DEV_NEED_RESET 0x40 #define VIRTIO_DEVICE_STATUS_FAILED 0x80 -#define VHOST_MAX_VRING 0x100 -#define VHOST_MAX_QUEUE_PAIRS 0x80 - /* Declare IOMMU related bits for older kernels */ #ifndef VIRTIO_F_IOMMU_PLATFORM @@ -538,8 +554,26 @@ struct __rte_cache_aligned virtio_net { struct rte_vhost_user_extern_ops extern_ops; struct vhost_backend_ops *backend_ops; + + struct vhost_reconnect_data *reconnect_log; }; +static __rte_always_inline void +vhost_virtqueue_reconnect_log_split(struct vhost_virtqueue *vq) +{ + if (vq->reconnect_log != NULL) + vq->reconnect_log->last_avail_idx = vq->last_avail_idx; +} + +static __rte_always_inline void +vhost_virtqueue_reconnect_log_packed(struct vhost_virtqueue *vq) +{ + if (vq->reconnect_log != NULL) { + vq->reconnect_log->last_avail_idx = vq->last_avail_idx; + vq->reconnect_log->avail_wrap_counter = vq->avail_wrap_counter; + } +} + static inline void vq_assert_lock__(struct virtio_net *dev, struct vhost_virtqueue *vq, const char *func) __rte_assert_exclusive_lock(&vq->access_lock) @@ -584,6 +618,7 @@ vq_inc_last_avail_packed(struct vhost_virtqueue *vq, uint16_t num) vq->avail_wrap_counter ^= 1; vq->last_avail_idx -= vq->size; } + vhost_virtqueue_reconnect_log_packed(vq); } void __vhost_log_cache_write(struct virtio_net *dev, diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index 5f470da38a..d20b9e8497 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -954,6 +954,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq->last_used_idx, vq->used->idx); vq->last_used_idx = vq->used->idx; vq->last_avail_idx = vq->used->idx; +
vhost_virtqueue_reconnect_log_split(vq); VHOST_CONFIG_LOG(dev->ifname, WARNING, "some packets maybe resent for Tx and dropped for Rx"); } @@ -1039,9 +1040,11 @@ vhost_user_set_vring_base(struct virtio_net **pdev, */ vq->last_used_idx = vq->last_avail_idx; vq->used_wrap_counter = vq->avail_wrap_counter; + vhost_virtqueue_reconnect_log_packed(vq); } else { vq->last_used_idx = ctx->ms
[PATCH v3 2/2] vhost: add reconnection support to VDUSE
This patch enables VDUSE reconnection support making use of the newly introduced reconnection mechanism in the Vhost library. At DPDK VDUSE device creation time, there are two possibilities: 1. The Kernel VDUSE device does not exist: a. A reconnection file named after the VDUSE device name is created in VDUSE tmpfs. b. The file is truncated to 'struct vhost_reconnect_data' size, and mmapped. c. Negotiated features, Virtio status... are saved for sanity checks at reconnect time. 2. The Kernel VDUSE device already exists: a. Exit with failure if no reconnect file exists for this device. b. Open and mmap the reconnect file. c. Perform sanity checks to ensure features are compatible. d. Restore virtqueues' available indexes at startup time. Then at runtime, the virtqueues' available indexes are logged by the Vhost reconnection mechanism. At DPDK VDUSE device destruction time, there are two possibilities: 1. The Kernel VDUSE device destruction succeeded, which means it is no longer attached to the vDPA bus. The reconnection file is unmapped and then removed. 2. The Kernel VDUSE device destruction failed, meaning it is still attached to the vDPA bus. The reconnection file is unmapped but not removed to make later reconnection possible.
Signed-off-by: Maxime Coquelin Reviewed-by: Chenbo Xia Reviewed-by: David Marchand --- lib/vhost/vduse.c | 308 -- 1 file changed, 268 insertions(+), 40 deletions(-) diff --git a/lib/vhost/vduse.c b/lib/vhost/vduse.c index c66602905c..f9ac317438 100644 --- a/lib/vhost/vduse.c +++ b/lib/vhost/vduse.c @@ -136,7 +136,7 @@ vduse_control_queue_event(int fd, void *arg, int *remove __rte_unused) } static void -vduse_vring_setup(struct virtio_net *dev, unsigned int index) +vduse_vring_setup(struct virtio_net *dev, unsigned int index, bool reconnect) { struct vhost_virtqueue *vq = dev->virtqueue[index]; struct vhost_vring_addr *ra = &vq->ring_addrs; @@ -152,6 +152,19 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) return; } + if (reconnect) { + vq->last_avail_idx = vq->reconnect_log->last_avail_idx; + vq->last_used_idx = vq->reconnect_log->last_avail_idx; + } else { + vq->last_avail_idx = vq_info.split.avail_index; + vq->last_used_idx = vq_info.split.avail_index; + } + vq->size = vq_info.num; + vq->ready = true; + vq->enabled = vq_info.ready; + ra->desc_user_addr = vq_info.desc_addr; + ra->avail_user_addr = vq_info.driver_addr; + ra->used_user_addr = vq_info.device_addr; VHOST_CONFIG_LOG(dev->ifname, INFO, "VQ %u info:", index); VHOST_CONFIG_LOG(dev->ifname, INFO, "\tnum: %u", vq_info.num); VHOST_CONFIG_LOG(dev->ifname, INFO, "\tdesc_addr: %llx", @@ -160,17 +173,9 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) (unsigned long long)vq_info.driver_addr); VHOST_CONFIG_LOG(dev->ifname, INFO, "\tdevice_addr: %llx", (unsigned long long)vq_info.device_addr); - VHOST_CONFIG_LOG(dev->ifname, INFO, "\tavail_idx: %u", vq_info.split.avail_index); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tavail_idx: %u", vq->last_avail_idx); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tused_idx: %u", vq->last_used_idx); VHOST_CONFIG_LOG(dev->ifname, INFO, "\tready: %u", vq_info.ready); - - vq->last_avail_idx = vq_info.split.avail_index; - vq->size = vq_info.num; - vq->ready = 
true; - vq->enabled = vq_info.ready; - ra->desc_user_addr = vq_info.desc_addr; - ra->avail_user_addr = vq_info.driver_addr; - ra->used_user_addr = vq_info.device_addr; - vq->kickfd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); if (vq->kickfd < 0) { VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to init kickfd for VQ %u: %s", @@ -267,7 +272,7 @@ vduse_vring_cleanup(struct virtio_net *dev, unsigned int index) } static void -vduse_device_start(struct virtio_net *dev) +vduse_device_start(struct virtio_net *dev, bool reconnect) { unsigned int i, ret; @@ -287,6 +292,15 @@ vduse_device_start(struct virtio_net *dev) return; } + if (reconnect && dev->features != dev->reconnect_log->features) { + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Mismatch between reconnect file features 0x%" PRIx64 " & device features 0x%" PRIx64, + dev->reconnect_log->features, dev->features); + return; + } + + dev->reconnect_log->features = dev->features; + VHOST_CONFIG_LOG(dev->ifname, INFO, "Negotiated Virtio features: 0x%" PRIx64, dev->features); @@ -300,7 +314,7 @@ vduse_device_start(struct virtio_net *dev) } for (i = 0; i < dev->
Re: [PATCH v2 1/2] vhost: add logging mechanism for reconnection
On 9/23/24 17:42, David Marchand wrote: On Fri, Sep 20, 2024 at 11:09 AM Maxime Coquelin wrote: This patch introduces a way for backend to keep track of the needed information to be able to reconnect without frontend cooperation. It will be used for VDUSE, which does not provide interface for the backend to save and later recover local virtqueues metadata needed to reconnect. Vhost-user support could also be added for improved packed ring reconnection support. Signed-off-by: Maxime Coquelin vq->last_avail_idx gets updated in other places and I suspect we are missing some calls to vhost_virtqueue_reconnect_log_split/packed. I spotted: - lib/vhost/vhost.c: rte_vhost_set_vring_base() - lib/vhost/vhost_user.c: translate_ring_addresses(), vhost_user_set_vring_base(). The rest lgtm. Thanks, I patched the missing avail index updates you identified. And also added your R-by as asked off-list. Maxime
[PATCH v3 0/2] vhost: add VDUSE reconnection support
This series adds support for VDUSE reconnection. The first patch introduces the reconnection file layout and tracks the virtqueues' available index updates in the datapath and control queue. The second patch adds VDUSE reconnect initialization and some sanity checks to prevent incompatible reconnections. Changes in v3: == - Fixed missing avail index updates (David) - Fixed typos in commit message (David) - Applied R-by's Changes in v2: == - Added more sanity checks at reconnection - Improved versioning - Fixed error logging (Chenbo) - Clarified why offloading device start is required (Chenbo) - Changed runtime path to /vduse instead of /dpdk/vduse Maxime Coquelin (2): vhost: add logging mechanism for reconnection vhost: add reconnection support to VDUSE lib/vhost/vduse.c | 308 +++- lib/vhost/vhost.c | 2 + lib/vhost/vhost.h | 41 - lib/vhost/vhost_user.c | 4 + lib/vhost/virtio_net.c | 8 + lib/vhost/virtio_net_ctrl.c | 2 + 6 files changed, 322 insertions(+), 43 deletions(-) -- 2.46.0
Re: [PATCH v4 02/11] dts: add test case decorators
One super nit-pick comment below, even without that though I still think this looks good. Reviewed-by: Jeremy Spewock On Mon, Sep 23, 2024 at 11:02 AM Juraj Linkeš wrote: > +def is_test_case(function: Callable) -> bool: > +if inspect.isfunction(function): > +# TestCase is not used at runtime, so we can't use > isinstance() with `function`. > +# But function.test_type exists. > +if hasattr(function, "test_type"): > +return isinstance(function.test_type, TestCaseType) > +return False > + > +if test_case_sublist is None: > +test_case_sublist = [] > + > +# the copy is needed so that the condition "elif test_case_sublist" > doesn't > +# change mid-cycle > +test_case_sublist_copy = list(test_case_sublist) > +func_test_cases = set() > +perf_test_cases = set() > + > +for test_case_name, test_case_function in inspect.getmembers(cls, > is_test_case): > +if test_case_name in test_case_sublist_copy: > +# if test_case_sublist_copy is non-empty, remove the found > test case > +# so that we can look at the remainder at the end > +test_case_sublist_copy.remove(test_case_name) > +elif test_case_sublist: > +# the original list not being empty means we're filtering > test cases This might read a little better if there was a period at the end, but I still think this gets the point across as is. > +# since we didn't remove test_case_name in the previous > branch, > +# it doesn't match the filter and we don't want to remove it > +continue > + > +match test_case_function.test_type: > +case TestCaseType.PERFORMANCE: > +perf_test_cases.add(test_case_function) > +case TestCaseType.FUNCTIONAL: > +func_test_cases.add(test_case_function) > + > 2.43.0 >
Re: [PATCH v4 03/11] dts: add mechanism to skip test cases or suites
On Mon, Sep 23, 2024 at 11:02 AM Juraj Linkeš wrote: > > If a test case is not relevant to the testing environment (such as when > a NIC doesn't support a tested feature), the framework should skip it. > The mechanism is a skeleton without actual logic that would set a test > case or suite to be skipped. > > The mechanism uses a protocol to extend test suites and test cases with > additional attributes that track whether the test case or suite should > be skipped the reason for skipping it. > > Also update the results module with the new SKIP result. > > Signed-off-by: Juraj Linkeš > Reviewed-by: Dean Marx Reviewed-by: Jeremy Spewock
Re: [PATCH v4 07/11] dts: add NIC capabilities from show rxq info
On Mon, Sep 23, 2024 at 11:02 AM Juraj Linkeš wrote: > > Add parsing for the show rxq info testpmd command > and add support for the Scattered Rx capability. > > Signed-off-by: Juraj Linkeš > Reviewed-by: Dean Marx Reviewed-by: Jeremy Spewock
Re: [PATCH v4 08/11] dts: add topology capability
On Mon, Sep 23, 2024 at 11:02 AM Juraj Linkeš wrote: > > Add support for marking test cases as requiring a certain topology. The > default topology is a two link topology and the other supported > topologies are one link and no link topologies. > > The TestProtocol of test suites and cases is extended with the topology > type each test suite or case requires. Each test case starts out as > requiring a two link topology and can be marked as requiring a > topology directly (by decorating the test case) or through its test > suite. If a test suite is decorated as requiring a certain topology, all > its test cases are marked as such. If both a test suite and a test case > are decorated as requiring a topology, the test case cannot require a > more complex topology than the whole suite (but it can require a less > complex one). If a test suite is not decorated, this has no effect on > required test case topology. > > Since the default topology is defined as a reference to one of the > actual topologies, the NoAliasEnum from the aenum package is utilized, > which removes the aliasing of Enums so that TopologyType.two_links and > TopologyType.default are distinct. This is needed to distinguish between > a user passed value and the default value being used (which is used when > a test suite is or isn't decorated). > > Signed-off-by: Juraj Linkeš > Reviewed-by: Dean Marx Reviewed-by: Jeremy Spewock
Re: [PATCH v4 06/11] dts: add NIC capability support
On Mon, Sep 23, 2024 at 11:02 AM Juraj Linkeš wrote: > > Some test cases or suites may be testing a NIC feature that is not > supported on all NICs, so add support for marking test cases or suites > as requiring NIC capabilities. > > The marking is done with a decorator, which populates the internal > required_capabilities attribute of TestProtocol. The NIC capability > itself is a wrapper around the NicCapability defined in testpmd_shell. > The reason is Enums cannot be extended and the class implements the > methods of its abstract base superclass. > > The decorator API is designed to be simple to use. The arguments passed > to it are all from the testpmd shell. Everything else (even the actual > capability object creation) is done internally. > > Signed-off-by: Juraj Linkeš > Reviewed-by: Dean Marx Thank you for addressing all my comments! Reviewed-by: Jeremy Spewock
Re: [PATCH v4 10/11] dts: add Rx offload capabilities
On Mon, Sep 23, 2024 at 11:02 AM Juraj Linkeš wrote: > > The scatter Rx offload capability is needed for the pmd_buffer_scatter > test suite. The command that retrieves the capability is: > show port rx_offload capabilities > > The command also retrieves a lot of other capabilities (RX_OFFLOAD_*) > which are all added into a Flag. The Flag members correspond to NIC > capability names so a convenience function that looks for the supported > Flags in a testpmd output is also added. > > The NIC capability names (mentioned above) are copy-pasted from the > Flag. Dynamic addition of Enum members runs into problems with typing > (mypy doesn't know about the members) and documentation generation > (Sphinx doesn't know about the members). > > Signed-off-by: Juraj Linkeš > Reviewed-by: Dean Marx Reviewed-by: Jeremy Spewock
[PATCH V2 1/3] net/mlx5: set rte errno if malloc failed
rte_errno should be set if anything goes wrong in the underlying layers, so that the user can figure out what's going on. There were some cases that did not set it when ipool allocation failed. To fix the issue, set rte_errno to ENOMEM if mlx5_ipool_malloc fails to allocate an ID. Fixes: c40c061a02 ("net/mlx5: add basic flow queue operation") Fixes: 48fbb0e93d ("net/mlx5: support flow meter mark indirect action with HWS") cc: sta...@dpdk.org Signed-off-by: Minggang Li(Gavin) Acked-by: Bing Zhao --- drivers/net/mlx5/mlx5_flow_hw.c | 31 --- 1 file changed, 24 insertions(+), 7 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index a275154d4b..f34670b3ec 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -1905,7 +1905,7 @@ flow_hw_meter_mark_alloc(struct rte_eth_dev *dev, uint32_t queue, const struct rte_flow_action_meter_mark *meter_mark = action->conf; struct mlx5_aso_mtr *aso_mtr; struct mlx5_flow_meter_info *fm; - uint32_t mtr_id; + uint32_t mtr_id = 0; uintptr_t handle = (uintptr_t)MLX5_INDIRECT_ACTION_TYPE_METER_MARK << MLX5_INDIRECT_ACTION_TYPE_OFFSET; @@ -1917,8 +1917,15 @@ flow_hw_meter_mark_alloc(struct rte_eth_dev *dev, uint32_t queue, if (meter_mark->profile == NULL) return NULL; aso_mtr = mlx5_ipool_malloc(pool->idx_pool, &mtr_id); - if (!aso_mtr) + if (!aso_mtr) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "failed to allocate aso meter entry"); + if (mtr_id) + mlx5_ipool_free(pool->idx_pool, mtr_id); return NULL; + } /* Fill the flow meter parameters.
*/ aso_mtr->type = ASO_METER_INDIRECT; fm = &aso_mtr->fm; @@ -3926,8 +3933,10 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, return NULL; } flow = mlx5_ipool_malloc(table->flow, &flow_idx); - if (!flow) + if (!flow) { + rte_errno = ENOMEM; goto error; + } rule_acts = flow_hw_get_dr_action_buffer(priv, table, action_template_index, queue); /* * Set the table here in order to know the destination table @@ -3938,8 +3947,10 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, flow->idx = flow_idx; if (table->resource) { mlx5_ipool_malloc(table->resource, &res_idx); - if (!res_idx) + if (!res_idx) { + rte_errno = ENOMEM; goto error; + } flow->res_idx = res_idx; } else { flow->res_idx = flow_idx; @@ -4070,8 +4081,10 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev, return NULL; } flow = mlx5_ipool_malloc(table->flow, &flow_idx); - if (!flow) + if (!flow) { + rte_errno = ENOMEM; goto error; + } rule_acts = flow_hw_get_dr_action_buffer(priv, table, action_template_index, queue); /* * Set the table here in order to know the destination table @@ -4082,8 +4095,10 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev, flow->idx = flow_idx; if (table->resource) { mlx5_ipool_malloc(table->resource, &res_idx); - if (!res_idx) + if (!res_idx) { + rte_errno = ENOMEM; goto error; + } flow->res_idx = res_idx; } else { flow->res_idx = flow_idx; @@ -4218,8 +4233,10 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev, nf->idx = of->idx; if (table->resource) { mlx5_ipool_malloc(table->resource, &res_idx); - if (!res_idx) + if (!res_idx) { + rte_errno = ENOMEM; goto error; + } nf->res_idx = res_idx; } else { nf->res_idx = of->res_idx; -- 2.34.1
[PATCH V2 2/3] net/mlx5/hws: add log for failing to create rule in HWS
Signed-off-by: Minggang Li(Gavin) Acked-by: Alex Vesker --- drivers/net/mlx5/hws/mlx5dr_rule.c | 6 ++ 1 file changed, 6 insertions(+) diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c index 1edb7eac74..5d66d81ea5 100644 --- a/drivers/net/mlx5/hws/mlx5dr_rule.c +++ b/drivers/net/mlx5/hws/mlx5dr_rule.c @@ -638,6 +638,7 @@ static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule, /* Rule is not completed yet */ if (rule->status == MLX5DR_RULE_STATUS_CREATING) { + DR_LOG(NOTICE, "Cannot destroy, rule creation still in progress"); rte_errno = EBUSY; return rte_errno; } @@ -806,12 +807,14 @@ static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_rule *rule, struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; if (unlikely(!attr->user_data)) { + DR_LOG(DEBUG, "User data must be provided for rule operations"); rte_errno = EINVAL; return rte_errno; } /* Check if there is room in queue */ if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) { + DR_LOG(NOTICE, "No room in queue[%d]", attr->queue_id); rte_errno = EBUSY; return rte_errno; } @@ -823,6 +826,7 @@ static int mlx5dr_rule_enqueue_precheck_move(struct mlx5dr_rule *rule, struct mlx5dr_rule_attr *attr) { if (unlikely(rule->status != MLX5DR_RULE_STATUS_CREATED)) { + DR_LOG(DEBUG, "Cannot move, rule status is invalid"); rte_errno = EINVAL; return rte_errno; } @@ -835,6 +839,7 @@ static int mlx5dr_rule_enqueue_precheck_create(struct mlx5dr_rule *rule, { if (unlikely(mlx5dr_matcher_is_in_resize(rule->matcher))) { /* Matcher in resize - new rules are not allowed */ + DR_LOG(NOTICE, "Resizing in progress, cannot create rule"); rte_errno = EAGAIN; return rte_errno; } @@ -1068,6 +1073,7 @@ int mlx5dr_rule_hash_calculate(struct mlx5dr_matcher *matcher, mlx5dr_table_is_root(matcher->tbl) || matcher->tbl->ctx->caps->access_index_mode == MLX5DR_MATCHER_INSERT_BY_HASH || matcher->tbl->ctx->caps->flow_table_hash_type != MLX5_FLOW_TABLE_HASH_TYPE_CRC32) { + DR_LOG(DEBUG, 
"Matcher is not supported"); rte_errno = ENOTSUP; return -rte_errno; } -- 2.34.1
[PATCH V2 3/3] net/mlx5/hws: print CQE error syndrome and more information
Signed-off-by: Minggang Li(Gavin) Acked-by: Alex Vesker --- drivers/net/mlx5/hws/mlx5dr_send.c | 9 - 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c index 3022c50260..e9abf3dddb 100644 --- a/drivers/net/mlx5/hws/mlx5dr_send.c +++ b/drivers/net/mlx5/hws/mlx5dr_send.c @@ -598,8 +598,15 @@ static void mlx5dr_send_engine_poll_cq(struct mlx5dr_send_engine *queue, cqe_owner != sw_own) return; - if (unlikely(cqe_opcode != MLX5_CQE_REQ)) + if (unlikely(cqe_opcode != MLX5_CQE_REQ)) { + struct mlx5_err_cqe *err_cqe = (struct mlx5_err_cqe *)cqe; + + DR_LOG(ERR, "CQE ERR:0x%x, Vendor_ERR:0x%x, OP:0x%x, QPN:0x%x, WQE_CNT:0x%x", + err_cqe->syndrome, err_cqe->vendor_err_synd, cqe_opcode, + (rte_be_to_cpu_32(err_cqe->s_wqe_opcode_qpn) & 0xff), + rte_be_to_cpu_16(err_cqe->wqe_counter)); queue->err = true; + } rte_io_rmb(); -- 2.34.1
[PATCH V2 0/3] Error report improvement and fix
This patch set improves error handling in the PMD and the underlying layers. Gavin Li (3): net/mlx5: set rte errno if malloc failed --- changelog: v0->v1 - Fix typo in commit message v1->v2 - Fix signoff warning --- net/mlx5/hws: add log for failing to create rule in HWS --- changelog: v1->v2 - Fix signoff warning --- net/mlx5/hws: print CQE error syndrome and more information --- changelog: v1->v2 - Fix typo in log message - Fix signoff warning --- drivers/net/mlx5/hws/mlx5dr_rule.c | 6 ++ drivers/net/mlx5/hws/mlx5dr_send.c | 9 - drivers/net/mlx5/mlx5_flow_hw.c| 31 +++--- 3 files changed, 38 insertions(+), 8 deletions(-) -- 2.34.1
[PATCH] net/r8169: implement core logic for Tx/Rx
Add RX/TX function prototypes for further datapath development. Signed-off-by: Howard Wang --- drivers/net/r8169/meson.build| 1 + drivers/net/r8169/r8169_ethdev.c | 20 +-- drivers/net/r8169/r8169_ethdev.h | 3 ++ drivers/net/r8169/r8169_rxtx.c | 57 4 files changed, 79 insertions(+), 2 deletions(-) create mode 100644 drivers/net/r8169/r8169_rxtx.c diff --git a/drivers/net/r8169/meson.build b/drivers/net/r8169/meson.build index f659e56192..ff7d6ca4b8 100644 --- a/drivers/net/r8169/meson.build +++ b/drivers/net/r8169/meson.build @@ -4,5 +4,6 @@ sources = files( 'r8169_ethdev.c', 'r8169_hw.c', + 'r8169_rxtx.c', ) diff --git a/drivers/net/r8169/r8169_ethdev.c b/drivers/net/r8169/r8169_ethdev.c index 755d06f907..70066b8d12 100644 --- a/drivers/net/r8169/r8169_ethdev.c +++ b/drivers/net/r8169/r8169_ethdev.c @@ -27,6 +27,8 @@ #include "r8169_ethdev.h" #include "r8169_base.h" +#include "r8169_logs.h" +#include "r8169_hw.h" static int rtl_dev_configure(struct rte_eth_dev *dev __rte_unused); static int rtl_dev_start(struct rte_eth_dev *dev); @@ -68,10 +70,24 @@ rtl_dev_start(struct rte_eth_dev *dev) { struct rtl_adapter *adapter = RTL_DEV_PRIVATE(dev); struct rtl_hw *hw = &adapter->hw; + int err; + + /* Initialize transmission unit */ + rtl_tx_init(dev); + + /* This can fail when allocating mbufs for descriptor rings */ + err = rtl_rx_init(dev); + if (err) { + PMD_INIT_LOG(ERR, "Unable to initialize RX hardware"); + goto error; + } hw->adapter_stopped = 0; return 0; +error: + //rtl_stop_queues(dev); + return -EIO; } /* @@ -117,8 +133,8 @@ rtl_dev_init(struct rte_eth_dev *dev) struct rtl_hw *hw = &adapter->hw; dev->dev_ops = &rtl_eth_dev_ops; - //dev->tx_pkt_burst = &rtl_xmit_pkts; - //dev->rx_pkt_burst = &rtl_recv_pkts; + dev->tx_pkt_burst = &rtl_xmit_pkts; + dev->rx_pkt_burst = &rtl_recv_pkts; /* For secondary processes, the primary process has done all the work */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) diff --git a/drivers/net/r8169/r8169_ethdev.h 
b/drivers/net/r8169/r8169_ethdev.h index 04458dc497..7c6e110e7f 100644 --- a/drivers/net/r8169/r8169_ethdev.h +++ b/drivers/net/r8169/r8169_ethdev.h @@ -35,6 +35,9 @@ struct rtl_adapter { #define RTL_DEV_PRIVATE(eth_dev) \ ((struct rtl_adapter *)((eth_dev)->data->dev_private)) +int rtl_rx_init(struct rte_eth_dev *dev); +int rtl_tx_init(struct rte_eth_dev *dev); + uint16_t rtl_xmit_pkts(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); uint16_t rtl_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); diff --git a/drivers/net/r8169/r8169_rxtx.c b/drivers/net/r8169/r8169_rxtx.c new file mode 100644 index 00..cce78d4e60 --- /dev/null +++ b/drivers/net/r8169/r8169_rxtx.c @@ -0,0 +1,57 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 Realtek Corporation. All rights reserved + */ + +#include +#include +#include +#include +#include + +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "r8169_ethdev.h" +#include "r8169_hw.h" +#include "r8169_logs.h" + +/* -RX-- */ +int +rtl_rx_init(struct rte_eth_dev *dev) +{ + return 0; +} + +uint16_t +rtl_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) +{ + return 0; +} + +/* -TX-- */ +int +rtl_tx_init(struct rte_eth_dev *dev) +{ + return 0; +} + +uint16_t +rtl_xmit_pkts(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + return 0; +} + -- 2.34.1
RE: [PATCH 1/2] ethdev: add Rx packet type offload control flag
> On 6/19/2024 11:11 AM, Chaoyong He wrote: > > From: Long Wu > > > > The Rx packet type offload feature may affect the performance, so add > > a control flag for applications to turn it on or off. > > > > Signed-off-by: Long Wu > > --- > > lib/ethdev/rte_ethdev.h | 1 + > > 1 file changed, 1 insertion(+) > > > > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index > > 548fada1c7..be86983e24 100644 > > --- a/lib/ethdev/rte_ethdev.h > > +++ b/lib/ethdev/rte_ethdev.h > > @@ -1555,6 +1555,7 @@ struct rte_eth_conf { #define > > RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM RTE_BIT64(18) > > #define RTE_ETH_RX_OFFLOAD_RSS_HASH RTE_BIT64(19) > > #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT RTE_BIT64(20) > > +#define RTE_ETH_RX_OFFLOAD_PTYPES RTE_BIT64(21) > > > > #define RTE_ETH_RX_OFFLOAD_CHECKSUM > (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \ > > RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \ > > Hi Chaoyong, > > Instead of having an offload for ptypes, we have APIs for this, > > First one is 'rte_eth_dev_get_supported_ptypes()' that application can learn > the supported packet types. > > Second one is more related to above flag, it is 'rte_eth_dev_set_ptypes()' > which application can set which ptypes are required, it can be set to disable > all > packet type parsing, can be similar to not requesting > 'RTE_ETH_RX_OFFLOAD_PTYPES'. > > With above two APIs, do we still need the offload flag? > At present, the purpose of the ops 'rte_eth_dev_set_ptypes()' is to set the range of packet types to handle. Of course, we could maintain a flag for each application and driver based on the return value of this ops, but this is a bit troublesome. So, we hope to follow the example of RSS: in addition to 'rte_eth_dev_rss_hash_update()' and 'rte_eth_dev_rss_hash_conf_get()', we also want to set a flag for the ptype function similar to RTE_ETH_RX_OFFLOAD_RSS_HASH.
> > Another concern with adding new offload flag is backward compatibility, all > existing drivers that support packet type parsing should be updated to list > this > offload flag as capability. Also they need to be updated to configure packet > parsing based on user requested offload configuration. > If you agree with this design suggestion, we will adapt all the related code to ptypes for each PMD and the 'test-pmd' application in the next patch. Do you think this is okay? > Briefly, we can't just introduce a new offload flag for an existing > capability and > update only one driver, all drivers need to be updated.
[RFC PATCH v5] mempool: fix mempool cache size
This patch refactors the mempool cache to fix two bugs:
1. When a mempool is created with a cache size of N objects, the cache was
   actually created with a size of 1.5 * N objects.
2. The mempool cache field names did not reflect their purpose; the
   "flushthresh" field held the size, and the "size" field held the number
   of objects remaining in the cache when returning from a get operation
   refilling it from the backend.

Especially the first item could be fatal: When more objects than a mempool's
configured cache size are held in the mempool's caches associated with other
lcores, a right-sized mempool may unexpectedly run out of objects, causing
the application to fail.

Furthermore, this patch introduces two optimizations:
1. The mempool caches are flushed to/filled from the backend in their
   entirety, so backend accesses are CPU cache line aligned. (Assuming the
   mempool cache size is a multiple of the CPU cache line size divided by
   the size of a pointer.)
2. The unlikely paths in the get and put functions, where the cache is
   flushed to/filled from the backend, are moved from the inline functions
   to separate helper functions, thereby reducing the code size of the
   inline functions.
   Note: Accessing the backend for cacheless mempools remains inline.

Various drivers accessing the mempool directly have been updated accordingly.
These drivers did not update mempool statistics when accessing the mempool
directly, so that is fixed too.

Note: Performance not yet benchmarked.

Signed-off-by: Morten Brørup
---
v5:
* Moved helper functions back into the header file, for improved performance.
* Pass large requests directly to the backend. This also simplifies the code.
v4:
* Updated subject to reflect that misleading names are considered bugs.
* Rewrote patch description to provide more details about the bugs fixed.
  (Mattias Rönnblom)
* Moved helper functions, not to be inlined, to mempool C file.
  (Mattias Rönnblom)
* Pass requests for n >= RTE_MEMPOOL_CACHE_MAX_SIZE objects known at build
  time directly to backend driver, to avoid calling the helper functions.
  This also fixes the compiler warnings about out of bounds array access.
v3:
* Removed __attribute__(assume).
v2:
* Removed mempool perf test; not part of patch set.
---
 drivers/common/idpf/idpf_common_rxtx_avx512.c |  54 +---
 drivers/mempool/dpaa/dpaa_mempool.c           |  16 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |  14 -
 drivers/net/i40e/i40e_rxtx_vec_avx512.c       |  17 +-
 drivers/net/iavf/iavf_rxtx_vec_avx512.c       |  27 +-
 drivers/net/ice/ice_rxtx_vec_avx512.c         |  27 +-
 lib/mempool/mempool_trace.h                   |   1 -
 lib/mempool/rte_mempool.c                     |  12 +-
 lib/mempool/rte_mempool.h                     | 284 +++---
 9 files changed, 223 insertions(+), 229 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx_avx512.c b/drivers/common/idpf/idpf_common_rxtx_avx512.c
index 3b5e124ec8..98535a48f3 100644
--- a/drivers/common/idpf/idpf_common_rxtx_avx512.c
+++ b/drivers/common/idpf/idpf_common_rxtx_avx512.c
@@ -1024,21 +1024,13 @@ idpf_tx_singleq_free_bufs_avx512(struct idpf_tx_queue *txq)
 				rte_lcore_id());
 		void **cache_objs;
 
-		if (cache == NULL || cache->len == 0)
-			goto normal;
-
-		cache_objs = &cache->objs[cache->len];
-
-		if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
-			rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
+		if (!cache || unlikely(n + cache->len > cache->size)) {
+			rte_mempool_generic_put(mp, (void *)txep, n, cache);
 			goto done;
 		}
 
-		/* The cache follows the following algorithm
-		 * 1. Add the objects to the cache
-		 * 2. Anything greater than the cache min value (if it crosses the
-		 *    cache flush threshold) is flushed to the ring.
-		 */
+		cache_objs = &cache->objs[cache->len];
+		/* Add elements back into the cache */
 		uint32_t copied = 0;
 		/* n is multiple of 32 */
@@ -1056,16 +1048,13 @@ idpf_tx_singleq_free_bufs_avx512(struct idpf_tx_queue *txq)
 		}
 		cache->len += n;
 
-		if (cache->len >= cache->flushthresh) {
-			rte_mempool_ops_enqueue_bulk(mp,
-					&cache->objs[cache->size],
-					cache->len - cache->size);
-			cache->len = cache->size;
-		}
+		/* Increment stat. */
+		RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
+		RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
+		goto done;
 	}
 
-normal:
 	m = rte_pktmbuf_prefree_seg(txep[0].mbuf);