Re: [PATCH v5 3/4] test: add test for packet dissector
On Thu, Aug 01, 2024 at 12:04:42PM -0700, Stephen Hemminger wrote:
> Some tests for new packet dissector.
>
> Signed-off-by: Stephen Hemminger
> ---
>  app/test/meson.build    |   1 +
>  app/test/test_dissect.c | 245
>  2 files changed, 246 insertions(+)
>  create mode 100644 app/test/test_dissect.c
>
> diff --git a/app/test/meson.build b/app/test/meson.build
> index e29258e6ec..9cd2051320 100644
> --- a/app/test/meson.build
> +++ b/app/test/meson.build
> @@ -62,6 +62,7 @@ source_file_deps = {
>          'test_debug.c': [],
>          'test_devargs.c': ['kvargs'],
>          'test_dispatcher.c': ['dispatcher'],
> +        'test_dissect.c': ['net'],
>          'test_distributor.c': ['distributor'],
>          'test_distributor_perf.c': ['distributor'],
>          'test_dmadev.c': ['dmadev', 'bus_vdev'],
> diff --git a/app/test/test_dissect.c b/app/test/test_dissect.c
> new file mode 100644
> index 00..2c79acf766
> --- /dev/null
> +++ b/app/test/test_dissect.c
> +static int
> +test_simple(void)
> +{
> +	struct rte_mbuf mb;
> +	uint8_t buf[RTE_MBUF_DEFAULT_BUF_SIZE];
> +	uint32_t data_len = PACKET_LEN;
> +	uint16_t src_port = rte_rand();
> +	const uint16_t dst_port = rte_cpu_to_be_16(9);	/* Discard port */
> +	char obuf[LINE_MAX];
> +
> +	/* make a dummy packet */
> +	mbuf_prep(&mb, buf, sizeof(buf));
> +	add_header(&mb, data_len, src_port, dst_port);
> +	fill_data(&mb, data_len - mb.data_off);
> +
> +	rte_dissect_mbuf(obuf, sizeof(obuf), &mb, 0);
> +
> +	return TEST_SUCCESS;
> +}

What are these test cases actually verifying - can any of them actually fail?
I don't see the output being checked anywhere.

/Bruce
RE: [PATCH 3/3] examples/l3fwd: fix maximum acceptable port ID in routes
Hi,

> From: Konstantin Ananyev
> Sent: Wednesday, July 24, 2024 1:32 PM
> To: Konstantin Ananyev; Gagandeep Singh; dev@dpdk.org; Konstantin Ananyev; Sean Morrissey
> Cc: sta...@dpdk.org
> Subject: RE: [PATCH 3/3] examples/l3fwd: fix maximum acceptable port ID in routes
>
> > > > > > > > Application is accepting routes for port ID up to UINT8_MAX
> > > > > > > > for LPM and EM routes on parsing the given rule file, but only
> > > > > > > > up to 32 ports can be enabled as per the variable
> > > > > > > > enabled_port_mask, which is defined as uint32_t.
> > > > > > > >
> > > > > > > > This patch restricts the rules parsing code to accept routes
> > > > > > > > for port ID up to 31 only, to avoid any unnecessary
> > > > > > > > maintenance of rules which will never be used.
> > > > > > >
> > > > > > > If we want to add this extra check, probably better to do it in setup_lpm().
> > > > > > > Where we already check that the port is enabled, and if not,
> > > > > > > then this route rule will be skipped:
> > > > > > >
> > > > > > > /* populate the LPM table */
> > > > > > > for (i = 0; i < route_num_v4; i++) {
> > > > > > >         struct in_addr in;
> > > > > > >
> > > > > > >         /* skip unused ports */
> > > > > > >         if ((1 << route_base_v4[i].if_out & enabled_port_mask) == 0)
> > > > > > >                 continue;
> > > > > > >
> > > > > > > Same for EM.
> > > > > >
> > > > > > I am trying to update the check for the MAX if_out value in the rules
> > > > > > config file parsing, which happens before setup_lpm().
> > > > > > The reason is: restricting and adding only those rules which can be
> > > > > > used by the application while populating route_base_v4/v6 in the
> > > > > > first step avoids unnecessary memory allocation for local variables
> > > > > > to store rules that are not required.
> > > > >
> > > > > Hmm... but why is it a problem?
> > > >
> > > > Not really a problem, just trying to optimize wherever it is possible.
> > > > > > > ((1 << route_base_v4[i].if_out & enabled_port_mask)
> > > > > >
> > > > > > By looking into this check, it seems the restriction to a maximum
> > > > > > port ID of 31 while parsing the rule file becomes more valid, as
> > > > > > this check can pass due to overflow in case the value of
> > > > > > route_base_v4[i].if_out is 31+.
> > > > >
> > > > > Agree, I think we need both, and it probably needs to be in setup_lpm().
> > > > > Something like:
> > > > >
> > > > > if (route_base_v4[i].if_out >= sizeof(enabled_port_mask) * CHAR_BIT ||
> > > > >     ((1 << route_base_v4[i].if_out) & enabled_port_mask) == 0) {
> > > > >         /* print some error message here */
> > > > >         rte_exit(...); /* or return an error */
> > > > > }
> > > >
> > > > Yes, I can change it to this.
> > >
> > > I re-checked the code; IMO we should restrict the rules in "read_config_files".
> > > Maybe we can move this check to read_config_files.
> > > Having this check in the setup can result in an rte_exit() call when
> > > no user rule file is given and the application is using the default
> > > rules. In that case route_base_v4 will have 16 rules for 16 ports (default rules).
> > > So this check will always fail unless the user enables all 16 ports with the -p option.
> >
> > Ah yes, you are right.
> > That's why probably right now we just do 'continue;' here...
> > Yeah, probably the easiest way is to put this check before setup_lpm()
> > - in the parsing code, or straight after that.
> > Can I ask you for one more thing: can we add a new function that would
> > do this check and use it everywhere (lpm/em/acl)?
>
> As an alternative thought - we might add to setup_lpm() an extra parameter to
> indicate what we want to do on a rule with an invalid/disabled port - just skip
> it or fail.
> Another alternative - remove the default route ability altogether, though that one
> is a change in behavior and probably there would be some complaints.

Sorry for the late reply; the first option looks OK to me.
I can add a user-given option in the setup functions to decide whether to skip or fail.
In v2, I will also try to create a common function for this check for all the setup functions.

> > > > > Another question here - why do we just silently skip the rule with an invalid port?
> > > >
> > > > In read_config_files_lpm() we are calling rte_exit in case the port ID is 31+.
> > > > In setup_lpm, we skip the rules for the ports which are not enabled
> > > > without giving an error, I guess probably because of ease of use.
> > > > e.g. the user has only a single ipv4_routes config file with route
> > > > rules for port IDs 0,1,2,3,4 and wants to use the same file for
> > > > multiple test cases like: 1. when only port 0 is enabled; 2. when
> > > > only ports 0 and 1 are enabled; and so on.
Re: [PATCH v8 5/5] dts: add API doc generation
On 1. 8. 2024 17:07, Thomas Monjalon wrote:
01/08/2024 15:03, Juraj Linkeš:
On 30. 7. 2024 15:51, Thomas Monjalon wrote:
12/07/2024 10:57, Juraj Linkeš:

The tool used to generate DTS API docs is Sphinx, which is already in use in DPDK. The same configuration is used to preserve style, with one DTS-specific configuration (so that the DPDK docs are unchanged) that modifies how the sidebar displays the content.

What is changed in the sidebar?

These are the two changes:

html_theme_options = {
    'collapse_navigation': False,
    'navigation_depth': -1,
}

The first allows you to explore the structure without needing to enter any specific section - it puts the + at each section so everything is expandable. The second just means that each section can be fully expanded (there's no limit).

OK interesting, you may add a comment # unlimited depth

Ack.

+# A local reference must be relative to the main index.html page
+# The path below can't be taken from the DTS meson file as that would
+# require recursive subdir traversal (doc, dts, then doc again)

This comment is really obscure.

I guess it is. I just wanted to explain that there's no way to do this without spelling out the path this way. At least I didn't find a way. Should I remove the comment or reword it?

It may be removed, I think.

Ok, I'll remove it.

+cdata.set('DTS_API_MAIN_PAGE', join_paths('..', 'dts', 'html', 'index.html'))

Oh I think I get it:
- DTS_API_MAIN_PAGE is the Meson variable
- dts_api_main_page is the Doxygen variable

Yes, this is a way to make it work. Maybe there's something else (I'm not that familiar with Doxygen), but from what I can tell, there wasn't a command line option that would set a variable (passing the path from Meson to Doxygen), and nothing else I found worked. Is this solution ok? If we want to explore something else, is there someone with more experience with Doxygen who could help?

Yes, it's OK like that.

Ack.
+dts_root = environ.get('DTS_ROOT')

Why does it need to be passed as an environment variable? Isn't it a fixed absolute path?

The path to DTS needs to be passed in some way (and added to sys.path) so that Sphinx knows where the sources are in order to import them. Do you want us to not pass the path, but just hardcode it here? I didn't really think about that; maybe that could work.

I think hardcoding is better here.

I tried implementing this, but I ran into an issue with this:

dts_root = environ.get('DTS_ROOT')
if dts_root:
    path.append(dts_root)
    # DTS Sidebar config
    html_theme_options = {
        'collapse_navigation': False,
        'navigation_depth': -1,
    }

The sidebar configuration is conditional, so we have to pass something to indicate a DTS build. I'll change it so that we look for 'dts' in src in call-sphinx-build.py (we're in the dts doc directory, indicating a DTS build) and set the DTS_BUILD env var, which we can use in conf.py. I didn't find a better way to do this, as conf.py doesn't have any information about the build itself (and no path that conf.py has access to points to anything dts). Here's how it'll look:

if environ.get('DTS_BUILD'):
    path.append(path_join(dirname(dirname(dirname(__file__))), 'dts'))
    # DTS Sidebar config.
    html_theme_options = {
        'collapse_navigation': False,
        'navigation_depth': -1,  # unlimited depth
    }

+To build DTS API docs, install the dependencies with Poetry, then enter its shell:

I don't plan to use Poetry on my machine. Can we simply describe the dependencies even if the versions are not specified?

The reason we don't list the dependencies anywhere is that doing it with Poetry is much easier (and a bit safer, as Poetry is going to install tested versions). But I can add references to the two relevant sections of dts/pyproject.toml which contain the dependencies, with a note that they can be installed with pip (and I guess that would be another dependency), but at that point it's not much different from using Poetry.
I want to use my system package manager. I am from this old school of thinking that we should have a single package manager in a system.

I understand and would also prefer that, but it just doesn't work for Python. Not all packages are available from the package managers, and Python projects should not use system packages, as there are frequently version mismatches between the system packages and what the project needs (the APIs could be different, as well as behavior; a problem we've seen with Scapy). Poetry is one of the tools that tries to solve this well-known Python limitation.

I've done a quick search of what's available in Ubuntu, and two packages aren't available: types-PyYAML (which maybe we could do without, I'll have to test) and aenum (which is currently needed for the capabilities patch; if absolutely necessary, maybe I could find a solution without aenum). But even with this we can't be sure that the system packag
RE: [PATCH v5] virtio: optimize stats counters performance
> From: lihuisong (C) [mailto:lihuis...@huawei.com]
> Sent: Friday, 2 August 2024 04.23
>
> On 2024/8/2 0:03, Morten Brørup wrote:
> >  void
> > -virtio_update_packet_stats(struct virtnet_stats *stats, struct rte_mbuf *mbuf)
> > +virtio_update_packet_stats(struct virtnet_stats *const stats,
> > +		const struct rte_mbuf *const mbuf)
>
> Are the two consts also for performance? Is there a gain?

The "const struct rte_mbuf *mbuf" informs the optimizer that the function does not modify the mbuf. This is for the benefit of callers of the function; they can rely on the mbuf being unchanged when the function returns. So, if the optimizer has cached some mbuf field before calling the function, it does not need to read the mbuf field again, but can continue using the cached value after the function call.

The two "type *const ptr" probably make no performance difference with the compilers and function call conventions used by the CPU architectures supported by DPDK.

> > +	RTE_BUILD_BUG_ON(offsetof(struct virtnet_stats, broadcast) !=
> > +			offsetof(struct virtnet_stats, multicast) + sizeof(uint64_t));
> > +	if (unlikely(rte_is_multicast_ether_addr(ea)))
> > +		(&stats->multicast)[rte_is_broadcast_ether_addr(ea)]++;
>
> The "rte_is_broadcast_ether_addr(ea)" will be calculated twice if the
> packet is multicast.
> How about coding it like:
> -->
> is_multicast = rte_is_multicast_ether_addr(ea);
> if (unlikely(is_multicast))
> 	(&stats->multicast)[rte_is_broadcast_ether_addr(ea)]++;

I don't think "rte_is_broadcast_ether_addr(ea)" is calculated twice for multicast packets. My code essentially does this:

if (mc(ea))
	stats[bc(ea)]++;

Changing to this shouldn't make a difference:

m = mc(ea);
if (m)
	stats[bc(ea)]++;
[PATCH v2 0/7] record and rework component dependencies
As part of the meson build, we can record the dependencies for each component as we process it, logging them to a file. This file can be used as input to a number of other scripts and tools, for example, to graph the dependencies, or to allow higher-level build-config tools to automatically enable component requirements, etc. The first patch of this set generates the basic dependency tree. The second patch does some processing of that dependency tree to identify cases where dependencies are being unnecessarily specified. Reducing these makes it easier to have readable dependency graphs in future, without affecting the build. The following 4 patches are based on the output of the second patch, and greatly cut down the number of direct dependency links between components. Even with the cut-down dependencies, the full dependency graph is nigh-unreadable, so the final patch adds a new script to generate dependency tree subgraphs, creating dot files for e.g. the dependencies of a particular component, or a component class such as mempool drivers. 
Bruce Richardson (7):
  build: output a dependency log in build directory
  devtools: add script to flag unneeded dependencies
  build: remove kvargs from driver class dependencies
  build: reduce library dependencies
  build: reduce driver dependencies
  build: reduce app dependencies
  devtools: add script to generate DPDK dependency graphs

 app/dumpcap/meson.build | 2 +-
 app/graph/meson.build | 2 +-
 app/meson.build | 1 +
 app/pdump/meson.build | 2 +-
 app/proc-info/meson.build | 2 +-
 app/test-crypto-perf/meson.build | 2 +-
 app/test-fib/meson.build | 2 +-
 app/test-sad/meson.build | 2 +-
 app/test/meson.build | 2 +-
 buildtools/log-deps.py | 43 +++
 buildtools/meson.build | 2 +
 devtools/draw-dependency-graphs.py | 136 +
 devtools/find-duplicate-deps.py | 53
 drivers/baseband/fpga_5gnr_fec/meson.build | 2 +-
 drivers/baseband/fpga_lte_fec/meson.build | 2 +-
 drivers/baseband/la12xx/meson.build | 2 +-
 drivers/baseband/null/meson.build | 2 +-
 drivers/baseband/turbo_sw/meson.build | 2 +-
 drivers/bus/auxiliary/meson.build | 2 -
 drivers/bus/dpaa/meson.build | 2 +-
 drivers/bus/fslmc/meson.build | 2 +-
 drivers/bus/ifpga/meson.build | 2 +-
 drivers/bus/pci/meson.build | 4 +-
 drivers/bus/platform/meson.build | 1 -
 drivers/bus/uacce/meson.build | 2 -
 drivers/bus/vdev/meson.build | 2 -
 drivers/common/cnxk/meson.build | 4 +-
 drivers/common/cpt/meson.build | 2 +-
 drivers/common/idpf/meson.build | 2 +-
 drivers/common/mlx5/meson.build | 2 +-
 drivers/compress/mlx5/meson.build | 2 +-
 drivers/compress/nitrox/meson.build | 2 +-
 drivers/compress/octeontx/meson.build | 2 +-
 drivers/crypto/bcmfs/meson.build | 2 +-
 drivers/crypto/cnxk/meson.build | 2 +-
 drivers/crypto/dpaa_sec/meson.build | 2 +-
 drivers/crypto/ipsec_mb/meson.build | 2 +-
 drivers/crypto/mlx5/meson.build | 2 +-
 drivers/crypto/nitrox/meson.build | 2 +-
 drivers/dma/cnxk/meson.build | 2 +-
 drivers/dma/dpaa/meson.build | 2 +-
 drivers/dma/dpaa2/meson.build | 2 +-
 drivers/dma/odm/meson.build | 2 +-
 drivers/dma/skeleton/meson.build | 2 +-
 drivers/event/cnxk/meson.build | 2 +-
 drivers/event/dlb2/meson.build | 2 +-
 drivers/event/dpaa2/meson.build | 2 +-
 drivers/event/meson.build | 2 +-
 drivers/event/octeontx/meson.build | 3 +-
 drivers/event/sw/meson.build | 2 +-
 drivers/mempool/cnxk/meson.build | 2 +-
 drivers/mempool/dpaa/meson.build | 2 +-
 drivers/mempool/dpaa2/meson.build | 2 +-
 drivers/mempool/octeontx/meson.build | 2 +-
 drivers/meson.build | 1 +
 drivers/net/cnxk/meson.build | 3 +-
 drivers/net/iavf/meson.build | 2 +-
 drivers/net/ice/meson.build | 2 +-
 drivers/net/mana/meson.build | 2 +-
 drivers/net/meson.build | 2 +-
 drivers/net/mlx5/meson.build | 2 +-
 drivers/net/sfc/meson.build | 2 +-
 drivers/net/softnic/meson.build | 2 +-
 drivers/raw/cnxk_bphy/meson.build | 2 +-
 drivers/raw/cnxk_gpio/meson.build | 2 +-
 drivers/raw/ntb/meson.build | 2 +-
 drivers/raw/skeleton/meson.build | 2 +-
 drivers/regex/mes
[PATCH v2 1/7] build: output a dependency log in build directory
As meson processes our DPDK source tree it manages dependencies specified by each individual driver. To enable future analysis of the dependency links between components, log the dependencies of each DPDK component as it gets processed. This could potentially allow other tools to automatically enable or disable components based on the desired end components to be built, e.g. if the user requests net/ice, ensure that common/iavf is also enabled in the drivers.

The output file produced is in "dot" or "graphviz" format, which allows producing a graphical representation of the DPDK dependency tree if so desired. For example: "dot -Tpng -O build/deps.dot" to produce the image file "build/deps.dot.png".

Signed-off-by: Bruce Richardson
Acked-by: Konstantin Ananyev
---
 app/meson.build | 1 +
 buildtools/log-deps.py | 43 ++
 buildtools/meson.build | 2 ++
 drivers/meson.build | 1 +
 lib/meson.build | 1 +
 5 files changed, 48 insertions(+)
 create mode 100644 buildtools/log-deps.py

diff --git a/app/meson.build b/app/meson.build
index 5b2c80c7a1..837a57ad0a 100644
--- a/app/meson.build
+++ b/app/meson.build
@@ -76,6 +76,7 @@ foreach app:apps
     if build
         subdir(name)
+        run_command([log_deps_cmd, 'dpdk-' + name, deps], check: false)
     if not build and require_apps
         error('Cannot build explicitly requested app "@0@".\n'.format(name)
             + '\tReason: ' + reason)
diff --git a/buildtools/log-deps.py b/buildtools/log-deps.py
new file mode 100644
index 00..a4331fa15b
--- /dev/null
+++ b/buildtools/log-deps.py
@@ -0,0 +1,43 @@
+#! /usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2024 Intel Corporation
+
+"""Utility script to build up a list of dependencies from meson."""
+
+import os
+import sys
+
+
+def file_to_list(filename):
+    """Read file into a list of strings."""
+    with open(filename) as f:
+        return f.readlines()
+
+
+def list_to_file(filename, lines):
+    """Write a list of strings out to a file."""
+    with open(filename, 'w') as f:
+        f.writelines(lines)
+
+
+depsfile = f'{os.environ["MESON_BUILD_ROOT"]}/deps.dot'
+
+# to reset the deps file on each build, the script is called without any params
+if len(sys.argv) == 1:
+    os.remove(depsfile)
+    sys.exit(0)
+
+try:
+    contents = file_to_list(depsfile)
+except FileNotFoundError:
+    contents = ['digraph {\n', '}\n']
+
+component = sys.argv[1]
+if len(sys.argv) > 2:
+    contents[-1] = f'"{component}" -> {{ "{"\", \"".join(sys.argv[2:])}" }}\n'
+else:
+    contents[-1] = f'"{component}"\n'
+
+contents.append('}\n')
+
+list_to_file(depsfile, contents)
diff --git a/buildtools/meson.build b/buildtools/meson.build
index 3adf34e1a8..e2ba9d0ad4 100644
--- a/buildtools/meson.build
+++ b/buildtools/meson.build
@@ -24,6 +24,8 @@ get_numa_count_cmd = py3 + files('get-numa-count.py')
 get_test_suites_cmd = py3 + files('get-test-suites.py')
 has_hugepages_cmd = py3 + files('has-hugepages.py')
 cmdline_gen_cmd = py3 + files('dpdk-cmdline-gen.py')
+log_deps_cmd = py3 + files('log-deps.py')
+run_command(log_deps_cmd, check: false) # call with no parameters to reset the file

 # install any build tools that end-users might want also
 install_data([
diff --git a/drivers/meson.build b/drivers/meson.build
index 66931d4241..d56308f1c1 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -154,6 +154,7 @@ foreach subpath:subdirs
     if build
         # pull in driver directory which should update all the local variables
         subdir(drv_path)
+        run_command([log_deps_cmd, class + '_' + name, deps], check: false)
         if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0 and require_iova_in_mbuf
             build = false
diff --git a/lib/meson.build b/lib/meson.build
index 162287753f..106c2a947c 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -160,6 +160,7 @@ foreach l:libraries
     if build
         subdir(l)
+        run_command([log_deps_cmd, l, deps], check: false)
     if not build and require_libs
         error('Cannot build explicitly requested lib "@0@".\n'.format(name)
             + '\tReason: ' + reason)
--
2.43.0
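For illustration, a deps.dot produced by such a build might contain entries like the following. The component names and edges here are hypothetical examples of the script's output format, not actual captured output: each leaf component gets a bare node line, and each component with dependencies gets one `->` edge line.

```dot
digraph {
"kvargs"
"telemetry"
"eal" -> { "kvargs", "telemetry" }
"ring" -> { "eal" }
"mempool" -> { "ring" }
}
```

Rendering it is then a one-liner, as the commit message notes: `dot -Tpng -O build/deps.dot`.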
[PATCH v2 2/7] devtools: add script to flag unneeded dependencies
While not a serious problem, DPDK components often list more dependencies than are actually necessary to build, due to the use of recursive dependencies. In extreme cases, such as with core libraries, this can lead to longer configuration times due to meson having to deduplicate long lists of dependencies. Therefore we can add a script to identify when a component has got unnecessary dependencies listed.

Signed-off-by: Bruce Richardson
---
 devtools/find-duplicate-deps.py | 53 +
 1 file changed, 53 insertions(+)
 create mode 100755 devtools/find-duplicate-deps.py

diff --git a/devtools/find-duplicate-deps.py b/devtools/find-duplicate-deps.py
new file mode 100755
index 00..b1eacf21ce
--- /dev/null
+++ b/devtools/find-duplicate-deps.py
@@ -0,0 +1,53 @@
+#! /usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2024 Intel Corporation
+
+"""Identify any superfluous dependencies listed in DPDK deps graph."""
+
+import sys
+
+all_deps = {}
+
+
+class dep:
+    """Holds a component and its dependencies."""
+
+    def __init__(self, name, dep_names):
+        """Create and process a component and its deps."""
+        self.name = name.strip('" ')
+        self.base_deps = [all_deps[dn.strip('" ')] for dn in dep_names]
+        self.recursive_deps = []
+        for d in self.base_deps:
+            self.recursive_deps.extend(d.base_deps)
+            self.recursive_deps.extend(d.recursive_deps)
+        self.extra_deps = []
+        for d in self.base_deps:
+            if d in self.recursive_deps:
+                self.extra_deps.append(d.name)
+        if self.extra_deps:
+            print(f'{self.name}: extra deps {self.extra_deps}')
+
+    def dict_add(self, d):
+        """Add this object to a dictionary by name."""
+        d[self.name] = self
+
+
+def main(argv):
+    """Read the dependency tree from a dot file and process it."""
+    if len(argv) != 2:
+        print(f'Usage: {argv[0]} <builddir>/deps.dot', file=sys.stderr)
+        sys.exit(1)
+
+    with open(argv[1]) as f:
+        for ln in f.readlines():
+            ln = ln.strip()
+            if '->' in ln:
+                name, deps = ln.split('->')
+                deps = deps.strip(' {}')
+                dep(name, deps.split(',')).dict_add(all_deps)
+            elif ln.startswith('"') and ln.endswith('"'):
+                dep(ln.strip('"'), []).dict_add(all_deps)
+
+
+if __name__ == '__main__':
+    main(sys.argv)
--
2.43.0
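The redundancy rule the script implements - a direct dependency is "extra" if it is also reachable transitively through another direct dependency - can be seen on a toy graph. The component names below are illustrative, not real DPDK data, and this is a simplified restatement of the script's logic, not the script itself:

```python
# Hedged sketch of the rule used by find-duplicate-deps.py: a direct
# dep is redundant if another direct dep already pulls it in
# transitively. Component names are invented examples.

def transitive(graph, node):
    """All components reachable from node, excluding node itself."""
    seen = set()
    stack = list(graph[node])
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph[n])
    return seen

def extra_deps(graph, node):
    """Direct deps of node that are already pulled in transitively."""
    return sorted(d for d in graph[node]
                  if any(d in transitive(graph, other)
                         for other in graph[node] if other != d))

graph = {
    'eal': [],
    'ring': ['eal'],
    'mempool': ['ring', 'eal'],   # 'eal' is redundant: ring -> eal
}

print(extra_deps(graph, 'mempool'))  # ['eal']
```

This is exactly the kind of finding that patches 4-6 of the series act on: 'mempool' can drop its explicit 'eal' dependency without changing the build, because meson's recursive dependency handling supplies it via 'ring'.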
[PATCH v2 3/7] build: remove kvargs from driver class dependencies
The kvargs library is used by EAL, and therefore is implicitly a dependency of every DPDK driver. Remove it from the minimum set of dependencies for each driver class, as it's unnecessary to call it out there.

Signed-off-by: Bruce Richardson
---
 drivers/event/meson.build | 2 +-
 drivers/net/meson.build | 2 +-
 drivers/regex/meson.build | 2 +-
 drivers/vdpa/meson.build | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/event/meson.build b/drivers/event/meson.build
index d6706b57f7..2708833adf 100644
--- a/drivers/event/meson.build
+++ b/drivers/event/meson.build
@@ -19,4 +19,4 @@ if not (toolchain == 'gcc' and cc.version().version_compare('<4.8.6') and
         dpdk_conf.has('RTE_ARCH_ARM64'))
     drivers += 'octeontx'
 endif
-std_deps = ['eventdev', 'kvargs']
+std_deps = ['eventdev']
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index fb6d34b782..ebd12364df 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -63,6 +63,6 @@ drivers = [
         'virtio',
         'vmxnet3',
 ]
-std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc
+std_deps = ['ethdev'] # 'ethdev' also pulls in mbuf, net, eal etc
 std_deps += ['bus_pci'] # very many PMDs depend on PCI, so make std
 std_deps += ['bus_vdev'] # same with vdev bus
diff --git a/drivers/regex/meson.build b/drivers/regex/meson.build
index ff2a8fea89..10192e7c77 100644
--- a/drivers/regex/meson.build
+++ b/drivers/regex/meson.build
@@ -5,4 +5,4 @@ drivers = [
         'mlx5',
         'cn9k',
 ]
-std_deps = ['ethdev', 'kvargs', 'regexdev'] # 'ethdev' also pulls in mbuf, net, eal etc
+std_deps = ['ethdev', 'regexdev'] # 'ethdev' also pulls in mbuf, net, eal etc
diff --git a/drivers/vdpa/meson.build b/drivers/vdpa/meson.build
index 896e8e0304..e01c277b9e 100644
--- a/drivers/vdpa/meson.build
+++ b/drivers/vdpa/meson.build
@@ -11,5 +11,5 @@ drivers = [
         'nfp',
         'sfc',
 ]
-std_deps = ['bus_pci', 'kvargs']
+std_deps = ['bus_pci']
 std_deps += ['vhost']
--
2.43.0
[PATCH v2 4/7] build: reduce library dependencies
Rather than having each library depend upon EAL + any extra libs, we can take advantage of recursive dependency support in meson and just assign the dependencies of each directory directly, rather than appending to the array. For libraries which only depend upon EAL, keep that as a default, but for libraries which depend upon even a single extra lib, that EAL dependency is unnecessary.

Going further, we can identify, using the find-duplicate-deps.py script, any unnecessary deps in each library's list, and remove them to slim the dependency tree down. Reducing the number of dependencies means that meson takes less time processing and deduplicating the dependency tree for each component, and also shrinks the dependency graph for DPDK itself.

Signed-off-by: Bruce Richardson
---
 lib/argparse/meson.build | 2 +-
 lib/bbdev/meson.build | 2 +-
 lib/bitratestats/meson.build | 2 +-
 lib/bpf/meson.build | 2 +-
 lib/cmdline/meson.build | 2 +-
 lib/compressdev/meson.build | 2 +-
 lib/cryptodev/meson.build | 2 +-
 lib/dispatcher/meson.build | 2 +-
 lib/distributor/meson.build | 2 +-
 lib/dmadev/meson.build | 2 --
 lib/eal/meson.build | 5 +
 lib/efd/meson.build | 2 +-
 lib/ethdev/meson.build | 2 +-
 lib/eventdev/meson.build | 3 +--
 lib/fib/meson.build | 2 +-
 lib/gpudev/meson.build | 2 +-
 lib/graph/meson.build | 2 +-
 lib/gro/meson.build | 2 +-
 lib/gso/meson.build | 2 +-
 lib/hash/meson.build | 4 +---
 lib/ip_frag/meson.build | 2 +-
 lib/ipsec/meson.build | 2 +-
 lib/kvargs/meson.build | 2 +-
 lib/latencystats/meson.build | 2 +-
 lib/lpm/meson.build | 3 +--
 lib/mbuf/meson.build | 2 +-
 lib/member/meson.build | 2 +-
 lib/mempool/meson.build | 2 +-
 lib/metrics/meson.build | 2 +-
 lib/mldev/meson.build | 2 +-
 lib/net/meson.build | 2 +-
 lib/node/meson.build | 2 +-
 lib/pcapng/meson.build | 2 +-
 lib/pdcp/meson.build | 2 +-
 lib/pdump/meson.build | 2 +-
 lib/pipeline/meson.build | 2 +-
 lib/port/meson.build | 2 +-
 lib/power/meson.build | 2 +-
 lib/rawdev/meson.build | 2 --
 lib/rcu/meson.build | 2 +-
 lib/regexdev/meson.build | 2 +-
 lib/reorder/meson.build | 2 +-
 lib/rib/meson.build | 2 +-
 lib/ring/meson.build | 1 -
 lib/sched/meson.build | 2 +-
 lib/security/meson.build | 2 +-
 lib/table/meson.build | 2 +-
 lib/telemetry/meson.build | 2 +-
 lib/vhost/meson.build | 2 +-
 49 files changed, 46 insertions(+), 58 deletions(-)

diff --git a/lib/argparse/meson.build b/lib/argparse/meson.build
index 8ab4c408ee..5d602c1f2a 100644
--- a/lib/argparse/meson.build
+++ b/lib/argparse/meson.build
@@ -10,4 +10,4 @@ endif
 sources = files('rte_argparse.c')
 headers = files('rte_argparse.h')
-deps += ['log']
+deps = ['log']
diff --git a/lib/bbdev/meson.build b/lib/bbdev/meson.build
index 07685e7578..2e68aa7873 100644
--- a/lib/bbdev/meson.build
+++ b/lib/bbdev/meson.build
@@ -11,4 +11,4 @@
 sources = files('rte_bbdev.c')
 headers = files('rte_bbdev.h',
         'rte_bbdev_pmd.h',
         'rte_bbdev_op.h')
-deps += ['mbuf']
+deps = ['mbuf']
diff --git a/lib/bitratestats/meson.build b/lib/bitratestats/meson.build
index ede7e0a579..8defcd53bf 100644
--- a/lib/bitratestats/meson.build
+++ b/lib/bitratestats/meson.build
@@ -3,4 +3,4 @@
 sources = files('rte_bitrate.c')
 headers = files('rte_bitrate.h')
-deps += ['ethdev', 'metrics']
+deps = ['metrics']
diff --git a/lib/bpf/meson.build b/lib/bpf/meson.build
index aa258a9061..82127bc657 100644
--- a/lib/bpf/meson.build
+++ b/lib/bpf/meson.build
@@ -31,7 +31,7 @@
 headers = files('bpf_def.h',
         'rte_bpf.h',
         'rte_bpf_ethdev.h')
-deps += ['mbuf', 'net', 'ethdev']
+deps = ['ethdev']
 dep = dependency('libelf', required: false, method: 'pkg-config')
 if dep.found()
diff --git a/lib/cmdline/meson.build b/lib/cmdline/meson.build
index 63fb69100d..4451f3da29 100644
--- a/lib/cmdline/meson.build
+++ b/lib/cmdline/meson.build
@@ -31,4 +31,4 @@ else
     sources += files('cmdline_os_unix.c')
 endif
-deps += ['net']
+deps = ['net']
diff --git a/lib/compressdev/meson.build b/lib/compressdev/meson.build
index c80295dc0d..4b86955baf 100644
--- a/lib/compressdev/meson.build
+++ b/lib/compressdev/meson.build
@@ -16,4 +16,4 @@ driver_sdk_headers = files(
         'rte_compressdev_pmd.h',
         'rte_compressdev_internal.h',
 )
-deps += ['kvargs', 'mbuf']
+deps = ['mbuf']
diff --git a/lib/cryptodev/meson.build b/lib/cryptodev/meson.build
index 4734acf321..74e42ac700 100644
--- a/lib/cryptodev/meson.build
+++ b/lib/cryptodev/meson.build
@@ -20,4 +20,4 @@ driver_sdk_headers += files(
         'cryptodev_pmd.h',
 )
-deps += ['kvargs', 'mbuf', 'rcu', 'telemetry']
+deps = ['mbuf', 'rcu']
diff --git a/lib/dispatcher/meson.build b/lib/dispatcher/meson.build
index ffaef26a6d..4dc1759951
[PATCH v2 5/7] build: reduce driver dependencies
Remove any unnecessary dependencies from the driver dependency lists. This will give each driver a near-minimum set of required dependencies.

Signed-off-by: Bruce Richardson
---
 drivers/baseband/fpga_5gnr_fec/meson.build | 2 +-
 drivers/baseband/fpga_lte_fec/meson.build | 2 +-
 drivers/baseband/la12xx/meson.build | 2 +-
 drivers/baseband/null/meson.build | 2 +-
 drivers/baseband/turbo_sw/meson.build | 2 +-
 drivers/bus/auxiliary/meson.build | 2 -
 drivers/bus/dpaa/meson.build | 2 +-
 drivers/bus/fslmc/meson.build | 2 +-
 drivers/bus/ifpga/meson.build | 2 +-
 drivers/bus/pci/meson.build | 4 +-
 drivers/bus/platform/meson.build | 1 -
 drivers/bus/uacce/meson.build | 2 -
 drivers/bus/vdev/meson.build | 2 -
 drivers/common/cnxk/meson.build | 4 +-
 drivers/common/cpt/meson.build | 2 +-
 drivers/common/idpf/meson.build | 2 +-
 drivers/common/mlx5/meson.build | 2 +-
 drivers/compress/mlx5/meson.build | 2 +-
 drivers/compress/nitrox/meson.build | 2 +-
 drivers/compress/octeontx/meson.build | 2 +-
 drivers/crypto/bcmfs/meson.build | 2 +-
 drivers/crypto/cnxk/meson.build | 2 +-
 drivers/crypto/dpaa_sec/meson.build | 2 +-
 drivers/crypto/ipsec_mb/meson.build | 2 +-
 drivers/crypto/mlx5/meson.build | 2 +-
 drivers/crypto/nitrox/meson.build | 2 +-
 drivers/dma/cnxk/meson.build | 2 +-
 drivers/dma/dpaa/meson.build | 2 +-
 drivers/dma/dpaa2/meson.build | 2 +-
 drivers/dma/odm/meson.build | 2 +-
 drivers/dma/skeleton/meson.build | 2 +-
 drivers/event/cnxk/meson.build | 2 +-
 drivers/event/dlb2/meson.build | 2 +-
 drivers/event/dpaa2/meson.build | 2 +-
 drivers/event/octeontx/meson.build | 3 +-
 drivers/event/sw/meson.build | 2 +-
 drivers/mempool/cnxk/meson.build | 2 +-
 drivers/mempool/dpaa/meson.build | 2 +-
 drivers/mempool/dpaa2/meson.build | 2 +-
 drivers/mempool/octeontx/meson.build | 2 +-
 drivers/net/cnxk/meson.build | 3 +-
 drivers/net/iavf/meson.build | 2 +-
 drivers/net/ice/meson.build | 2 +-
 drivers/net/mana/meson.build | 2 +-
 drivers/net/mlx5/meson.build | 2 +-
 drivers/net/sfc/meson.build | 2 +-
 drivers/net/softnic/meson.build | 2 +-
 drivers/raw/cnxk_bphy/meson.build | 2 +-
 drivers/raw/cnxk_gpio/meson.build | 2 +-
 drivers/raw/ntb/meson.build | 2 +-
 drivers/raw/skeleton/meson.build | 2 +-
 drivers/regex/mlx5/meson.build | 2 +-
 drivers/vdpa/ifc/meson.build | 2 +-
 drivers/vdpa/meson.build | 3 +-
 drivers/vdpa/mlx5/meson.build | 2 +-
 drivers/vdpa/sfc/meson.build | 2 +-
 56 files changed, 53 insertions(+), 65 deletions(-)

diff --git a/drivers/baseband/fpga_5gnr_fec/meson.build b/drivers/baseband/fpga_5gnr_fec/meson.build
index c3678d23eb..31b9e92fbb 100644
--- a/drivers/baseband/fpga_5gnr_fec/meson.build
+++ b/drivers/baseband/fpga_5gnr_fec/meson.build
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2020 Intel Corporation
-deps += ['bus_vdev', 'ring', 'pci', 'bus_pci']
+deps += ['bus_vdev', 'bus_pci']
 sources = files('rte_fpga_5gnr_fec.c')
diff --git a/drivers/baseband/fpga_lte_fec/meson.build b/drivers/baseband/fpga_lte_fec/meson.build
index 14e07826ef..fbf24755db 100644
--- a/drivers/baseband/fpga_lte_fec/meson.build
+++ b/drivers/baseband/fpga_lte_fec/meson.build
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2019 Intel Corporation
-deps += ['bus_vdev', 'ring', 'pci', 'bus_pci']
+deps += ['bus_vdev', 'bus_pci']
 sources = files('fpga_lte_fec.c')
diff --git a/drivers/baseband/la12xx/meson.build b/drivers/baseband/la12xx/meson.build
index 7b7e41c961..e1dbdd0fa7 100644
--- a/drivers/baseband/la12xx/meson.build
+++ b/drivers/baseband/la12xx/meson.build
@@ -1,6 +1,6 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright 2020-2021 NXP
-deps += ['bus_vdev', 'ring']
+deps += ['bus_vdev']
 sources = files('bbdev_la12xx.c')
diff --git a/drivers/baseband/null/meson.build b/drivers/baseband/null/meson.build
index 22863f0bd8..716d6c6fdb 100644
--- a/drivers/baseband/null/meson.build
+++ b/drivers/baseband/null/meson.build
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Luca Boccassi
-deps += ['bus_vdev', 'ring']
+deps += ['bus_vdev']
 sources = files('bbdev_null.c')
diff --git a/drivers/baseband/turbo_sw/meson.build b/drivers/baseband/turbo_sw/meson.build
index a9035a753e..ac18e8a9b6 100644
--- a/drivers/baseband/turbo_sw/meson.build
+++ b/drivers/baseband/turbo_sw/meson.build
@@ -26,5 +26,5 @@ i
[PATCH v2 6/7] build: reduce app dependencies
Remove any unnecessary dependencies from the app dependency lists. This will give each app a near-minimum set of required dependencies. Signed-off-by: Bruce Richardson --- app/dumpcap/meson.build | 2 +- app/graph/meson.build| 2 +- app/pdump/meson.build| 2 +- app/proc-info/meson.build| 2 +- app/test-crypto-perf/meson.build | 2 +- app/test-fib/meson.build | 2 +- app/test-sad/meson.build | 2 +- app/test/meson.build | 2 +- 8 files changed, 8 insertions(+), 8 deletions(-) diff --git a/app/dumpcap/meson.build b/app/dumpcap/meson.build index 69c016c780..6204cf051a 100644 --- a/app/dumpcap/meson.build +++ b/app/dumpcap/meson.build @@ -14,4 +14,4 @@ endif ext_deps += pcap_dep sources = files('main.c') -deps += ['ethdev', 'pdump', 'pcapng', 'bpf'] +deps += ['pdump'] diff --git a/app/graph/meson.build b/app/graph/meson.build index 6dc54d5ee6..9c4cd080d9 100644 --- a/app/graph/meson.build +++ b/app/graph/meson.build @@ -9,7 +9,7 @@ if not build subdir_done() endif -deps += ['graph', 'eal', 'lpm', 'ethdev', 'node', 'cmdline'] +deps += ['node', 'cmdline'] sources = files( 'cli.c', 'conn.c', diff --git a/app/pdump/meson.build b/app/pdump/meson.build index fb282bba1f..a10f9d6124 100644 --- a/app/pdump/meson.build +++ b/app/pdump/meson.build @@ -8,4 +8,4 @@ if is_windows endif sources = files('main.c') -deps += ['ethdev', 'kvargs', 'pdump'] +deps += ['pdump'] diff --git a/app/proc-info/meson.build b/app/proc-info/meson.build index 4f83f29a64..b309109291 100644 --- a/app/proc-info/meson.build +++ b/app/proc-info/meson.build @@ -8,7 +8,7 @@ if is_windows endif sources = files('main.c') -deps += ['ethdev', 'security', 'eventdev'] +deps += ['security', 'eventdev'] if dpdk_conf.has('RTE_LIB_METRICS') deps += 'metrics' endif diff --git a/app/test-crypto-perf/meson.build b/app/test-crypto-perf/meson.build index 7b02b518f0..51794521fb 100644 --- a/app/test-crypto-perf/meson.build +++ b/app/test-crypto-perf/meson.build @@ -19,7 +19,7 @@ sources = files( 'cperf_test_verify.c', 'main.c', ) 
-deps += ['cryptodev', 'net', 'security'] +deps += ['cryptodev'] if dpdk_conf.has('RTE_CRYPTO_SCHEDULER') deps += 'crypto_scheduler' endif diff --git a/app/test-fib/meson.build b/app/test-fib/meson.build index eb36772cf3..25e2ea1a1d 100644 --- a/app/test-fib/meson.build +++ b/app/test-fib/meson.build @@ -8,4 +8,4 @@ if is_windows endif sources = files('main.c') -deps += ['fib', 'lpm', 'net'] +deps += ['fib', 'lpm'] diff --git a/app/test-sad/meson.build b/app/test-sad/meson.build index a50616a9c7..414e2a05cb 100644 --- a/app/test-sad/meson.build +++ b/app/test-sad/meson.build @@ -8,4 +8,4 @@ if is_windows endif sources = files('main.c') -deps += ['ipsec', 'net'] +deps += ['ipsec'] diff --git a/app/test/meson.build b/app/test/meson.build index e29258e6ec..027008e047 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -2,7 +2,7 @@ # Copyright(c) 2017-2023 Intel Corporation # the main test files [test.c and commands.c] relies on these libraries -deps += ['cmdline', 'ring', 'mempool', 'mbuf'] +deps += ['cmdline'] sources += files('commands.c', 'test.c') # optional dependencies: some files may use these - and so we should link them in - -- 2.43.0
[PATCH v2 7/7] devtools: add script to generate DPDK dependency graphs
Rather than the single monolithic graph that would be output from the deps.dot file in a build directory, we can post-process that to generate simpler graphs for different tasks. This new "draw_dependency_graphs" script takes the "deps.dot" as input and generates an output file that has the nodes categorized, filtering them based off the requested node or category. For example, use "--component net_ice" to show the dependency tree from that driver, or "--category lib" to show just the library dependency tree. Signed-off-by: Bruce Richardson --- devtools/draw-dependency-graphs.py | 136 + 1 file changed, 136 insertions(+) create mode 100755 devtools/draw-dependency-graphs.py diff --git a/devtools/draw-dependency-graphs.py b/devtools/draw-dependency-graphs.py new file mode 100755 index 00..826d89d5ce --- /dev/null +++ b/devtools/draw-dependency-graphs.py @@ -0,0 +1,136 @@ +#! /usr/bin/env python3 +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2024 Intel Corporation + +import argparse +import ast + + +driver_classes = [ +"baseband", +"bus", +"common", +"compress", +"crypto", +"dma", +"event", +"gpu", +"mempool", +"ml", +"net", +"raw", +"regex", +"vdpa", +] + + +def component_type(name: str): +if name.startswith("dpdk-"): +return "app" + +nameparts = name.split("_", 1) +if len(nameparts) > 1 and nameparts[0] in driver_classes: +return f"drivers/{nameparts[0]}" + +return "lib" + + +def read_deps_list(lines: list[str]): +deps_data = {} +for ln in lines: +if ln.startswith("digraph") or ln == "}": +continue + +if "->" in ln: +component, deps = [s.strip() for s in ln.split("->")] +deps = ast.literal_eval(deps) +else: +component, deps = ln, {} + +component = component.strip('"') +comp_class = component_type(component) + +if comp_class not in deps_data.keys(): +deps_data[comp_class] = {} +deps_data[comp_class][component] = deps +return deps_data + + +def create_classified_graph(deps_data: dict[dict[list[str]]]): +yield ("digraph dpdk_dependencies {\n overlap=false\n 
model=subset\n") +for n, category in enumerate(deps_data.keys()): +yield (f' subgraph cluster_{n} {{\nlabel = "{category}"\n') +for component in deps_data[category].keys(): +yield ( +f'"{component}" -> {deps_data[category][component]}\n'.replace( +"'", '"' +) +) +yield (" }\n") +yield ("}\n") + + +def get_deps_for_component( +dep_data: dict[dict[list[str]]], component: str, comp_deps: set +): +categories = dep_data.keys() +comp_deps.add(component) +for cat in categories: +if component in dep_data[cat].keys(): +for dep in dep_data[cat][component]: +get_deps_for_component(dep_data, dep, comp_deps) + + +def filter_deps( +dep_data: dict[dict[list[str]]], component: list[str] +) -> dict[dict[list[str]]]: +components = set() +for comp in component: +get_deps_for_component(dep_data, comp, components) + +retval = {} +for category in dep_data.keys(): +for comp in dep_data[category].keys(): +if comp in components: +if category not in retval: +retval[category] = {} +retval[category][comp] = dep_data[category][comp] +return retval + + +def main(): +parser = argparse.ArgumentParser( +description="Utility to generate dependency tree graphs for DPDK" +) +parser.add_argument( +"--component", +type=str, +help="Only output hierarchy from specified component down.", +) +parser.add_argument( +"--category", +type=str, +help="Output hierarchy for all components in given category, e.g. 
lib, app, drivers/net, etc.", +) +parser.add_argument( +"input_file", +type=argparse.FileType("r"), +help="Path to the deps.dot file from a DPDK build directory", +) +parser.add_argument( +"output_file", +type=argparse.FileType("w"), +help="Path to the desired output dot file", +) +args = parser.parse_args() + +deps = read_deps_list([ln.strip() for ln in args.input_file.readlines()]) +if args.component: +deps = filter_deps(deps, [args.component]) +elif args.category: +deps = filter_deps(deps, deps[args.category].keys()) +args.output_file.writelines(create_classified_graph(deps)) + + +if __name__ == "__main__": +main() -- 2.43.0
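For anyone curious about the input format the script consumes, each deps.dot line maps into the categorized dict roughly as follows. This is a condensed, self-contained sketch of the read_deps_list() logic from the patch; the sample lines are invented for illustration:

```python
import ast

# Invented sample of the deps.dot line format: either `"component"`
# alone or `"component" -> ['dep1', 'dep2']`.
sample = [
    'digraph dpdk_dependencies {',
    '"ethdev" -> [\'mbuf\', \'net\']',
    '"net_ice" -> [\'ethdev\']',
    '"eal"',
    '}',
]


def parse(lines):
    """Condensed sketch: map component name -> list of dependencies."""
    deps = {}
    for ln in lines:
        # Skip the graph wrapper lines.
        if ln.startswith("digraph") or ln == "}":
            continue
        if "->" in ln:
            component, raw = (s.strip() for s in ln.split("->"))
            # The dependency list is a Python-style list literal.
            deps[component.strip('"')] = ast.literal_eval(raw)
        else:
            deps[ln.strip('"')] = []
    return deps
```

The real script additionally groups components into lib/app/drivers categories via component_type(); the sketch only shows the line parsing.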
RE: [PATCH v2 0/7] record and rework component dependencies
> From: Bruce Richardson [mailto:bruce.richard...@intel.com] > Sent: Friday, 2 August 2024 14.44 > > As part of the meson build, we can record the dependencies for each > component as we process it, logging them to a file. This file can be > used as input to a number of other scripts and tools, for example, to > graph the dependencies, or to allow higher-level build-config tools to > automatically enable component requirements, etc. > > The first patch of this set generates the basic dependency tree. The > second patch does some processing of that dependency tree to identify > cases where dependencies are being unnecessarily specified. Reducing > these makes it easier to have readable dependency graphs in future, > without affecting the build. > > The following 4 patches are based on the output of the second patch, and > greatly cut down the number of direct dependency links between > components. Even with the cut-down dependencies, the full dependency > graph is nigh-unreadable, so the final patch adds a new script to > generate dependency tree subgraphs, creating dot files for e.g. the > dependencies of a particular component, or a component class such as > mempool drivers. Thank you for the work on this, Bruce. For the series, Acked-by: Morten Brørup
Re: DTS WG Meeting Minutes - August 1, 2024
01/08/2024 21:39, Patrick Robb: > * Do we want to add DTS items to release notes? > * What is the process for building the release notes? Should the > people working on DTS aggregate the updates and submit them? Patrick > Robb should check the website for the release notes process, or email > the dev mailing list asking. The release notes are part of the builtin documentation. It should be updated atomically while adding a feature, i.e. this file should be updated in the same patch as the related DTS code. If any questions, please ask.
Re: [PATCH v2 1/1] dts: add text parser for testpmd verbose output
On Thu, Aug 1, 2024 at 4:41 AM Luca Vizzarro wrote: > > Great work Jeremy! Just a couple of minor passable improvement points. > > On 30/07/2024 14:34, jspew...@iol.unh.edu wrote: > > > +@dataclass > > +class TestPmdVerbosePacket(TextParser): > > +"""Packet information provided by verbose output in Testpmd. > > + > > +The "receive/sent queue" information is not included in this dataclass > > because this value is > > +captured on the outer layer of input found in > > :class:`TestPmdVerboseOutput`. > > +""" > > + > > +#: > > +src_mac: str = > > field(metadata=TextParser.find(r"src=({})".format(REGEX_FOR_MAC_ADDRESS))) > Just a(n optional) nit: TextParser.find(f"src=({REGEX_FOR_MAC_ADDRESS})") > The raw string is only needed to prevent escaping, which we don't do here. Ack. I really just left it this way because it also adjusts highlighting in some IDEs, but there isn't much to see here anyway. > > +#: > > +dst_mac: str = > > field(metadata=TextParser.find(r"dst=({})".format(REGEX_FOR_MAC_ADDRESS))) > As above. Ack. > > +#: Memory pool the packet was handled on. > > +pool: str = field(metadata=TextParser.find(r"pool=(\S+)")) > > +#: Packet type in hex. > > +p_type: int = > > field(metadata=TextParser.find_int(r"type=(0x[a-fA-F\d]+)")) > > +#: > > > > > +@staticmethod > > +def extract_verbose_output(output: str) -> list[TestPmdVerboseOutput]: > > +"""Extract the verbose information present in given testpmd output. > > + > > +This method extracts sections of verbose output that begin with > > the line > > +"port X/queue Y: sent/received Z packets" and end with the > > ol_flags of a packet. > > + > > +Args: > > +output: Testpmd output that contains verbose information > > + > > +Returns: > > +List of parsed packet information gathered from verbose > > information in `output`. > > +""" > > +iter = re.finditer(r"(port \d+/queue \d+:.*?(?=port \d+/queue > > \d+|$))", output, re.S) > > How about using a regex that matches what you described? 
;) Keeping re.S: > > (port \d+/queue \d+.+?ol_flags: [\w ]+) > > Would spare you from using complex lookahead constructs and 4.6x less > steps. Maybe it doesn't work with every scenario? Looks like it works > well with the sample output I have. Let me know if it works for you. > I tried using something like this actually which is probably why the docstring reflects that type of language, but I didn't use it because it doesn't match one specific case. I'm not sure how common it is, and I haven't seen it happen in my recent testing, but since the verbose output specifically states that it sent/received X number of packets, I presume there is a case where that number will be more than 1, and there will be more than one set of packet information after that first line. I think I've seen it happen in the past, but I couldn't recreate it in testing. For context to this next section, if it wasn't clear, I consider the `port \d+/queue \d+` line to be the header line and the start of a "block" of verbose output. Basically though the problem with this is that if there are more than one packet under that header line, the lazy approach will only consume up until the ol_flags of the first packet of a block, and the greedy approach consumes everything until the last packet of the entire output. You could use the lazy approach with the next port/queue line as your delimiter, but then the opening line of the next block of output is included in the previous block's group. The only way I could find to get around this and go with the idea of "take everything from the start of this group until another group starts" but without capturing the opening of the next block was a look ahead. Again though, I definitely don't love the regex that I wrote and would love to find a better alternative. > Best, > Luca >
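To make the trade-off discussed above concrete, here is a small self-contained comparison of the two patterns on invented output where the second "port/queue" header is followed by two packets (the MAC addresses and flag values are made up):

```python
import re

# Invented testpmd-style verbose output; the second block carries
# two packets under one header line.
output = (
    "port 0/queue 0: received 1 packets\n"
    "  src=52:54:00:00:00:01 - dst=52:54:00:00:00:02 - pool=mb_pool_0\n"
    "  ol_flags: RTE_MBUF_F_RX_L4_CKSUM_GOOD\n"
    "port 1/queue 0: received 2 packets\n"
    "  src=52:54:00:00:00:03 - dst=52:54:00:00:00:04 - pool=mb_pool_0\n"
    "  ol_flags: RTE_MBUF_F_RX_IP_CKSUM_GOOD\n"
    "  src=52:54:00:00:00:05 - dst=52:54:00:00:00:06 - pool=mb_pool_0\n"
    "  ol_flags: RTE_MBUF_F_RX_IP_CKSUM_BAD\n"
)

# Lazy match ending at ol_flags: truncates a multi-packet block ...
lazy = re.findall(r"port \d+/queue \d+.+?ol_flags: [\w ]+", output, re.S)
# ... while the lookahead pattern keeps each block whole.
lookahead = re.findall(r"port \d+/queue \d+:.*?(?=port \d+/queue \d+|$)",
                       output, re.S)

assert len(lazy) == len(lookahead) == 2
assert lazy[1].count("ol_flags") == 1        # second packet lost
assert lookahead[1].count("ol_flags") == 2   # both packets kept
```

This reproduces the failure mode described: the lazy "up to ol_flags" pattern silently drops every packet after the first one in a block, which is why the lookahead construct is used despite its complexity.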
Re: [PATCH v2 1/1] dts: add text parser for testpmd verbose output
On Thu, Aug 1, 2024 at 4:43 AM Luca Vizzarro wrote: > > On 30/07/2024 22:33, Jeremy Spewock wrote: > > On Tue, Jul 30, 2024 at 9:37 AM wrote: > > > >> +class VerboseOLFlag(Flag): > >> +"""Flag representing the OL flags of a packet from Testpmd verbose > >> output.""" > >> + > >> +#: > >> +RTE_MBUF_F_RX_RSS_HASH = auto() > >> + > >> +#: > >> +RTE_MBUF_F_RX_L4_CKSUM_GOOD = auto() > >> +#: > >> +RTE_MBUF_F_RX_L4_CKSUM_BAD = auto() > >> +#: > >> +RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN = auto() > >> + > >> +#: > >> +RTE_MBUF_F_RX_IP_CKSUM_GOOD = auto() > >> +#: > >> +RTE_MBUF_F_RX_IP_CKSUM_BAD = auto() > >> +#: > >> +RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN = auto() > >> + > >> +#: > >> +RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD = auto() > >> +#: > >> +RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD = auto() > >> +#: > >> +RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN = auto() > >> + > > After reading more of the API and using this patch to write a test > > suite, I believe there is more expansion of these OL flags that should > > take place. For starters, there are the Tx OL flags that, while not > > seeming to be very useful for the current test suites we are writing, > > wouldn't hurt to also include as they seem to be fairly different. > > Additionally, there are some other less common RX OL flags that should > > be included here just to cover all options. I will work on adding > > these into the next version. > > If you wanted to cover even more, hw_ptype and sw_ptype look like they > could use a data structure. I reckon a flag like the above would also work. Ack, it's probably worth making those a little more structured as well. >
Re: [PATCH v8 5/5] dts: add API doc generation
02/08/2024 12:48, Juraj Linkeš: > On 1. 8. 2024 17:07, Thomas Monjalon wrote: > > 01/08/2024 15:03, Juraj Linkeš: > >> On 30. 7. 2024 15:51, Thomas Monjalon wrote: > >>> 12/07/2024 10:57, Juraj Linkeš: > +dts_root = environ.get('DTS_ROOT') > >>> > >>> Why does it need to be passed as an environment variable? > >>> Isn't it a fixed absolute path? > >> > >> The path to DTS needs to be passed in some way (and added to sys.path) > >> so that Sphinx knows where the sources are in order to import them. > >> > >> Do you want us to not pass the path, but just hardcode it here? I didn't > >> really think about that, maybe that could work. > > > > I think hardcode is better here. > 'collapse_navigation': False, > 'navigation_depth': -1, > } > > The sidebar configuration is conditional, so we have to pass something > to indicate dts build. I'll change it so that we look for 'dts' in src > in call-sphinx-build.py (we're in the dts doc directory, indicating dts > build) and set the DTS_BUILD env var which we can use in conf.py. I > didn't find a better way to do this as conf.py doesn't have any > information about the build itself (and no path that conf.py has access > to points to anything dts). Here's how it'll look: > > if environ.get('DTS_BUILD'): > path.append(path_join(dirname(dirname(dirname(__file__))), 'dts')) > # DTS Sidebar config. > html_theme_options = { > 'collapse_navigation': False, > 'navigation_depth': -1, # unlimited depth > } OK > +To build DTS API docs, install the dependencies with Poetry, then enter > its shell: > >>> > >>> I don't plan to use Poetry on my machine. > >>> Can we simply describe the dependencies even if the versions are not > >>> specified? > >> > >> The reason we don't list the dependencies anywhere is that doing it with > >> Poetry is much easier (and a bit safer, as Poetry is going to install > >> tested versions).
> >> > >> But I can add references to the two relevant sections of > >> dts/pyproject.toml which contain the dependencies with a note that they > >> can be installed with pip (and I guess that would be another > >> dependency), but at that point it's that not much different than using > >> Poetry. > > > > I want to use my system package manager. > > I am from this old school thinking we should have a single package manager > > in a system. > > > > I understand and would also prefer that, but it just doesn't work for > Python. Not all packages are available from the package managers, and > Python projects should not use system packages as there are frequently > version mismatches between the system packages and what the project > needs (the APIs could be different as well as behavior; a problem we've > seen with Scapy). Poetry is one of the tools that tries to solve this > well-known Python limitation. I fully agree for DTS runtime. I'm expecting the dependencies are more tolerant for DTS doc. > I've done a quick search of what's available in Ubuntu and two packages > aren't available, types-PyYAML (which maybe we could do without, I'll > have to test) and aenum (which is currently needed for the capabilities > patch; if absolutely necessary, maybe I could find a solution without > aenum). But even with this we can't be sure that the system package > versions will work. We need them all to generate the documentation? > +.. code-block:: console > + > + poetry install --no-root --with docs > + poetry shell > + > +The documentation is built using the standard DPDK build system. > +After executing the meson command and entering Poetry's shell, build > the documentation with: > + > +.. code-block:: console > + > + ninja -C build dts-doc > >>> > >>> Don't we rely on the Meson option "enable_docs"? > >> > >> I had a discussion about this with Bruce, but I can't find it anywhere, > >> so here's what I remember: > >> 1. 
We didn't want to tie the dts api doc build to dpdk doc build because > >> of the dependencies. > > > > Sure > > But we could just skip if dependencies are not met? > > Maybe we could add a script that would check the dependencies. I'll see > what I can do. OK thanks
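Such a dependency check could stay very small. A possible sketch using only the standard library — the package names listed are purely illustrative, the real list would come from the docs group in dts/pyproject.toml:

```python
from importlib.metadata import PackageNotFoundError, version


def missing_deps(packages):
    """Return the subset of `packages` not installed in this environment."""
    missing = []
    for pkg in packages:
        try:
            version(pkg)  # raises if the distribution is absent
        except PackageNotFoundError:
            missing.append(pkg)
    return missing


# Illustrative names only; the build could report these up front
# instead of failing mid-build with an ImportError.
doc_deps = ["sphinx", "sphinx-rtd-theme"]
```

Meson could run such a script and skip the dts-doc target when missing_deps() is non-empty, similar to how the guides build is skipped when the sphinx binary is not found.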
Re: DTS WG Meeting Minutes - August 1, 2024
> On Aug 2, 2024, at 8:34 AM, Thomas Monjalon wrote: > > 01/08/2024 21:39, Patrick Robb: >> * Do we want to add DTS items to release notes? >> * What is the process for building the release notes? Should the >> people working on DTS aggregate the updates and submit them? Patrick >> Robb should check the website for the release notes process, or email >> the dev mailing list asking. > > The release notes are part of the builtin documentation. > It should be updated atomically while adding a feature, > i.e. this file should be updated in the same patch as the related DTS code. > > If any questions, please ask. I guess we should add a new section for DTS in the release notes? >
[PATCH] vhost-user: optimize stats counters performance
Optimized the performance of updating the statistics counters by reducing the number of branches. Ordered the packet size comparisons according to the probability with typical internet traffic mix. Signed-off-by: Morten Brørup --- lib/vhost/virtio_net.c | 40 ++-- 1 file changed, 14 insertions(+), 26 deletions(-) diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 370402d849..25a495df56 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -53,7 +53,7 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx, uint32_t nr_vring) } static inline void -vhost_queue_stats_update(struct virtio_net *dev, struct vhost_virtqueue *vq, +vhost_queue_stats_update(const struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, uint16_t count) __rte_shared_locks_required(&vq->access_lock) { @@ -64,37 +64,25 @@ vhost_queue_stats_update(struct virtio_net *dev, struct vhost_virtqueue *vq, return; for (i = 0; i < count; i++) { - struct rte_ether_addr *ea; - struct rte_mbuf *pkt = pkts[i]; + const struct rte_ether_addr *ea; + const struct rte_mbuf *pkt = pkts[i]; uint32_t pkt_len = rte_pktmbuf_pkt_len(pkt); stats->packets++; stats->bytes += pkt_len; - if (pkt_len == 64) { - stats->size_bins[1]++; - } else if (pkt_len > 64 && pkt_len < 1024) { - uint32_t bin; - - /* count zeros, and offset into correct bin */ - bin = (sizeof(pkt_len) * 8) - rte_clz32(pkt_len) - 5; - stats->size_bins[bin]++; - } else { - if (pkt_len < 64) - stats->size_bins[0]++; - else if (pkt_len < 1519) - stats->size_bins[6]++; - else - stats->size_bins[7]++; - } + if (pkt_len >= 1024) + stats->size_bins[6 + (pkt_len > 1518)]++; + else if (pkt_len <= 64) + stats->size_bins[pkt_len >> 6]++; + else + stats->size_bins[32UL - rte_clz32(pkt_len) - 5]++; - ea = rte_pktmbuf_mtod(pkt, struct rte_ether_addr *); - if (rte_is_multicast_ether_addr(ea)) { - if (rte_is_broadcast_ether_addr(ea)) - stats->broadcast++; - else - stats->multicast++; - } + ea = rte_pktmbuf_mtod(pkt, const struct 
rte_ether_addr *); + RTE_BUILD_BUG_ON(offsetof(struct virtqueue_stats, broadcast) != + offsetof(struct virtqueue_stats, multicast) + sizeof(uint64_t)); + if (unlikely(rte_is_multicast_ether_addr(ea))) + (&stats->multicast)[rte_is_broadcast_ether_addr(ea)]++; } } -- 2.43.0
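The branch-reduced bin selection can be cross-checked against the original logic. A small Python sketch, where int.bit_length() plays the role of 32 - rte_clz32() in the C code:

```python
def bin_original(pkt_len):
    """Original branchy size_bins index selection."""
    if pkt_len == 64:
        return 1
    if 64 < pkt_len < 1024:
        # (sizeof(pkt_len) * 8) - rte_clz32(pkt_len) - 5 in the C code
        return pkt_len.bit_length() - 5
    if pkt_len < 64:
        return 0
    if pkt_len < 1519:
        return 6
    return 7


def bin_optimized(pkt_len):
    """Branch-reduced selection from the patch."""
    if pkt_len >= 1024:
        return 6 + (pkt_len > 1518)   # bool promotes to 0/1, as in C
    if pkt_len <= 64:
        return pkt_len >> 6           # 0 for <64, 1 for exactly 64
    return pkt_len.bit_length() - 5


# Both agree for every size: bins are <64, 64, 65-127, 128-255,
# 256-511, 512-1023, 1024-1518 and >1518.
assert all(bin_original(n) == bin_optimized(n) for n in range(4096))
```

The same reasoning applies to the multicast/broadcast trick in the patch: the RTE_BUILD_BUG_ON pins the two counters adjacent so the comparison result can index them directly.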
[PATCH] netvsc: optimize stats counters performance
Optimized the performance of updating the statistics counters by reducing the number of branches. Ordered the packet size comparisons according to the probability with typical internet traffic mix. Signed-off-by: Morten Brørup --- drivers/net/netvsc/hn_rxtx.c | 32 ++-- 1 file changed, 10 insertions(+), 22 deletions(-) diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c index 9bf1ec5509..b704b2c971 100644 --- a/drivers/net/netvsc/hn_rxtx.c +++ b/drivers/net/netvsc/hn_rxtx.c @@ -110,30 +110,18 @@ hn_update_packet_stats(struct hn_stats *stats, const struct rte_mbuf *m) uint32_t s = m->pkt_len; const struct rte_ether_addr *ea; - if (s == 64) { - stats->size_bins[1]++; - } else if (s > 64 && s < 1024) { - uint32_t bin; - - /* count zeros, and offset into correct bin */ - bin = (sizeof(s) * 8) - rte_clz32(s) - 5; - stats->size_bins[bin]++; - } else { - if (s < 64) - stats->size_bins[0]++; - else if (s < 1519) - stats->size_bins[6]++; - else - stats->size_bins[7]++; - } + if (s >= 1024) + stats->size_bins[6 + (s > 1518)]++; + else if (s <= 64) + stats->size_bins[s >> 6]++; + else + stats->size_bins[32UL - rte_clz32(s) - 5]++; ea = rte_pktmbuf_mtod(m, const struct rte_ether_addr *); - if (rte_is_multicast_ether_addr(ea)) { - if (rte_is_broadcast_ether_addr(ea)) - stats->broadcast++; - else - stats->multicast++; - } + RTE_BUILD_BUG_ON(offsetof(struct hn_stats, broadcast) != + offsetof(struct hn_stats, multicast) + sizeof(uint64_t)); + if (unlikely(rte_is_multicast_ether_addr(ea))) + (&stats->multicast)[rte_is_broadcast_ether_addr(ea)]++; } static inline unsigned int hn_rndis_pktlen(const struct rndis_packet_msg *pkt) -- 2.43.0
Re: DTS WG Meeting Minutes - August 1, 2024
02/08/2024 16:31, Honnappa Nagarahalli: > > > On Aug 2, 2024, at 8:34 AM, Thomas Monjalon wrote: > > > > 01/08/2024 21:39, Patrick Robb: > >> * Do we want to add DTS items to release notes? > >> * What is the process for building the release notes? Should the > >> people working on DTS aggregate the updates and submit them? Patrick > >> Robb should check the website for the release notes process, or email > >> the dev mailing list asking. > > > > The release notes are part of the builtin documentation. > > It should be updated atomically while adding a feature, > > i.e. this file should be updated in the same patch as the related DTS code. > > > > If any questions, please ask. > I guess we should add a new section for DTS in the release notes? There is no section, it just has to be added at the end in "New Features", as tooling is usually sorted at the end.
RE: [PATCH v5] virtio: optimize stats counters performance
> From: Morten Brørup [mailto:m...@smartsharesystems.com] > Sent: Thursday, 1 August 2024 22.38 > > > From: Stephen Hemminger [mailto:step...@networkplumber.org] > > Sent: Thursday, 1 August 2024 18.18 > > > > On Thu, 1 Aug 2024 16:03:12 + > > Morten Brørup wrote: > > > > > Optimized the performance of updating the virtio statistics counters > > by > > > reducing the number of branches. > > > > > > Ordered the packet size comparisons according to the probability > with > > > typical internet traffic mix. > > > > > > Signed-off-by: Morten Brørup > > > > LGTM > > Acked-by: Stephen Hemminger > > > > I wonder if other software drivers could use similar counters? > > While working on this, I noticed the netvsc driver and vhost lib also > have size_bins [1], so they are likely candidates. > > The netvsc's hn_update_packet_stats() function [2] seems like 100 % > copy-paste, and should be easy to paste over with the new > implementation. > The vhost lib's vhost_queue_stats_update() function [3] also looks > rather similar to the original variant, and could benefit too. > > [1]: https://elixir.bootlin.com/dpdk/v24.07/A/ident/size_bins > [2]: > https://elixir.bootlin.com/dpdk/v24.07/source/drivers/net/netvsc/hn_rxtx > .c#L108 > [3]: > https://elixir.bootlin.com/dpdk/v24.07/source/lib/vhost/virtio_net.c#L56 > > I'll take a look around and add similar patches for what I find. Thanks > for the reminder. Separate patches for vhost-user [4] and netvsc [5] now provided. [4]: https://inbox.dpdk.org/dev/20240802143259.269827-1...@smartsharesystems.com/T/#u [5]: https://inbox.dpdk.org/dev/20240802144048.270152-1...@smartsharesystems.com/T/#u
Re: [PATCH v2 1/1] dts: add text parser for testpmd verbose output
> You're right that in most cases it would come from the stop output, > but the output from that stop command has one other thing as well that > I would consider valuable which is statistics of packets handled by > ports specifically for the duration of the packet forwarding you are > stopping. It is also true that sending other testpmd commands while > verbose output is being sent will change what is collected, so I > didn't want to tie the method specifically to the stop command since > if you did a start command then checked port statistics for example, > it would consume all of the verbose output up until the command to > check port statistics. > > For the reason stated above I think it actually might make sense to > make it so that every testpmd method (including ones that currently > return dataclasses) return their original, unmodified collected output > from the testpmd shell as well. Things like port stats can return both > in a tuple potentially. This way if there is asynchronous output like > there is with verbose output other commands don't completely remove > it. I agree! I think giving each testpmd method its own output would add more consistency. An idea I had floating around that kind of relates to your suggestion above was introducing some instance variables that could enable the testpmd shell object to be smart enough to automatically scan, and keep a record of, any verbose output that comes out across any command run. The TestPMDShell class could track whether verbose mode is on or not, and if True, run additional logic to scan for verbose output and add it to a data structure for access every time a command is run. Then users, from the perspective of writing a test suite, could do something like 'output in testpmd.verbose_output', where verbose_output is an instance data structure of the TestPMDShell. This might be overcomplicated to implement, but it was an idea I had that might make using verbose mode more streamlined. What are your thoughts?
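The tracking idea could be prototyped in a few lines. Everything below (the class name, the _record hook) is illustrative, not the real DTS API; the regex reuses the lookahead pattern from the parser patch:

```python
import re
from dataclasses import dataclass, field

# One verbose block: a "port X/queue Y" header up to the next header
# (or the end of the output).
VERBOSE_BLOCK = re.compile(r"port \d+/queue \d+:.*?(?=port \d+/queue \d+|$)",
                           re.S)


@dataclass
class VerboseTrackingShell:
    """Sketch: scan every command's output for verbose blocks."""
    verbose: bool = False
    verbose_output: list = field(default_factory=list)

    def _record(self, output: str) -> str:
        # Would be called with the raw output of every testpmd command,
        # so asynchronous verbose blocks interleaved with e.g. port
        # statistics are accumulated instead of silently discarded.
        if self.verbose:
            self.verbose_output.extend(VERBOSE_BLOCK.findall(output))
        return output
```

A test suite could then do `for block in testpmd.verbose_output: ...` after forwarding, regardless of which commands ran in between.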
Re: [PATCH v2 0/7] record and rework component dependencies
I saw there was one fail for this series for the Marvell MVNETA container. I thought I should also mention that the build did fail when our automation tried to run DTS on the Octeon CN10K DPU (which is also Marvell). So, that is why you won't get a report for the CN10K DTS results. Let me know if you can't get what you need from the MVNETA build logs failure, and I can manually get more info about why this won't compile. Cheers.
Re: [PATCH v2 0/7] record and rework component dependencies
On Fri, Aug 02, 2024 at 11:05:58AM -0400, Patrick Robb wrote: > I saw there was one fail for this series for the Marvell MVNETA container. > > I thought I should also mention that the build did fail when our > automation tried to run DTS on the Octeon CN10K DPU (which is also > Marvell). So, that is why you won't get a report for the CN10K DTS > results. > > Let me know if you can't get what you need from the MVNETA build logs > failure, and I can manually get more info about why this won't > compile. Cheers. I have already got multiple emails about failures from this series - though all look to be about the same issue. I think it should be an easy fix for v3, but I'll wait until all reports come back. /Bruce
Re: [PATCH v4 1/3] buildtools: add helper to convert text file to header
On Thu, Aug 01, 2024 at 10:29:08AM -0700, Stephen Hemminger wrote: > Simple script to read a file and make it into a initialized > C string in a header file. > > Signed-off-by: Stephen Hemminger > --- > buildtools/gen-header.py | 36 > buildtools/meson.build | 2 +- > 2 files changed, 37 insertions(+), 1 deletion(-) > create mode 100644 buildtools/gen-header.py > Resulting header output LGTM Acked-by: Bruce Richardson
Re: [PATCH v4 2/3] test: remove unused resource API
On Thu, Aug 01, 2024 at 10:29:09AM -0700, Stephen Hemminger wrote: > This API was used only for cfgfile tests and was never built > after the conversion to meson. Will be replaced by simpler > method of doing cfgfile tests. > > Signed-off-by: Stephen Hemminger > --- It's always there in git history if we ever need it back again! :-) Acked-by: Bruce Richardson
Re: [PATCH v4 3/3] test: restore cfgfile tests
On Thu, Aug 01, 2024 at 10:29:10AM -0700, Stephen Hemminger wrote: > These tests were not built since the conversion to meson. > Instead of using embedded resource functions, put data in include > file and generate temporary file before the test. > > Signed-off-by: Stephen Hemminger > --- > app/meson.build| 3 +- > app/test/meson.build | 6 +- > app/test/test_cfgfile.c| 153 +++-- > app/test/test_cfgfiles/meson.build | 19 > 4 files changed, 129 insertions(+), 52 deletions(-) > create mode 100644 app/test/test_cfgfiles/meson.build > Tested-by: Bruce Richardson Acked-by: Bruce Richardson A further cleanup that could be done with the test files is to remove the unnecessary "etc" subdirectory. > diff --git a/app/meson.build b/app/meson.build > index 5b2c80c7a1..e2db888ae1 100644 > --- a/app/meson.build > +++ b/app/meson.build > @@ -55,6 +55,7 @@ foreach app:apps > build = true > reason = '' # set if build == false to explain > sources = [] > +resources = [] > includes = [] > cflags = default_cflags > ldflags = default_ldflags > @@ -115,7 +116,7 @@ foreach app:apps > endif > > exec = executable('dpdk-' + name, > -sources, > +[ sources, resources ], > c_args: cflags, > link_args: ldflags, > link_whole: link_libs, > diff --git a/app/test/meson.build b/app/test/meson.build > index 62478c0bb6..b2bb7c36f6 100644 > --- a/app/test/meson.build > +++ b/app/test/meson.build > @@ -35,7 +35,7 @@ source_file_deps = { > 'test_bitratestats.c': ['metrics', 'bitratestats', 'ethdev'] + > sample_packet_forward_deps, > 'test_bpf.c': ['bpf', 'net'], > 'test_byteorder.c': [], > -#'test_cfgfile.c': ['cfgfile'], > +'test_cfgfile.c': ['cfgfile'], > 'test_cksum.c': ['net'], > 'test_cksum_perf.c': ['net'], > 'test_cmdline.c': [], > @@ -261,3 +261,7 @@ if not is_windows > build_by_default: true, > install: false) > endif > + > +subdir('test_cfgfiles') > + > +resources += test_cfgfile_h > diff --git a/app/test/test_cfgfile.c b/app/test/test_cfgfile.c > index a5e3d8699c..7cfcaf348a 100644 > --- 
a/app/test/test_cfgfile.c > +++ b/app/test/test_cfgfile.c > @@ -5,48 +5,54 @@ > #include > #include > #include > -#include > +#include > + > +#ifdef RTE_EXEC_ENV_WINDOWS > +#include > +#endif > > #include > > #include "test.h" > -#include "resource.h" > - > - > -#define CFG_FILES_ETC "test_cfgfiles/etc" > > -REGISTER_LINKED_RESOURCE(test_cfgfiles); > +#include "test_cfgfiles.h" > > static int > -test_cfgfile_setup(void) > +make_tmp_file(char *filename, const char *prefix, const char *data) > { > - const struct resource *r; > - int ret; > + size_t len = strlen(data); > + size_t count; > + FILE *f; > > - r = resource_find("test_cfgfiles"); > - TEST_ASSERT_NOT_NULL(r, "missing resource test_cfgfiles"); > +#ifdef RTE_EXEC_ENV_WINDOWS > + char tempDirName[MAX_PATH - 14]; > > - ret = resource_untar(r); > - TEST_ASSERT_SUCCESS(ret, "failed to untar %s", r->name); > + if (GetTempPathA(sizeof(tempDirName), tempDirName) == 0) > + return -1; > > - return 0; > -} > + if (GetTempFileNameA(tempDirName, prefix, 0, filename) == 0) > + return -1; > > -static int > -test_cfgfile_cleanup(void) > -{ > - const struct resource *r; > - int ret; > + f = fopen(filename, "wt"); > +#else > + snprintf(filename, PATH_MAX, "/tmp/%s_XXX", prefix); > > - r = resource_find("test_cfgfiles"); > - TEST_ASSERT_NOT_NULL(r, "missing resource test_cfgfiles"); > + int fd = mkstemp(filename); > + if (fd < 0) > + return -1; > > - ret = resource_rm_by_tar(r); > - TEST_ASSERT_SUCCESS(ret, "Failed to delete resource %s", r->name); > + f = fdopen(fd, "w"); > +#endif > + if (f == NULL) > + return -1; > > - return 0; > + count = fwrite(data, sizeof(char), len, f); > + fclose(f); > + > + return (count == len) ? 
0 : -1; > } > > + > static int > _test_cfgfile_sample(struct rte_cfgfile *cfgfile) > { > @@ -87,9 +93,13 @@ static int > test_cfgfile_sample1(void) > { > struct rte_cfgfile *cfgfile; > + char filename[PATH_MAX]; > int ret; > > - cfgfile = rte_cfgfile_load(CFG_FILES_ETC "/sample1.ini", 0); > + ret = make_tmp_file(filename, "sample1", sample1_ini); > + TEST_ASSERT_SUCCESS(ret, "Failed to setup temp file"); > + > + cfgfile = rte_cfgfile_load(filename, 0); > TEST_ASSERT_NOT_NULL(cfgfile, "Failed to load config file"); > > ret = _test_cfgfile_sample(cfgfile); > @@ -98,6 +108,8 @@ test_cfgfile_sample1(void) > ret = rte_cfgfile_close(cfgfile); > TEST_ASSERT_SUCCESS(ret, "Failed to close cfgfile"); > > + remove(filename); > + > return 0; > } > > @@ -106,15 +
[PATCH v5 0/4] restore unused cfgfile tests
The cfgfile tests did not get built since conversion to meson and they used an awkward way to manage the test data. This patchset converts the tests to use a helper to take text file and make it into a C header. Then use the C header to generate temporary files as needed. v5 - rearrange tests input files - use unit_suite_runner - more Windows fixes Stephen Hemminger (4): buildtools: add helper to convert text file to header test: remove unused resource API test: restore cfgfile tests test: rearrange test_cfgfiles cases app/meson.build | 3 +- app/test/meson.build | 8 +- app/test/resource.c | 276 -- app/test/resource.h | 106 --- app/test/test_cfgfile.c | 213 -- app/test/test_cfgfiles/{etc => }/empty.ini| 0 .../{etc => }/empty_key_value.ini | 0 .../{etc => }/invalid_section.ini | 0 .../test_cfgfiles/{etc => }/line_too_long.ini | 0 app/test/test_cfgfiles/meson.build| 19 ++ .../{etc => }/missing_section.ini | 0 .../{etc => }/realloc_sections.ini| 0 app/test/test_cfgfiles/{etc => }/sample1.ini | 0 app/test/test_cfgfiles/{etc => }/sample2.ini | 0 app/test/test_resource.c | 104 --- buildtools/gen-header.py | 36 +++ buildtools/meson.build| 2 +- 17 files changed, 194 insertions(+), 573 deletions(-) delete mode 100644 app/test/resource.c delete mode 100644 app/test/resource.h rename app/test/test_cfgfiles/{etc => }/empty.ini (100%) rename app/test/test_cfgfiles/{etc => }/empty_key_value.ini (100%) rename app/test/test_cfgfiles/{etc => }/invalid_section.ini (100%) rename app/test/test_cfgfiles/{etc => }/line_too_long.ini (100%) create mode 100644 app/test/test_cfgfiles/meson.build rename app/test/test_cfgfiles/{etc => }/missing_section.ini (100%) rename app/test/test_cfgfiles/{etc => }/realloc_sections.ini (100%) rename app/test/test_cfgfiles/{etc => }/sample1.ini (100%) rename app/test/test_cfgfiles/{etc => }/sample2.ini (100%) delete mode 100644 app/test/test_resource.c create mode 100644 buildtools/gen-header.py -- 2.43.0
[PATCH v5 1/4] buildtools: add helper to convert text file to header
Simple script to read a file and make it into a initialized C string in a header file. Signed-off-by: Stephen Hemminger Acked-by: Bruce Richardson --- buildtools/gen-header.py | 36 buildtools/meson.build | 2 +- 2 files changed, 37 insertions(+), 1 deletion(-) create mode 100644 buildtools/gen-header.py diff --git a/buildtools/gen-header.py b/buildtools/gen-header.py new file mode 100644 index 00..06e645863c --- /dev/null +++ b/buildtools/gen-header.py @@ -0,0 +1,36 @@ +#! /usr/bin/env python3 +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (c) 2023 Stephen Hemminger + +""" +Script to read a text file and convert it into a header file. +""" +import sys +import os + + +def main(): +'''program main function''' +print(f'/* File autogenerated by {sys.argv[0]} */') +for path in sys.argv[1:]: +name = os.path.basename(path) +print() +print(f'/* generated from {name} */') +with open(path, "r") as f: +array = name.replace(".", "_") +print(f'static const char {array}[] = ' + '{') +line = f.readline() + +# make sure empty string is null terminated +if not line: +print('""') + +while line: +s = repr(line) +print('{}'.format(s.replace("'", '"'))) +line = f.readline() +print('};') + + +if __name__ == "__main__": +main() diff --git a/buildtools/meson.build b/buildtools/meson.build index 3adf34e1a8..bc818a71d5 100644 --- a/buildtools/meson.build +++ b/buildtools/meson.build @@ -24,6 +24,7 @@ get_numa_count_cmd = py3 + files('get-numa-count.py') get_test_suites_cmd = py3 + files('get-test-suites.py') has_hugepages_cmd = py3 + files('has-hugepages.py') cmdline_gen_cmd = py3 + files('dpdk-cmdline-gen.py') +header_gen_cmd = py3 + files('gen-header.py') # install any build tools that end-users might want also install_data([ @@ -48,4 +49,3 @@ else pmdinfo += 'ar' pmdinfogen += 'elf' endif - -- 2.43.0
[PATCH v5 2/4] test: remove unused resource API
This API was used only for cfgfile tests and was never built after the conversion to meson. Will be replaced by simpler method of doing cfgfile tests. Signed-off-by: Stephen Hemminger Acked-by: Bruce Richardson --- app/test/meson.build | 2 - app/test/resource.c | 276 --- app/test/resource.h | 106 --- app/test/test_resource.c | 104 --- 4 files changed, 488 deletions(-) delete mode 100644 app/test/resource.c delete mode 100644 app/test/resource.h delete mode 100644 app/test/test_resource.c diff --git a/app/test/meson.build b/app/test/meson.build index e29258e6ec..62478c0bb6 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -20,7 +20,6 @@ test_cryptodev_deps = ['bus_vdev', 'net', 'cryptodev', 'security'] source_file_deps = { # The C files providing functionality to other test cases 'packet_burst_generator.c': packet_burst_generator_deps, -#'resource.c': [], # unused currently. 'sample_packet_forward.c': sample_packet_forward_deps, 'virtual_pmd.c': virtual_pmd_deps, @@ -154,7 +153,6 @@ source_file_deps = { 'test_reciprocal_division_perf.c': [], 'test_red.c': ['sched'], 'test_reorder.c': ['reorder'], -#'test_resource.c': [], 'test_rib.c': ['net', 'rib'], 'test_rib6.c': ['net', 'rib'], 'test_ring.c': ['ptr_compress'], diff --git a/app/test/resource.c b/app/test/resource.c deleted file mode 100644 index 34465f1668..00 --- a/app/test/resource.c +++ /dev/null @@ -1,276 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2016 RehiveTech. All rights reserved. 
- */ - -#include -#include -#include -#include - -#include - -#include "resource.h" - -struct resource_list resource_list = TAILQ_HEAD_INITIALIZER(resource_list); - -size_t resource_size(const struct resource *r) -{ - return r->end - r->begin; -} - -const struct resource *resource_find(const char *name) -{ - struct resource *r; - - TAILQ_FOREACH(r, &resource_list, next) { - RTE_VERIFY(r->name); - - if (!strcmp(r->name, name)) - return r; - } - - return NULL; -} - -int resource_fwrite(const struct resource *r, FILE *f) -{ - const size_t goal = resource_size(r); - size_t total = 0; - - while (total < goal) { - size_t wlen = fwrite(r->begin + total, 1, goal - total, f); - if (wlen == 0) { - perror(__func__); - return -1; - } - - total += wlen; - } - - return 0; -} - -int resource_fwrite_file(const struct resource *r, const char *fname) -{ - FILE *f; - int ret; - - f = fopen(fname, "w"); - if (f == NULL) { - perror(__func__); - return -1; - } - - ret = resource_fwrite(r, f); - fclose(f); - return ret; -} - -#ifdef RTE_APP_TEST_RESOURCE_TAR -#include -#include - -static int do_copy(struct archive *r, struct archive *w) -{ - const void *buf; - size_t len; -#if ARCHIVE_VERSION_NUMBER >= 300 - int64_t off; -#else - off_t off; -#endif - int ret; - - while (1) { - ret = archive_read_data_block(r, &buf, &len, &off); - if (ret == ARCHIVE_RETRY) - continue; - - if (ret == ARCHIVE_EOF) - return 0; - - if (ret != ARCHIVE_OK) - return ret; - - do { - ret = archive_write_data_block(w, buf, len, off); - if (ret != ARCHIVE_OK && ret != ARCHIVE_RETRY) - return ret; - } while (ret != ARCHIVE_OK); - } -} - -int resource_untar(const struct resource *res) -{ - struct archive *r; - struct archive *w; - struct archive_entry *e; - void *p; - int flags = 0; - int ret; - - p = malloc(resource_size(res)); - if (p == NULL) - rte_panic("Failed to malloc %zu B\n", resource_size(res)); - - memcpy(p, res->begin, resource_size(res)); - - r = archive_read_new(); - if (r == NULL) { - free(p); - return 
-1; - } - - archive_read_support_format_all(r); - archive_read_support_filter_all(r); - - w = archive_write_disk_new(); - if (w == NULL) { - archive_read_free(r); - free(p); - return -1; - } - - flags |= ARCHIVE_EXTRACT_PERM; - flags |= ARCHIVE_EXTRACT_FFLAGS; - archive_write_disk_set_options(w, flags); - archive_write_disk_set_standard_lookup(w); - - ret = archive_read_open_memory(r, p, resource_size(res)); - if (ret != ARCHIVE_OK) - goto fail; - - while (1) { - ret = archive_read_next_header(r, &e); -
[PATCH v5 3/4] test: restore cfgfile tests
These tests were not built since the conversion to meson. Instead of using embedded resource functions, put data in include file and generate temporary file before the test. The changes to app/test/meson.build are to handle auto-generated files (resources) differently. Don't scan these files to look for test input. Using common unit test macro allows for simpler management of more tests. Signed-off-by: Stephen Hemminger --- app/meson.build| 3 +- app/test/meson.build | 6 +- app/test/test_cfgfile.c| 213 ++--- app/test/test_cfgfiles/meson.build | 19 +++ 4 files changed, 157 insertions(+), 84 deletions(-) create mode 100644 app/test/test_cfgfiles/meson.build diff --git a/app/meson.build b/app/meson.build index 5b2c80c7a1..e2db888ae1 100644 --- a/app/meson.build +++ b/app/meson.build @@ -55,6 +55,7 @@ foreach app:apps build = true reason = '' # set if build == false to explain sources = [] +resources = [] includes = [] cflags = default_cflags ldflags = default_ldflags @@ -115,7 +116,7 @@ foreach app:apps endif exec = executable('dpdk-' + name, -sources, +[ sources, resources ], c_args: cflags, link_args: ldflags, link_whole: link_libs, diff --git a/app/test/meson.build b/app/test/meson.build index 62478c0bb6..b2bb7c36f6 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -35,7 +35,7 @@ source_file_deps = { 'test_bitratestats.c': ['metrics', 'bitratestats', 'ethdev'] + sample_packet_forward_deps, 'test_bpf.c': ['bpf', 'net'], 'test_byteorder.c': [], -#'test_cfgfile.c': ['cfgfile'], +'test_cfgfile.c': ['cfgfile'], 'test_cksum.c': ['net'], 'test_cksum_perf.c': ['net'], 'test_cmdline.c': [], @@ -261,3 +261,7 @@ if not is_windows build_by_default: true, install: false) endif + +subdir('test_cfgfiles') + +resources += test_cfgfile_h diff --git a/app/test/test_cfgfile.c b/app/test/test_cfgfile.c index a5e3d8699c..8146435033 100644 --- a/app/test/test_cfgfile.c +++ b/app/test/test_cfgfile.c @@ -5,48 +5,54 @@ #include #include #include -#include +#include + +#ifdef 
RTE_EXEC_ENV_WINDOWS +#include +#endif #include #include "test.h" -#include "resource.h" - -#define CFG_FILES_ETC "test_cfgfiles/etc" - -REGISTER_LINKED_RESOURCE(test_cfgfiles); +#include "test_cfgfiles.h" static int -test_cfgfile_setup(void) +make_tmp_file(char *filename, const char *prefix, const char *data) { - const struct resource *r; - int ret; + size_t len = strlen(data); + size_t count; + FILE *f; - r = resource_find("test_cfgfiles"); - TEST_ASSERT_NOT_NULL(r, "missing resource test_cfgfiles"); +#ifdef RTE_EXEC_ENV_WINDOWS + char tempDirName[MAX_PATH - 14]; - ret = resource_untar(r); - TEST_ASSERT_SUCCESS(ret, "failed to untar %s", r->name); + if (GetTempPathA(sizeof(tempDirName), tempDirName) == 0) + return -1; - return 0; -} + if (GetTempFileNameA(tempDirName, prefix, 0, filename) == 0) + return -1; -static int -test_cfgfile_cleanup(void) -{ - const struct resource *r; - int ret; + f = fopen(filename, "wt"); +#else + snprintf(filename, PATH_MAX, "/tmp/%s_XXX", prefix); - r = resource_find("test_cfgfiles"); - TEST_ASSERT_NOT_NULL(r, "missing resource test_cfgfiles"); + int fd = mkstemp(filename); + if (fd < 0) + return -1; - ret = resource_rm_by_tar(r); - TEST_ASSERT_SUCCESS(ret, "Failed to delete resource %s", r->name); + f = fdopen(fd, "w"); +#endif + if (f == NULL) + return -1; - return 0; + count = fwrite(data, sizeof(char), len, f); + fclose(f); + + return (count == len) ? 
0 : -1; } + static int _test_cfgfile_sample(struct rte_cfgfile *cfgfile) { @@ -87,9 +93,13 @@ static int test_cfgfile_sample1(void) { struct rte_cfgfile *cfgfile; + char filename[PATH_MAX]; int ret; - cfgfile = rte_cfgfile_load(CFG_FILES_ETC "/sample1.ini", 0); + ret = make_tmp_file(filename, "sample1", sample1_ini); + TEST_ASSERT_SUCCESS(ret, "Failed to setup temp file"); + + cfgfile = rte_cfgfile_load(filename, 0); TEST_ASSERT_NOT_NULL(cfgfile, "Failed to load config file"); ret = _test_cfgfile_sample(cfgfile); @@ -98,6 +108,8 @@ test_cfgfile_sample1(void) ret = rte_cfgfile_close(cfgfile); TEST_ASSERT_SUCCESS(ret, "Failed to close cfgfile"); + remove(filename); + return 0; } @@ -106,15 +118,18 @@ test_cfgfile_sample2(void) { struct rte_cfgfile_parameters params; struct rte_cfgfile *cfgfile; + char filename[PATH_MAX]; int ret; + ret = make_tmp_file(filename, "sample2", sample2_ini); + TEST_AS
[PATCH v5 4/4] test: rearrange test_cfgfiles cases
The input files don't need to be in a separate subdirectory. Signed-off-by: Stephen Hemminger --- app/test/test_cfgfiles/{etc => }/empty.ini | 0 .../test_cfgfiles/{etc => }/empty_key_value.ini | 0 .../test_cfgfiles/{etc => }/invalid_section.ini | 0 .../test_cfgfiles/{etc => }/line_too_long.ini| 0 app/test/test_cfgfiles/meson.build | 16 .../test_cfgfiles/{etc => }/missing_section.ini | 0 .../test_cfgfiles/{etc => }/realloc_sections.ini | 0 app/test/test_cfgfiles/{etc => }/sample1.ini | 0 app/test/test_cfgfiles/{etc => }/sample2.ini | 0 9 files changed, 8 insertions(+), 8 deletions(-) rename app/test/test_cfgfiles/{etc => }/empty.ini (100%) rename app/test/test_cfgfiles/{etc => }/empty_key_value.ini (100%) rename app/test/test_cfgfiles/{etc => }/invalid_section.ini (100%) rename app/test/test_cfgfiles/{etc => }/line_too_long.ini (100%) rename app/test/test_cfgfiles/{etc => }/missing_section.ini (100%) rename app/test/test_cfgfiles/{etc => }/realloc_sections.ini (100%) rename app/test/test_cfgfiles/{etc => }/sample1.ini (100%) rename app/test/test_cfgfiles/{etc => }/sample2.ini (100%) diff --git a/app/test/test_cfgfiles/etc/empty.ini b/app/test/test_cfgfiles/empty.ini similarity index 100% rename from app/test/test_cfgfiles/etc/empty.ini rename to app/test/test_cfgfiles/empty.ini diff --git a/app/test/test_cfgfiles/etc/empty_key_value.ini b/app/test/test_cfgfiles/empty_key_value.ini similarity index 100% rename from app/test/test_cfgfiles/etc/empty_key_value.ini rename to app/test/test_cfgfiles/empty_key_value.ini diff --git a/app/test/test_cfgfiles/etc/invalid_section.ini b/app/test/test_cfgfiles/invalid_section.ini similarity index 100% rename from app/test/test_cfgfiles/etc/invalid_section.ini rename to app/test/test_cfgfiles/invalid_section.ini diff --git a/app/test/test_cfgfiles/etc/line_too_long.ini b/app/test/test_cfgfiles/line_too_long.ini similarity index 100% rename from app/test/test_cfgfiles/etc/line_too_long.ini rename to 
app/test/test_cfgfiles/line_too_long.ini diff --git a/app/test/test_cfgfiles/meson.build b/app/test/test_cfgfiles/meson.build index 068b61044a..348c78c7d9 100644 --- a/app/test/test_cfgfiles/meson.build +++ b/app/test/test_cfgfiles/meson.build @@ -1,14 +1,14 @@ # SPDX-License-Identifier: BSD-3-Clause test_cfgfiles = files( -'etc/empty.ini', -'etc/empty_key_value.ini', -'etc/invalid_section.ini', -'etc/line_too_long.ini', -'etc/missing_section.ini', -'etc/realloc_sections.ini', -'etc/sample1.ini', -'etc/sample2.ini', +'empty.ini', +'empty_key_value.ini', +'invalid_section.ini', +'line_too_long.ini', +'missing_section.ini', +'realloc_sections.ini', +'sample1.ini', +'sample2.ini', ) # generate the header file used in cfgfile test diff --git a/app/test/test_cfgfiles/etc/missing_section.ini b/app/test/test_cfgfiles/missing_section.ini similarity index 100% rename from app/test/test_cfgfiles/etc/missing_section.ini rename to app/test/test_cfgfiles/missing_section.ini diff --git a/app/test/test_cfgfiles/etc/realloc_sections.ini b/app/test/test_cfgfiles/realloc_sections.ini similarity index 100% rename from app/test/test_cfgfiles/etc/realloc_sections.ini rename to app/test/test_cfgfiles/realloc_sections.ini diff --git a/app/test/test_cfgfiles/etc/sample1.ini b/app/test/test_cfgfiles/sample1.ini similarity index 100% rename from app/test/test_cfgfiles/etc/sample1.ini rename to app/test/test_cfgfiles/sample1.ini diff --git a/app/test/test_cfgfiles/etc/sample2.ini b/app/test/test_cfgfiles/sample2.ini similarity index 100% rename from app/test/test_cfgfiles/etc/sample2.ini rename to app/test/test_cfgfiles/sample2.ini -- 2.43.0
RE: [PATCH] netvsc: optimize stats counters performance
> Subject: [PATCH] netvsc: optimize stats counters performance > > Optimized the performance of updating the statistics counters by reducing the > number of branches. > > Ordered the packet size comparisons according to the probability with typical > internet traffic mix. > > Signed-off-by: Morten Brørup > --- > drivers/net/netvsc/hn_rxtx.c | 32 ++-- > 1 file changed, 10 insertions(+), 22 deletions(-) > > diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c index > 9bf1ec5509..b704b2c971 100644 > --- a/drivers/net/netvsc/hn_rxtx.c > +++ b/drivers/net/netvsc/hn_rxtx.c > @@ -110,30 +110,18 @@ hn_update_packet_stats(struct hn_stats *stats, > const struct rte_mbuf *m) > uint32_t s = m->pkt_len; > const struct rte_ether_addr *ea; > > - if (s == 64) { > - stats->size_bins[1]++; > - } else if (s > 64 && s < 1024) { > - uint32_t bin; > - > - /* count zeros, and offset into correct bin */ > - bin = (sizeof(s) * 8) - rte_clz32(s) - 5; > - stats->size_bins[bin]++; > - } else { > - if (s < 64) > - stats->size_bins[0]++; > - else if (s < 1519) > - stats->size_bins[6]++; > - else > - stats->size_bins[7]++; > - } > + if (s >= 1024) > + stats->size_bins[6 + (s > 1518)]++; > + else if (s <= 64) > + stats->size_bins[s >> 6]++; > + else > + stats->size_bins[32UL - rte_clz32(s) - 5]++; This part looks good. > > ea = rte_pktmbuf_mtod(m, const struct rte_ether_addr *); > - if (rte_is_multicast_ether_addr(ea)) { > - if (rte_is_broadcast_ether_addr(ea)) > - stats->broadcast++; > - else > - stats->multicast++; > - } > + RTE_BUILD_BUG_ON(offsetof(struct hn_stats, broadcast) != > + offsetof(struct hn_stats, multicast) + > sizeof(uint64_t)); > + if (unlikely(rte_is_multicast_ether_addr(ea))) > + (&stats->multicast)[rte_is_broadcast_ether_addr(ea)]++; > } This makes the code a little harder to read. How about just add "unlikely" to rte_is_multicast_ether_addr(ea) and keep the rest unchanged? Thank you, Long
Re: [PATCH v5 4/4] test: rearrange test_cfgfiles cases
On Fri, Aug 02, 2024 at 09:45:03AM -0700, Stephen Hemminger wrote: > The input files don't need to be in a separate subdirectory. > > Signed-off-by: Stephen Hemminger Small suggestion - I'd move this up to be patch 3, rather than patch 4, which would save editing the list in the meson.build file to remove the "etc/" prefix Acked-by: Bruce Richardson > --- > app/test/test_cfgfiles/{etc => }/empty.ini | 0 > .../test_cfgfiles/{etc => }/empty_key_value.ini | 0 > .../test_cfgfiles/{etc => }/invalid_section.ini | 0 > .../test_cfgfiles/{etc => }/line_too_long.ini| 0 > app/test/test_cfgfiles/meson.build | 16 > .../test_cfgfiles/{etc => }/missing_section.ini | 0 > .../test_cfgfiles/{etc => }/realloc_sections.ini | 0 > app/test/test_cfgfiles/{etc => }/sample1.ini | 0 > app/test/test_cfgfiles/{etc => }/sample2.ini | 0 > 9 files changed, 8 insertions(+), 8 deletions(-) > rename app/test/test_cfgfiles/{etc => }/empty.ini (100%) > rename app/test/test_cfgfiles/{etc => }/empty_key_value.ini (100%) > rename app/test/test_cfgfiles/{etc => }/invalid_section.ini (100%) > rename app/test/test_cfgfiles/{etc => }/line_too_long.ini (100%) > rename app/test/test_cfgfiles/{etc => }/missing_section.ini (100%) > rename app/test/test_cfgfiles/{etc => }/realloc_sections.ini (100%) > rename app/test/test_cfgfiles/{etc => }/sample1.ini (100%) > rename app/test/test_cfgfiles/{etc => }/sample2.ini (100%) > > diff --git a/app/test/test_cfgfiles/etc/empty.ini > b/app/test/test_cfgfiles/empty.ini > similarity index 100% > rename from app/test/test_cfgfiles/etc/empty.ini > rename to app/test/test_cfgfiles/empty.ini > diff --git a/app/test/test_cfgfiles/etc/empty_key_value.ini > b/app/test/test_cfgfiles/empty_key_value.ini > similarity index 100% > rename from app/test/test_cfgfiles/etc/empty_key_value.ini > rename to app/test/test_cfgfiles/empty_key_value.ini > diff --git a/app/test/test_cfgfiles/etc/invalid_section.ini > b/app/test/test_cfgfiles/invalid_section.ini > similarity index 100% > rename 
from app/test/test_cfgfiles/etc/invalid_section.ini > rename to app/test/test_cfgfiles/invalid_section.ini > diff --git a/app/test/test_cfgfiles/etc/line_too_long.ini > b/app/test/test_cfgfiles/line_too_long.ini > similarity index 100% > rename from app/test/test_cfgfiles/etc/line_too_long.ini > rename to app/test/test_cfgfiles/line_too_long.ini > diff --git a/app/test/test_cfgfiles/meson.build > b/app/test/test_cfgfiles/meson.build > index 068b61044a..348c78c7d9 100644 > --- a/app/test/test_cfgfiles/meson.build > +++ b/app/test/test_cfgfiles/meson.build > @@ -1,14 +1,14 @@ > # SPDX-License-Identifier: BSD-3-Clause > > test_cfgfiles = files( > -'etc/empty.ini', > -'etc/empty_key_value.ini', > -'etc/invalid_section.ini', > -'etc/line_too_long.ini', > -'etc/missing_section.ini', > -'etc/realloc_sections.ini', > -'etc/sample1.ini', > -'etc/sample2.ini', > +'empty.ini', > +'empty_key_value.ini', > +'invalid_section.ini', > +'line_too_long.ini', > +'missing_section.ini', > +'realloc_sections.ini', > +'sample1.ini', > +'sample2.ini', > ) > > # generate the header file used in cfgfile test > diff --git a/app/test/test_cfgfiles/etc/missing_section.ini > b/app/test/test_cfgfiles/missing_section.ini > similarity index 100% > rename from app/test/test_cfgfiles/etc/missing_section.ini > rename to app/test/test_cfgfiles/missing_section.ini > diff --git a/app/test/test_cfgfiles/etc/realloc_sections.ini > b/app/test/test_cfgfiles/realloc_sections.ini > similarity index 100% > rename from app/test/test_cfgfiles/etc/realloc_sections.ini > rename to app/test/test_cfgfiles/realloc_sections.ini > diff --git a/app/test/test_cfgfiles/etc/sample1.ini > b/app/test/test_cfgfiles/sample1.ini > similarity index 100% > rename from app/test/test_cfgfiles/etc/sample1.ini > rename to app/test/test_cfgfiles/sample1.ini > diff --git a/app/test/test_cfgfiles/etc/sample2.ini > b/app/test/test_cfgfiles/sample2.ini > similarity index 100% > rename from app/test/test_cfgfiles/etc/sample2.ini > rename 
to app/test/test_cfgfiles/sample2.ini > -- > 2.43.0 >
Re: [PATCH v5 3/4] test: restore cfgfile tests
On Fri, Aug 02, 2024 at 09:45:02AM -0700, Stephen Hemminger wrote: > These tests were not built since the conversion to meson. > Instead of using embedded resource functions, put data in include > file and generate temporary file before the test. > > The changes to app/test/meson.build are to handle auto-generated > files (resources) differently. Don't scan these files to look > for test input. > > Using common unit test macro allows for simpler management > of more tests. > > Signed-off-by: Stephen Hemminger Acked-by: Bruce Richardson
Re: [PATCH v2 0/7] record and rework component dependencies
On 8/2/2024 1:44 PM, Bruce Richardson wrote: > As part of the meson build, we can record the dependencies for each > component as we process it, logging them to a file. This file can be > used as input to a number of other scripts and tools, for example, to > graph the dependencies, or to allow higher-level build-config tools to > automatically enable component requirements, etc. > > The first patch of this set generates the basic dependency tree. The > second patch does some processing of that dependency tree to identify > cases where dependencies are being unnecessarily specified. Reducing > these makes it easier to have readable dependency graphs in future, > without affecting the build. > > The following 4 patches are based on the output of the second patch, and > greatly cut down the number of direct dependency links between > components. Even with the cut-down dependencies, the full dependency > graph is nigh-unreadable, so the final patch adds a new script to > generate dependency tree subgraphs, creating dot files for e.g. the > dependencies of a particular component, or a component class such as > mempool drivers. > > Bruce Richardson (7): > build: output a dependency log in build directory > devtools: add script to flag unneeded dependencies > build: remove kvargs from driver class dependencies > build: reduce library dependencies > build: reduce driver dependencies > build: reduce app dependencies > devtools: add script to generate DPDK dependency graphs > Tested-by: Ferruh Yigit Thanks for the update, output is now easier to consume with the help of the 'devtools/draw-dependency-graphs.py' script. A detail but script is not actually drawing graph, but parsing .dot file, name is a little misleading. Also for the patches that are converting explicit dependency to implicit dependency, I can see the benefit for the meson and graph. 
But there is also value in keeping explicit dependencies: they were an easy way to document a component's dependencies for developers. So I am not really sure which one is better.
RE: [PATCH] netvsc: optimize stats counters performance
> From: Long Li [mailto:lon...@microsoft.com] > Sent: Friday, 2 August 2024 18.49 > > > Subject: [PATCH] netvsc: optimize stats counters performance > > > > Optimized the performance of updating the statistics counters by > reducing the > > number of branches. > > > > Ordered the packet size comparisons according to the probability with > typical > > internet traffic mix. > > > > Signed-off-by: Morten Brørup > > --- > > drivers/net/netvsc/hn_rxtx.c | 32 ++-- > > 1 file changed, 10 insertions(+), 22 deletions(-) > > > > diff --git a/drivers/net/netvsc/hn_rxtx.c > b/drivers/net/netvsc/hn_rxtx.c index > > 9bf1ec5509..b704b2c971 100644 > > --- a/drivers/net/netvsc/hn_rxtx.c > > +++ b/drivers/net/netvsc/hn_rxtx.c > > @@ -110,30 +110,18 @@ hn_update_packet_stats(struct hn_stats *stats, > > const struct rte_mbuf *m) > > uint32_t s = m->pkt_len; > > const struct rte_ether_addr *ea; > > > > - if (s == 64) { > > - stats->size_bins[1]++; > > - } else if (s > 64 && s < 1024) { > > - uint32_t bin; > > - > > - /* count zeros, and offset into correct bin */ > > - bin = (sizeof(s) * 8) - rte_clz32(s) - 5; > > - stats->size_bins[bin]++; > > - } else { > > - if (s < 64) > > - stats->size_bins[0]++; > > - else if (s < 1519) > > - stats->size_bins[6]++; > > - else > > - stats->size_bins[7]++; > > - } > > + if (s >= 1024) > > + stats->size_bins[6 + (s > 1518)]++; > > + else if (s <= 64) > > + stats->size_bins[s >> 6]++; > > + else > > + stats->size_bins[32UL - rte_clz32(s) - 5]++; > > This part looks good. 
> > > > ea = rte_pktmbuf_mtod(m, const struct rte_ether_addr *); > > - if (rte_is_multicast_ether_addr(ea)) { > > - if (rte_is_broadcast_ether_addr(ea)) > > - stats->broadcast++; > > - else > > - stats->multicast++; > > - } > > + RTE_BUILD_BUG_ON(offsetof(struct hn_stats, broadcast) != > > + offsetof(struct hn_stats, multicast) + > sizeof(uint64_t)); > > + if (unlikely(rte_is_multicast_ether_addr(ea))) > > + (&stats->multicast)[rte_is_broadcast_ether_addr(ea)]++; > > } > > This makes the code a little harder to read. I agree it is somewhat convoluted. It's a tradeoff... I preferred performance at the cost of making the code somewhat harder to read. The RTE_BUILD_BUG_ON() also helps show what is going on with the weird indexing. > How about just add > "unlikely" to rte_is_multicast_ether_addr(ea) and keep the rest > unchanged? Then I could also add "unlikely" to is_broadcast(). In theory, there should be minimal broadcast traffic in any normal network. (Except when experiencing broadcast storms due to network loops or other network problems.) But in reality, I think most real life networks see less multicast (non-broadcast) than broadcast traffic. I don't think the following alternative makes the code significantly more readable than the method in this patch, but I'll mention it for the sake of discussion: I could modify the hn_stats type like this: struct hn_stats { other fields... + union { + struct { uint64_t multicast; uint64_t broadcast; + }; + uint64_t multicast_broadcast[2]; + }; other fields... }; So the code would become: + if (unlikely(rte_is_multicast_ether_addr(ea))) + stats->multicast_broadcast[rte_is_broadcast_ether_addr(ea)]++; Whatever we decide, we should use the same method in all three patches (netvsc, virtio, vhost-user). The choices are: 1. Stick with the original code (with two branches), and add unlikely(). 2. Use the method provided in this patch (with only one branch). 3.
Introduce a multicast_broadcast[2] to the stats structure as described above (also with only one branch). @Long, @Wei, @Maxime, @Chenbo, @Stephen and anyone else interested, please cast your votes. Comments are also welcome! :-) I'm in favor of #2.
[RFC PATCH v1 0/3] dts: port over stats checks
From: Jeremy Spewock This series ports over the functionality of the stats_checks test suite from old DTS, but I left it as an RFC just because the verification is different than other test suites that we have written. Mainly because verifying the accuracy of the port statistics while accounting for noise on the wire is not the most straight-forward task. The way I decided to differentiate noise from valid packets in this suite was I used the MAC addresses of the packets and the software packet types that are provided in the verbose output of testpmd. Another idea for how to do this that I tried was using packet checksums. I wanted originally to send packets with bad checksums and assume that noise on the wire would either have a valid checksum or no checksum at all, but this unfortunately only works for the RX side of verbose output as the TX side does not reflect the same checksum information. Jeremy Spewock (3): dts: add clearing port stats and verbose mode to testpmd dts: add port stats checks test suite dts: add stats checks to schemai dts/framework/config/conf_yaml_schema.json| 3 +- dts/framework/remote_session/testpmd_shell.py | 62 +++ dts/tests/TestSuite_port_stats_checks.py | 156 ++ 3 files changed, 220 insertions(+), 1 deletion(-) create mode 100644 dts/tests/TestSuite_port_stats_checks.py -- 2.45.2
[RFC PATCH v1 1/3] dts: add clearing port stats and verbose mode to testpmd
From: Jeremy Spewock Methods currently exist for querying the statistics of a port in testpmd, but there weren't methods added for clearing the current statistics on a port. This patch adds methods that allow you to clear the statistics of a single port or all ports to account for situations where the user only wants the port statistics after a certain point and does not care about any existing prior values. This patch also contains methods for modifying the verbose level of testpmd so that users are able to utilize the extra information that it provides. Depends-on: patch-142762 ("dts: add text parser for testpmd verbose output") Signed-off-by: Jeremy Spewock --- dts/framework/remote_session/testpmd_shell.py | 62 +++ 1 file changed, 62 insertions(+) diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py index dedf1553cf..cbea03464f 100644 --- a/dts/framework/remote_session/testpmd_shell.py +++ b/dts/framework/remote_session/testpmd_shell.py @@ -948,6 +948,68 @@ def extract_verbose_output(output: str) -> list[TestPmdVerboseOutput]: iter = re.finditer(r"(port \d+/queue \d+:.*?(?=port \d+/queue \d+|$))", output, re.S) return [TestPmdVerboseOutput.parse(s.group(0)) for s in iter] +def clear_port_stats(self, port_id: int, verify: bool = True) -> None: +"""Clear statistics of a given port. + +Args: +port_id: ID of the port to clear the statistics on. +verify: If :data:`True` the output of the command will be scanned to verify that it was +successful, otherwise failures will be ignored. Defaults to :data:`True`. + +Raises: +InteractiveCommandExecutionError: If `verify` is :data:`True` and testpmd fails to +clear the statistics of the given port. 
+""" +clear_output = self.send_command(f"clear port stats {port_id}") +if verify and f"NIC statistics for port {port_id} cleared" not in clear_output: +raise InteractiveCommandExecutionError( +f"Test pmd failed to set clear forwarding stats on port {port_id}" +) + +def clear_port_stats_all(self, verify: bool = True) -> None: +"""Clear the statistics of all ports that testpmd is aware of. + +Args: +verify: If :data:`True` the output of the command will be scanned to verify that all +ports had their statistics cleared, otherwise failures will be ignored. Defaults to +:data:`True`. + +Raises: +InteractiveCommandExecutionError: If `verify` is :data:`True` and testpmd fails to +clear the statistics of any of its ports. +""" +clear_output = self.send_command("clear port stats all") +if verify: +if type(self._app_params.ports) is list: +for port_id in range(len(self._app_params.ports)): +if f"NIC statistics for port {port_id} cleared" not in clear_output: +raise InteractiveCommandExecutionError( +f"Test pmd failed to set clear forwarding stats on port {port_id}" +) + +def set_verbose(self, level: int, verify: bool = True) -> None: +"""Set debug verbosity level. + +Args: +level: 0 - silent except for error +1 - fully verbose except for Tx packets +2 - fully verbose except for Rx packets +>2 - fully verbose +verify: if :data:`True` an additional command will be sent to verify that verbose level +is properly set. Defaults to :data:`True`. + +Raises: +InteractiveCommandExecutionError: If `verify` is :data:`True` and verbose level +is not correctly set. +""" +verbose_output = self.send_command(f"set verbose {level}") +if verify: +if "Change verbose level" not in verbose_output: +self._logger.debug(f"Failed to set verbose level to {level}: \n{verbose_output}") +raise InteractiveCommandExecutionError( +f"Testpmd failed to set verbose level to {level}." +) + def _close(self) -> None: """Overrides :meth:`~.interactive_shell.close`.""" self.stop() -- 2.45.2
[RFC PATCH v1 2/3] dts: add port stats checks test suite
From: Jeremy Spewock This patch adds a new test suite to DTS that validates the accuracy of the port statistics using testpmd. The functionality is tested by sending a packet of a fixed size to the SUT and verifying that the statistics for packets received, received bytes, packets sent, and sent bytes all update accordingly. Depends-on: patch-142762 ("dts: add text parser for testpmd verbose output") Signed-off-by: Jeremy Spewock --- dts/tests/TestSuite_port_stats_checks.py | 156 +++ 1 file changed, 156 insertions(+) create mode 100644 dts/tests/TestSuite_port_stats_checks.py diff --git a/dts/tests/TestSuite_port_stats_checks.py b/dts/tests/TestSuite_port_stats_checks.py new file mode 100644 index 00..71e1c7906f --- /dev/null +++ b/dts/tests/TestSuite_port_stats_checks.py @@ -0,0 +1,156 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2024 University of New Hampshire + +"""Port Statistics testing suite. + +This test suite tests the functionality of querying the statistics of a port and verifies that the +values provided in the statistics accurately reflect the traffic that has been handled on the port. +This is shown by sending a packet of a fixed size to the SUT and verifying that the number of RX +packets has increased by 1, the number of RX bytes has increased by the specified size, the number +of TX packets has also increased by 1 (since we expect the packet to be forwarded), and the number +of TX bytes has also increased by the same fixed amount.
+""" + +from typing import ClassVar, Tuple + +from scapy.layers.inet import IP # type: ignore[import-untyped] +from scapy.layers.l2 import Ether # type: ignore[import-untyped] +from scapy.packet import Packet, Raw # type: ignore[import-untyped] + +from framework.params.testpmd import SimpleForwardingModes +from framework.remote_session.testpmd_shell import TestPmdShell, TestPmdVerboseOutput +from framework.test_suite import TestSuite + + +class TestPortStatsChecks(TestSuite): +"""DPDK Port statistics testing suite. + +Support for port statistics is tested by sending a packet of a fixed size denoted by +`total_packet_len` and verifying the that TX/RX packets of the TX/RX ports updated by exactly +1 and the TX/RX bytes of the TX/RX ports updated by exactly `total_packet_len`. This is done by +finding the total amount of packets that were sent/received which did not originate from this +test suite and taking the sum of the lengths of each of these "noise" packets and subtracting +it from the total values in the port statistics so that all that is left are relevant values. +""" + +#: Port where traffic will be received on the SUT. +recv_port: ClassVar[int] = 0 +#: Port where traffic will be sent from on the SUT. +send_port: ClassVar[int] = 1 + +#: +ip_header_len: ClassVar[int] = 20 +#: +ether_header_len: ClassVar[int] = 14 + +#: Length of the packet being sent including the IP and frame headers. +total_packet_len: ClassVar[int] = 100 +#: Packet to send during testing. +send_pkt: ClassVar[Packet] = ( +Ether() / IP() / Raw("X" * (total_packet_len - ip_header_len - ether_header_len)) +) + +def extract_noise_information( +self, verbose_out: list[TestPmdVerboseOutput] +) -> Tuple[int, int, int, int]: +"""Extract information about packets that were not sent by the framework in `verbose_out`. + +Extract the number of sent/received packets that did not originate from this test suite as +well as the sum of the lengths of said "noise" packets. 
Note that received packets are only +examined on the port with the ID `self.recv_port` since these are the receive stats that +will be analyzed in this suite. Sent packets are also only examined on the port with the ID +`self.send_port`. + +Packets are considered to be "noise" when they don't match the expected structure of the +packets that are being sent by this test suite. Specifically, the source and destination +MAC addresses as well as the software packet type are checked on packets received by +testpmd to ensure they match the proper addresses of the TG and SUT nodes. Packets that are +sent by testpmd, however, only check the source MAC address and the software packet type. +This is because MAC forwarding mode adjusts both addresses, but only the source will belong +to the TG or SUT node. + +Args: +verbose_out: Parsed testpmd verbose output to collect the noise information from. + +Returns: +A tuple containing the total size of received noise in bytes, the number of received +noise packets, size of all noise packets sent by testpmd in bytes, and the number of +noise packets sent by testpmd. +""" +recv_noise_bytes = 0 +recv_noise_packets = 0 +sent_no
[RFC PATCH v1 3/3] dts: add stats checks to schema
From: Jeremy Spewock Adding the test suite to the yaml schema allows for users to specify it in their conf.yaml files and run the suite in their test runs. Signed-off-by: Jeremy Spewock --- dts/framework/config/conf_yaml_schema.json | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json index f02a310bb5..8ecfa2a145 100644 --- a/dts/framework/config/conf_yaml_schema.json +++ b/dts/framework/config/conf_yaml_schema.json @@ -187,7 +187,8 @@ "enum": [ "hello_world", "os_udp", -"pmd_buffer_scatter" +"pmd_buffer_scatter", +"port_stats_checks" ] }, "test_target": { -- 2.45.2
Re: [PATCH v2 1/1] dts: add text parser for testpmd verbose output
On Fri, Aug 2, 2024 at 10:54 AM Nicholas Pratte wrote: > > > > You're right that in most cases it would come from the stop output, > > but the output from that stop command has one other thing as well that > > I would consider valuable which is statistics of packets handled by > > ports specifically for the duration of the packet forwarding you are > > stopping. It is also true that sending other testpmd commands while > > verbose output is being sent will change what is collected, so I > > didn't want to tie the method specifically to the stop command since > > if you did a start command then checked port statistics for example, > > it would consume all of the verbose output up until the command to > > check port statistics. > > > > For the reason stated above I think it actually might make sense to > > make it so that every testpmd method (including ones that currently > > return dataclasses) return their original, unmodified collected output > > from the testpmd shell as well. Things like port stats can return both > > in a tuple potentially. This way if there is asynchronous output like > > there is with verbose output other commands don't completely remove > > it. > > I agree! I think giving each testpmd method its own output would add > more consistency. An idea I had floating around that kind of relates > to your suggestion above was introducing some instance variables that > could enable the testpmd shell object to be smart enough to > automatically scan, and keep a record of, any verbose output that > comes out across any command run. The TestPMDShell class could track > whether verbose mode is on or not, and if True, run additional logic > to scan for verbose output and add it to a data structure for access > every time a command is run. Then users, from the perspective of > writing a test suite, could do something like 'output in > testpmd.verbose_output', where verbose_output is an instance data > structure of the TestPMDShell. 
This might be overcomplicated to > implement, but it was an idea I had that might make using verbose mode > more streamlined. What are your thoughts? I like the sound of this idea a lot actually since it would remove the chance of the output just completely being thrown away. In my own test suite I managed to dance around this by strategically placing my testpmd commands, but this could save people some headache in the future. I feel like this wouldn't be something overly complicated to implement either: all we would have to do is extend the send_command method in the TestpmdShell class, check a boolean for whether verbose is on, and if so extract the verbose output. If/how to clear this list would be something to think about, but I would say that, in general, the idea of making sure we don't lose information is something that I'm all for.
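A minimal sketch of that idea follows. The transport callable, class name, and line format here are simplified assumptions rather than the real DTS API; real testpmd verbose packet lines do begin with "port <id>/queue <id>:", which is what the filter keys on:

```python
class RecordingShell:
    """Toy testpmd shell that harvests verbose lines from every command's output."""

    def __init__(self, transport):
        self._transport = transport  # callable: command string -> raw output
        self.verbose_on = False
        self.verbose_output: list[str] = []

    def send_command(self, cmd: str) -> str:
        raw = self._transport(cmd)
        out_lines = []
        for line in raw.splitlines():
            # verbose packet lines in testpmd start with "port <id>/queue <id>:"
            if self.verbose_on and line.startswith("port "):
                self.verbose_output.append(line)
            else:
                out_lines.append(line)
        # track verbose state so later commands know whether to harvest
        if cmd.startswith("set verbose"):
            self.verbose_on = not cmd.endswith(" 0")
        return "\n".join(out_lines)
```

With this shape, any command sent while verbose mode is active has its asynchronous verbose lines captured into `verbose_output` instead of being silently consumed.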
[PATCH v6 0/4] Add network packet dissector
While debugging TAP rte_flow discovered that test pmd verbose output was confusing and unhelpful. Instead, made a simple dissector that prints one line per packet like this in test-pmd with verbose level 4. v6 - validate result of rte_dissect for simple packet v5 - breakout the additional ICMP types into header - fix some decoding bugs in ARP and ICMP Stephen Hemminger (4): net: add more icmp types net: add new packet dissector test: add test for packet dissector test-pmd: add more packet verbose decode options app/test-pmd/cmdline_flow.c | 3 +- app/test-pmd/config.c | 33 +- app/test-pmd/testpmd.h | 11 + app/test-pmd/util.c | 77 +++- app/test/meson.build| 1 + app/test/test_dissect.c | 253 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 5 +- lib/net/meson.build | 2 + lib/net/rte_dissect.c | 416 lib/net/rte_dissect.h | 42 ++ lib/net/rte_icmp.h | 22 +- lib/net/version.map | 7 + 12 files changed, 850 insertions(+), 22 deletions(-) create mode 100644 app/test/test_dissect.c create mode 100644 lib/net/rte_dissect.c create mode 100644 lib/net/rte_dissect.h -- 2.43.0
[PATCH v6 1/4] net: add more icmp types
Add more defines for additional ICMP types. Signed-off-by: Stephen Hemminger --- lib/net/rte_icmp.h | 22 ++ 1 file changed, 18 insertions(+), 4 deletions(-) diff --git a/lib/net/rte_icmp.h b/lib/net/rte_icmp.h index 4bf64d70ad..b51b60a6d2 100644 --- a/lib/net/rte_icmp.h +++ b/lib/net/rte_icmp.h @@ -54,10 +54,24 @@ struct rte_icmp_hdr { } __rte_packed; /* ICMP packet types */ -#define RTE_IP_ICMP_ECHO_REPLY 0 -#define RTE_IP_ICMP_ECHO_REQUEST 8 -#define RTE_ICMP6_ECHO_REQUEST 128 -#define RTE_ICMP6_ECHO_REPLY 129 +#define RTE_IP_ICMP_ECHO_REPLY 0 +#define RTE_IP_ICMP_DEST_UNREACH 3 +#define RTE_IP_ICMP_SOURCE_QUENCH 4 +#define RTE_IP_ICMP_REDIRECT 5 +#define RTE_IP_ICMP_ECHO_REQUEST 8 +#define RTE_IP_ICMP_TIME_EXCEEDED 11 +#define RTE_IP_ICMP_PARAMETERPROB 12 +#define RTE_IP_ICMP_TIMESTAMP 13 +#define RTE_IP_ICMP_TIMESTAMPREPLY 14 +#define RTE_IP_ICMP_INFO_REQUEST 15 +#define RTE_IP_ICMP_INFO_REPLY 16 + +#define RTE_ICMP6_ECHO_REQUEST 128 +#define RTE_ICMP6_ECHO_REPLY 129 +#define RTE_ND_ROUTER_SOLICIT 133 +#define RTE_ND_ROUTER_ADVERT 134 +#define RTE_ND_NEIGHBOR_SOLICIT 135 +#define RTE_ND_NEIGHBOR_ADVERT 136 #ifdef __cplusplus } -- 2.43.0
[PATCH v6 2/4] net: add new packet dissector
The function rte_dissect_mbuf is used to decode the contents of an mbuf into a human-readable format similar to what tshark uses. For now, it handles IP, IPv6, TCP, UDP, ICMP and ARP. Signed-off-by: Stephen Hemminger --- lib/net/meson.build | 2 + lib/net/rte_dissect.c | 416 ++ lib/net/rte_dissect.h | 42 + lib/net/version.map | 7 + 4 files changed, 467 insertions(+) create mode 100644 lib/net/rte_dissect.c create mode 100644 lib/net/rte_dissect.h diff --git a/lib/net/meson.build b/lib/net/meson.build index 0b69138949..48edf17ea3 100644 --- a/lib/net/meson.build +++ b/lib/net/meson.build @@ -2,6 +2,7 @@ # Copyright(c) 2017-2020 Intel Corporation headers = files( +'rte_dissect.h', 'rte_ip.h', 'rte_tcp.h', 'rte_udp.h', @@ -30,6 +31,7 @@ headers = files( sources = files( 'rte_arp.c', +'rte_dissect.c', 'rte_ether.c', 'rte_net.c', 'rte_net_crc.c', diff --git a/lib/net/rte_dissect.c b/lib/net/rte_dissect.c new file mode 100644 index 00..ae86f7591e --- /dev/null +++ b/lib/net/rte_dissect.c @@ -0,0 +1,416 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 Stephen Hemminger + * + * Print packets in format similar to tshark. + * Output should be one line per mbuf + */ + +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* Forward declaration - Ethernet can be nested */ +static void +dissect_eth(char *buf, size_t size, const struct rte_mbuf *mb, + uint32_t offset, uint32_t dump_len); + +/* + * Read data from segmented mbuf and put it into buf, but stop if it would go past the max length + * See rte_pktmbuf_read() + */ +static const void * +dissect_read(const struct rte_mbuf *m, uint32_t offset, uint32_t len, +void *buf, uint32_t dump_len) +{ + /* If this header would be past the requested length +* then unwind back to end the string.
+*/ + if (dump_len > 0 && offset + len > dump_len) + return NULL; + + return rte_pktmbuf_read(m, offset, len, buf); +} + +/* + * Print to string buffer and adjust result + * Returns true on success, false if buffer is exhausted. + */ +static __rte_format_printf(3, 4) bool +dissect_print(char **buf, size_t *sz, const char *fmt, ...) +{ + va_list ap; + int cnt; + + va_start(ap, fmt); + cnt = vsnprintf(*buf, *sz, fmt, ap); + va_end(ap); + + /* error or string is full */ + if (cnt < 0 || cnt >= (int)*sz) { + *sz = 0; + return false; + } + + *buf += cnt; + *sz -= cnt; + return true; +} + +static void +dissect_arp(char *buf, size_t size, const struct rte_mbuf *mb, uint32_t offset, uint32_t dump_len) +{ + const struct rte_arp_hdr *arp; + struct rte_arp_hdr _arp; + char abuf[64]; + + arp = dissect_read(mb, offset, sizeof(_arp), &_arp, dump_len); + if (arp == NULL) + return; + offset += sizeof(_arp); + + uint16_t ar_op = rte_be_to_cpu_16(arp->arp_opcode); + switch (ar_op) { + case RTE_ARP_OP_REQUEST: + inet_ntop(AF_INET, &arp->arp_data.arp_tip, abuf, sizeof(abuf)); + if (!dissect_print(&buf, &size, "Who has %s? ", abuf)) + return; + + rte_ether_format_addr(abuf, sizeof(abuf), &arp->arp_data.arp_sha); + if (!dissect_print(&buf, &size, "Tell %s ", abuf)) + return; + + break; + case RTE_ARP_OP_REPLY: + inet_ntop(AF_INET, &arp->arp_data.arp_sip, abuf, sizeof(abuf)); + if (!dissect_print(&buf, &size, "%s is at", abuf)) + return; + + rte_ether_format_addr(abuf, sizeof(abuf), &arp->arp_data.arp_sha); + if (!dissect_print(&buf, &size, "%s ", abuf)) + return; + break; + case RTE_ARP_OP_INVREQUEST: + rte_ether_format_addr(abuf, sizeof(abuf), &arp->arp_data.arp_tha); + if (!dissect_print(&buf, &size, "Who is %s? 
", abuf)) + return; + + rte_ether_format_addr(abuf, sizeof(abuf), &arp->arp_data.arp_sha); + if (!dissect_print(&buf, &size, "Tell %s ", abuf)) + return; + break; + + case RTE_ARP_OP_INVREPLY: + rte_ether_format_addr(abuf, sizeof(buf), &arp->arp_data.arp_sha); + if (!dissect_print(&buf, &size, "%s is at ", abuf)) + return; + + inet_ntop(AF_INET, &arp->arp_data.arp_sip, abuf, sizeof(abuf)); + if (!dissect_print(&buf, &size, "%s ", abuf)) + return; + break; + default: + if (!dissect_print(&buf
[PATCH v6 3/4] test: add test for packet dissector
Add some tests for new packet dissector. Signed-off-by: Stephen Hemminger --- app/test/meson.build| 1 + app/test/test_dissect.c | 253 2 files changed, 254 insertions(+) create mode 100644 app/test/test_dissect.c diff --git a/app/test/meson.build b/app/test/meson.build index e29258e6ec..9cd2051320 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -62,6 +62,7 @@ source_file_deps = { 'test_debug.c': [], 'test_devargs.c': ['kvargs'], 'test_dispatcher.c': ['dispatcher'], +'test_dissect.c': ['net'], 'test_distributor.c': ['distributor'], 'test_distributor_perf.c': ['distributor'], 'test_dmadev.c': ['dmadev', 'bus_vdev'], diff --git a/app/test/test_dissect.c b/app/test/test_dissect.c new file mode 100644 index 00..0644d644d5 --- /dev/null +++ b/app/test/test_dissect.c @@ -0,0 +1,253 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2024 Stephen Hemminger + */ + +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "test.h" + +#ifndef LINE_MAX +#define LINE_MAX 2048 +#endif + +#define TOTAL_PACKETS 100 +#define PACKET_LEN 1000 +#define ETH_IP_UDP_VXLAN_SIZE (sizeof(struct rte_ether_hdr) + \ + sizeof(struct rte_ipv4_hdr) + \ + sizeof(struct rte_udp_hdr) + \ + sizeof(struct rte_vxlan_hdr)) + + +static uint16_t port_id; +static const char null_dev[] = "net_null0"; + +static void +add_header(struct rte_mbuf *mb, uint32_t plen, + rte_be16_t src_port, rte_be16_t dst_port) +{ + struct { + struct rte_ether_hdr eth; + struct rte_ipv4_hdr ip; + struct rte_udp_hdr udp; + } pkt = { + .eth = { + .dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4), + }, + .ip = { + .version_ihl = RTE_IPV4_VHL_DEF, + .time_to_live = 1, + .next_proto_id = IPPROTO_UDP, + .src_addr = rte_cpu_to_be_32(RTE_IPV4_LOOPBACK), + .dst_addr = rte_cpu_to_be_32(RTE_IPV4_BROADCAST), + }, + .udp = { + .dst_port = dst_port, + 
.src_port = src_port, + }, + }; + + rte_eth_random_addr(pkt.eth.src_addr.addr_bytes); + + plen -= sizeof(struct rte_ether_hdr); + pkt.ip.total_length = rte_cpu_to_be_16(plen); + pkt.ip.hdr_checksum = rte_ipv4_cksum(&pkt.ip); + + plen -= sizeof(struct rte_ipv4_hdr); + pkt.udp.dgram_len = rte_cpu_to_be_16(plen); + + /* Copy header into mbuf */ + memcpy(rte_pktmbuf_append(mb, sizeof(pkt)), &pkt, sizeof(pkt)); +} + +static void +add_vxlan(struct rte_mbuf *mb, rte_be32_t vni) +{ + struct rte_vxlan_hdr *vxlan; + + vxlan = (struct rte_vxlan_hdr *)rte_pktmbuf_append(mb, sizeof(*vxlan)); + memset(vxlan, 0, sizeof(*vxlan)); + vxlan->flag_i = 1; + vxlan->vx_vni = vni; +} + + +static void +fill_data(struct rte_mbuf *mb, uint32_t len) +{ + uint32_t i; + char *ptr = rte_pktmbuf_append(mb, len); + char c = '!'; + + /* traditional barber pole pattern */ + for (i = 0; i < len; i++) { + ptr[i] = c++; + if (c == 0x7f) + c = '!'; + } +} + +static void +mbuf_prep(struct rte_mbuf *mb, uint8_t buf[], uint32_t buf_len) +{ + mb->buf_addr = buf; + rte_mbuf_iova_set(mb, (uintptr_t)buf); + mb->buf_len = buf_len; + rte_mbuf_refcnt_set(mb, 1); + + /* set pool pointer to dummy value, test doesn't use it */ + mb->pool = (void *)buf; + + rte_pktmbuf_reset(mb); +} + +static int +test_setup(void) +{ + port_id = rte_eth_dev_count_avail(); + + /* Make a dummy null device to snoop on */ + if (rte_vdev_init(null_dev, NULL) != 0) { + fprintf(stderr, "Failed to create vdev '%s'\n", null_dev); + goto fail; + } + return 0; + +fail: + rte_vdev_uninit(null_dev); + return -1; +} + +static void +test_cleanup(void) +{ + rte_vdev_uninit(null_dev); +} + + +static int +test_simple(void) +{ + struct rte_mbuf mb; + uint8_t buf[RTE_MBUF_DEFAULT_BUF_SIZE]; + uint32_t data_len = PACKET_LEN; + rte_be16_t src_port = rte_rand_max(UINT16_MAX); + const rte_be16_t dst_port = rte_cpu_to_be_16(9); /* Discard port */ + char obuf[LINE_MAX] = { }; + char result[LINE_MAX] = { }; + + /* make a dummy packet */ + mbuf_prep(&mb, buf, 
sizeof(buf)); + add_header(&mb, data_len, src_port, dst_port); + fill_data(&mb, data_len - mb.data_
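The test above builds its IPv4 header by hand and fills in the checksum with rte_ipv4_cksum(). For reference, the RFC 1071 one's-complement sum that such a helper computes can be sketched standalone (plain Python here, independent of DPDK):

```python
def ipv4_checksum(header: bytes) -> int:
    """RFC 1071 one's-complement checksum over an IPv4 header.

    The header's checksum field must be zero in the input bytes.
    """
    if len(header) % 2:  # pad odd-length input with a zero byte
        header += b"\x00"
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]  # sum 16-bit big-endian words
    while total >> 16:  # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF  # one's complement of the folded sum
```

For example, the well-known sample header 45 00 00 73 00 00 40 00 40 11 00 00 c0 a8 00 01 c0 a8 00 c7 checksums to 0xb861.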
[PATCH v6 4/4] test-pmd: add more packet verbose decode options
The existing verbose levels 1..3 provide a messy multi-line output per packet. I found this unhelpful when diagnosing many types of problems like packet flow. This patch keeps the previous levels and adds two new levels: 4: one line per packet is printed in a format resembling tshark output. With addresses and protocol info. 5: dump packet in hex. Useful if the driver is messing up the data. Signed-off-by: Stephen Hemminger --- app/test-pmd/cmdline_flow.c | 3 +- app/test-pmd/config.c | 33 ++--- app/test-pmd/testpmd.h | 11 +++ app/test-pmd/util.c | 77 +++-- doc/guides/testpmd_app_ug/testpmd_funcs.rst | 5 +- 5 files changed, 111 insertions(+), 18 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index d04280eb3e..a010fcf61a 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -14143,7 +14143,8 @@ cmd_set_raw_parsed(const struct buffer *in) upper_layer = proto; } } - if (verbose_level & 0x1) + + if (verbose_level > 0) printf("total data size is %zu\n", (*total_size)); RTE_ASSERT((*total_size) <= ACTION_RAW_ENCAP_MAX_DATA); memmove(data, (data_tail - (*total_size)), *total_size); diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 6f0beafa27..b5b5f3b464 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -6246,26 +6246,37 @@ configure_rxtx_dump_callbacks(uint16_t verbose) return; #endif - RTE_ETH_FOREACH_DEV(portid) - { - if (verbose == 1 || verbose > 2) + RTE_ETH_FOREACH_DEV(portid) { + switch (verbose) { + case VERBOSE_OFF: + remove_rx_dump_callbacks(portid); + remove_tx_dump_callbacks(portid); + break; + case VERBOSE_RX: add_rx_dump_callbacks(portid); - else + remove_tx_dump_callbacks(portid); + break; + case VERBOSE_TX: + add_tx_dump_callbacks(portid); remove_rx_dump_callbacks(portid); - if (verbose >= 2) + break; + default: + add_rx_dump_callbacks(portid); add_tx_dump_callbacks(portid); - else - remove_tx_dump_callbacks(portid); + } } } void set_verbose_level(uint16_t vb_level) { 
- printf("Change verbose level from %u to %u\n", - (unsigned int) verbose_level, (unsigned int) vb_level); - verbose_level = vb_level; - configure_rxtx_dump_callbacks(verbose_level); + if (vb_level < VERBOSE_MAX) { + printf("Change verbose level from %u to %u\n", verbose_level, vb_level); + verbose_level = vb_level; + configure_rxtx_dump_callbacks(verbose_level); + } else { + fprintf(stderr, "Verbose level %u is out of range\n", vb_level); + } } void diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 9facd7f281..3d7a2b6dac 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -489,6 +489,17 @@ enum dcb_mode_enable extern uint8_t xstats_hide_zero; /**< Hide zero values for xstats display */ +enum verbose_mode { + VERBOSE_OFF = 0, + VERBOSE_RX, + VERBOSE_TX, + VERBOSE_BOTH, + VERBOSE_DISSECT, + VERBOSE_HEX, + VERBOSE_MAX +}; + + /* globals used for configuration */ extern uint8_t record_core_cycles; /**< Enables measurement of CPU cycles */ extern uint8_t record_burst_stats; /**< Enables display of RX and TX bursts */ diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c index bf9b639d95..f277e7f035 100644 --- a/app/test-pmd/util.c +++ b/app/test-pmd/util.c @@ -5,9 +5,11 @@ #include +#include #include #include #include +#include #include #include #include @@ -16,6 +18,7 @@ #include "testpmd.h" #define MAX_STRING_LEN 8192 +#define MAX_DUMP_LEN 1024 #define MKDUMPSTR(buf, buf_size, cur_len, ...) 
\ do { \ @@ -67,9 +70,10 @@ get_timestamp(const struct rte_mbuf *mbuf) timestamp_dynfield_offset, rte_mbuf_timestamp_t *); } -static inline void -dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[], - uint16_t nb_pkts, int is_rx) +/* More verbose older style packet decode */ +static void +dump_pkt_verbose(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[], +uint16_t nb_pkts, int is_rx) { struct rte_mbuf *mb; const struct rte_ether_hdr *eth_hdr; @@ -90,8 +94,6 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[], size_t cur_len = 0; uint64_t
X710 RSS working on one VMware setup but not working on another
Hi All, We are facing an issue with X710 + SR-IOV. Somehow packets are coming to queue 0 only. The same NIC's SR-IOV is shared between a VM using DPDK and another VM using Linux interfaces. Can anyone provide any direction? We are using VPP 21.06 with DPDK 21.02. I know it's old but any hint might help. Thanks, Chetan
Re: [RFC PATCH v3 1/2] dts: add port config mtu options to testpmd shell
Hey Nick, Looks good to me, I just had one comment about what looks like a mistake in a merge, and then another more general question. On Fri, Jul 26, 2024 at 10:13 AM Nicholas Pratte wrote: > > Testpmd offers mtu configuration options that omit ethernet overhead > calculations when set. This patch adds ease-of-use methods to leverage > these runtime options. > > Bugzilla ID: 1421 > > Signed-off-by: Nicholas Pratte > --- > dts/framework/remote_session/testpmd_shell.py | 20 ++- > 1 file changed, 19 insertions(+), 1 deletion(-) > > diff --git a/dts/framework/remote_session/testpmd_shell.py > b/dts/framework/remote_session/testpmd_shell.py > index eda6eb320f..83f7961359 100644 > --- a/dts/framework/remote_session/testpmd_shell.py > +++ b/dts/framework/remote_session/testpmd_shell.py > @@ -804,7 +804,25 @@ def show_port_stats(self, port_id: int) -> > TestPmdPortStats: > > return TestPmdPortStats.parse(output) > > -def _close(self) -> None: > +def configure_port_mtu(self, port_id: int, mtu_length: int) -> None: Was there a reason you decided to omit the verify parameter? I think it would still be valuable to have just so the developer knows that if they get past this step they are guaranteed to have a modified MTU. I think you can do it with port info and I actually wrote the same method in my (albeit, old and outdated now) scatter expansion patch if you wanted to see an example of what I mean [1]. Additionally, I think on some NICs you also have to stop the port before you can adjust the MTU, so this could fail in some cases.
> +""" > +for port in self.show_port_info_all(): > +self.send_command(f"port config mtu {port.id} {mtu_length}") > + > +def close(self) -> None: This looks like something that went wrong in the merge, this method is called _close on main and that is the one that SingleActiveIntactiveShells use to properly close. > """Overrides :meth:`~.interactive_shell.close`.""" > self.stop() > self.send_command("quit", "Bye...") > -- > 2.44.0 > [1] https://patchwork.dpdk.org/project/dpdk/patch/20240709175341.183888-2-jspew...@iol.unh.edu/
Re: [RFC PATCH v3 2/2] dts: Initial Implementation For Jumbo Frames Test Suite
Just some very light comments here as well, mostly about doc-string but also a few questions/suggestions. On Fri, Jul 26, 2024 at 10:14 AM Nicholas Pratte wrote: > diff --git a/dts/tests/TestSuite_jumboframes.py > b/dts/tests/TestSuite_jumboframes.py > new file mode 100644 > index 00..dd8092f2a4 > --- /dev/null > +++ b/dts/tests/TestSuite_jumboframes.py > @@ -0,0 +1,182 @@ > +# SPDX-License-Identifier: BSD-3-Clause > +# Copyright(c) 2023-2024 University of New Hampshire It's probably fine to just make the date on this copyright 2024. > +"""Jumbo frame consistency and compatibility test suite. > + > +The test suite ensures the consistency of jumbo frames transmission within > +Poll Mode Drivers using a series of individual test cases. If a Poll Mode > +Driver receives a packet that is greater than its assigned MTU length, then > +that packet will be dropped, and thus not received. Likewise, if a Poll Mode > Driver This sentence is a little confusing because you're saying "if a packet is received with X condition then it isn't received." Maybe it would be more clear to say it isn't forwarded? > +receives a packet that is less than or equal to a its designated MTU length, > then the I think this was a typo, this should probably be either "equal to its" or "equal to a" rather than both. > +packet should be transmitted by the Poll Mode Driver, completing a cycle > within the > +testbed and getting received by the traffic generator. Thus, the following > test suite > +evaluates the behavior within all possible edge cases, ensuring that a test > Poll > +Mode Driver strictly abides by the above implications. > +""" > + > +class TestJumboframes(TestSuite): > +"""DPDK PMD jumbo frames test suite. > + > +Asserts the expected behavior of frames greater than, less then, or > equal to I think this should be "less than" rather than "less then". > +a designated MTU size in the testpmd application. 
If a packet size > greater > +than the designated testpmd MTU length is retrieved, the test fails. If a > +packet size less than or equal to the designated testpmd MTU length is > retrieved, > +the test passes. > +""" > + > +def set_up_suite(self) -> None: > +"""Set up the test suite. > + > +Setup: > +Set traffic generator MTU lengths to a size greater than scope > of all > +test cases. > +""" > +self.tg_node.main_session.configure_port_mtu( > +ETHER_JUMBO_FRAME_MTU + 200, self._tg_port_egress > +) > +self.tg_node.main_session.configure_port_mtu( > +ETHER_JUMBO_FRAME_MTU + 200, self._tg_port_ingress > +) I know 9000 is a common jumboframe MTU to support, but do we have to worry about a case where some NICs wouldn't support an MTU this high? That could also be a case of the NICs where that would be a problem are maybe older and not expected to be as common? I'm not completely sure what the standards and expectations are. > + > +def send_packet_and_verify(self, pktsize: int, should_receive: bool = > True) -> None: > +"""Generate, send, and capture packets to verify that the sent > packet was received or not. I feel like the "verify that..." asserts that you are verifying something positive when it could be positive or negative. Maybe "verify whether" would fit better here. > + > +Generates a packet based on a specified size and sends it to the > SUT. The desired packet's > +payload size is calculated, and arbitrary, byte-sized characters are > inserted into the > +packet before sending. Packets are captured, and depending on the > test case, packet > +payloads are checked to determine if the sent payload was received. > + > +Args: > +pktsize: Size of packet to be generated and sent. > +should_receive: Indicate whether the test case expects to > receive the packet or not. > +""" > +padding = pktsize - IP_HEADER_LEN > +# Insert extra space for placeholder 'CRC' Error correction. 
> +packet = Ether() / Raw("") / IP(len=pktsize) / Raw(load="X" * > padding) This CRC error correction is interesting, I thought generally that the Ether header length included the CRC, but it makes sense to try and account for it if it isn't . > +received_packets = self.send_packet_and_capture(packet) > +found = any( > +("X" * padding) in str(packets.load) > +for packets in received_packets > +if hasattr(packets, "load") > +) > + > +if should_receive: > +self.verify(found, "Did not receive packet") > +else: > +self.verify(not found, "Received packet") > + > + > +def test_jumboframes_jumbo_nojumbo(self) -> None: > +"""Assess the boundaries of packets sent greater than standard MTU > length. > + > +PMDs are set to the standard MTU length of 1518 bytes to
[PATCH v7 0/4] Add network packet dissector
While debugging TAP rte_flow discovered that test pmd verbose output was confusing and unhelpful. Instead, made a simple dissector that prints one line per packet like this in test-pmd with verbose level 4. v7 - change rte_dissect to return number of characters that would be printed (same as snprintf). And tests around that. v6 - validate result of rte_dissect for simple packet v5 - breakout the additional ICMP types into header - fix some decoding bugs in ARP and ICMP Stephen Hemminger (4): net: add more icmp types net: add new packet dissector test: add test for packet dissector test-pmd: add more packet verbose decode options app/test-pmd/cmdline_flow.c | 3 +- app/test-pmd/config.c | 33 +- app/test-pmd/testpmd.h | 11 + app/test-pmd/util.c | 77 +++- app/test/meson.build| 1 + app/test/test_dissect.c | 302 ++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 5 +- lib/net/meson.build | 2 + lib/net/rte_dissect.c | 428 lib/net/rte_dissect.h | 45 ++ lib/net/rte_icmp.h | 22 +- lib/net/version.map | 7 + 12 files changed, 914 insertions(+), 22 deletions(-) create mode 100644 app/test/test_dissect.c create mode 100644 lib/net/rte_dissect.c create mode 100644 lib/net/rte_dissect.h -- 2.43.0
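The v7 change described above — having rte_dissect return the number of characters that would have been printed — follows the standard C99 snprintf contract. A minimal hedged sketch of how a caller or test can exploit that contract to detect truncation (plain snprintf stands in for rte_dissect_mbuf here; the names are illustrative, not the DPDK API):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for rte_dissect_mbuf(): returns the number of characters
 * that *would* have been written, exactly like snprintf (C99), even
 * when the buffer is too small. */
static int
sketch_dissect(char *buf, size_t size, const char *summary)
{
	return snprintf(buf, size, "%s", summary);
}

/* Truncation check a test can build on: the output was cut short iff
 * the return value is negative or >= the buffer size. */
static int
sketch_truncated(int ret, size_t size)
{
	return ret < 0 || (size_t)ret >= size;
}
```

With this convention a test can assert both that a large buffer holds the full line and that a deliberately small buffer is reported as truncated.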
[PATCH v7 1/4] net: add more icmp types
Add more defines for additional defined ICMP types. Signed-off-by: Stephen Hemminger --- lib/net/rte_icmp.h | 22 ++ 1 file changed, 18 insertions(+), 4 deletions(-) diff --git a/lib/net/rte_icmp.h b/lib/net/rte_icmp.h index 4bf64d70ad..b51b60a6d2 100644 --- a/lib/net/rte_icmp.h +++ b/lib/net/rte_icmp.h @@ -54,10 +54,24 @@ struct rte_icmp_hdr { } __rte_packed; /* ICMP packet types */ -#define RTE_IP_ICMP_ECHO_REPLY 0 -#define RTE_IP_ICMP_ECHO_REQUEST 8 -#define RTE_ICMP6_ECHO_REQUEST 128 -#define RTE_ICMP6_ECHO_REPLY 129 +#define RTE_IP_ICMP_ECHO_REPLY 0 +#define RTE_IP_ICMP_DEST_UNREACH3 +#define RTE_IP_ICMP_SOURCE_QUENCH 4 +#define RTE_IP_ICMP_REDIRECT5 +#define RTE_IP_ICMP_ECHO_REQUEST8 +#define RTE_IP_ICMP_TIME_EXCEEDED 11 +#define RTE_IP_ICMP_PARAMETERPROB 12 +#define RTE_IP_ICMP_TIMESTAMP 13 +#define RTE_IP_ICMP_TIMESTAMPREPLY 14 +#define RTE_IP_ICMP_INFO_REQUEST 15 +#define RTE_IP_ICMP_INFO_REPLY 16 + +#define RTE_ICMP6_ECHO_REQUEST 128 +#define RTE_ICMP6_ECHO_REPLY129 +#define RTE_ND_ROUTER_SOLICIT 133 +#define RTE_ND_ROUTER_ADVERT134 +#define RTE_ND_NEIGHBOR_SOLICIT 135 +#define RTE_ND_NEIGHBOR_ADVERT 136 #ifdef __cplusplus } -- 2.43.0
[PATCH v7 2/4] net: add new packet dissector
The function rte_dissect_mbuf is used to decode the contents of an mbuf into a human-readable format similar to what tshark uses. For now, handles IP, IPv6, TCP, UDP, ICMP and ARP. Signed-off-by: Stephen Hemminger --- lib/net/meson.build | 2 + lib/net/rte_dissect.c | 428 ++ lib/net/rte_dissect.h | 45 + lib/net/version.map | 7 + 4 files changed, 482 insertions(+) create mode 100644 lib/net/rte_dissect.c create mode 100644 lib/net/rte_dissect.h diff --git a/lib/net/meson.build b/lib/net/meson.build index 0b69138949..48edf17ea3 100644 --- a/lib/net/meson.build +++ b/lib/net/meson.build @@ -2,6 +2,7 @@ # Copyright(c) 2017-2020 Intel Corporation headers = files( +'rte_dissect.h', 'rte_ip.h', 'rte_tcp.h', 'rte_udp.h', @@ -30,6 +31,7 @@ headers = files( sources = files( 'rte_arp.c', +'rte_dissect.c', 'rte_ether.c', 'rte_net.c', 'rte_net_crc.c', diff --git a/lib/net/rte_dissect.c b/lib/net/rte_dissect.c new file mode 100644 index 00..de9e8f4c56 --- /dev/null +++ b/lib/net/rte_dissect.c @@ -0,0 +1,428 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 Stephen Hemminger + * + * Print packets in format similar to tshark.
+ * Output should be one line per mbuf + */ + +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* Forward declaration - Ethernet can be nested */ +static int +dissect_eth(char *buf, size_t size, const struct rte_mbuf *mb, + uint32_t offset, uint32_t dump_len); + +/* + * Read data from segmented mbuf and put it into buf , but stop if would go past max length + * See rte_pktmbuf_read() + */ +static const void * +dissect_read(const struct rte_mbuf *m, uint32_t offset, uint32_t len, +void *buf, uint32_t dump_len) +{ + + /* If this header would be past the requested length */ + if (dump_len > 0 && offset + len > dump_len) + return NULL; + + return rte_pktmbuf_read(m, offset, len, buf); +} + +/* + * Print to string buffer and adjust result + * Returns true on success, false if buffer is exhausted. + */ +static __rte_format_printf(3, 4) int +dissect_print(char **buf, size_t *sz, const char *fmt, ...) +{ + va_list ap; + int count; + + va_start(ap, fmt); + count = vsnprintf(*buf, *sz, fmt, ap); + va_end(ap); + + /* error or string is full */ + if (count < 0 || count >= (int)*sz) { + *sz = 0; + } else { + *buf += count; + *sz -= count; + } + return count; +} + +static int +dissect_arp(char *buf, size_t size, const struct rte_mbuf *mb, + uint32_t offset, uint32_t dump_len) +{ + const struct rte_arp_hdr *arp; + struct rte_arp_hdr _arp; + int count = 0; + char abuf[64]; + + arp = dissect_read(mb, offset, sizeof(_arp), &_arp, dump_len); + if (arp == NULL) + return snprintf(buf, size, "Missing ARP header"); + + offset += sizeof(_arp); + + uint16_t ar_op = rte_be_to_cpu_16(arp->arp_opcode); + switch (ar_op) { + case RTE_ARP_OP_REQUEST: + inet_ntop(AF_INET, &arp->arp_data.arp_tip, abuf, sizeof(abuf)); + count += dissect_print(&buf, &size, "Who has %s? 
", abuf); + + rte_ether_format_addr(abuf, sizeof(abuf), &arp->arp_data.arp_sha); + count += dissect_print(&buf, &size, "Tell %s ", abuf); + break; + + case RTE_ARP_OP_REPLY: + inet_ntop(AF_INET, &arp->arp_data.arp_sip, abuf, sizeof(abuf)); + count += dissect_print(&buf, &size, "%s is at", abuf); + + rte_ether_format_addr(abuf, sizeof(abuf), &arp->arp_data.arp_sha); + count += dissect_print(&buf, &size, "%s ", abuf); + break; + + case RTE_ARP_OP_INVREQUEST: + rte_ether_format_addr(abuf, sizeof(abuf), &arp->arp_data.arp_tha); + count += dissect_print(&buf, &size, "Who is %s? ", abuf); + + rte_ether_format_addr(abuf, sizeof(abuf), &arp->arp_data.arp_sha); + count += dissect_print(&buf, &size, "Tell %s ", abuf); + break; + + case RTE_ARP_OP_INVREPLY: + rte_ether_format_addr(abuf, sizeof(buf), &arp->arp_data.arp_sha); + count += dissect_print(&buf, &size, "%s is at ", abuf); + + inet_ntop(AF_INET, &arp->arp_data.arp_sip, abuf, sizeof(abuf)); + count += dissect_print(&buf, &size, "%s ", abuf); + break; + + default: + count += dissect_print(&buf, &size, "Unknown ARP %#x ", ar_op); + break; + } + + return count; +} + +static int +dissect_vxlan(char *buf, size_t size, const struct rte_mbuf *mb, uint
[PATCH v7 3/4] test: add test for packet dissector
Add some tests for new packet dissector. Signed-off-by: Stephen Hemminger --- app/test/meson.build| 1 + app/test/test_dissect.c | 302 2 files changed, 303 insertions(+) create mode 100644 app/test/test_dissect.c diff --git a/app/test/meson.build b/app/test/meson.build index e29258e6ec..9cd2051320 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -62,6 +62,7 @@ source_file_deps = { 'test_debug.c': [], 'test_devargs.c': ['kvargs'], 'test_dispatcher.c': ['dispatcher'], +'test_dissect.c': ['net'], 'test_distributor.c': ['distributor'], 'test_distributor_perf.c': ['distributor'], 'test_dmadev.c': ['dmadev', 'bus_vdev'], diff --git a/app/test/test_dissect.c b/app/test/test_dissect.c new file mode 100644 index 00..08734134d5 --- /dev/null +++ b/app/test/test_dissect.c @@ -0,0 +1,302 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2024 Stephen Hemminger + */ + +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "test.h" + +#ifndef LINE_MAX +#define LINE_MAX 2048 +#endif + +#define TOTAL_PACKETS 100 +#define PACKET_LEN 1000 +#define ETH_IP_UDP_VXLAN_SIZE (sizeof(struct rte_ether_hdr) + \ + sizeof(struct rte_ipv4_hdr) + \ + sizeof(struct rte_udp_hdr) + \ + sizeof(struct rte_vxlan_hdr)) + + +static uint16_t port_id; +static const char null_dev[] = "net_null0"; + +static void +add_header(struct rte_mbuf *mb, uint32_t plen, + rte_be16_t src_port, rte_be16_t dst_port) +{ + struct { + struct rte_ether_hdr eth; + struct rte_ipv4_hdr ip; + struct rte_udp_hdr udp; + } pkt = { + .eth = { + .dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4), + }, + .ip = { + .version_ihl = RTE_IPV4_VHL_DEF, + .time_to_live = 1, + .next_proto_id = IPPROTO_UDP, + .src_addr = rte_cpu_to_be_32(RTE_IPV4_LOOPBACK), + .dst_addr = rte_cpu_to_be_32(RTE_IPV4_BROADCAST), + }, + .udp = { + .dst_port = dst_port, + 
.src_port = src_port, + }, + }; + + rte_eth_random_addr(pkt.eth.src_addr.addr_bytes); + + plen -= sizeof(struct rte_ether_hdr); + pkt.ip.total_length = rte_cpu_to_be_16(plen); + pkt.ip.hdr_checksum = rte_ipv4_cksum(&pkt.ip); + + plen -= sizeof(struct rte_ipv4_hdr); + pkt.udp.dgram_len = rte_cpu_to_be_16(plen); + + /* Copy header into mbuf */ + memcpy(rte_pktmbuf_append(mb, sizeof(pkt)), &pkt, sizeof(pkt)); +} + +static void +add_vxlan(struct rte_mbuf *mb, rte_be32_t vni) +{ + struct rte_vxlan_hdr *vxlan; + + vxlan = (struct rte_vxlan_hdr *)rte_pktmbuf_append(mb, sizeof(*vxlan)); + memset(vxlan, 0, sizeof(*vxlan)); + vxlan->flag_i = 1; + vxlan->vx_vni = vni; +} + + +static void +fill_data(struct rte_mbuf *mb, uint32_t len) +{ + uint32_t i; + char *ptr = rte_pktmbuf_append(mb, len); + char c = '!'; + + /* traditional barber pole pattern */ + for (i = 0; i < len; i++) { + ptr[i] = c++; + if (c == 0x7f) + c = '!'; + } +} + +static void +mbuf_prep(struct rte_mbuf *mb, uint8_t buf[], uint32_t buf_len) +{ + mb->buf_addr = buf; + rte_mbuf_iova_set(mb, (uintptr_t)buf); + mb->buf_len = buf_len; + rte_mbuf_refcnt_set(mb, 1); + + /* set pool pointer to dummy value, test doesn't use it */ + mb->pool = (void *)buf; + + rte_pktmbuf_reset(mb); +} + +static int +test_setup(void) +{ + port_id = rte_eth_dev_count_avail(); + + /* Make a dummy null device to snoop on */ + if (rte_vdev_init(null_dev, NULL) != 0) { + fprintf(stderr, "Failed to create vdev '%s'\n", null_dev); + goto fail; + } + return 0; + +fail: + rte_vdev_uninit(null_dev); + return -1; +} + +static void +test_cleanup(void) +{ + rte_vdev_uninit(null_dev); +} + + +static int +test_simple(void) +{ + struct rte_mbuf mb; + uint8_t buf[RTE_MBUF_DEFAULT_BUF_SIZE]; + uint32_t data_len = PACKET_LEN; + rte_be16_t src_port = rte_rand_max(UINT16_MAX); + const rte_be16_t dst_port = rte_cpu_to_be_16(9); /* Discard port */ + char obuf[LINE_MAX] = { }; + char result[LINE_MAX] = { }; + int ret; + + /* make a dummy packet */ + 
mbuf_prep(&mb, buf, sizeof(buf)); + add_header(&mb, data_len, src_port, dst_port); + fill_data(&mb, da
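Bruce's question on v5 (the output was never checked, so the cases could not fail) is what the validation added in v6/v7 addresses: the test now compares the dissector's one-line output against an expected string. A hedged sketch of that style of check, using a stand-in formatter and made-up field values rather than the real rte_dissect_mbuf() output format:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for rte_dissect_mbuf() on the known UDP packet the test
 * builds: format the expected one-line summary.  The exact layout of
 * the real dissector's output may differ; this only illustrates how a
 * test can actually fail when the output is wrong. */
static int
sketch_udp_line(char *buf, size_t size,
		const char *src_ip, uint16_t src_port,
		const char *dst_ip, uint16_t dst_port)
{
	return snprintf(buf, size, "%s %u -> %s %u UDP",
			src_ip, src_port, dst_ip, dst_port);
}
```

A test then builds the same expected line from the values it put into the packet headers and asserts the two strings are equal, instead of merely calling the dissector and discarding the result.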
[PATCH v7 4/4] test-pmd: add more packet verbose decode options
The existing verbose levels 1..3 provide a messy multi-line output per packet. I found this unhelpful when diagnosing many types of problems like packet flow. This patch keeps the previous levels and adds two new levels: 4: one line per packet is printed in a format resembling tshark output. With addresses and protocol info. 5: dump packet in hex. Useful if the driver is messing up the data. Signed-off-by: Stephen Hemminger --- app/test-pmd/cmdline_flow.c | 3 +- app/test-pmd/config.c | 33 ++--- app/test-pmd/testpmd.h | 11 +++ app/test-pmd/util.c | 77 +++-- doc/guides/testpmd_app_ug/testpmd_funcs.rst | 5 +- 5 files changed, 111 insertions(+), 18 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index d04280eb3e..a010fcf61a 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -14143,7 +14143,8 @@ cmd_set_raw_parsed(const struct buffer *in) upper_layer = proto; } } - if (verbose_level & 0x1) + + if (verbose_level > 0) printf("total data size is %zu\n", (*total_size)); RTE_ASSERT((*total_size) <= ACTION_RAW_ENCAP_MAX_DATA); memmove(data, (data_tail - (*total_size)), *total_size); diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 6f0beafa27..b5b5f3b464 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -6246,26 +6246,37 @@ configure_rxtx_dump_callbacks(uint16_t verbose) return; #endif - RTE_ETH_FOREACH_DEV(portid) - { - if (verbose == 1 || verbose > 2) + RTE_ETH_FOREACH_DEV(portid) { + switch (verbose) { + case VERBOSE_OFF: + remove_rx_dump_callbacks(portid); + remove_tx_dump_callbacks(portid); + break; + case VERBOSE_RX: add_rx_dump_callbacks(portid); - else + remove_tx_dump_callbacks(portid); + break; + case VERBOSE_TX: + add_tx_dump_callbacks(portid); remove_rx_dump_callbacks(portid); - if (verbose >= 2) + break; + default: + add_rx_dump_callbacks(portid); add_tx_dump_callbacks(portid); - else - remove_tx_dump_callbacks(portid); + } } } void set_verbose_level(uint16_t vb_level) { 
- printf("Change verbose level from %u to %u\n", - (unsigned int) verbose_level, (unsigned int) vb_level); - verbose_level = vb_level; - configure_rxtx_dump_callbacks(verbose_level); + if (vb_level < VERBOSE_MAX) { + printf("Change verbose level from %u to %u\n", verbose_level, vb_level); + verbose_level = vb_level; + configure_rxtx_dump_callbacks(verbose_level); + } else { + fprintf(stderr, "Verbose level %u is out of range\n", vb_level); + } } void diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 9facd7f281..3d7a2b6dac 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -489,6 +489,17 @@ enum dcb_mode_enable extern uint8_t xstats_hide_zero; /**< Hide zero values for xstats display */ +enum verbose_mode { + VERBOSE_OFF = 0, + VERBOSE_RX, + VERBOSE_TX, + VERBOSE_BOTH, + VERBOSE_DISSECT, + VERBOSE_HEX, + VERBOSE_MAX +}; + + /* globals used for configuration */ extern uint8_t record_core_cycles; /**< Enables measurement of CPU cycles */ extern uint8_t record_burst_stats; /**< Enables display of RX and TX bursts */ diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c index bf9b639d95..f277e7f035 100644 --- a/app/test-pmd/util.c +++ b/app/test-pmd/util.c @@ -5,9 +5,11 @@ #include +#include #include #include #include +#include #include #include #include @@ -16,6 +18,7 @@ #include "testpmd.h" #define MAX_STRING_LEN 8192 +#define MAX_DUMP_LEN 1024 #define MKDUMPSTR(buf, buf_size, cur_len, ...) 
\ do { \ @@ -67,9 +70,10 @@ get_timestamp(const struct rte_mbuf *mbuf) timestamp_dynfield_offset, rte_mbuf_timestamp_t *); } -static inline void -dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[], - uint16_t nb_pkts, int is_rx) +/* More verbose older style packet decode */ +static void +dump_pkt_verbose(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[], +uint16_t nb_pkts, int is_rx) { struct rte_mbuf *mb; const struct rte_ether_hdr *eth_hdr; @@ -90,8 +94,6 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[], size_t cur_len = 0; uint64_t
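The reworked switch in configure_rxtx_dump_callbacks() above reduces to a mapping from verbose level to which dump callbacks (RX, TX) get installed. A compact sketch of that mapping, mirroring the patch's verbose_mode enum values with illustrative names:

```c
#include <assert.h>

/* Mirrors the verbose_mode enum from the patch: 0 = off, 1 = RX only,
 * 2 = TX only, and every higher level (both / dissect / hex) dumps
 * both directions. */
enum sketch_verbose {
	SK_OFF = 0, SK_RX, SK_TX, SK_BOTH, SK_DISSECT, SK_HEX, SK_MAX
};

struct sketch_dump {
	int rx;	/* install RX dump callback? */
	int tx;	/* install TX dump callback? */
};

static struct sketch_dump
sketch_callbacks_for(enum sketch_verbose level)
{
	switch (level) {
	case SK_OFF:
		return (struct sketch_dump){ 0, 0 };
	case SK_RX:
		return (struct sketch_dump){ 1, 0 };
	case SK_TX:
		return (struct sketch_dump){ 0, 1 };
	default:	/* both, dissect, hex */
		return (struct sketch_dump){ 1, 1 };
	}
}
```

Expressing the policy this way also makes the set_verbose_level() range check natural: any level >= SK_MAX is rejected rather than silently treated as "both".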
Re: [PATCH v4 1/2] dts: add methods for setting mac and multicast addresses
On Fri, Jul 26, 2024 at 12:39 PM Nicholas Pratte wrote: > > New methods have been added to TestPMDShell in order to produce the mac > filter's individual test cases: > - set_mac_addr > - set_multicast_mac_addr > > set_mac_addr and set_multicast_addr were created for the mac filter test > suite, enabling users to both add or remove mac and multicast > addresses based on a boolean 'add or remove' parameter. The success or > failure of each call can be verified if a user deems it necessary. > > Bugzilla ID: 1454 > Signed-off-by: Nicholas Pratte Reviewed-by: Jeremy Spewock
Re: [PATCH v4 2/2] dts: mac filter test suite refactored for new dts
On Fri, Jul 26, 2024 at 12:39 PM Nicholas Pratte wrote: > > The mac address filter test suite, whose test cases are based on old > DTS's test cases, has been refactored to interface with the new DTS > framework. > > In porting over this test suite into the new framework, some > adjustments were made, namely in the EAL and TestPMD parameter provided > before executing the application. While the original test plan was > referenced, by and large, only for the individual test cases, I'll leave > the parameters the original test plan was asking for below for the sake > of discussion: > > --burst=1 --rxpt=0 --rxht=0 --rxwt=0 --txpt=36 --txht=0 --txwt=0 > --txfreet=32 --rxfreet=64 --mbcache=250 --portmask=0x3 > > depends-on: patch-142691 ("dts: add send_packets to test suites and > rework packet addressing") > depends-on: patch-142696 ("dts: add VLAN methods to testpmd shell") > > Bugzilla ID: 1454 > Signed-off-by: Nicholas Pratte > Reviewed-by: Jeremy Spewock
Re: [PATCH v4 2/2] dts: mac filter test suite refactored for new dts
Just a few small comments, otherwise: Reviewed-by: Jeremy Spewock On Fri, Jul 26, 2024 at 12:46 PM Nicholas Pratte wrote: > > The mac address filter test suite, whose test cases are based on old > DTS's test cases, has been refactored to interface with the new DTS > framework. > > In porting over this test suite into the new framework, some > adjustments were made, namely in the EAL and TestPMD parameter provided > before executing the application. While the original test plan was > referenced, by and large, only for the individual test cases, I'll leave > the parameters the original test plan was asking for below for the sake > of discussion: > > --burst=1 --rxpt=0 --rxht=0 --rxwt=0 --txpt=36 --txht=0 --txwt=0 > --txfreet=32 --rxfreet=64 --mbcache=250 --portmask=0x3 > > depends-on: patch-142691 ("dts: add send_packets to test suites and > rework packet addressing") > depends-on: patch-142696 ("dts: add VLAN methods to testpmd shell") > > Bugzilla ID: 1454 > Signed-off-by: Nicholas Pratte > > --- > + > +def send_packet_and_verify( > +self, > +mac_address: str, > +add_vlan: bool = False, > +should_receive: bool = True, > +) -> None: > +"""Generate, send, and verify a packet based on specified parameters. > + > +Test cases within this suite utilize this method to create, send, > and verify > +packets based on criteria relating to the packet's destination mac > address, > +vlan tag, and whether or not the packet should be received or not. > Packets > +are verified using an inserted payload. Assuming the test case > expects to > +receive a specified packet, if the list of received packets contains > this > +payload within any of its packets, the test case passes. > Alternatively, if > +the designed packet should not be received, and the packet payload > is not, I think there is an extra comma here, but we probably should remove the "not," all together since the test case really fails here if it is received. > +received, then the test case fails. 
Each call with this method sends > exactly > +one packet. > + > +Args: > +mac_address: The destination mac address of the packet being > sent. > +add_vlan: If :data:'True', add a vlan tag to the packet being > sent. The > +vlan tag will be :data:'2' if the packet should be received > and > +:data:'1' if the packet should not be received but requires > a vlan tag. > +should_receive: If :data:'True', assert whether or not the sent > packet > +has been received. If :data:'False', assert that the send > packet was not > +received. :data:'True' by default > +""" > +if add_vlan: > +packet = Ether() / Dot1Q(vlan=2 if should_receive else 1) / IP() > / Raw(load="X" * 22) > +else: > +packet = Ether() / IP() / Raw(load="X" * 22) > +packet.dst = mac_address > +received_packets = [ > +packets > +for packets in self.send_packet_and_capture(packet) > +if hasattr(packets, "load") and "X" * 22 in str(packets.load) > +] > +if should_receive: > +self.verify(len(received_packets) == 1, "Expected packet not > received") > +else: > +self.verify(len(received_packets) == 0, "Expected packet > received") > + > +def test_add_remove_mac_addresses(self) -> None: > +"""Assess basic mac addressing filtering functionalities. > + > +This test case validates for proper behavior of mac address > filtering with both > +a port's default, burned-in mac address, as well as additional mac > addresses > +added to the PMD. Packets should either be received or not received > depending on > +the properties applied to the PMD at any given time. > + > +Test: > +Start TestPMD with promiscuous mode. > +Send a packet with the port's default mac address. (Should > receive) > +Send a packet with fake mac address. (Should not receive) > +Add fake mac address to the PMD's address pool. > +Send a packet with the fake mac address to the PMD. (Should > receive) > +Remove the fake mac address from the PMD's address pool. > +Sent a packet with the fake mac address to the PMD. 
(Should not > receive) Typo: sent should be send. > +""" > +with TestPmdShell(self.sut_node) as testpmd: > +testpmd.set_promisc(0, on=False) > +testpmd.start() > +mac_address = self._sut_port_ingress.mac_address > + > +# Send a packet with NIC default mac address > +self.send_packet_and_verify(mac_address=mac_address, > should_receive=True) > +# Send a packet with different mac address > +fake_address = "00:00:00:00:00:
Re: [PATCH v4 1/2] dts: add methods for setting mac and multicast addresses
Oops, I just reviewed an older version without realizing this one existed, haha. Although, seeing the full diff again I saw a few documentation nit-picks that I put below. On Fri, Jul 26, 2024 at 12:45 PM Nicholas Pratte wrote: > > New methods have been added to TestPMDShell in order to produce the mac > filter's individual test cases: > - set_mac_addr > - set_multicast_mac_addr > > set_mac_addr and set_multicast_addr were created for the mac filter test > suite, enabling users to both add or remove mac and multicast > addresses based on a boolean 'add or remove' parameter. The success or > failure of each call can be verified if a user deems it necessary. > > Bugzilla ID: 1454 > Signed-off-by: Nicholas Pratte > --- > dts/framework/remote_session/testpmd_shell.py | 59 +++ > 1 file changed, 59 insertions(+) > > diff --git a/dts/framework/remote_session/testpmd_shell.py > b/dts/framework/remote_session/testpmd_shell.py > index 8e5a1c084a..64ffb23439 100644 > --- a/dts/framework/remote_session/testpmd_shell.py > +++ b/dts/framework/remote_session/testpmd_shell.py > @@ -765,6 +765,65 @@ def show_port_info(self, port_id: int) -> TestPmdPort: > > return TestPmdPort.parse(output) > > +def set_mac_addr(self, port_id: int, mac_address: str, add: bool, > verify: bool = True) -> None: > +"""Add or remove a mac address on a given port's Allowlist. > + > +Args: > +port_id: The port ID the mac address is set on. > +mac_address: The mac address to be added or removed to the > specified port. This might be more clear if it said the mac address to be "added to or removed from" since the "removed to" doesn't exactly fit. > +add: If :data:`True`, add the specified mac address. If > :data:`False`, remove specified > +mac address. > +verify: If :data:'True', assert that the 'mac_addr' operation > was successful. If > +:data:'False', run the command and skip this assertion. > + > +Raises: > +InteractiveCommandExecutionError: If the set mac address > operation fails. 
> +""" > +mac_cmd = "add" if add else "remove" > +output = self.send_command(f"mac_addr {mac_cmd} {port_id} > {mac_address}") > +if "Bad arguments" in output: > +self._logger.debug("Invalid argument provided to mac_addr") > +raise InteractiveCommandExecutionError("Invalid argument > provided") > + > +if verify: > +if "mac_addr_cmd error:" in output: > +self._logger.debug(f"Failed to {mac_cmd} {mac_address} on > port {port_id}") > +raise InteractiveCommandExecutionError( > +f"Failed to {mac_cmd} {mac_address} on port {port_id} > \n{output}" > +) > + > +def set_multicast_mac_addr( > +self, port_id: int, multi_addr: str, add: bool, verify: bool = True > +) -> None: > +"""Add or remove multicast mac address to a specified port's filter. Same thing here as above, but it might be a little trickier here. Maybe this could be something like allow or disallow to make it more homogeneous? I'm not quite sure. > + > +Args: > +port_id: The port ID the multicast address is set on. > +multi_addr: The multicast address to be added to the filter. Since this method also has an `add` parameter, it would probably be helpful to specify that this also can be either added to or removed from the filter. > +add: If :data:'True', add the specified multicast address to the > port filter. > +If :data:'False', remove the specified multicast address > from the port filter. > +verify: If :data:'True', assert that the 'mcast_addr' operations > was successful. > +If :data:'False', execute the 'mcast_addr' operation and > skip the assertion. > + > +Raises: > +InteractiveCommandExecutionError: If either the 'add' or > 'remove' operations fails. 
> +""" > +mcast_cmd = "add" if add else "remove" > +output = self.send_command(f"mcast_addr {mcast_cmd} {port_id} > {multi_addr}") > +if "Bad arguments" in output: > +self._logger.debug("Invalid arguments provided to mcast_addr") > +raise InteractiveCommandExecutionError("Invalid argument > provided") > + > +if verify: > +if ( > +"Invalid multicast_addr" in output > +or f'multicast address {"already" if add else "not"} > filtered by port' in output > +): > +self._logger.debug(f"Failed to {mcast_cmd} {multi_addr} on > port {port_id}") > +raise InteractiveCommandExecutionError( > +f"Failed to {mcast_cmd} {multi_addr} on port {port_id} > \n{output}" > +) > + > def show_port_stats_all(self) -> list[TestPmdPortStats]: > "
Re: [PATCH v4 2/2] dts: mac filter test suite refactored for new dts
Apologies, sent reviews in the wrong order so I am sending another reply to this one just to make sure it appears second in people's inboxes for less confusion.
Re: [PATCH v1] dts: add flow rule dataclass to testpmd shell
I think Luca made some great points and I agree with what he said, I just had one other thought as well. Great work! On Fri, Jul 26, 2024 at 10:22 AM Dean Marx wrote: > + > class TestPmdShell(DPDKShell): > """Testpmd interactive shell. > > @@ -804,6 +841,25 @@ def show_port_stats(self, port_id: int) -> > TestPmdPortStats: > > return TestPmdPortStats.parse(output) > > +def flow_create(self, cmd: str, verify: bool = True) -> None: It might make more sense for this method to take in the actual dataclass rather than a string, and then it can convert it to a string when it sends the command. That way users don't have to make the class and then always do str() on it before passing it into this method. Additionally, it discourages people from just putting anything in the command section and shows the expectation that you should be using the dataclass to make the flow rules. > +"""Creates a flow rule in the testpmd session. > + > +Args: > +cmd: String from flow_func instance to send as a flow rule. > +verify: If :data:`True`, the output of the command is scanned > +to ensure the flow rule was created successfully. > + > +Raises: > +InteractiveCommandExecutionError: If flow rule is invalid. > +""" > +flow_output = self.send_command(cmd) > +if verify: > +if "created" not in flow_output: > +self._logger.debug(f"Failed to create flow > rule:\n{flow_output}") > +raise InteractiveCommandExecutionError( > +f"Failed to create flow rule:\n{flow_output}" > +) > + > def _close(self) -> None: > """Overrides :meth:`~.interactive_shell.close`.""" > self.stop() > -- > 2.44.0 >
Re: [PATCH v2] dts: add flow rule dataclass to testpmd shell
On Thu, Aug 1, 2024 at 5:24 AM Luca Vizzarro wrote: > > Hi Dean, thank you for your work! Some minor comments. > > On 26/07/2024 15:15, Dean Marx wrote: > > add dataclass for passing in flow rule creation arguments, as well as a > > __str__ method for converting to a sendable testpmd command. Add > > flow_create method to TestPmdShell class for initializing flow rules. > > > > Signed-off-by: Dean Marx > > --- > > dts/framework/remote_session/testpmd_shell.py | 58 ++- > > 1 file changed, 57 insertions(+), 1 deletion(-) > > > > diff --git a/dts/framework/remote_session/testpmd_shell.py > > b/dts/framework/remote_session/testpmd_shell.py > > index eda6eb320f..d6c111da0a 100644 > > --- a/dts/framework/remote_session/testpmd_shell.py > > +++ b/dts/framework/remote_session/testpmd_shell.py > > @@ -19,7 +19,7 @@ > > from dataclasses import dataclass, field > > from enum import Flag, auto > > from pathlib import PurePath > > -from typing import ClassVar > > +from typing import ClassVar, Optional > > > > from typing_extensions import Self, Unpack > > > > @@ -577,6 +577,43 @@ class TestPmdPortStats(TextParser): > > tx_bps: int = field(metadata=TextParser.find_int(r"Tx-bps:\s+(\d+)")) > > > > > > +@dataclass > > +class flow_func: > Class names should be UpperCamelCase, also should be a suitable name for > what it's describing. I believe FlowRule should work. > > +"""Dataclass for setting flow rule parameters.""" > > + > > +#: > > +port_id: int > > +#: > > +ingress: bool > > +#: > > +pattern: str > > +#: > > +actions: str > > + > > +#: > > +group_id: Optional[int] = None > > +#: > > +priority_level: Optional[int] = None > > +#: > > +user_id: Optional[int] = None > Optional[..] is an outdated notation. `int | None` is preferred instead. > See PEP 604[1]. > > + > > +def __str__(self) -> str: > > +"""Returns the string representation of a flow_func instance. > > + > > +In this case, a properly formatted flow create command that can be > > sent to testpmd. 
> > +""" > > +ret = [] > > +ret.append(f"flow create {self.port_id} ") > > +ret.append(f"group {self.group_id} " if self.group_id is not None > > else "") > > +ret.append(f"priority {self.priority_level} " if > > self.priority_level is not None else "") > > +ret.append("ingress " if self.ingress else "egress ") > > +ret.append(f"user_id {self.user_id} " if self.user_id is not None > > else "") > > +ret.append(f"pattern {self.pattern} ") > > +ret.append(" / end actions ") > > +ret.append(f"{self.actions} / end") > > +return "".join(ret) > > Using a list with inline conditional appending is not particularly > readable. A regular string with conditional appending should do: > > ret = f"flow create {self.port_id} " > if self.group_id is not None: > ret += f"group {self.group_id} " > ... > > Also the latest three append lines can all be in one, if you like the > separation you can just do a multi-line string: > > ret += ( > f"pattern {self.pattern} / end " > f"actions {self.actions} / end" > ) > # or actually this may be just fine: > ret += f"pattern {self.pattern} / end " > ret += f"actions {self.actions} / end" > > I guess the way it's split is more of a game changer. > > If you really want to use a list (in a way that is similar to what I've > described here) then I'd take advantage of it... by omitting leading and > trailing whitespaces and then use the join to add them in between: " > ".join(ret) > > > + > > + > > class TestPmdShell(DPDKShell): > > """Testpmd interactive shell. > > > > @@ -804,6 +841,25 @@ def show_port_stats(self, port_id: int) -> > > TestPmdPortStats: > > > > return TestPmdPortStats.parse(output) > > > > +def flow_create(self, cmd: str, verify: bool = True) -> None: > > Not a comment, but a discussion point. Normally we'd want a function to > be read as an action such as: > > create_flow > > But I understand this is basically mirroring the command format... I > wonder which one would be the best. I am personally inclined in verb > first. 
Maybe others can give their opinion. That's funny because Patrick also raised this point about whether to use a more testpmd-oriented naming scheme or a more human-readable one. I forget which patch it was on exactly, but Patrick did raise a good point that if our goal is to have DPDK developers be able to easily pick up and work with this API, they'll probably be more familiar with the testpmd methods as they are now. I also lean more towards the verb first method just because it makes it more readable I think, but I can't speak for DPDK developers. Even if it were `create_flow` though I'm sure people can just type `testpmd.flow` and their IDE will be able to help with the auto-complete for the flow rule options. Maybe it is