Re: [PATCH 1/1] block: add missed block_acct_setup with new block device init procedure
Am 28.07.2022 um 21:27 hat Denis V. Lunev geschrieben:
> On 28.07.2022 16:42, Vladimir Sementsov-Ogievskiy wrote:
> > On 7/11/22 14:07, Denis V. Lunev wrote:
> > > Commit 5f76a7aac156ca75680dad5df4a385fd0b58f6b1 is looking harmless from
> > > the first glance, but it has changed things a lot. 'libvirt' uses it to
> > > detect that it should follow new initialization way and this changes
> > > things considerably. With this procedure followed, blockdev_init() is
> > > not called anymore and thus block_acct_setup() helper is not called.
> >
> > I'm not sure that 5f76a7aac156ca is really the corner stone.. But yes,
> > libvirt moved to "blockdev era", which means that we don't use old
> > -drive, instead block nodes are created by -blockdev / qmp: blockdev-add,
> > and attached to block devices by node-name.
>
> git bisected, thus I am sure here
>
> > And if I understand correctly blockdev_init() is called only on -drive
> > path.
> >
> > I have some questions:
> >
> > 1. After this patch, don't we call block_acct_setup() twice on old path
> > with -drive? That seems safe as block_acct_setup just assign fields of
> > BlockAcctStats.. But that's doesn't look good.
>
> hmmm

I don't think it's actually correct because then a value that was
explicitly set with -drive will be overridden by the default provided by
the device.

A possible solution would be to switch the defaults in the BlockBackend
initialisation back to true, and then have an ON_OFF_AUTO property in the
devices to allow overriding the default from -drive. With -blockdev, the
BlockBackend default will be hard coded to true and the options of the
devices will be the only way to change it.

> > 2. Do we really need these options? Could we instead just enable
> > accounting invalid and failed ops unconditionally? I doubt that someone
> > will learn that these new options appeared and will use them to disable
> > the failed/invalid accounting again.
>
> I can move assignment of these fields to true in
> block_acct_init() and forget about "configurable"
> items in new path. I do not think that somebody
> ever has these options set.

Well, whether anyone uses the option is a different question. I don't
know. But it has existed for many years.

> The real question in this patch is that this initialization
> was a precondition for old good "long IO" report
> configuration, which should be "enableable".
>
> But we could move this option to "tracked request"
> layer only and this will solve my puzzle. So, I'll move
> "long IO report" to tracked request level only and will
> create an option for it on bdrv_ level and will avoid
> it on blk_ accounting.
>
> What do you think?

I'm not sure what you mean by "long IO report". Don't these switches just
change which kind of operations are counted into statistics rather than
changing the structure of the report?

Conceptually, I would like accounting on the block node level, but it's
not what we have been doing, so it would be a big change.

Kevin
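Kevin's ON_OFF_AUTO suggestion amounts to a tri-state resolution: an explicit on/off on the device wins, while "auto" defers to the BlockBackend's built-in default. A minimal sketch of that resolution logic, in Python for illustration only (QEMU's real OnOffAuto is a C enum from QAPI, and the function name here is hypothetical):

```python
from enum import Enum


class OnOffAuto(Enum):
    """Tri-state device property, mirroring QEMU's OnOffAuto QAPI enum."""
    ON = "on"
    OFF = "off"
    AUTO = "auto"


def resolve_account_flag(prop: OnOffAuto, backend_default: bool) -> bool:
    """ON/OFF on the device override; AUTO defers to the BlockBackend default."""
    if prop is OnOffAuto.AUTO:
        return backend_default
    return prop is OnOffAuto.ON
```

With -blockdev the backend default would be hard-coded to true, so only an explicit OFF on the device disables accounting; with -drive the backend default carries whatever the user configured there.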
Re: [PATCH v10 03/21] job.c: API functions not used outside should be static
Am 25.07.2022 um 09:38 hat Emanuele Giuseppe Esposito geschrieben:
> job_event_* functions can all be static, as they are not used
> outside job.c.
>
> Same applies for job_txn_add_job().
>
> Signed-off-by: Emanuele Giuseppe Esposito
> Reviewed-by: Stefan Hajnoczi
> Reviewed-by: Vladimir Sementsov-Ogievskiy

Reviewed-by: Kevin Wolf
Re: [PATCH v10 02/21] job.h: categorize fields in struct Job
Am 25.07.2022 um 09:38 hat Emanuele Giuseppe Esposito geschrieben:
> Categorize the fields in struct Job to understand which ones
> need to be protected by the job mutex and which don't.
>
> Signed-off-by: Emanuele Giuseppe Esposito
> Reviewed-by: Vladimir Sementsov-Ogievskiy

Reviewed-by: Kevin Wolf
Re: [PATCH v10 04/21] aio-wait.h: introduce AIO_WAIT_WHILE_UNLOCKED
Am 25.07.2022 um 09:38 hat Emanuele Giuseppe Esposito geschrieben:
> Same as the AIO_WAIT_WHILE macro, but if we are in the main loop
> do not release and then acquire ctx's AioContext.
>
> Once all AioContext locks go away, this macro will replace
> AIO_WAIT_WHILE.
>
> Signed-off-by: Emanuele Giuseppe Esposito
> Reviewed-by: Stefan Hajnoczi
> Reviewed-by: Vladimir Sementsov-Ogievskiy
> ---
>  include/block/aio-wait.h | 17 +
>  1 file changed, 13 insertions(+), 4 deletions(-)
>
> diff --git a/include/block/aio-wait.h b/include/block/aio-wait.h
> index 54840f8622..a61f82c617 100644
> --- a/include/block/aio-wait.h
> +++ b/include/block/aio-wait.h
> @@ -59,10 +59,13 @@ typedef struct {
>  extern AioWait global_aio_wait;
>
>  /**
> - * AIO_WAIT_WHILE:
> + * _AIO_WAIT_WHILE:
>   * @ctx: the aio context, or NULL if multiple aio contexts (for which the
>   *       caller does not hold a lock) are involved in the polling condition.
>   * @cond: wait while this conditional expression is true
> + * @unlock: whether to unlock and then lock again @ctx. This applies
> + *          only when waiting for another AioContext from the main loop.
> + *          Otherwise it's ignored.
>   *
>   * Wait while a condition is true. Use this to implement synchronous
>   * operations that require event loop activity.
> @@ -75,7 +78,7 @@ extern AioWait global_aio_wait;
>   * wait on conditions between two IOThreads since that could lead to
>   * deadlock, go via the main loop instead.
>   */
> -#define AIO_WAIT_WHILE(ctx, cond) ({                \
> +#define _AIO_WAIT_WHILE(ctx, cond, unlock) ({       \

"All identifiers that begin with an underscore and either an uppercase
letter or another underscore are always reserved for any use." (C11,
7.1.3)

Kevin
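The C11 rule Kevin cites is mechanical enough to check automatically. A small Python sketch of the test (a name like AIO_WAIT_WHILE_INTERNAL, my example, would stay out of the reserved namespace):

```python
import re

# C11 7.1.3: identifiers that begin with an underscore followed by an
# uppercase letter, or with two underscores, are reserved for any use
# (i.e. for the implementation -- the compiler and standard library).
_RESERVED = re.compile(r"^_[A-Z_]")


def is_reserved_identifier(name: str) -> bool:
    """True if `name` lies in the namespace C11 7.1.3 reserves for any use."""
    return _RESERVED.match(name) is not None
```

So `_AIO_WAIT_WHILE` is reserved (underscore + uppercase), while a single underscore followed by a lowercase letter is only reserved at file scope, which this narrower check deliberately does not flag.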
Re: [PATCH 1/1] block: add missed block_acct_setup with new block device init procedure
On 29.07.2022 11:13, Kevin Wolf wrote:
> Am 28.07.2022 um 21:27 hat Denis V. Lunev geschrieben:
> > On 28.07.2022 16:42, Vladimir Sementsov-Ogievskiy wrote:
> > > On 7/11/22 14:07, Denis V. Lunev wrote:
> > > > Commit 5f76a7aac156ca75680dad5df4a385fd0b58f6b1 is looking harmless from
> > > > the first glance, but it has changed things a lot. 'libvirt' uses it to
> > > > detect that it should follow new initialization way and this changes
> > > > things considerably. With this procedure followed, blockdev_init() is
> > > > not called anymore and thus block_acct_setup() helper is not called.
> > >
> > > I'm not sure that 5f76a7aac156ca is really the corner stone.. But yes,
> > > libvirt moved to "blockdev era", which means that we don't use old
> > > -drive, instead block nodes are created by -blockdev / qmp: blockdev-add,
> > > and attached to block devices by node-name.
> >
> > git bisected, thus I am sure here
> >
> > > And if I understand correctly blockdev_init() is called only on -drive
> > > path.
> > >
> > > I have some questions:
> > >
> > > 1. After this patch, don't we call block_acct_setup() twice on old path
> > > with -drive? That seems safe as block_acct_setup just assign fields of
> > > BlockAcctStats.. But that's doesn't look good.
> >
> > hmmm
>
> I don't think it's actually correct because then a value that was
> explicitly set with -drive will be overridden by the default provided by
> the device.
>
> A possible solution would be to switch the defaults in the BlockBackend
> initialisation back to true, and then have an ON_OFF_AUTO property in
> the devices to allow overriding the default from -drive. With -blockdev,
> the BlockBackend default will be hard coded to true and the options of
> the devices will be the only way to change it.
>
> > > 2. Do we really need these options? Could we instead just enable
> > > accounting invalid and failed ops unconditionally? I doubt that someone
> > > will learn that these new options appeared and will use them to disable
> > > the failed/invalid accounting again.
> >
> > I can move assignment of these fields to true in
> > block_acct_init() and forget about "configurable"
> > items in new path. I do not think that somebody
> > ever has these options set.
>
> Well, whether anyone uses the option is a different question. I don't
> know. But it has existed for many years.

I was talking about a very small patch like the following:

iris ~/src/qemu $ git diff
diff --git a/block/accounting.c b/block/accounting.c
index 2030851d79..c20d6ba9a0 100644
--- a/block/accounting.c
+++ b/block/accounting.c
@@ -38,6 +38,8 @@ void block_acct_init(BlockAcctStats *stats)
     if (qtest_enabled()) {
         clock_type = QEMU_CLOCK_VIRTUAL;
     }
+    stats->account_invalid = true;
+    stats->account_failed = true;
 }

 void block_acct_setup(BlockAcctStats *stats, bool account_invalid,
iris ~/src/qemu $

but your proposal with ON_OFF_AUTO will work for me too. The real
question: do we really need to publish this option for the outside world
to configure it?

> > The real question in this patch is that this initialization
> > was a precondition for old good "long IO" report
> > configuration, which should be "enableable".
> >
> > But we could move this option to "tracked request"
> > layer only and this will solve my puzzle. So, I'll move
> > "long IO report" to tracked request level only and will
> > create an option for it on bdrv_ level and will avoid
> > it on blk_ accounting.
> >
> > What do you think?
>
> I'm not sure what you mean by "long IO report". Don't these switches
> just change which kind of operations are counted into statistics rather
> than changing the structure of the report?
>
> Conceptually, I would like accounting on the block node level, but it's
> not what we have been doing, so it would be a big change.

I have to say sorry again. I have found this place once I have reverted
to my very old series discussed here, plus some late additions on top of
it done by Vladimir:

https://lists.defectivebydesign.org/archive/html/qemu-devel/2020-07/msg03772.html

I will definitely have to come back to this later.

Den
[RFC v2 00/10] Introduce an extensible static analyzer
This series introduces a static analyzer for QEMU. It consists of a
single static-analyzer.py script that relies on libclang's Python
bindings, and provides a common framework on which arbitrary static
analysis checks can be developed and run against QEMU's code base.

Summary of the series:

- Patch 1 adds the base static analyzer, along with a simple check that
  finds static functions whose return value is never used, and patch 2
  fixes many occurrences of this.

- Patch 3 introduces support for output-comparison check tests, and adds
  some tests to the abovementioned check.

- Patch 4 makes the analyzer skip checks on a translation unit when it
  hasn't been modified since the last time those checks passed.

- Patch 5 adds a check to ensure that non-coroutine_fn functions don't
  perform direct calls to coroutine_fn functions, and patch 6 fixes some
  violations of this rule.

- Patch 7 adds a check to ensure that operations on coroutine_fn
  pointers make sense, like assignment and indirect calls, and patch 8
  fixes some problems detected by the check. (Implementing this check
  properly is complicated, since AFAICT annotation attributes cannot be
  applied directly to types. This part still needs a lot of work.)

- Patch 9 introduces a no_coroutine_fn marker for functions that should
  not be called from coroutines, makes generated_co_wrapper evaluate to
  no_coroutine_fn, and adds a check enforcing this rule. Patch 10 fixes
  some violations that it finds.

The current primary motivation for this work is enforcing rules around
block layer coroutines, which is why most of the series focuses on that.
However, the static analyzer is intended to be sufficiently generic to
satisfy other present and future QEMU static analysis needs.

Performance isn't great, but with some more optimization, the analyzer
should be fast enough to be used iteratively during development, given
that it avoids reanalyzing unmodified translation units, and that users
can restrict the set of translation units under consideration. It should
also be fast enough to run in CI (?).

Consider a small QEMU configuration and build (all commands were run on
the same 12-thread laptop):

$ cd build && time ../configure --target-list=x86_64-softmmu && cd ..
[...]
real    0m17.232s
user    0m13.261s
sys     0m3.895s

$ time make -C build -j $(nproc) all
[...]
real    2m39.029s
user    14m49.370s
sys     1m57.364s

$ time make -C build -j $(nproc) check
[...]
real    2m46.349s
user    6m4.718s
sys     4m15.660s

We can run the static analyzer against all translation units enabled in
this configuration:

$ time ./static-analyzer.py build
util/qemu-coroutine.c:122:23: non-coroutine_fn function calls coroutine_fn qemu_coroutine_self()
io/channel.c:152:17: non-coroutine_fn function calls coroutine_fn qio_channel_yield()
[...]
Analyzed 1649 translation units in 520.3 seconds.

real    8m42.342s
user    95m51.759s
sys     0m21.576s

You will need libclang's Python bindings to run this. Try `dnf install
python3-clang` or `apt install python3-clang`.

It takes around 1 to 2 seconds for the analyzer to load the compilation
database, determine which translation units to analyze, etc. The
durations reported by the analyzer itself don't include those steps,
which is why they differ from what `time` reports.

We can also analyze only some of the translation units:

$ time ./static-analyzer.py build block
block/raw-format.c:420:12: non-coroutine_fn function calls coroutine_fn bdrv_co_ioctl()
block/blkverify.c:266:12: non-coroutine_fn function calls coroutine_fn bdrv_co_flush()
[...]
Analyzed 21 translation units (58 other were up-to-date) in 5.8 seconds.

real    0m7.031s
user    0m40.951s
sys     0m1.299s

Since the previous command had already analyzed all translation units,
only the ones that had problems were reanalyzed.

Now skipping all the actual checks, but still parsing and building the
AST for each translation unit, and adding --force to reanalyze all
translation units:

$ time ./static-analyzer.py build --force --skip-checks
Analyzed 1649 translation units in 41.2 seconds.

real    0m42.296s
user    7m14.256s
sys     0m15.803s

And now running a single check:

$ time ./static-analyzer.py build --force --check return-value-never-used
Analyzed 1649 translation units in 157.6 seconds.

real    2m38.759s
user    29m28.930s
sys     0m17.968s

TODO:

- Run in GitLab CI (?).
- Finish the "coroutine_fn" check.
- Add check tests where missing.
- Avoid redundant AST traversals while keeping checks modular.
- More optimization.

v2:

- Fix parsing of compilation database commands.
- Reorganize checks and split them into separate modules.
- Make "return-value-never-used" ignore __attribute__((unused)) funcs.
- Add a visitor() abstrac
[RFC v2 01/10] Add an extensible static analyzer
Add a static-analyzer.py script that uses libclang's Python bindings to
provide a common framework on which arbitrary static analysis checks can
be developed and run against QEMU's code base.

As an example, a simple "return-value-never-used" check is included that
verifies that the return value of static, non-void functions is used by
at least one caller.

Signed-off-by: Alberto Faria
---
 static-analyzer.py                         | 486 +
 static_analyzer/__init__.py                | 242 ++
 static_analyzer/return_value_never_used.py | 117 +
 3 files changed, 845 insertions(+)
 create mode 100755 static-analyzer.py
 create mode 100644 static_analyzer/__init__.py
 create mode 100644 static_analyzer/return_value_never_used.py

diff --git a/static-analyzer.py b/static-analyzer.py
new file mode 100755
index 00..3ade422dbf
--- /dev/null
+++ b/static-analyzer.py
@@ -0,0 +1,486 @@
+#!/usr/bin/env python3
+# #
+
+from configparser import ConfigParser
+from contextlib import contextmanager
+from dataclasses import dataclass
+import json
+import os
+import os.path
+import shlex
+import subprocess
+import sys
+import re
+from argparse import ArgumentParser, Namespace, RawDescriptionHelpFormatter
+from multiprocessing import Pool
+from pathlib import Path
+import textwrap
+import time
+from typing import (
+    Iterable,
+    Iterator,
+    List,
+    Mapping,
+    NoReturn,
+    Sequence,
+    Union,
+)
+
+import clang.cindex  # type: ignore
+
+from static_analyzer import CHECKS, CheckContext
+
+# #
+# Usage
+
+
+def parse_args() -> Namespace:
+
+    available_checks = "\n".join(
+        f"  {name}  {' '.join((CHECKS[name].__doc__ or '').split())}"
+        for name in sorted(CHECKS)
+    )
+
+    parser = ArgumentParser(
+        allow_abbrev=False,
+        formatter_class=RawDescriptionHelpFormatter,
+        description=textwrap.dedent(
+            """
+            Checks are best-effort, but should never report false positives.
+
+            This only considers translation units enabled under the given QEMU
+            build configuration. Note that a single .c file may give rise to
+            several translation units.
+
+            You should build QEMU before running this, since some translation
+            units depend on files that are generated during the build. If you
+            don't, you'll get errors, but should never get false negatives.
+            """
+        ),
+        epilog=textwrap.dedent(
+            f"""
+            available checks:
+            {available_checks}
+
+            exit codes:
+             0  No problems found.
+             1  Internal failure.
+             2  Bad usage.
+             3  Problems found in one or more translation units.
+            """
+        ),
+    )
+
+    parser.add_argument(
+        "build_dir",
+        type=Path,
+        help="Path to the build directory.",
+    )
+
+    parser.add_argument(
+        "translation_units",
+        type=Path,
+        nargs="*",
+        help=(
+            "Analyze only translation units whose root source file matches or"
+            " is under one of the given paths."
+        ),
+    )
+
+    # add regular options
+
+    parser.add_argument(
+        "-c",
+        "--check",
+        metavar="CHECK",
+        dest="check_names",
+        choices=sorted(CHECKS),
+        action="append",
+        help=(
+            "Enable the given check. Can be given multiple times. If not given,"
+            " all checks are enabled."
+        ),
+    )
+
+    parser.add_argument(
+        "-j",
+        "--jobs",
+        dest="threads",
+        type=int,
+        default=os.cpu_count() or 1,
+        help=(
+            f"Number of threads to employ. Defaults to {os.cpu_count() or 1} on"
+            f" this machine."
+        ),
+    )
+
+    # add development options
+
+    dev_options = parser.add_argument_group("development options")
+
+    dev_options.add_argument(
+        "--profile",
+        metavar="SORT_KEY",
+        help=(
+            "Profile execution. Forces single-threaded execution. The argument"
+            " specifies how to sort the results; see"
+            " https://docs.python.org/3/library/profile.html#pstats.Stats.sort_stats";
+        ),
+    )
+
+    dev_options.add_argument(
+        "--skip-checks",
+        action="store_true",
+        help="Do everything except actually running the checks.",
+    )
+
+    # parse arguments
+
+    args = parser.parse_args()
+    args.check_names = sorted(set(args.check_names or CHECKS))
+
+    return args
+
+
+# #
+# Main
+
+
+def main() -> int:
+
+    args = parse_args()
+
+    if args.profile:
+
+        import cProfile
+        import pstats
+
+        profile = cP
[RFC v2 02/10] Drop unused static function return values
Make non-void static functions whose return values are ignored by all
callers return void instead.

These functions were found by static-analyzer.py. Not all occurrences of
this problem were fixed.

Signed-off-by: Alberto Faria
---
 accel/kvm/kvm-all.c                       | 12 ++---
 accel/tcg/plugin-gen.c                    |  9 ++--
 accel/tcg/translate-all.c                 |  9 ++--
 audio/audio.c                             |  5 +-
 block/block-copy.c                        |  4 +-
 block/file-posix.c                        |  6 +--
 block/io.c                                | 30 +---
 block/qcow2-bitmap.c                      |  6 +--
 block/quorum.c                            |  5 +-
 block/vpc.c                               |  4 +-
 block/vvfat.c                             | 11 ++---
 chardev/char-ringbuf.c                    |  4 +-
 contrib/ivshmem-server/main.c             |  4 +-
 contrib/vhost-user-blk/vhost-user-blk.c   |  5 +-
 dump/dump.c                               |  4 +-
 fsdev/virtfs-proxy-helper.c               |  3 +-
 gdbstub.c                                 | 18 +++-
 hw/audio/intel-hda.c                      |  7 ++-
 hw/audio/pcspk.c                          |  7 +--
 hw/char/virtio-serial-bus.c               | 14 +++---
 hw/display/cirrus_vga.c                   |  5 +-
 hw/hyperv/vmbus.c                         | 10 ++--
 hw/i386/intel_iommu.c                     | 28 ++--
 hw/i386/pc_q35.c                          |  5 +-
 hw/ide/pci.c                              |  4 +-
 hw/net/rtl8139.c                          |  3 +-
 hw/net/virtio-net.c                       |  6 +--
 hw/net/vmxnet3.c                          |  3 +-
 hw/nvme/ctrl.c                            | 17 ++-
 hw/nvram/fw_cfg.c                         |  3 +-
 hw/scsi/megasas.c                         |  6 +--
 hw/scsi/mptconfig.c                       |  7 +--
 hw/scsi/mptsas.c                          | 14 ++
 hw/scsi/scsi-bus.c                        |  6 +--
 hw/usb/dev-audio.c                        | 13 +++---
 hw/usb/hcd-ehci.c                         |  6 +--
 hw/usb/hcd-ohci.c                         |  4 +-
 hw/usb/hcd-xhci.c                         | 56 +++
 hw/vfio/common.c                          | 21 +
 hw/virtio/vhost-vdpa.c                    |  3 +-
 hw/virtio/vhost.c                         | 11 ++---
 hw/virtio/virtio-iommu.c                  |  4 +-
 hw/virtio/virtio-mem.c                    |  9 ++--
 io/channel-command.c                      | 10 ++--
 migration/migration.c                     | 12 ++---
 net/dump.c                                | 16 +++
 net/vhost-vdpa.c                          |  8 ++--
 qemu-img.c                                |  6 +--
 qga/commands-posix-ssh.c                  | 10 ++--
 softmmu/physmem.c                         | 18
 softmmu/qtest.c                           |  5 +-
 subprojects/libvduse/libvduse.c           | 12 ++---
 subprojects/libvhost-user/libvhost-user.c | 24 --
 target/i386/host-cpu.c                    |  3 +-
 target/i386/kvm/kvm.c                     | 19
 tcg/optimize.c                            |  3 +-
 tests/qtest/libqos/malloc.c               |  5 +-
 tests/qtest/libqos/qgraph.c               |  3 +-
 tests/qtest/test-x86-cpuid-compat.c       |  8 ++--
 tests/qtest/virtio-9p-test.c              |  6 +--
 tests/unit/test-aio-multithread.c         |  5 +-
 tests/vhost-user-bridge.c                 | 19 +++-
 ui/vnc.c                                  | 23 --
 util/aio-posix.c                          |  7 +--
 util/uri.c                                | 18 +++-
 65 files changed, 248 insertions(+), 403 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index f165074e99..748e9d6a2a 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -759,7 +759,7 @@ static uint32_t kvm_dirty_ring_reap_one(KVMState *s, CPUState *cpu)
 }

 /* Must be with slots_lock held */
-static uint64_t kvm_dirty_ring_reap_locked(KVMState *s, CPUState* cpu)
+static void kvm_dirty_ring_reap_locked(KVMState *s, CPUState* cpu)
 {
     int ret;
     uint64_t total = 0;
@@ -785,18 +785,14 @@ static uint64_t kvm_dirty_ring_reap_locked(KVMState *s, CPUState* cpu)
     if (total) {
         trace_kvm_dirty_ring_reap(total, stamp / 1000);
     }
-
-    return total;
 }

 /*
  * Currently for simplicity, we must hold BQL before calling this. We can
  * consider to drop the BQL if we're clear with all the race conditions.
  */
-static uint64_t kvm_dirty_ring_reap(KVMState *s, CPUState *cpu)
+static void kvm_dirty_ring_reap(KVMState *s, CPUState *cpu)
 {
-    uint64_t total;
-
     /*
      * We need to lock all kvm slots for all address spaces here,
      * because:
@@ -813,10 +809,8 @@ static uint64_t kvm_dirty_ring_reap(KVMState *s, CPUState *cpu)
      * reset below.
      */
     kvm_slots_lock();
-    total = kvm_dirty_ring_reap
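The idea behind the "return-value-never-used" check can be illustrated on Python's own ast module. The real check runs on C via libclang and additionally requires the function to be static with *no* caller using the result; this self-contained analog merely flags individual call sites whose results are discarded:

```python
import ast
from typing import List


def calls_with_discarded_result(source: str) -> List[str]:
    """Names of functions called as bare expression statements,
    i.e. call sites where the return value is discarded."""
    found: List[str] = []
    for node in ast.walk(ast.parse(source)):
        # A call used directly as a statement wraps the Call in an Expr node.
        if isinstance(node, ast.Expr) and isinstance(node.value, ast.Call):
            func = node.value.func
            if isinstance(func, ast.Name):
                found.append(func.id)
    return found
```

Promoting this per-site view to the per-function property the patch fixes would mean collecting every call site of each function and reporting only those functions where all sites discard the result.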
[RFC v2 03/10] static-analyzer: Support adding tests to checks
Introduce an add_check_tests() method to add output-comparison tests to
checks, and add some tests to the "return-value-never-used" check.

Signed-off-by: Alberto Faria
---
 static-analyzer.py                         | 133 -
 static_analyzer/__init__.py                |  39 +-
 static_analyzer/return_value_never_used.py | 103
 3 files changed, 267 insertions(+), 8 deletions(-)

diff --git a/static-analyzer.py b/static-analyzer.py
index 3ade422dbf..16e331000d 100755
--- a/static-analyzer.py
+++ b/static-analyzer.py
@@ -11,17 +11,26 @@
 import subprocess
 import sys
 import re
-from argparse import ArgumentParser, Namespace, RawDescriptionHelpFormatter
+from argparse import (
+    Action,
+    ArgumentParser,
+    Namespace,
+    RawDescriptionHelpFormatter,
+)
 from multiprocessing import Pool
 from pathlib import Path
+from tempfile import TemporaryDirectory
 import textwrap
 import time
+from traceback import print_exc
 from typing import (
+    Callable,
     Iterable,
     Iterator,
     List,
     Mapping,
     NoReturn,
+    Optional,
     Sequence,
     Union,
 )
@@ -37,7 +46,7 @@ def parse_args() -> Namespace:

     available_checks = "\n".join(
-        f"  {name}  {' '.join((CHECKS[name].__doc__ or '').split())}"
+        f"  {name}  {' '.join((CHECKS[name].checker.__doc__ or '').split())}"
         for name in sorted(CHECKS)
     )

@@ -134,14 +143,37 @@ def parse_args() -> Namespace:
         help="Do everything except actually running the checks.",
     )

+    dev_options.add_argument(
+        "--test",
+        action=TestAction,
+        nargs=0,
+        help="Run tests for all checks and exit.",
+    )
+
     # parse arguments

-    args = parser.parse_args()
+    try:
+        args = parser.parse_args()
+    except TestSentinelError:
+        # --test was specified
+        if len(sys.argv) > 2:
+            parser.error("--test must be given alone")
+        return Namespace(test=True)
+
     args.check_names = sorted(set(args.check_names or CHECKS))

     return args


+class TestAction(Action):
+    def __call__(self, parser, namespace, values, option_string=None):
+        raise TestSentinelError
+
+
+class TestSentinelError(Exception):
+    pass
+
+
 # #
 # Main

@@ -150,7 +182,11 @@ def main() -> int:

     args = parse_args()

-    if args.profile:
+    if args.test:
+
+        return test()
+
+    elif args.profile:

         import cProfile
         import pstats

@@ -170,6 +206,18 @@ def main() -> int:
     return analyze(args)


+def test() -> int:
+
+    for (check_name, check) in CHECKS.items():
+        for group in check.test_groups:
+            for input in group.inputs:
+                run_test(
+                    check_name, group.location, input, group.expected_output
+                )
+
+    return 0
+
+
 def analyze(args: Namespace) -> int:

     tr_units = get_translation_units(args)

@@ -247,6 +295,7 @@ class TranslationUnit:
     build_command: str
     system_include_paths: Sequence[str]
     check_names: Sequence[str]
+    custom_printer: Optional[Callable[[str], None]]


 def get_translation_units(args: Namespace) -> Sequence["TranslationUnit"]:
@@ -264,6 +313,7 @@ def get_translation_units(args: Namespace) -> Sequence["TranslationUnit"]:
             build_command=cmd["command"],
             system_include_paths=system_include_paths,
             check_names=args.check_names,
+            custom_printer=None,
         )
         for cmd in compile_commands
     )

@@ -380,7 +430,7 @@ def analyze_translation_unit(tr_unit: TranslationUnit) -> bool:

     try:
         for name in tr_unit.check_names:
-            CHECKS[name](check_context)
+            CHECKS[name].checker(check_context)
     except Exception as e:
         raise RuntimeError(f"Error analyzing {check_context._rel_path}") from e

@@ -428,7 +478,9 @@ def get_check_context(tr_unit: TranslationUnit) -> CheckContext:
     except clang.cindex.TranslationUnitLoadError as e:
         raise RuntimeError(f"Failed to load {rel_path}") from e

-    if sys.stdout.isatty():
+    if tr_unit.custom_printer is not None:
+        printer = tr_unit.custom_printer
+    elif sys.stdout.isatty():
         # add padding to fully overwrite progress message
         printer = lambda s: print(s.ljust(50))
     else:

@@ -457,6 +509,75 @@ def get_check_context(tr_unit: TranslationUnit) -> CheckContext:
     return check_context


+# #
+# Tests
+
+
+def run_test(
+    check_name: str, location: str, input: str, expected_output: str
+) -> None:
+
+    with TemporaryDirectory() as temp_dir:
+
+        os.chdir(temp_dir)
+
+        input_path = Path(temp_dir) / "file.c"
+        input_path.write_text(input)
+
+        actual_output_list = []
+
+        tu_context = TranslationUnit(
[RFC v2 09/10] block: Add no_coroutine_fn marker
When applied to a function, it advertises that it should not be called
from coroutine_fn functions.

Make generated_co_wrapper evaluate to no_coroutine_fn, as coroutine_fn
functions should instead directly call the coroutine_fn that backs the
generated_co_wrapper.

Add a "no_coroutine_fn" check to static-analyzer.py that enforces
no_coroutine_fn rules.

Signed-off-by: Alberto Faria
---
 include/block/block-common.h       |   2 +-
 include/qemu/coroutine.h           |  12
 static_analyzer/no_coroutine_fn.py | 111 +
 3 files changed, 124 insertions(+), 1 deletion(-)
 create mode 100644 static_analyzer/no_coroutine_fn.py

diff --git a/include/block/block-common.h b/include/block/block-common.h
index fdb7306e78..4b60c8c6f2 100644
--- a/include/block/block-common.h
+++ b/include/block/block-common.h
@@ -42,7 +42,7 @@
  *
  * Read more in docs/devel/block-coroutine-wrapper.rst
  */
-#define generated_co_wrapper
+#define generated_co_wrapper no_coroutine_fn

 /* block.c */
 typedef struct BlockDriver BlockDriver;

diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h
index 26445b3176..c61dd2d3f7 100644
--- a/include/qemu/coroutine.h
+++ b/include/qemu/coroutine.h
@@ -48,6 +48,18 @@
 #define coroutine_fn
 #endif

+/**
+ * Mark a function that should never be called from a coroutine context
+ *
+ * This typically means that there is an analogous, coroutine_fn function that
+ * should be used instead.
+ */
+#ifdef __clang__
+#define no_coroutine_fn __attribute__((__annotate__("no_coroutine_fn")))
+#else
+#define no_coroutine_fn
+#endif
+
 /**
  * This can wrap a call to a coroutine_fn from a non-coroutine_fn function and
  * suppress the static analyzer's complaints.

diff --git a/static_analyzer/no_coroutine_fn.py b/static_analyzer/no_coroutine_fn.py
new file mode 100644
index 00..1d0b93bb62
--- /dev/null
+++ b/static_analyzer/no_coroutine_fn.py
@@ -0,0 +1,111 @@
+# #
+
+from clang.cindex import Cursor, CursorKind  # type: ignore
+
+from static_analyzer import (
+    CheckContext,
+    VisitorResult,
+    check,
+    is_annotation,
+    is_annotated_with,
+    visit,
+)
+from static_analyzer.coroutine_fn import is_coroutine_fn
+
+# #
+
+
+@check("no_coroutine_fn")
+def check_no_coroutine_fn(context: CheckContext) -> None:
+    """Reports violations of no_coroutine_fn rules."""
+
+    def visitor(node: Cursor) -> VisitorResult:
+
+        validate_annotations(context, node)
+
+        if node.kind == CursorKind.FUNCTION_DECL and node.is_definition():
+            check_calls(context, node)
+            return VisitorResult.CONTINUE
+
+        return VisitorResult.RECURSE
+
+    visit(context.translation_unit.cursor, visitor)
+
+
+def validate_annotations(context: CheckContext, node: Cursor) -> None:
+
+    # validate annotation usage
+
+    if is_annotation(node, "no_coroutine_fn") and (
+        node.parent is None or not is_valid_no_coroutine_fn_usage(node.parent)
+    ):
+        context.report(node, "invalid no_coroutine_fn usage")
+
+    # reject re-declarations with inconsistent annotations
+
+    if node.kind == CursorKind.FUNCTION_DECL and is_no_coroutine_fn(
+        node
+    ) != is_no_coroutine_fn(node.canonical):
+
+        context.report(
+            node,
+            f"no_coroutine_fn annotation disagreement with"
+            f" {context.format_location(node.canonical)}",
+        )
+
+
+def check_calls(context: CheckContext, caller: Cursor) -> None:
+    """
+    Reject calls from coroutine_fn to no_coroutine_fn.
+
+    Assumes that `caller` is a function definition.
+    """
+
+    if is_coroutine_fn(caller):
+
+        def visitor(node: Cursor) -> VisitorResult:
+
+            # We can get some "calls" that are actually things like top-level
+            # macro invocations for which `node.referenced` is None.
+
+            if (
+                node.kind == CursorKind.CALL_EXPR
+                and node.referenced is not None
+                and is_no_coroutine_fn(node.referenced.canonical)
+            ):
+                context.report(
+                    node,
+                    f"coroutine_fn calls no_coroutine_fn function"
+                    f" {node.referenced.spelling}()",
+                )
+
+            return VisitorResult.RECURSE
+
+        visit(caller, visitor)
+
+
+# #
+
+
+def is_valid_no_coroutine_fn_usage(parent: Cursor) -> bool:
+    """
+    Checks if an occurrence of `no_coroutine_fn` represented by a node with
+    parent `parent` appears at a valid point in the AST. This is the case if the
+    parent is a function declaration/definition.
+    """
+
+    return parent.kind == CursorKind.FUNCTION_DECL
+
+
+def is_no_coroutine_fn(node: Cursor) -> bool:
+    """
+    Checks whether t
[RFC v2 05/10] static-analyzer: Enforce coroutine_fn restrictions for direct calls
Add a "coroutine_fn" check to static-analyzer.py that ensures that non-coroutine_fn functions don't perform direct calls to coroutine_fn functions. For the few cases where this must happen, introduce an __allow_coroutine_fn_call() macro that wraps offending calls and overrides the static analyzer. Signed-off-by: Alberto Faria --- include/qemu/coroutine.h| 13 +++ static_analyzer/__init__.py | 46 - static_analyzer/coroutine_fn.py | 173 3 files changed, 231 insertions(+), 1 deletion(-) create mode 100644 static_analyzer/coroutine_fn.py diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h index 08c5bb3c76..40a4037525 100644 --- a/include/qemu/coroutine.h +++ b/include/qemu/coroutine.h @@ -42,7 +42,20 @@ * * } */ +#ifdef __clang__ +#define coroutine_fn __attribute__((__annotate__("coroutine_fn"))) +#else #define coroutine_fn +#endif + +/** + * This can wrap a call to a coroutine_fn from a non-coroutine_fn function and + * suppress the static analyzer's complaints. + * + * You don't want to use this. + */ +#define __allow_coroutine_fn_call(call) \ +((void)"__allow_coroutine_fn_call", call) typedef struct Coroutine Coroutine; diff --git a/static_analyzer/__init__.py b/static_analyzer/__init__.py index 36028724b1..5abdbd21a3 100644 --- a/static_analyzer/__init__.py +++ b/static_analyzer/__init__.py @@ -23,8 +23,9 @@ from clang.cindex import ( # type: ignore Cursor, CursorKind, -TranslationUnit, SourceLocation, +TranslationUnit, +TypeKind, conf, ) @@ -146,6 +147,49 @@ def matcher(n: Cursor) -> bool: return any(map(matcher, node.get_children())) +def is_annotated_with(node: Cursor, annotation: str) -> bool: +return any(is_annotation(c, annotation) for c in node.get_children()) + + +def is_annotation(node: Cursor, annotation: str) -> bool: +return node.kind == CursorKind.ANNOTATE_ATTR and node.spelling == annotation + + +def is_comma_wrapper(node: Cursor, literal: str) -> bool: +""" +Check if `node` is a "comma-wrapper" with the given string literal. 
+ +A "comma-wrapper" is the pattern `((void)string_literal, expr)`. The `expr` +is said to be "comma-wrapped". +""" + +# TODO: Do we need to check that the operator is `,`? Is there another +# operator that can combine void and an expr? + +if node.kind != CursorKind.BINARY_OPERATOR: +return False + +[left, _right] = node.get_children() + +if ( +left.kind != CursorKind.CSTYLE_CAST_EXPR +or left.type.kind != TypeKind.VOID +): +return False + +[unexposed_expr] = left.get_children() + +if unexposed_expr.kind != CursorKind.UNEXPOSED_EXPR: +return False + +[string_literal] = unexposed_expr.get_children() + +return ( +string_literal.kind == CursorKind.STRING_LITERAL +and string_literal.spelling == f'"{literal}"' +) + + # # # Checks diff --git a/static_analyzer/coroutine_fn.py b/static_analyzer/coroutine_fn.py new file mode 100644 index 00..f70a3167eb --- /dev/null +++ b/static_analyzer/coroutine_fn.py @@ -0,0 +1,173 @@ +# # + +from clang.cindex import Cursor, CursorKind, TypeKind # type: ignore + +from static_analyzer import ( +CheckContext, +VisitorResult, +check, +is_annotated_with, +is_annotation, +is_comma_wrapper, +visit, +) + +# # + + +@check("coroutine_fn") +def check_coroutine_fn(context: CheckContext) -> None: +"""Reports violations of coroutine_fn rules.""" + +def visitor(node: Cursor) -> VisitorResult: + +validate_annotations(context, node) + +if node.kind == CursorKind.FUNCTION_DECL and node.is_definition(): +check_direct_calls(context, node) +return VisitorResult.CONTINUE + +return VisitorResult.RECURSE + +visit(context.translation_unit.cursor, visitor) + + +def validate_annotations(context: CheckContext, node: Cursor) -> None: + +# validate annotation usage + +if is_annotation(node, "coroutine_fn") and ( +node.parent is None or not is_valid_coroutine_fn_usage(node.parent) +): +context.report(node, "invalid coroutine_fn usage") + +if is_comma_wrapper( +node, "__allow_coroutine_fn_call" +) and not is_valid_allow_coroutine_fn_call_usage(node): 
+context.report(node, "invalid __allow_coroutine_fn_call usage") + +# reject re-declarations with inconsistent annotations + +if node.kind == CursorKind.FUNCTION_DECL and is_coroutine_fn( +node +) != is_coroutine_fn(node.canonical): +context.report( +node, +f"coroutine_fn annotation disagreement with" +f" {cont
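Stripped of the libclang plumbing, the rule this check enforces is simple. The toy model below uses hypothetical names rather than the patch's actual API: a direct call from a non-coroutine_fn function to a coroutine_fn function is a violation, unless the call site is comma-wrapped by __allow_coroutine_fn_call().

```python
def find_violations(coroutine_fns, calls):
    """Toy model of the coroutine_fn direct-call rule (names hypothetical).

    coroutine_fns: set of function names annotated coroutine_fn.
    calls: iterable of (caller, callee, wrapped) tuples, where `wrapped`
           means the call site is inside __allow_coroutine_fn_call().
    Returns the (caller, callee) pairs the analyzer would report.
    """
    violations = []
    for caller, callee, wrapped in calls:
        # Only calls *into* coroutine context from *outside* it are suspect,
        # and the wrapper macro explicitly overrides the analyzer.
        if callee in coroutine_fns and caller not in coroutine_fns and not wrapped:
            violations.append((caller, callee))
    return violations
```

In the real check, "wrapped" is detected by recognizing the `((void)"__allow_coroutine_fn_call", call)` comma-wrapper pattern in the AST, as implemented by `is_comma_wrapper()` above.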
[RFC v2 04/10] static-analyzer: Avoid reanalyzing unmodified translation units
For each translation unit, run each check only if any of the translation unit's files has been modified since the last time the check ran and passed without reporting problems. Signed-off-by: Alberto Faria --- static-analyzer.py | 240 - 1 file changed, 217 insertions(+), 23 deletions(-) diff --git a/static-analyzer.py b/static-analyzer.py index 16e331000d..140760a93e 100755 --- a/static-analyzer.py +++ b/static-analyzer.py @@ -25,6 +25,8 @@ from traceback import print_exc from typing import ( Callable, +Dict, +FrozenSet, Iterable, Iterator, List, @@ -32,6 +34,7 @@ NoReturn, Optional, Sequence, +Tuple, Union, ) @@ -61,9 +64,19 @@ def parse_args() -> Namespace: build configuration. Note that a single .c file may give rise to several translation units. +For each translation unit, each check is run only if any of its files +has been modified since the last time the check ran and passed +without reporting problems. + You should build QEMU before running this, since some translation units depend on files that are generated during the build. If you don't, you'll get errors, but should never get false negatives. +Also, translation units that haven't been built will always be +reanalyzed, even if they haven't been modified, because we can't know +what their dependencies are until they are built. (TODO: This is +rather annoying since `make all` does not actually build every +translation unit. Should this script trigger an actual full build +somehow as a first step?) """ ), epilog=textwrap.dedent( @@ -111,6 +124,16 @@ def parse_args() -> Namespace: ), ) +parser.add_argument( +"-f", +"--force", +action="store_true", +help=( +"Analyze translation units even if they haven't been modified since" +" the last analysis." 
+), +) + parser.add_argument( "-j", "--jobs", @@ -220,12 +243,17 @@ def test() -> int: def analyze(args: Namespace) -> int: -tr_units = get_translation_units(args) +analysis_timestamp = time.time() + +# load log and get translation units + +log = AnalysisLog.load(args.build_dir) +(tr_units, num_up_to_date_tr_units) = get_translation_units(args, log) # analyze translation units start_time = time.monotonic() -results: List[bool] = [] +results: List[AnalysisResults] = [] if len(tr_units) == 1: progress_suffix = " of 1 translation unit...\033[0m\r" @@ -237,7 +265,7 @@ def print_progress() -> None: print_progress() -def collect_results(results_iter: Iterable[bool]) -> None: +def collect_results(results_iter: Iterable[AnalysisResults]) -> None: if sys.stdout.isatty(): for r in results_iter: results.append(r) @@ -246,27 +274,41 @@ def collect_results(results_iter: Iterable[bool]) -> None: for r in results_iter: results.append(r) -if tr_units: +try: -if args.threads == 1: +if tr_units: -collect_results(map(analyze_translation_unit, tr_units)) +if args.threads == 1: -else: +collect_results(map(analyze_translation_unit, tr_units)) -# Mimic Python's default pool.map() chunk size, but limit it to -# 5 to avoid very big chunks when analyzing thousands of -# translation units. -chunk_size = min(5, -(-len(tr_units) // (args.threads * 4))) +else: -with Pool(processes=args.threads) as pool: -collect_results( -pool.imap_unordered( -analyze_translation_unit, tr_units, chunk_size +# Mimic Python's default pool.map() chunk size, but limit it to +# 5 to avoid very big chunks when analyzing thousands of +# translation units. 
+chunk_size = min(5, -(-len(tr_units) // (args.threads * 4))) + +with Pool(processes=args.threads) as pool: +collect_results( +pool.imap_unordered( +analyze_translation_unit, tr_units, chunk_size +) ) -) -end_time = time.monotonic() +end_time = time.monotonic() + +finally: + +# update analysis timestamps for passed checks for each translation unit +# (even if the static analyzer failed or was interrupted) + +for r in results: +log.set_last_analysis_time( +r.tr_unit_object_file, r.passed_check_names, analysis_timestamp +) + +log.save()
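The caching scheme in this patch boils down to comparing the modification time of each of a translation unit's dependencies against the timestamp of each check's last clean run. A simplified sketch of that decision (hypothetical names, not the script's actual AnalysisLog API):

```python
def checks_to_run(dep_mtimes, last_pass_times, all_checks):
    """Decide which checks must rerun for one translation unit.

    dep_mtimes: modification times of every file the translation unit
                depends on (in the real script these would come from
                os.path.getmtime() over the unit's dependency list).
    last_pass_times: {check_name: timestamp of the last run that passed
                     without reporting problems}; a missing key means the
                     check never passed and must always run.
    """
    newest_dep = max(dep_mtimes)
    # A check is up to date only if it passed *after* every dependency's
    # last modification.
    return [c for c in all_checks if last_pass_times.get(c, 0.0) < newest_dep]
```

This also explains why unbuilt translation units are always reanalyzed: without a build there is no dependency list, so `dep_mtimes` cannot be computed and the conservative answer is "run everything".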
[RFC v2 06/10] Fix some direct calls from non-coroutine_fn to coroutine_fn
In some cases we need to use a different function, in others we need to make the caller a coroutine_fn, and in others still we need to wrap calls to coroutines in __allow_coroutine_fn_call(). Also fix coroutine_fn annotation disagreements between several declarations of the same function. These problems were found by static-analyzer.py. Not all occurrences of these problems were fixed. Signed-off-by: Alberto Faria --- block.c| 2 +- block/dirty-bitmap.c | 6 -- block/io.c | 18 +++--- block/monitor/block-hmp-cmds.c | 2 +- block/nvme.c | 3 ++- block/qcow2.c | 14 +++--- block/qcow2.h | 14 +++--- block/qed.c| 2 +- block/quorum.c | 2 +- block/ssh.c| 6 +++--- block/throttle-groups.c| 3 ++- include/block/block-hmp-cmds.h | 2 +- include/block/block-io.h | 5 +++-- include/qemu/coroutine.h | 18 ++ 14 files changed, 54 insertions(+), 43 deletions(-) diff --git a/block.c b/block.c index bc85f46eed..9f3814cbaa 100644 --- a/block.c +++ b/block.c @@ -561,7 +561,7 @@ int bdrv_create(BlockDriver *drv, const char* filename, if (qemu_in_coroutine()) { /* Fast-path if already in coroutine context */ -bdrv_create_co_entry(&cco); +__allow_coroutine_fn_call(bdrv_create_co_entry(&cco)); } else { co = qemu_coroutine_create(bdrv_create_co_entry, &cco); qemu_coroutine_enter(co); diff --git a/block/dirty-bitmap.c b/block/dirty-bitmap.c index bf3dc0512a..ccf46c0b1f 100644 --- a/block/dirty-bitmap.c +++ b/block/dirty-bitmap.c @@ -419,7 +419,8 @@ int bdrv_remove_persistent_dirty_bitmap(BlockDriverState *bs, const char *name, Error **errp) { if (qemu_in_coroutine()) { -return bdrv_co_remove_persistent_dirty_bitmap(bs, name, errp); +return __allow_coroutine_fn_call( +bdrv_co_remove_persistent_dirty_bitmap(bs, name, errp)); } else { Coroutine *co; BdrvRemovePersistentDirtyBitmapCo s = { @@ -495,7 +496,8 @@ bool bdrv_can_store_new_dirty_bitmap(BlockDriverState *bs, const char *name, { IO_CODE(); if (qemu_in_coroutine()) { -return bdrv_co_can_store_new_dirty_bitmap(bs, name, granularity, errp); +return 
__allow_coroutine_fn_call( +bdrv_co_can_store_new_dirty_bitmap(bs, name, granularity, errp)); } else { Coroutine *co; BdrvCanStoreNewDirtyBitmapCo s = { diff --git a/block/io.c b/block/io.c index 853ed44289..c2ed14cedb 100644 --- a/block/io.c +++ b/block/io.c @@ -449,8 +449,9 @@ static void bdrv_do_drained_begin(BlockDriverState *bs, bool recursive, BdrvChild *child, *next; if (qemu_in_coroutine()) { -bdrv_co_yield_to_drain(bs, true, recursive, parent, ignore_bds_parents, - poll, NULL); +__allow_coroutine_fn_call( +bdrv_co_yield_to_drain(bs, true, recursive, parent, + ignore_bds_parents, poll, NULL)); return; } @@ -516,8 +517,10 @@ static void bdrv_do_drained_end(BlockDriverState *bs, bool recursive, assert(drained_end_counter != NULL); if (qemu_in_coroutine()) { -bdrv_co_yield_to_drain(bs, false, recursive, parent, ignore_bds_parents, - false, drained_end_counter); +__allow_coroutine_fn_call( +bdrv_co_yield_to_drain(bs, false, recursive, parent, + ignore_bds_parents, false, + drained_end_counter)); return; } assert(bs->quiesce_counter > 0); @@ -643,7 +646,8 @@ void bdrv_drain_all_begin(void) GLOBAL_STATE_CODE(); if (qemu_in_coroutine()) { -bdrv_co_yield_to_drain(NULL, true, false, NULL, true, true, NULL); +__allow_coroutine_fn_call( +bdrv_co_yield_to_drain(NULL, true, false, NULL, true, true, NULL)); return; } @@ -2742,8 +2746,8 @@ int coroutine_fn bdrv_co_is_zero_fast(BlockDriverState *bs, int64_t offset, return (pnum == bytes) && (ret & BDRV_BLOCK_ZERO); } -int coroutine_fn bdrv_is_allocated(BlockDriverState *bs, int64_t offset, - int64_t bytes, int64_t *pnum) +int bdrv_is_allocated(BlockDriverState *bs, int64_t offset, int64_t bytes, + int64_t *pnum) { int ret; int64_t dummy; diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c index bfb3c043a0..b5ba9281f0 100644 --- a/block/monitor/block-hmp-cmds.c +++ b/block/monitor/block-hmp-cmds.c @@ -489,7 +489,7 @@ void hmp_nbd_server_stop(Monitor *mon, const QDict *qdict) hmp_handle_error(mon, 
err); } -void hmp_block_resize(Monitor *mon, const QDict *qdict) +void coroutine_fn hmp_block_resize(Moni
[RFC v2 07/10] static-analyzer: Enforce coroutine_fn restrictions on function pointers
Extend static-analyzer.py's "coroutine_fn" check to enforce coroutine_fn restrictions on function pointer operations. Invalid operations include assigning a coroutine_fn value to a non-coroutine_fn function pointer, and invoking a coroutine_fn function pointer from a non-coroutine_fn function. Signed-off-by: Alberto Faria --- static_analyzer/__init__.py | 27 static_analyzer/coroutine_fn.py | 115 ++-- 2 files changed, 138 insertions(+), 4 deletions(-) diff --git a/static_analyzer/__init__.py b/static_analyzer/__init__.py index 5abdbd21a3..90992d3500 100644 --- a/static_analyzer/__init__.py +++ b/static_analyzer/__init__.py @@ -24,6 +24,8 @@ Cursor, CursorKind, SourceLocation, +SourceRange, +TokenGroup, TranslationUnit, TypeKind, conf, @@ -117,6 +119,31 @@ def actual_visitor(node: Cursor, parent: Cursor, client_data: None) -> int: # Node predicates +def is_binary_operator(node: Cursor, operator: str) -> bool: +return ( +node.kind == CursorKind.BINARY_OPERATOR +and get_binary_operator_spelling(node) == operator +) + + +def get_binary_operator_spelling(node: Cursor) -> Optional[str]: + +assert node.kind == CursorKind.BINARY_OPERATOR + +[left, right] = node.get_children() + +op_range = SourceRange.from_locations(left.extent.end, right.extent.start) + +tokens = list(TokenGroup.get_tokens(node.translation_unit, op_range)) +if not tokens: +# Can occur when left and right children extents overlap due to +# misparsing. 
+return None + +[op_token, *_] = tokens +return op_token.spelling + + def might_have_attribute(node: Cursor, attr: Union[CursorKind, str]) -> bool: """ Check whether any of `node`'s children are an attribute of the given kind, diff --git a/static_analyzer/coroutine_fn.py b/static_analyzer/coroutine_fn.py index f70a3167eb..a16dcbeb52 100644 --- a/static_analyzer/coroutine_fn.py +++ b/static_analyzer/coroutine_fn.py @@ -8,6 +8,7 @@ check, is_annotated_with, is_annotation, +is_binary_operator, is_comma_wrapper, visit, ) @@ -22,6 +23,7 @@ def check_coroutine_fn(context: CheckContext) -> None: def visitor(node: Cursor) -> VisitorResult: validate_annotations(context, node) +check_function_pointers(context, node) if node.kind == CursorKind.FUNCTION_DECL and node.is_definition(): check_direct_calls(context, node) @@ -91,6 +93,83 @@ def visitor(node: Cursor) -> VisitorResult: visit(caller, visitor) +def check_function_pointers(context: CheckContext, node: Cursor) -> None: + +# What we would really like is to associate annotation attributes with types +# directly, but that doesn't seem possible. Instead, we have to look at the +# relevant variable/field/parameter declarations, and follow typedefs. + +# This doesn't check all possible ways of assigning to a coroutine_fn +# field/variable/parameter. That would probably be too much work. + +# TODO: Check struct/union/array initialization. +# TODO: Check assignment to struct/union/array fields. 
+ +# check initialization of variables using coroutine_fn values + +if node.kind == CursorKind.VAR_DECL: + +children = [ +c +for c in node.get_children() +if c.kind +not in [ +CursorKind.ANNOTATE_ATTR, +CursorKind.INIT_LIST_EXPR, +CursorKind.TYPE_REF, +CursorKind.UNEXPOSED_ATTR, +] +] + +if ( +len(children) == 1 +and not is_coroutine_fn(node) +and is_coroutine_fn(children[0]) +): +context.report(node, "assigning coroutine_fn to non-coroutine_fn") + +# check initialization of fields using coroutine_fn values + +# TODO: This only checks designator initializers. + +if node.kind == CursorKind.INIT_LIST_EXPR: + +for initializer in filter( +lambda n: n.kind == CursorKind.UNEXPOSED_EXPR, +node.get_children(), +): + +children = list(initializer.get_children()) + +if ( +len(children) == 2 +and children[0].kind == CursorKind.MEMBER_REF +and not is_coroutine_fn(children[0].referenced) +and is_coroutine_fn(children[1]) +): +context.report( +initializer, +"assigning coroutine_fn to non-coroutine_fn", +) + +# check assignments of coroutine_fn values to variables or fields + +if is_binary_operator(node, "="): + +[left, right] = node.get_children() + +if ( +left.kind +in [ +CursorKind.DECL_REF_EXPR, +CursorKind.MEMBER_REF_EXPR, +] +and not is_coroutine_fn(left.refere
[RFC v2 10/10] Fix some calls from coroutine_fn to no_coroutine_fn
These calls were found by static-analyzer.py. Not all occurrences of this problem were fixed. Signed-off-by: Alberto Faria --- block/commit.c | 2 +- block/io.c | 4 ++-- block/mirror.c | 4 ++-- block/parallels.c | 28 ++-- block/qcow.c | 10 +- block/qcow2-snapshot.c | 6 +++--- block/qcow2.c | 24 block/qed-table.c | 2 +- block/qed.c| 12 ++-- block/vdi.c| 17 + block/vhdx.c | 8 block/vmdk.c | 11 ++- blockdev.c | 2 +- 13 files changed, 66 insertions(+), 64 deletions(-) diff --git a/block/commit.c b/block/commit.c index 38571510cb..945945de05 100644 --- a/block/commit.c +++ b/block/commit.c @@ -135,7 +135,7 @@ static int coroutine_fn commit_run(Job *job, Error **errp) } if (base_len < len) { -ret = blk_truncate(s->base, len, false, PREALLOC_MODE_OFF, 0, NULL); +ret = blk_co_truncate(s->base, len, false, PREALLOC_MODE_OFF, 0, NULL); if (ret) { return ret; } diff --git a/block/io.c b/block/io.c index c2ed14cedb..a7d16eae02 100644 --- a/block/io.c +++ b/block/io.c @@ -2736,8 +2736,8 @@ int coroutine_fn bdrv_co_is_zero_fast(BlockDriverState *bs, int64_t offset, return 1; } -ret = bdrv_common_block_status_above(bs, NULL, false, false, offset, - bytes, &pnum, NULL, NULL, NULL); +ret = bdrv_co_common_block_status_above(bs, NULL, false, false, offset, +bytes, &pnum, NULL, NULL, NULL); if (ret < 0) { return ret; diff --git a/block/mirror.c b/block/mirror.c index 3c4ab1159d..3cbb610118 100644 --- a/block/mirror.c +++ b/block/mirror.c @@ -921,8 +921,8 @@ static int coroutine_fn mirror_run(Job *job, Error **errp) * active layer. 
*/ if (s->base == blk_bs(s->target)) { if (s->bdev_length > target_length) { -ret = blk_truncate(s->target, s->bdev_length, false, - PREALLOC_MODE_OFF, 0, NULL); +ret = blk_co_truncate(s->target, s->bdev_length, false, + PREALLOC_MODE_OFF, 0, NULL); if (ret < 0) { goto immediate_exit; } diff --git a/block/parallels.c b/block/parallels.c index a229c06f25..46baea6387 100644 --- a/block/parallels.c +++ b/block/parallels.c @@ -204,18 +204,18 @@ static int64_t allocate_clusters(BlockDriverState *bs, int64_t sector_num, * force the safer-but-slower fallocate. */ if (s->prealloc_mode == PRL_PREALLOC_MODE_TRUNCATE) { -ret = bdrv_truncate(bs->file, -(s->data_end + space) << BDRV_SECTOR_BITS, -false, PREALLOC_MODE_OFF, BDRV_REQ_ZERO_WRITE, -NULL); +ret = bdrv_co_truncate(bs->file, + (s->data_end + space) << BDRV_SECTOR_BITS, + false, PREALLOC_MODE_OFF, + BDRV_REQ_ZERO_WRITE, NULL); if (ret == -ENOTSUP) { s->prealloc_mode = PRL_PREALLOC_MODE_FALLOCATE; } } if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE) { -ret = bdrv_pwrite_zeroes(bs->file, - s->data_end << BDRV_SECTOR_BITS, - space << BDRV_SECTOR_BITS, 0); +ret = bdrv_co_pwrite_zeroes(bs->file, +s->data_end << BDRV_SECTOR_BITS, +space << BDRV_SECTOR_BITS, 0); } if (ret < 0) { return ret; @@ -277,8 +277,8 @@ static coroutine_fn int parallels_co_flush_to_os(BlockDriverState *bs) if (off + to_write > s->header_size) { to_write = s->header_size - off; } -ret = bdrv_pwrite(bs->file, off, to_write, (uint8_t *)s->header + off, - 0); +ret = bdrv_co_pwrite(bs->file, off, to_write, + (uint8_t *)s->header + off, 0); if (ret < 0) { qemu_co_mutex_unlock(&s->lock); return ret; @@ -503,8 +503,8 @@ static int coroutine_fn parallels_co_check(BlockDriverState *bs, * In order to really repair the image, we must shrink it. * That means we have to pass exact=true. 
*/ -ret = bdrv_truncate(bs->file, res->image_end_offset, true, -PREALLOC_MODE_OFF, 0, &local_err); +ret = bdrv_co_truncate(bs->file, res->image_end_offset, true, + PREALLOC_MODE_OFF, 0, &local_err); if (ret < 0) { error_report_err(local_err);
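This patch applies the mirror image of the earlier direct-call rule: inside a coroutine_fn, blocking wrappers such as blk_truncate() or bdrv_pwrite() (which create a coroutine and wait for it) must be replaced by their co_ variants. Abstracting away the clang specifics, the check amounts to the following sketch (hypothetical names, assuming the blocking wrappers carry a no_coroutine_fn-style annotation as the series title suggests):

```python
def find_blocking_calls_in_coroutines(coroutine_fns, no_coroutine_fns, calls):
    """Flag calls *from* coroutine_fn functions *to* no_coroutine_fn
    functions -- e.g. commit_run() (a coroutine) calling blk_truncate(),
    which must not run inside a coroutine.

    calls: iterable of (caller, callee) name pairs.
    """
    return [
        (caller, callee)
        for caller, callee in calls
        if caller in coroutine_fns and callee in no_coroutine_fns
    ]
```

The fix for each report is usually mechanical: substitute the co_ variant, as the hunks above do for blk_co_truncate(), bdrv_co_pwrite_zeroes(), and friends.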
[RFC v2 08/10] Fix some bad coroutine_fn indirect calls and pointer assignments
These problems were found by static-analyzer.py. Not all occurrences of these problems were fixed. Signed-off-by: Alberto Faria --- block/backup.c | 2 +- include/block/block_int-common.h | 12 +--- 2 files changed, 6 insertions(+), 8 deletions(-) diff --git a/block/backup.c b/block/backup.c index b2b649e305..6a9ad97a53 100644 --- a/block/backup.c +++ b/block/backup.c @@ -309,7 +309,7 @@ static void coroutine_fn backup_pause(Job *job) } } -static void coroutine_fn backup_set_speed(BlockJob *job, int64_t speed) +static void backup_set_speed(BlockJob *job, int64_t speed) { BackupBlockJob *s = container_of(job, BackupBlockJob, common); diff --git a/include/block/block_int-common.h b/include/block/block_int-common.h index 8947abab76..16c45d1262 100644 --- a/include/block/block_int-common.h +++ b/include/block/block_int-common.h @@ -731,13 +731,11 @@ struct BlockDriver { void coroutine_fn (*bdrv_co_drain_end)(BlockDriverState *bs); bool (*bdrv_supports_persistent_dirty_bitmap)(BlockDriverState *bs); -bool (*bdrv_co_can_store_new_dirty_bitmap)(BlockDriverState *bs, - const char *name, - uint32_t granularity, - Error **errp); -int (*bdrv_co_remove_persistent_dirty_bitmap)(BlockDriverState *bs, - const char *name, - Error **errp); +bool coroutine_fn (*bdrv_co_can_store_new_dirty_bitmap)( +BlockDriverState *bs, const char *name, uint32_t granularity, +Error **errp); +int coroutine_fn (*bdrv_co_remove_persistent_dirty_bitmap)( +BlockDriverState *bs, const char *name, Error **errp); }; static inline bool block_driver_can_compress(BlockDriver *drv) -- 2.37.1
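Several of the hunks above only move an existing coroutine_fn annotation so that every declaration of a function agrees with its definition — the "annotation disagreement" case the analyzer reports by comparing a declaration against its canonical declaration. Reduced to plain data, the consistency rule is just (a sketch, names hypothetical):

```python
def annotation_disagreements(decls):
    """decls: {function_name: [is_coroutine_fn for each declaration]}.
    Every declaration of a function (headers and definition alike) must
    agree on whether it is coroutine_fn; return the names that don't.
    """
    return [name for name, anns in decls.items() if len(set(anns)) > 1]
```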
Re: [RFC 0/3] Add Generic SPI GPIO model
Hello Iris, On 7/29/22 01:23, Iris Chen wrote: Hey everyone, I have been working on a project to add support for SPI-based TPMs in QEMU. Currently, most of our vboot platforms using a SPI-based TPM use the Linux SPI-GPIO driver to "bit-bang" the SPI protocol. This is because the Aspeed SPI controller (modeled in QEMU under hw/ssi/aspeed_smc.c) has an implementation deficiency that prevents bi-directional operations. aspeed_smc models the Aspeed FMC/SPI controllers which have a well defined HW interface. Your model proposal adds support for a new SPI controller using bitbang GPIOs. These are really two different models. I don't see how you could reuse aspeed_smc for this purpose. or you mean that Linux is using the SPI-GPIO driver because the Linux Aspeed SMC driver doesn't match the need ? It is true that the Linux driver is not generic, it deals with flash devices only. But that's another problem. Thus, in order to connect a TPM to this bus, my patch implements a QEMU SPI-GPIO driver (as the QEMU counterpart of the Linux SPI-GPIO driver). As we use SPI-based TPMs on many of our BMCs for the secure-boot implementation, I have already tested this implementation locally with our Yosemite-v3.5 platform and Facebook-OpenBMC. This model was tested by connecting a generic SPI-NOR (m25p80 for example) to the Yosemite-v3.5 SPI bus containing the TPM. This patch is an RFC because I have several questions about design. Although the model is working, I understand there are many areas where the design decision is not ideal (i.e. abstracting hard coded GPIO values). Below are some details of the patch and specific areas where I would appreciate feedback on how to make this better: hw/arm/aspeed.c: I am not sure where the best place is to instantiate the spi_gpio besides the aspeed_machine_init. The SPI GPIO device would be a platform device and not a SoC device. 
Hence, it should be instantiated at the machine level, like the I2C device are, using properties to let the model know about the GPIOs that should be driven to implement the SPI protocol. Ideally, the state of the GPIO controller pins and the SPI GPIO state should be shared. I think that's what you are trying to do that with attribute 'controller_state' in your patch ? But, the way it is done today, is too tightly coupled (names) with the Aspeed GPIO model to be generic. I think we need an intermediate QOM interface, or a base class, to implement an abstract SPI GPIO model and an Aspeed SPI GPIO model on top which would be linked to the Aspeed GPIO model of the SoC in use. Or we could introduce some kind of generic GPIO controller that we would link the SPI GPIO model with (using a property). The Aspeed GPIO model would be of that kind and the SPI GPIO model would be able to drive the pins using a common interface. That's another way to do it. Could we add the ability to instantiate it on the command line? It might be complex to identify the QOM object to use as the GPIO controller from the command line since it is on the SoC and not a platform device. In that case, an Aspeed SPI GPIO model would be a better approach. we would have to peek in the machine/soc to get a link on the Aspeed GPIO model in the realize routine of the Aspeed SPI GPIO model. m25p80_transfer8_ex in hw/block/m25p80.c: Existing SPI model assumed that the output byte is fully formed, can be passed to the SPI device, and the input byte can be returned, all in one operation. With SPI-GPIO we don't have the input byte until all 8 bits of the output have been shifted out, so we have to prime the MISO with bogus values (0xFF). That's fine I think. We do something similar for dummies. MOSI pin in spi_gpio: the mosi pin is not included and we poll the realtime value of the gpio for input bits to prevent bugs with caching the mosi value. 
It was discovered during testing that when using the mosi pin as the input pin, the mosi value was not being updated due to a kernel and aspeed_gpio model optimization. ah. We need help from Andrew ! the model should have a mosi pin . Thanks, C. Thus, here we are reading the value directly from the gpio controller instead of waiting for the push. I realize there are Aspeed specifics in the spi_gpio model. To make this extensible, is it preferred to make this into a base class and have our Aspeed SPI GPIO extend this or we could set up params to pass in the constructor? Thanks for your review and any direction here would be helpful :) Iris Chen (3): hw: m25p80: add prereading ability in transfer8 hw: spi_gpio: add spi gpio model hw: aspeed: hook up the spi gpio model hw/arm/Kconfig| 1 + hw/arm/aspeed.c | 5 ++ hw/block/m25p80.c | 29 ++- hw/ssi/Kconfig| 4 + hw/ssi/meson.build| 1 + hw/ssi/spi_gpio.c | 166 ++ hw/ssi/ssi.c | 4 - include/hw/ssi/spi_gpio.h | 38 + inc
Re: [PATCH v10 05/21] job.c: add job_lock/unlock while keeping job.h intact
Am 25.07.2022 um 09:38 hat Emanuele Giuseppe Esposito geschrieben: > With "intact" we mean that all job.h functions implicitly > take the lock. Therefore API callers are unmodified. > > This means that: > - many static functions that will be always called with job lock held > become _locked, and call _locked functions > - all public functions take the lock internally if needed, and call _locked > functions > - all public functions called internally by other functions in job.c will > have a > _locked counterpart (sometimes public), to avoid deadlocks (job lock > already taken). > These functions are not used for now. > - some public functions called only from external files (not job.c) do not > have _locked() counterpart and take the lock inside. Others won't need > the lock at all because they use fields only set at initialization and > never modified. > > job_{lock/unlock} is independent from real_job_{lock/unlock}. > > Note: at this stage, job_{lock/unlock} and job lock guard macros > are *nop* > > Reviewed-by: Vladimir Sementsov-Ogievskiy > Signed-off-by: Emanuele Giuseppe Esposito Would be nice to fix the access Vladimir found, but I think it's not actually a bug because we know that nobody else is going to write to job->ret. So with or without that fix: Reviewed-by: Kevin Wolf
Re: [RFC 0/3] Add Generic SPI GPIO model
On Fri, Jul 29, 2022 at 03:25:55PM +0200, Cédric Le Goater wrote: > Hello Iris, > > On 7/29/22 01:23, Iris Chen wrote: > > Currently, most of our vboot platforms using a SPI-based TPM use the Linux > > SPI-GPIO driver to "bit-bang" the SPI protocol. This is because the Aspeed > > SPI controller (modeled in QEMU under hw/ssi/aspeed_smc.c) has an > > implementation > > deficiency that prevents bi-directional operations. > aspeed_smc models the Aspeed FMC/SPI controllers which have a well defined > HW interface. Your model proposal adds support for a new SPI controller > using bitbang GPIOs. These are really two differents models. I don't see > how you could reuse aspeed_smc for this purpose. (I don't think there was anything here that proposed reusing the QEMU aspeed_smc model, but it might have been poorly worded). > or you mean that Linux is using the SPI-GPIO driver because the Linux > Aspeed SMC driver doesn't match the need ? It is true that the Linux > driver is not generic, it deals with flash devices only. But that's > another problem. We actually mean that the _hardware_ has a deficiency, not any of the code for it. The Aspeed SPI controller has a single byte FIFO in its implementation, which can only read or write in a single operation. It is *impossible* to perform bi-directional access with it (ie. you cannot write the MOSI and read the MISO in the same transaction). This is broken for many SPI protocols, other than flash devices, including the one used for TPMs. In order to connect to SPI-based TPMs on an Aspeed chip, you have to use the SPI-GPIO method. -- Patrick Williams
Re: [RFC 0/3] Add Generic SPI GPIO model
On Fri, Jul 29, 2022 at 03:25:55PM +0200, Cédric Le Goater wrote: > Hello Iris, > > On 7/29/22 01:23, Iris Chen wrote: > > MOSI pin in spi_gpio: the mosi pin is not included and we poll the realtime > > value > > of the gpio for input bits to prevent bugs with caching the mosi value. It > > was discovered > > during testing that when using the mosi pin as the input pin, the mosi > > value was not > > being updated due to a kernel and aspeed_gpio model optimization. > > ah. We need help from Andrew ! the model should have a mosi pin . This discussion about SMC reminded me of something that might be leading to the issues we're seeing. Our hardware implementation uses the same GPIOs as one of the SMCs and doesn't use the SMC. It could be that both QEMU models (the SPI-GPIO and the SMC) are trying to grab the same GPIOs. -- Patrick Williams
Re: [RFC 0/3] Add Generic SPI GPIO model
On 7/29/22 16:38, Patrick Williams wrote: On Fri, Jul 29, 2022 at 03:25:55PM +0200, Cédric Le Goater wrote: Hello Iris, On 7/29/22 01:23, Iris Chen wrote: Currently, most of our vboot platforms using a SPI-based TPM use the Linux SPI-GPIO driver to "bit-bang" the SPI protocol. This is because the Aspeed SPI controller (modeled in QEMU under hw/ssi/aspeed_smc.c) has an implementation deficiency that prevents bi-directional operations. aspeed_smc models the Aspeed FMC/SPI controllers which have a well defined HW interface. Your model proposal adds support for a new SPI controller using bitbang GPIOs. These are really two differents models. I don't see how you could reuse aspeed_smc for this purpose. (I don't think there was anything here that proposed reusing the QEMU aspeed_smc model, but it might have been poorly worded). yeah. That's fine. I was trying to see if we could ease Iris work with a fix in some driver/model but I didn't understand where. or you mean that Linux is using the SPI-GPIO driver because the Linux Aspeed SMC driver doesn't match the need ? It is true that the Linux driver is not generic, it deals with flash devices only. But that's another problem. We actually mean that the _hardware_ has a deficiency, not any of the code for it. The Aspeed SPI controller has a single byte FIFO in its implementation, which can only read or write in a single operation. It is *impossible* to perform bi-directional access with it (ie. you cannot write the MOSI and read the MISO in the same transaction). This is broken for many SPI protocols, other than flash devices, including the one used for TPMs. In order to connect to SPI-based TPMs on an Aspeed chip, you have to use the SPI-GPIO method. Ok. Thanks for the clarification. C.
Re: [PATCH v2 0/3] Fix some coverity issues on VDUSE
On Wed, 6 Jul 2022 at 11:18, Xie Yongji wrote: > > This series fixes some issues reported by coverity. > > Patch 1 fixes a incorrect function name. > > Patch 2 fixes Coverity CID 1490224. > > Patch 3 fixes Coverity CID 1490226, 1490223. > > V1 to V2: > - Drop the patch to fix Coverity CID 1490222, 1490227 [Markus] > - Add some commit log to explain why we don't use g_strlcpy() [Markus] > > Xie Yongji (3): > libvduse: Fix the incorrect function name > libvduse: Replace strcpy() with strncpy() > libvduse: Pass positive value to strerror() > > subprojects/libvduse/libvduse.c | 13 +++-- > 1 file changed, 7 insertions(+), 6 deletions(-) Hi -- these patches still don't seem to be in git. What is the plan for getting them in ? thanks -- PMM
Re: [RFC 0/3] Add Generic SPI GPIO model
On Fri, Jul 29, 2022 at 03:25:55PM +0200, Cédric Le Goater wrote: > Hello Iris, > > On 7/29/22 01:23, Iris Chen wrote: > > Hey everyone, > > > > I have been working on a project to add support for SPI-based TPMs in QEMU. > > Currently, most of our vboot platforms using a SPI-based TPM use the Linux > > SPI-GPIO driver to "bit-bang" the SPI protocol. This is because the Aspeed > > SPI controller (modeled in QEMU under hw/ssi/aspeed_smc.c) has an > > implementation > > deficiency that prevents bi-directional operations. > aspeed_smc models the Aspeed FMC/SPI controllers which have a well defined > HW interface. Your model proposal adds support for a new SPI controller > using bitbang GPIOs. These are really two differents models. I don't see > how you could reuse aspeed_smc for this purpose. > > or you mean that Linux is using the SPI-GPIO driver because the Linux > Aspeed SMC driver doesn't match the need ? It is true that the Linux > driver is not generic, it deals with flash devices only. But that's > another problem. > > > Thus, in order to connect > > a TPM to this bus, my patch implements a QEMU SPI-GPIO driver (as the QEMU > > counterpart of the Linux SPI-GPIO driver). > > > > As we use SPI-based TPMs on many of our BMCs for the secure-boot > > implementation, > > I have already tested this implementation locally with our Yosemite-v3.5 > > platform > > and Facebook-OpenBMC. This model was tested by connecting a generic SPI-NOR > > (m25p80 > > for example) to the Yosemite-v3.5 SPI bus containing the TPM. > > > > This patch is an RFC because I have several questions about design. > > Although the > > model is working, I understand there are many areas where the design > > decision > > is not deal (ie. abstracting hard coded GPIO values). 
Below are some > > details of the > > patch and specific areas where I would appreciate feedback on how to make > > this better: > > hw/arm/aspeed.c: > > I am not sure where the best place is to instantiate the spi_gpio besides > > the > > aspeed_machine_init. > > The SPI GPIO device would be a platform device and not a SoC device. > Hence, it should be instantiated at the machine level, like the I2C > device are, using properties to let the model know about the GPIOs > that should be driven to implement the SPI protocol. Agreed, should be like an I2C device. > > Ideally, the state of the GPIO controller pins and the SPI GPIO state > should be shared. I think that's what you are trying to do that with > attribute 'controller_state' in your patch ? But, the way it is done > today, is too tightly coupled (names) with the Aspeed GPIO model to > be generic. > > I think we need an intermediate QOM interface, or a base class, to > implement an abstract SPI GPIO model and an Aspeed SPI GPIO model > on top which would be linked to the Aspeed GPIO model of the SoC > in use. Disagree, I feel like we can accomplish this without inheritance. > > Or we could introduce some kind of generic GPIO controller that > we would link the SPI GPIO model with (using a property). The > Aspeed GPIO model would be of that kind and the SPI GPIO model > would be able to drive the pins using a common interface. > That's another way to do it. Agree, I would like to go in this direction if at all possible. > > > Could we add the ability to instantiate it on the command line? > > It might be complex to identify the QOM object to use as the GPIO > controller from the command line since it is on the SoC and not > a platform device. In that case, an Aspeed SPI GPIO model would > be a better approach. we would have to peek in the machine/soc to > get a link on the Aspeed GPIO model in the realize routine of the > Aspeed SPI GPIO model. Hrrrm perhaps, I feel like it shouldn't be that hard though. 
- Get a pointer to the QOM object that holds the GPIOs, using an object
  path or ID. Something similar to '-device ftgmac100,netdev=', right?
- Get the GPIOs by name from the QOM object. In this situation, I think
  we should be able to get the GPIO controller object, and then get the
  IRQs by the "sysbus-irq[126]"/etc. name. We could refactor the
  aspeed_gpio.c model to name the IRQs like the properties, i.e. to use
  "gpioX4" instead of "sysbus-irq[*]".

> > m25p80_transfer8_ex in hw/block/m25p80.c:
> > The existing SPI model assumed that the output byte is fully formed,
> > can be passed to the SPI device, and the input byte can be returned,
> > all in one operation. With SPI-GPIO we don't have the input byte until
> > all 8 bits of the output have been shifted out, so we have to prime
> > the MISO with bogus values (0xFF).
>
> That's fine I think. We do something similar for dummies.
>
> > MOSI pin in spi_gpio:
> > The mosi pin is not included and we poll the realtime value of the
> > gpio for input bits, to prevent bugs with caching the mosi value. It
> > was discovered during testing that when using the mosi pin as the
> > input pin, the mosi value was not being u
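The shifting behaviour Iris describes for m25p80_transfer8_ex — the input byte only exists after all 8 output bits have been clocked out, so MISO must be primed beforehand — can be sketched roughly as follows. This is a plain-C illustration with hypothetical helper names, not QEMU's actual SPI-GPIO model code:

```c
#include <stdint.h>
#include <stdbool.h>

/*
 * One bit-banged SPI byte transfer, MSB first.  sample_miso() stands in
 * for reading the MISO GPIO on each clock edge; in the real model the
 * device side would have been primed with bogus 0xFF bits, since the
 * full input byte does not exist until all 8 output bits are shifted.
 */
static uint8_t bitbang_spi_xfer8(uint8_t tx, bool (*sample_miso)(int bit))
{
    uint8_t rx = 0;

    for (int i = 7; i >= 0; i--) {
        bool mosi = (tx >> i) & 1;
        (void)mosi;             /* here the model would drive the MOSI GPIO */

        /* toggle SCK and sample MISO on the active edge */
        rx = (uint8_t)((rx << 1) | (sample_miso(i) ? 1 : 0));
    }

    return rx;                  /* input byte is complete only now */
}
```

This also illustrates why polling the real-time GPIO value (rather than caching) matters: each call to sample_miso() reads the current pin state at that clock edge.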
Re: [PATCH 1/1] block: add missed block_acct_setup with new block device init procedure
Am 29.07.2022 um 14:36 hat Denis V. Lunev geschrieben:
> On 29.07.2022 11:13, Kevin Wolf wrote:
> > Am 28.07.2022 um 21:27 hat Denis V. Lunev geschrieben:
> > > On 28.07.2022 16:42, Vladimir Sementsov-Ogievskiy wrote:
> > > > On 7/11/22 14:07, Denis V. Lunev wrote:
> > > > > Commit 5f76a7aac156ca75680dad5df4a385fd0b58f6b1 is looking harmless
> > > > > at first glance, but it has changed things a lot. 'libvirt' uses it
> > > > > to detect that it should follow the new initialization way, and
> > > > > this changes things considerably. With this procedure followed,
> > > > > blockdev_init() is not called anymore and thus the
> > > > > block_acct_setup() helper is not called.
> > > >
> > > > I'm not sure that 5f76a7aac156ca is really the cornerstone.. But yes,
> > > > libvirt moved to the "blockdev era", which means that we don't use
> > > > the old -drive; instead, block nodes are created by -blockdev / QMP
> > > > blockdev-add, and attached to block devices by node-name.
> > >
> > > git bisected, thus I am sure here
> > >
> > > > And if I understand correctly, blockdev_init() is called only on the
> > > > -drive path.
> > > >
> > > > I have some questions:
> > > >
> > > > 1. After this patch, don't we call block_acct_setup() twice on the
> > > > old path with -drive? That seems safe, as block_acct_setup() just
> > > > assigns fields of BlockAcctStats.. But that doesn't look good.
> > >
> > > hmmm
> >
> > I don't think it's actually correct, because then a value that was
> > explicitly set with -drive will be overridden by the default provided
> > by the device.
> >
> > A possible solution would be to switch the defaults in the BlockBackend
> > initialisation back to true, and then have an ON_OFF_AUTO property in
> > the devices to allow overriding the default from -drive. With -blockdev,
> > the BlockBackend default will be hard-coded to true and the options of
> > the devices will be the only way to change it.
> >
> > > > 2. Do we really need these options?
> > > > Could we instead just enable accounting of invalid and failed ops
> > > > unconditionally? I doubt that someone will learn that these new
> > > > options appeared and will use them to disable the failed/invalid
> > > > accounting again.
> > >
> > > I can move the assignment of these fields to true into
> > > block_acct_init() and forget about "configurable" items in the new
> > > path. I do not think that somebody has ever set these options.
> >
> > Well, whether anyone uses the option is a different question. I don't
> > know. But it has existed for many years.
>
> I was talking about a very small patch like the following:
>
> iris ~/src/qemu $ git diff
> diff --git a/block/accounting.c b/block/accounting.c
> index 2030851d79..c20d6ba9a0 100644
> --- a/block/accounting.c
> +++ b/block/accounting.c
> @@ -38,6 +38,8 @@ void block_acct_init(BlockAcctStats *stats)
>      if (qtest_enabled()) {
>          clock_type = QEMU_CLOCK_VIRTUAL;
>      }
> +    stats->account_invalid = true;
> +    stats->account_failed = true;
>  }

Yes, this looks good to me and we'll need it either way, even if we add
the ON_OFF_AUTO property to devices (because we need to set the right
default for 'auto').

>  void block_acct_setup(BlockAcctStats *stats, bool account_invalid,
> iris ~/src/qemu $
>
> but your proposal with ON_OFF_AUTO will work for me too.
>
> The real question - do we really need to publish this option
> for the external world to configure it?

As I said above, I don't know if anyone uses the option. It would be
needed for full feature parity of -blockdev with -drive, but if the
option isn't used by anyone, maybe full feature parity isn't something
we even want.

> > > The real question in this patch is that this initialization
> > > was a precondition for the good old "long IO" report
> > > configuration, which should be "enableable".
> > >
> > > But we could move this option to the "tracked request"
> > > layer only, and this would solve my puzzle.
> > > So, I'll move the "long IO report" to the tracked request level only,
> > > will create an option for it at the bdrv_ level, and will avoid it in
> > > the blk_ accounting.
> > >
> > > What do you think?
> >
> > I'm not sure what you mean by "long IO report". Don't these switches
> > just change which kinds of operations are counted into the statistics,
> > rather than changing the structure of the report?
> >
> > Conceptually, I would like accounting on the block node level, but it's
> > not what we have been doing, so it would be a big change.
>
> I have to say sorry again. I found this place once I reverted to my
> very old series discussed here, plus some late additions on top of it
> done by Vladimir:
> https://lists.defectivebydesign.org/archive/html/qemu-devel/2020-07/msg03772.html

Oh, we never merged this upstream, it seems?

> I will definitely have to come back to this later.
>
> Den

Kevin
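The two proposals in this thread — Denis's minimal patch (enable both accounting flags in block_acct_init()) and Kevin's ON_OFF_AUTO device property — compose as sketched below. Names are illustrative only: the struct is reduced to the two flags from the quoted diff, and the enum stands in for the proposed property; this is not QEMU's real API:

```c
#include <stdbool.h>

/* Reduced stand-in for BlockAcctStats: just the two flags from the diff. */
typedef struct {
    bool account_invalid;
    bool account_failed;
} AcctStats;

/* Hypothetical stand-in for the proposed ON_OFF_AUTO device property. */
typedef enum { ACCT_AUTO, ACCT_ON, ACCT_OFF } AcctOnOffAuto;

/*
 * Denis's minimal patch: init enables both kinds of accounting, so the
 * -blockdev path (which never reaches block_acct_setup()) gets sane
 * defaults.
 */
static void acct_init(AcctStats *s)
{
    s->account_invalid = true;
    s->account_failed = true;
}

/*
 * Kevin's proposal: AUTO keeps whatever the backend already has (the
 * init default, or an explicit -drive setting), while ON/OFF let the
 * device override it.  This avoids the bug Kevin points out, where a
 * plain device default would clobber an explicit -drive value.
 */
static bool resolve_flag(AcctOnOffAuto prop, bool backend_value)
{
    switch (prop) {
    case ACCT_ON:
        return true;
    case ACCT_OFF:
        return false;
    default:
        return backend_value;   /* AUTO: defer to the backend */
    }
}
```

With this split, "feature parity" reduces to whether the device-level property is ever exposed: if it stays AUTO everywhere, behaviour is exactly the unconditional-true default Vladimir suggested.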