On Tue, Nov 26, 2019 at 06:48:45PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Hi all!
>
> Here is simple benchmarking utility, to generate performance
> comparison tables, like the following:
>
> ----------  -------------  -------------  -------------
>             backup-1       backup-2       mirror
> ssd -> ssd  0.43 +- 0.00   4.48 +- 0.06   4.38 +- 0.02
> ssd -> hdd  10.60 +- 0.08  10.69 +- 0.18  10.57 +- 0.05
> ssd -> nbd  33.81 +- 0.37  10.67 +- 0.17  10.07 +- 0.07
> ----------  -------------  -------------  -------------
>
> This is a v2, as v1 was inside
> "[RFC 00/24] backup performance: block_status + async"
>
> I'll use this benchmark in other series, hope someone
> will like it.
>
> Vladimir Sementsov-Ogievskiy (3):
>   python: add simplebench.py
>   python: add qemu/bench_block_job.py
>   python: add example usage of simplebench
>
>  python/bench-example.py        |  80 +++++++++++++++++++++
>  python/qemu/bench_block_job.py | 115 +++++++++++++++++++++++++++++
>  python/simplebench.py          | 128 +++++++++++++++++++++++++++++++++
>  3 files changed, 323 insertions(+)
>  create mode 100644 python/bench-example.py
>  create mode 100755 python/qemu/bench_block_job.py
>  create mode 100644 python/simplebench.py
>
> --
> 2.18.0
Hi Vladimir,

This looks interesting.  Do you think the execution of "test cases" in
an "environment" is a generic enough concept that it could be reused
(or reuse another system)?  My point is that it'd be nice to do the
same thing, say, for the acceptance tests, or any tests for that
matter.  For instance, for known parameters, we could record the time
difference between booting a guest with the q35 or pc machine types
and virtio-blk or virtio-scsi devices.

BTW, this reminded me of an IOzone[1] test runner / results analyzer:

https://github.com/avocado-framework-tests/avocado-misc-tests/blob/master/io/disk/iozone.py

I'm also cc'ing Lukáš Doktor, who has actively worked on something
similar.

Cheers,
- Cleber.
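P.S. To make the "generic enough" idea a bit more concrete, below is a
rough, purely hypothetical sketch of the driver loop I have in mind.
The bench()/print_table() names and the env/case dict layout are my
own invention, not taken from the posted patches:

import statistics
import time


def bench(test_func, envs, cases, count=3):
    """Run test_func(env, case) 'count' times for each env/case pair
    and record the mean wall-clock time and standard deviation."""
    results = {}
    for case in cases:
        for env in envs:
            timings = []
            for _ in range(count):
                start = time.monotonic()
                test_func(env, case)
                timings.append(time.monotonic() - start)
            results[(case['id'], env['id'])] = (statistics.mean(timings),
                                                statistics.stdev(timings))
    return results


def print_table(results, envs, cases):
    """Print a cases-by-envs table of 'mean +- stdev' cells."""
    print(''.ljust(12) + ''.join(env['id'].ljust(16) for env in envs))
    for case in cases:
        row = case['id'].ljust(12)
        for env in envs:
            mean, stdev = results[(case['id'], env['id'])]
            row += '{:.2f} +- {:.2f}'.format(mean, stdev).ljust(16)
        print(row)


if __name__ == '__main__':
    # Toy usage: 'environments' and 'cases' are plain dicts, and the
    # test function just sleeps instead of running a block job.
    envs = [{'id': 'backup-1'}, {'id': 'backup-2'}]
    cases = [{'id': 'ssd -> ssd'}, {'id': 'ssd -> hdd'}]
    print_table(bench(lambda env, case: time.sleep(0.01), envs, cases),
                envs, cases)

A driver with that shape is not tied to block jobs at all, which is
why I think it could also time things like guest boot with different
machine types and devices.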