Thanks for the summary from our discussion and the additional feedback!
On 10/16/23 15:57, Thomas Lamprecht wrote:
>> - create some sort of test report
> As Stefan mentioned, test output can be good to have. Our Buildbot
> instance provides that, and while I don't look at it in 99% of the
> builds, when I need to, it's worth *a lot*.
Agreed, test output is always valuable and will definitely be captured.
>> ## Introduction
>>
>> The goal is to establish a framework that allows us to write
>> automated integration tests for our products.
>> These tests are intended to run in the following situations:
>>
>> - When new packages are uploaded to the staging repos (by triggering
>>   a test run from repoman, or similar)
> *debian repos, as we could also trigger some runs when git commits
> are pushed, just like we do now through Buildbot. Doing so is IMO
> nice, as it will catch issues before a package is bumped. It is still
> quite a bit simpler to implement than the "apply patch from list to
> git repos" mechanism from the next point, but could still act as
> preparation for that.
>> - Later, these tests could also be run when patch series are posted
>>   to our mailing lists. This requires a mechanism to automatically
>>   discover, fetch and build patches, which will be a separate,
>>   follow-up project.
>>
>> As a main mode of operation, the Systems under Test (SUTs)
>> will be virtualized on top of a Proxmox VE node.
> For the fully-automated test system this can be OK as the primary
> mode, as it indeed makes things like going back to an older software
> state much easier.
> But if we decouple the test harness and its execution from that more
> automated system, we can also run the harness periodically on our
> bare-metal test servers.
>> ## Terminology
>>
>> - Template: A backup/VM template that can be instantiated by the
>>   test runner
> I.e., the base of the test host? I'd call this test-host; 'template'
> is a bit too overloaded/generic and might focus too much on the
> virtual test environment.
True, 'template' is a bit overloaded.
> Or is this some part that takes part in the test, i.e., a
> generalization of the product under test and of supplementary
> tools/apps that help with that test?
It was intended to be a 'general VM/CT base thingy' that can be
instantiated and managed by the test runner, so either a PVE/PBS/PMG
base installation, or some auxiliary resource, e.g. a Debian VM with
an already-set-up LDAP server.
I'll see if I can find good terms with the newly added focus on
bare-metal testing / the decoupling between environment setup and test
execution.
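
To make that a bit more concrete, here is a rough sketch of how such a
base definition could be modeled by the runner. This is purely
illustrative: the field names and the use of the serde/toml crates are
assumptions on my part, not a settled format.

```rust
use serde::Deserialize;

// Illustrative only: the shape of a "template" definition as the test
// runner might parse it from a TOML file. Field names are made up.
#[derive(Debug, Deserialize)]
struct Template {
    // e.g. "pve-base", or "ldap-server" for an auxiliary Debian VM
    name: String,
    // base installation to instantiate: "pve", "pbs", "pmg" or "debian"
    base: String,
    // optional script to run once the instance is up
    setup_script: Option<String>,
}

fn main() {
    let raw = r#"
        name = "ldap-server"
        base = "debian"
        setup_script = "setup-ldap.sh"
    "#;

    let template: Template =
        toml::from_str(raw).expect("valid template definition");
    println!("{template:?}");
}
```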
> Is the order of test-cases guaranteed by toml parsing, or how are
> intra-fixture dependencies ensured?
Good point. With rollbacks in between test cases it probably does not
matter much, but on 'real hardware' with no rollback this could
definitely be a concern.
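
For the virtualized case, the rollback between test cases could be as
simple as shelling out to the existing `qm` CLI on the Proxmox VE node.
A minimal sketch, assuming a Rust-based runner; the VM id and snapshot
name below are placeholders:

```rust
use std::process::Command;

// Roll a virtual SUT back to a clean snapshot, using Proxmox VE's
// `qm rollback <vmid> <snapname>` command.
fn rollback(vmid: u32, snapshot: &str) -> std::io::Result<()> {
    let status = Command::new("qm")
        .arg("rollback")
        .arg(vmid.to_string())
        .arg(snapshot)
        .status()?;
    if !status.success() {
        return Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            format!("qm rollback failed for VM {vmid}"),
        ));
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    // Placeholder values: VM 100, snapshot "clean-install".
    rollback(100, "clean-install")
}
```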
A super simple thing that could just work fine is ordering test
execution by test-case names, sorted alphabetically. Ideally you'd
write test cases that do not depend on each other in any way, and *if*
you ever find yourself in the situation where you *need* some ordering,
you could just encode the order in the test-case name by adding an
integer prefix - similar to how you would name config files in
/etc/sysctl.d/*, for instance.
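
For illustration, this is all it would take in a Rust-based runner; the
test-case names below are made up:

```rust
// Derive a deterministic execution order by sorting test-case names
// alphabetically. An integer prefix then encodes explicit ordering,
// similar to files in /etc/sysctl.d/. Keep prefixes the same width,
// since the sort is lexicographic.
fn execution_order(mut names: Vec<String>) -> Vec<String> {
    names.sort();
    names
}

fn main() {
    let cases = vec![
        "201-create-backup".to_string(),
        "101-setup-datastore".to_string(),
        "999-cleanup".to_string(),
    ];
    // Prints 101-setup-datastore, 201-create-backup, 999-cleanup.
    for name in execution_order(cases) {
        println!("{name}");
    }
}
```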
--
- Lukas
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel