On 10/17/23 18:28, Thomas Lamprecht wrote:
On 17/10/2023 at 14:33, Lukas Wagner wrote:
On 10/17/23 08:35, Thomas Lamprecht wrote:
Off the top of my head I'd rather do some attribute-based dependency
annotation, so that a single test or a whole fixture can depend on
other single tests or whole fixtures.
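(Just to make sure we are picturing the same thing, here is a rough
sketch of how such dependency annotations could look. The decorator and
all names are purely hypothetical, and Python is only used for
illustration:)

    # Purely hypothetical sketch of attribute-based dependency
    # annotation; none of these decorators exist (yet).

    def depends_on(*names):
        """Mark a test as depending on other tests or whole fixtures."""
        def wrap(func):
            func._depends_on = names
            return func
        return wrap

    @depends_on("fixture:clustering")   # depend on a whole fixture
    def test_join_node():
        ...

    @depends_on("test_join_node")       # depend on a single test
    def test_migrate_vm():
        ...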


The more I think about it, the more I believe that inter-testcase
deps should be avoided as much as possible. In unit testing, (hidden)

We don't plan unit testing here though, and the dependencies I proposed
are the opposite of hidden, rather explicitly annotated ones.

dependencies between tests are in my experience the no. 1 cause of
flaky tests, and I see no reason why this would not also apply to
end-to-end integration testing.

Any source on that being the no. 1 cause of flaky tests? IMO that
should not make any difference; in the end you just allow better
Of course I don't have bullet-proof evidence for the 'no. 1' claim;
it's just my personal experience, which comes partly from a former job
(where I was, coincidentally, also responsible for setting up automated
testing ;) - there it was for a firmware project) and partly from the
work I did for my master's thesis (which was also in the broader area
of software testing).

I would say it's just the consequence of having multiple test cases
manipulate a shared, stateful entity, be it directly or indirectly via
side effects. Of course, things get even more difficult and messy once
concurrent test execution enters the picture ;)
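To give a contrived example (plain Python, hypothetical test names,
purely for illustration): if a test silently relies on state that an
earlier test left behind, it turns flaky as soon as that other test is
skipped, reordered, or fails:

    # Contrived example: two "tests" sharing a stateful entity.
    # test_backup_vm silently relies on the VM that test_create_vm
    # leaves behind, so it breaks once test_create_vm is skipped,
    # fails, or runs in a different order.

    vms = {}  # shared, stateful "test bed"

    def test_create_vm():
        vms[100] = {"status": "running"}
        assert vms[100]["status"] == "running"

    def test_backup_vm():
        # hidden dependency: assumes VM 100 already exists
        assert vms[100]["status"] == "running"

    if __name__ == "__main__":
        test_backup_vm()  # "wrong" order -> KeyError, i.e. a flaky failure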

reuse through composition of other tests (e.g., migration builds upon
the clustering *setup*, not its tests; if I just want to run migration,
I can do the clustering setup without executing its tests).
Not providing that could also mean that one has to move all logic into
the test script, resulting in a single test per "fixture", reducing
granularity and the parallelism of running tests.
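Paraphrased in code (again a purely hypothetical sketch, Python only
for illustration), that would mean migration tests declare that they
need the clustering *setup*, not that the clustering *tests* passed:

    # Hypothetical sketch: tests declare the setup they need instead of
    # depending on other tests. A runner could perform the clustering
    # setup once and run the migration tests without executing any of
    # the clustering test cases.

    def requires_setup(name):
        def wrap(func):
            func._requires_setup = name
            return func
        return wrap

    def clustering_setup():
        """Set up a cluster, shared by clustering *and* migration tests."""
        return {"nodes": ["node1", "node2"]}

    @requires_setup("clustering")
    def test_migrate_vm(cluster):
        assert len(cluster["nodes"]) >= 2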

I also think that

I'd suggest only allowing test cases to depend on fixtures. The fixtures
themselves could have setup/teardown hooks that allow setting up and
cleaning up a test scenario. If needed, we could also have something
like 'fixture inheritance', where a fixture can 'extend' another,
supplying additional setup/teardown.
Example: the 'outermost' or 'parent' fixture might define that we
want a 'basic PVE installation' with the latest .debs deployed,
while another fixture that inherits from that one might set up a
storage of a certain type, useful for all tests that require that
specific type of storage.

Maybe our disagreement stems mostly from different design pictures in
our heads. I am probably a bit less fixed (heh) on the fixtures, or at
least on the naming of that term, and might say "test system" or
"intra-test system" where, for your design plan, "fixture" would be the
better word.

I think it's mostly a terminology problem. In my previous definition of
'fixture' I was maybe too fixated (heh) on it being 'the test
infrastructure/VMs that must be set up/instantiated'. Maybe it helps to
think about it more generally as 'common setup/cleanup steps for a set
of test cases', which *might* include setting up test infra (although I
have not yet figured out a good way to model that with the desired
decoupling between the test runner and the test-VM-setup-thingy).
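To make the 'common setup/cleanup steps' idea a bit more tangible, here
is a rough sketch (all class and method names are made up, Python is
only used for illustration):

    # Rough sketch of fixtures as common setup/cleanup steps; all names
    # are made up. A child fixture "extends" its parent by running the
    # parent's setup first and its own teardown before the parent's.

    class Fixture:
        def setup(self):
            pass

        def teardown(self):
            pass

    class BasicPveInstallation(Fixture):
        def setup(self):
            print("deploy the latest .debs onto a basic PVE installation")

    class WithCephStorage(BasicPveInstallation):
        def setup(self):
            super().setup()            # parent setup first
            print("set up Ceph storage for tests that need it")

        def teardown(self):
            print("remove the Ceph storage again")
            super().teardown()         # then parent teardown

    def run_with_fixture(fixture, test):
        fixture.setup()
        try:
            test()
        finally:
            fixture.teardown()

    run_with_fixture(WithCephStorage(), lambda: print("storage test case"))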


On the other hand, instead of inheritance, a 'role/trait'-based system
might also work (composition >>> inheritance, after all) - and
maybe that also aligns better with the 'properties' mentioned in
your other mail (I mean this here:  "ostype=win*", "memory>=10G").
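A rough sketch of how that trait/property angle could look (again all
names are hypothetical, Python only for illustration): test cases state
required properties, and the runner matches them against what a test
system/fixture provides:

    # Hypothetical sketch: test cases declare required properties and
    # get matched against what a test system provides, loosely inspired
    # by the "ostype=win*", "memory>=10G" idea.
    import fnmatch

    def matches(provided, required):
        for key, want in required.items():
            have = provided.get(key)
            if key == "ostype":
                if have is None or not fnmatch.fnmatch(have, want):
                    return False
            elif key == "memory_gb":
                if have is None or have < want:
                    return False
        return True

    test_system = {"ostype": "win11", "memory_gb": 16}
    required = {"ostype": "win*", "memory_gb": 10}

    print(matches(test_system, required))  # True -> test can run here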

This is essentially the same pattern as found in numerous other testing
frameworks (xUnit, pytest, etc.); I think it makes sense to build upon
this battle-proven approach.

Those are all unit testing tools though, which we already use in the
sources, and IIRC they do not really provide what we need here.
While starting out simple(r) and avoiding too much complexity certainly
has its merits, I don't think we should try to draw too many parallels
with those tools here.
In summary, the most important point for me is a test system that is
decoupled from the automation system managing it, ideally such that I
can decide relatively flexibly on manual runs. IMO that should not be
too much work, and it guarantees clean-cut APIs from which future
development or integration will surely benefit too.
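Just to sketch what such a clean-cut boundary could look like (a
completely hypothetical interface, Python only for illustration): the
test system would expose a small API that a manual invocation and any
automation could drive in exactly the same way:

    # Completely hypothetical sketch of a clean-cut API between the test
    # system and whatever drives it (a manual run or some automation).
    from abc import ABC, abstractmethod

    class TestSystem(ABC):
        @abstractmethod
        def prepare(self, fixture: str) -> None:
            """Bring up the requested fixture/test bed."""

        @abstractmethod
        def run(self, test_filter: str) -> dict:
            """Run matching test cases and return a result summary."""

        @abstractmethod
        def cleanup(self) -> None:
            """Tear everything down again."""

    # A manual run and a CI pipeline would both just do:
    #   ts.prepare("basic-pve"); ts.run("migration/*"); ts.cleanup()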

The rest is probably hard to determine clearly at this stage, as it's
easy (at least for me) to get lost in different understandings of terms
and design perceptions, but hard to convey those clearly for "pipe
dreams". So for now I'll stop adding discussion churn until there's
something more concrete that I can grasp on my own terms (through
reading/writing code), but that should not deter others from still
giving input at this stage.

Agreed.
I think we agree on the most important requirements/aspects of this
project and that's a good foundation for my upcoming efforts.

At this point, the best way forward for me is to start experimenting
with some ideas and begin the actual implementation.
Once I have something concrete to show, be it a prototype or some sort
of minimum viable product, it will be much easier to discuss any
further details and design aspects.

Thanks!

--
- Lukas


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
