>
> Excellent point. I have been saying for some time that IMHO we can reduce
> CI, at least pre-commit, to:
> 1) build J11
> 2) build J17
> 3) run tests with build J11 + runtime J11
> 4) run tests with build J11 + runtime J17.


Ekaterina, I was thinking more about:
1) build J11
2) build J17
3) run tests with build J11 + runtime J11
4) run smoke tests with build J17 and runtime J17
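As a rough sketch, the four steps above collapse the build/runtime cross product to one full run and one smoke run (a hypothetical illustration; the job names and scopes are my assumptions, not actual CI definitions):

```python
# Hypothetical sketch of the reduced pre-commit matrix described above.
# Tuples are (build_jdk, runtime_jdk, scope); all names are illustrative.
jobs = [
    ("j11", None,  "compile"),  # 1) build J11
    ("j17", None,  "compile"),  # 2) build J17
    ("j11", "j11", "full"),     # 3) full tests with build J11 + runtime J11
    ("j17", "j17", "smoke"),    # 4) smoke tests with build J17 + runtime J17
]

# The pruned matrix never mixes build and runtime JDKs:
assert all(runtime is None or build == runtime for build, runtime, _ in jobs)
print(len(jobs))  # → 4
```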

Again, I don't see value in running the J11 build with the J17 runtime in
addition to the J11 runtime - just pick one, unless we change something
specific to the JVM.

If we need to decide whether to test the latest or the default
configuration, I think we should pick the latest, because the latest is what
actually defines Cassandra 5.0 - the set of new features that will shine on
the website.

Also - we have configurations which test certain features, but they are more
like dimensions:
- commit log compression
- sstable compression
- CDC
- Trie memtables
- Trie SSTable format
- Extended deletion time
...

Currently, what we call the default configuration is tested with:
- no compression, no CDC, no extended deletion time
- *commit log compression + sstable compression*, no CDC, no extended
deletion time
- no compression, *CDC enabled*, no extended deletion time
- no compression, no CDC, *extended deletion time enabled*

This applies only to unit tests, of course.
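To make the combinatorics concrete, here is a small sketch (simplified: commit log and sstable compression are folded into one "compression" toggle, as in the list above):

```python
from itertools import product

# Simplified unit-test dimensions from the list above.
dims = ("compression", "cdc", "extended_deletion_time")
full_matrix = list(product([False, True], repeat=len(dims)))

# What is actually run today: a baseline plus one dimension flipped at a time.
tested = [combo for combo in full_matrix if sum(combo) <= 1]

print(len(full_matrix), len(tested))  # → 8 4
```

So today's scheme covers 4 of the 8 possible combinations, and never tests two dimensions enabled at once.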

Then, are we going to test all of those scenarios with the "latest"
configuration? I'm asking because the latest configuration is mostly about
tries and UCS and has nothing to do with compression or CDC. So why should
the default configuration be tested more thoroughly than the latest one,
which enables the essential Cassandra 5.0 features?

I propose to significantly reduce that matrix. Let's distinguish the
packages of tests that need to be run with CDC enabled / disabled, the ones
that need commitlog compression enabled / disabled, and the tests that
verify sstable formats (mostly io and index, I guess), and leave the other
parameters set as in the latest configuration - this is the easiest way, I
think.
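One way to picture that reduction (the package groupings below are purely illustrative, not actual Cassandra packages):

```python
# Hypothetical sketch: each dimension varies only over its own test
# packages; everything else stays pinned to the latest configuration.
dimension_scopes = {
    "cdc":                   ["cdc-tests"],
    "commitlog_compression": ["commitlog-tests"],
    "sstable_format":        ["io-tests", "index-tests"],
}

def jobs_for(dimension):
    """Expand one dimension (on/off) only over its own packages."""
    return [(pkg, {dimension: value})
            for pkg in dimension_scopes[dimension]
            for value in (False, True)]

matrix = [job for dim in dimension_scopes for job in jobs_for(dim)]
print(len(matrix))  # prints 8: targeted jobs instead of a global matrix
```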

For dtests we have vnodes/no-vnodes and offheap/onheap, and nothing about
the other dimensions. To me, running no-vnodes makes no sense, because
no-vnodes is just the special case of vnodes=1. On the other hand,
offheap/onheap buffers could be covered by unit tests. In short, I'd run
dtests only with the default and latest configurations.

Sorry for being too wordy,


On Thu, 15 Feb 2024 at 07:39, Štefan Miklošovič <stefan.mikloso...@gmail.com>
wrote:

> Something along the lines of what Paulo is proposing makes sense to me. To
> sum it up, these are the workflows we have now:
>
> java17_pre-commit_tests
> java11_pre-commit_tests
> java17_separate_tests
> java11_separate_tests
>
> We would have a couple more; together they would look like:
>
> java17_pre-commit_tests
> java17_pre-commit_tests-latest-yaml
> java11_pre-commit_tests
> java11_pre-commit_tests-latest-yaml
> java17_separate_tests
> java17_separate_tests-latest-yaml
> java11_separate_tests
> java11_separate_tests-latest-yaml
>
> To go over Paulo's plan, his steps 1-3 for 5.0 would require just one
> workflow
>
> java11_pre-commit_tests
>
> when no configuration is touched and two workflows
>
> java11_pre-commit_tests
> java11_pre-commit_tests-latest-yaml
>
> when there is some configuration change.
>
> Now the term "some configuration change" is quite tricky, and it is not
> always easy to evaluate whether both the default and latest yaml workflows
> need to be executed. It might happen that a change does not touch the
> configuration at all, yet still needs verification under both scenarios.
> The -latest.yaml config might also be such that a change makes sense for
> the default config in isolation but would not work with -latest.yaml. I
> don't know whether this is just a theoretical problem, but my gut feeling
> is that we would be safer if we simply required both the default and latest
> yaml workflows together.
>
> Even if we do, we basically replace "two jvms" builds with "two yamls"
> builds, but I consider "two yamls" builds more valuable in general than
> "two jvms" builds. It would take basically the same amount of time; we
> would just reorient our build matrix from different jvms to different
> yamls.
>
> For releases, we would certainly need to run across jvms too.
>
> On Thu, Feb 15, 2024 at 7:05 AM Paulo Motta <pa...@apache.org> wrote:
>
>> > Perhaps it is also a good opportunity to distinguish subsets of tests
>> which make sense to run with a configuration matrix.
>>
>> Agree. I think we should define a “standard/golden” configuration for
>> each branch and minimally require precommit tests for that configuration.
>> Assignees and reviewers can determine if additional test variants are
>> required based on the patch scope.
>>
>> Nightly and prerelease tests can be run to catch any issues outside the
>> standard configuration based on the supported configuration matrix.
>>
>> On Wed, 14 Feb 2024 at 15:32 Jacek Lewandowski <
>> lewandowski.ja...@gmail.com> wrote:
>>
>>> On Wed, 14 Feb 2024 at 17:30, Josh McKenzie <jmcken...@apache.org>
>>> wrote:
>>>
>>>> When we have failing tests people do not spend the time to figure out
>>>> if their logic caused a regression and merge, making things more unstable…
>>>> so when we merge failing tests that leads to people merging even more
>>>> failing tests...
>>>>
>>>> What's the counter position to this Jacek / Berenguer?
>>>>
>>>
>>> For how long are we going to deceive ourselves? Are we shipping those
>>> features or not? Perhaps it is also a good opportunity to distinguish
>>> subsets of tests which make sense to run with a configuration matrix.
>>>
>>> If we don't add those tests to the pre-commit pipeline, "people do not
>>> spend the time to figure out if their logic caused a regression and merge,
>>> making things more unstable…"
>>> I think it is much more valuable to test those various configurations
>>> than to test against j11 and j17 separately. I see really little value
>>> in doing the latter.
>>>
>>>
>>>
