Peter Xu <pet...@redhat.com> writes:

> On Thu, Jan 11, 2024 at 10:58:49AM -0300, Fabiano Rosas wrote:
>> Peter Xu <pet...@redhat.com> writes:
>> 
>> > On Wed, Jan 10, 2024 at 11:42:18AM -0300, Fabiano Rosas wrote:
>> >> Peter Xu <pet...@redhat.com> writes:
>> >> 
>> >> > On Tue, Jan 09, 2024 at 11:46:32AM -0300, Fabiano Rosas wrote:
>> >> >> Hm, it would be better to avoid the extra maintenance task at the start
>> >> >> of every release, no? It also blocks us from doing n-2 even
>> >> >> experimentally.
>> >> >
>> >> > See my other reply, on whether we can use "n-1" for migration-test.  If
>> >> > that can work for us, then IIUC we can avoid either "since:" or any
>> >> > relevant flag, nor do we need to unmask tests after each release.  All
>> >> > old tests should always "just work" with a new qemu binary.
>> >> 
>> >> Hmm.. There are some assumptions here:
>> >> 
>> >> 1) New code will always be compatible with old tests. E.g. some
>> >>    patchseries changed code and changed a test to match the new
>> >>    code. Then we'd need a flag like 'since' anyway to mark that the new
>> >>    QEMU cannot be used with the old test.
>> >> 
>> >>    (if new QEMU is not compatible with old tests without any good
>> >>    reason, then that's just a regression I think)
>> >
>> > Exactly what you are saying here.  We can't have a new QEMU that stops
>> > working with old tests.
>> 
>> Ok, so we need to forbid breaking changes to tests from now on. I'll try
>> to add some words in the docs about this.
>> 
>> >
>> > One way to simplify the understanding is, we can imagine the old tests as
>> > "some user currently using the old QEMU, and who would like to migrate to
>> > the master QEMU binary".  Such user only uses exactly the same cmdline we
>> > used for testing migration-test in exactly that n-1 qemu release binary.
>> >
>> > If we fail that old test, it means we can already fail such a user.
>> > That's by definition a regression to me, no?  Or, do you have a solid example?
>> 
>> For instance, we used to not issue the SETUP event on incoming. If a
>> test (or user app) expected to see the ACTIVE or FAILED states, then
>> would it be a regression to now start issuing the SETUP event at the
>> proper place?
>
> A valid example.  And it's a tricky one in that it actually breaks the
> ABI, even if only slightly; however, events are normally treated more
> flexibly in this case, so we didn't care.
>
> I think it means we didn't care about any program that expected no SETUP
> before ACTIVE, or such a user would already have crashed.
>
> Our migration-test is compatible with such a change, right?
>
> I think the trick here is that we shouldn't let migration-test ever
> contain any "assumption" about the internals of QEMU.  It should behave
> strictly as a user would use QEMU, and that should always be guaranteed
> to work on newer qemu binaries.  Then breaking the old migration-test
> will be the same as breaking a user, and it'll naturally fit this model
> of using the n-1 version of migration-test.
>
>> 
>> Anyway, it's pointless to give examples because we either allow old
>> tests to be changed or we don't. If we don't, then that's solved. If we
>> do, there will always be room for the situation I mentioned in 1) above.
>
> IMHO we should allow any changes to old tests.  They won't apply to the
> n-1 tests anyway, not until the next release.  It may depend on how you
> define "changed" in this case.
>

I mean introducing a piece of code in QEMU which requires a change in a
test. That shouldn't be allowed, because the n-1 tests will still have
the old behavior, which could potentially clash with what the new QEMU
is doing.
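
To make the SETUP example concrete: the test survives that kind of
change only if it polls for the state it cares about instead of
asserting an exact event sequence. A minimal sketch of what I mean,
using the libqtest/QDict helpers from tests/qtest (the
wait_for_migration_done() name is mine; illustrative, not a drop-in
patch):

/*
 * Wait for a terminal migration status by polling query-migrate,
 * treating everything else ("setup", "active", ...) as transient.
 */
static bool wait_for_migration_done(QTestState *who, const char *goal)
{
    for (;;) {
        QDict *rsp = qtest_qmp(who, "{ 'execute': 'query-migrate' }");
        QDict *ret = qdict_get_qdict(rsp, "return");
        /* "status" may be absent early on; treat that as transient too */
        const char *status = ret ? qdict_get_try_str(ret, "status") : NULL;
        bool done = status && !strcmp(status, goal);
        bool failed = status && !strcmp(status, "failed");

        qobject_unref(rsp);
        if (done || failed) {
            return done;
        }
        g_usleep(10 * 1000);
    }
}

A test written like that doesn't care whether a SETUP event shows up
before ACTIVE or not.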

>> 
>> > The only thing I can think of is when we want to, e.g., obsolete a QEMU
>> > cmdline that is used in migration-test.  But then that cmdline needs to
>> > be declared obsolete for a few releases first (let's say, 4), and before
>> > that deadline we should have already rewritten migration-test to not use
>> > it; as long as we do it within 3 releases I suppose nothing will be
>> > affected.
>> >
>> >> 
>> >> 2) There would not be issues when fixing bugs/refactoring
>> >>    tests. E.g. old tests had a bug that is now fixed, but since we're
>> >>    not using the new tests, the bug is always there until the next
>> >>    release. This could block the entire test suite, especially with
>> >>    concurrency bugs, which can start triggering due to changes in timing.
>> >
>> > Yes, this might be a problem.  Note that the old tests we're using will
>> > be exactly the same tests we released with the previous QEMU.  I am
>> > "assuming" that the test cases are as stable as the released QEMU, since
>> > we kept running them for all pull requests in CI runs.  If we see
>> > anything flaky, we should mark it as such, especially right before the
>> > release; then the released tests will be considerably stable.
>> 
>> It's not just the test case. The whole test infrastructure could change
>> entirely. But let's maybe cross that bridge when we get to it.
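
For the flaky part specifically, we could lean on the convention other
qtests already use and gate known-flaky cases behind
QEMU_TEST_FLAKY_TESTS before the release, so the released n-1 suite
stays quiet by default. Roughly (the issue number is a placeholder):

    if (!getenv("QEMU_TEST_FLAKY_TESTS")) {
        /* skipped by default; set the env var to run it anyway */
        g_test_skip("flaky on CI, see gitlab issue NNN");
        return;
    }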
>> 
>> >
>> > The worst case is that we still keep a knob in the CI file, so we can
>> > turn off the n-1 -> n tests for some release if there's some unfortunate
>> > accident.  But I hope in reality that can be avoided.
>> >
>> >> 
>> >> 3) New code that can only be reached via new tests cannot cause
>> >>    regressions. E.g. new code is added but is kept under a machine
>> >>    property or migration capability. That code will only show the
>> >>    regression after the new test enables that cap/property. At that
>> >>    point it's too late because it was already released.
>> >
>> > I can't say I fully get the point here.  New code, if gated by a new
>> > cap, should run exactly like the old code when the cap is not turned on.
>> > I suppose that's the case when we only run the n-1 version of
>> > migration-test.  IMHO it's the same issue as 1) above: we just should
>> > not break it, and if we do, that's exactly what we want to capture and
>> > fix in master, not in the n-1 branch.
>> >
>> > But as I said, perhaps I didn't really get the issue you wanted to
>> > describe..
>> 
>> if (cap_foo()) {
>>    <do something bad>
>> }
>> 
>> This^ only executes once we have a test that enables cap_foo. If the
>> "something bad" is something that breaks compatibility, then we'll miss
>> it when using the n-1 migration-test.
>
> IMHO the n-1 tests are not for this.  The new FOO cap can only be enabled
> in n+ versions anyway, so something like the above should be covered by
> the normal migration test submitted by whoever proposes the new FOO cap.

You're being too generous in thinking new code will always restrict
itself to implementing new functionality and never have a bug that
affects a completely different part of the code. There could be an
innocent refactoring along with cap FOO that breaks migration only
when FOO is enabled.
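
Something like this, where an "innocent" cleanup rides along with the
new cap (migrate_cap_foo() and the constants are made up, just to show
the shape of the problem):

static void save_header(QEMUFile *f)
{
    if (migrate_cap_foo()) {
        /*
         * A refactoring that landed together with FOO subtly broke
         * this path.  Only a test that enables FOO ever reaches it,
         * and the n-1 suite doesn't know FOO exists yet, so the
         * breakage ships in release n and is only caught one release
         * later.
         */
        qemu_put_be64(f, FOO_HEADER);
    } else {
        /* the n-1 tests only ever exercise this branch */
        qemu_put_be32(f, LEGACY_HEADER);
    }
}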

But fine. We can't predict every scenario. Let's get this series out the
door.

Thanks for the comments so far. I'll spin another version.

> The n-1 tests we're discussing are extra tests on top of that.  So:
>
>   - Same binary test: we (of course) keep running migration-test for
>     master, which covers FOO
>
>   - Cross binary test: we (hopefully since 9.0?) run the n-1
>     migration-test for the previous release
>
> Then after the version bumps, the new FOO test (the one that enables FOO)
> will become part of the n-1 tests.
>
>> 
>> Now that I think about it, should we parameterize the CI so we can
>> actually switch between old migration-tests and new migration-tests? So
>> we make the default what you suggest, but still have the ability to
>> trigger a job every once in a while that uses the new tests.
>
> Certainly. Such a knob will never hurt, I assume.  It's just that I'd
> expect the new migration-test to constantly fail the cross-binary test as
> long as we keep introducing new features.  Maybe it's a matter of whether
> we would like migration-test itself to understand the "version" idea.
>
> What I was saying above was trying to reduce our burden of teaching
> migration-test any version concept.  So migration-test always applies
> only to the master branch (and newer, due to migration's strict ABI),
> with no need to detect any cap as long as master supports it.
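
Right. For the parameterization, then, the simplest knob might be to
keep migration-test itself version-agnostic and let the CI pick the
binaries for the two sides, along these lines (the _SRC/_DST variable
names are a suggestion, not an existing interface; only
QTEST_QEMU_BINARY exists today):

static const char *src_qemu_binary(void)
{
    /* source side: the n-1 binary in a cross-binary CI job */
    const char *b = getenv("QTEST_QEMU_BINARY_SRC");
    return b ? b : getenv("QTEST_QEMU_BINARY");
}

static const char *dst_qemu_binary(void)
{
    /* destination side: defaults to the same (master) binary */
    const char *b = getenv("QTEST_QEMU_BINARY_DST");
    return b ? b : getenv("QTEST_QEMU_BINARY");
}

A cross-binary CI job would then just point QTEST_QEMU_BINARY_SRC at
the n-1 build and run the n-1 migration-test unchanged.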
