On 12/2/19 5:25 PM, Michel Dänzer wrote:
On 2019-12-02 3:15 p.m., Tapani Pälli wrote:
On 11/15/19 8:41 PM, Mark Janes wrote:
Michel Dänzer <mic...@daenzer.net> writes:

On 2019-11-15 4:02 p.m., Mark Janes wrote:
Michel Dänzer <mic...@daenzer.net> writes:

Now that the GitLab CI pipeline tests a snapshot of piglit with
llvmpipe
(https://gitlab.freedesktop.org/mesa/mesa/merge_requests/2468), the
question has come up of how to deal with inter-dependent Mesa/piglit
changes (where merging only one or the other causes some piglit
regressions).


First of all, let it be clear that just merging the Mesa changes as-is
and breaking the GitLab CI pipeline is not acceptable.


From the Mesa POV, the easiest solution is:

1. Merge the piglit changes
2. In the Mesa MR (merge request), add a commit which updates
piglit[0]
3. If the CI pipeline is green, the MR can be merged


In case one wants to avoid alarms from external CI systems, another
possibility is:

For the Intel CI, no alarm is generated if the piglit test is pushed
first.  The normal development process includes writing a piglit test to
illustrate the bug being fixed.

Cool, but what if the piglit changes affect the results of existing
tests? That was the situation yesterday which prompted this thread.

We attribute the status change to piglit in the CI config, within a few
hours.  The test shows up as a failure in CI until it is triaged.

I think we have a problem with the current GitLab CI process.

Right now, if someone needs to update the piglit commit used by CI, they
also end up fixing and editing .gitlab-ci/piglit/quick_gl.txt (and
glslparser+quick_shader.txt), because CI reports numerous failures caused
by completely unrelated changes: in the meantime other people have added,
removed and modified tests.
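(For illustration only -- this is not the actual Mesa CI script, and the
"test name: status" file format and "summary:" marker are just assumptions
based on the file names above -- a minimal Python sketch of the kind of
strict comparison such expectation files imply. Every test that appears on
only one side of the comparison gets reported, which is why bumping the
piglit commit surfaces churn from completely unrelated test changes:)

# Hypothetical sketch (not the real Mesa CI script) of why bumping the
# piglit commit surfaces unrelated churn: every test that exists on only
# one side of the comparison is reported, even if no driver behaviour
# changed at all.

def load_expectations(path):
    """Parse lines like 'spec@arb_foo@bar: fail' into a dict (assumed format)."""
    expected = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("summary:"):
                continue
            name, _, status = line.rpartition(": ")
            expected[name] = status
    return expected

def strict_diff(expected, results):
    """Report every mismatch between the checked-in list and a fresh run."""
    problems = []
    for name, status in results.items():
        if expected.get(name, "pass") != status:
            problems.append(
                f"{name}: expected {expected.get(name, 'pass')}, got {status}")
    for name in expected.keys() - results.keys():
        problems.append(
            f"{name}: expected {expected[name]}, but the test no longer exists")
    return problems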

This is at least somewhat intentional, as the results of any newly added
tests should be carefully checked for plausibility.


I think we should turn such warnings on only when we have a more
sophisticated algorithm to detect actual regressions (not just a 'state
change' such as an added or removed test).

It's unclear what exactly you're proposing. In order to catch
regressions (e.g. pass -> warn, pass -> fail, pass -> skip, pass ->
crash), we need a list of all tests on at least one side of each
transition. We're currently keeping the list of all
warning/failing/skipped/crashing tests, but not passing tests (to keep
the lists as small as possible).
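(A hedged Python sketch of that point, with made-up function and status
names and no claim to match the real result format: classifying a
transition as a regression requires knowing the old status of every test,
which is exactly the information the checked-in lists drop for passing
tests.)

# Illustration only: classifying transitions between two runs. Telling a
# genuine "pass -> fail" regression apart from a brand-new failing test is
# only possible because 'old' records a status for every test -- including
# the passing ones the CI currently leaves out of its lists.

REGRESSED = {"warn", "fail", "skip", "crash", "notrun"}

def classify(old, new):
    regressions, other_changes = [], []
    for name, new_status in new.items():
        old_status = old.get(name)
        if old_status is None:
            other_changes.append((name, "new test", new_status))
        elif old_status == "pass" and new_status in REGRESSED:
            regressions.append((name, old_status, new_status))
        elif old_status != new_status:
            other_changes.append((name, old_status, new_status))
    for name in old.keys() - new.keys():
        old_status = old[name]
        target = regressions if old_status == "pass" else other_changes
        target.append((name, old_status, "notrun"))
    return regressions, other_changes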

One possibility might be to remove the summary at the end of the lists.
That would allow new passing tests to be silently added, but it would
mean we could no longer catch pass -> notrun regressions.
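(Again just a sketch, assuming the summary holds per-status totals: the
non-pass lists never mention a test that goes from pass to notrun, but the
totals still shift, which is what the summary catches.)

# Illustration: with only the non-pass lists, pass -> notrun is invisible;
# comparing per-status totals against the summary still notices it,
# because the number of passing tests drops.

def summary_matches(expected_totals, results):
    actual = {}
    for status in results.values():
        actual[status] = actual.get(status, 0) + 1
    return actual == expected_totals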


Yeah, the last point is what I had in mind, but it is tricky. I don't really have a good concrete proposal right now, but I was hoping someone might come up with one :)

I guess my issues boil down to the difference versus the Intel CI: there we track piglit master, so the overall state is 'more fresh'. With the current GitLab CI the issues show up late, after many commits may have happened, so the person dealing with the issue (updating the tag) does not have the context of those changes, or maybe even the expertise to judge them (and what the expected results were); it should have been caught earlier already.

It could also be that I'm trying to update too big a chunk at once; I should go commit by commit and see what happens to the results.

// Tapani
