Thanks for your reply. I have a somewhat different viewpoint; please have a look.

> I am not sure that this helps.
> You can still be very lucky and unfortunately many times the impact is on
> other tests, not the new tests.

Yes, this cannot prevent other unit tests from being affected by the PR. However, it 
can ensure the stability of the unit tests that are new or modified in the current PR, 
which I think is still useful.

> I believe that the best weapons we have are:
> - Good code reviews

Yes, we need good code reviews. But when the unit tests pass, we usually assume the PR 
is OK, and I'm not sure that code review alone can guarantee test stability.

> - if there is a error on CI, don't default to 'rerun-failure-checks' but
> look carefully into the errors (we could disable the ability to rerun
> failed tests to non committers and so giving more control on CI to the
> committer who is sponsoring a patch)

If this feature is disabled, it may cause the following problems:
1. The merging efficiency of each PR is reduced.
2. The PR owner has to deal with additional work.


To sum up, I think the essential problem is that we need to ensure, as far as possible, 
that the unit tests involved in each PR are stable before it is merged, and to let the 
PR owner be responsible for that stability. Good code review is one such mechanism, but 
we also need more effective and enforceable means of checking; a rough sketch of what 
such a check could look like is below.

I hope to get more suggestions and actions. Otherwise, even after we fix the existing 
flaky tests, new ones will keep being introduced as more features are added.

> PS: Maybe I'm worried too much. Is it normal to have a lot of flaky tests?
