On 07/03/2014 08:42 AM, Luke Gorrie wrote:
On 3 July 2014 02:44, Michael Still <[email protected]> wrote:

    The main purpose is to let change reviewers know that a change might be problematic for a piece of code not well tested by the gate.

Just a thought: a "sampling" approach could be a reasonable way to stay responsive under heavy load and still give a strong signal to reviewers about whether a change is likely to be problematic.

I mean: Kevin mentions that his CI gets an hours-long queue during peak review season. One way to deal with that could be to skip some events, e.g. toss a coin to decide whether to test the next revision of a change that his CI has already +1'd previously. That would keep responsiveness under control even when throughput is a problem. (A bit like how a router manages a congested input queue, or how a sampling profiler keeps overhead low.)

Could be worth keeping the rules flexible enough to permit this kind of thing, at least?
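For concreteness, here is a minimal sketch (in Python) of what I understand the coin-toss idea to be. The names and thresholds (SKIP_PROBABILITY, QUEUE_THRESHOLD, the Gerrit-style change IDs) are invented for illustration and are not taken from any real CI setup:

import random

SKIP_PROBABILITY = 0.5    # the "coin toss"
QUEUE_THRESHOLD = 100     # only start sampling once the queue backs up

def should_test(change_id, queue_depth, approved_changes):
    """Decide whether to test the next revision of a change."""
    if queue_depth < QUEUE_THRESHOLD:
        return True                # no congestion: test everything
    if change_id in approved_changes:
        # Revision of a change this CI already +1'd: toss a coin.
        return random.random() >= SKIP_PROBABILITY
    return True                    # never skip a change the CI has not vetted

# Under load, roughly half the re-tests of already-approved changes are skipped.
approved = {"I4aa1b2", "I9cc3d4"}
print(should_test("I4aa1b2", queue_depth=250, approved_changes=approved))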
The problem with this is that it assumes all patch sets carry an equivalent amount of change, which is incorrect. One patch set may contain changes that significantly affect the SnappCo plugin. A sampling system might skip exactly that patch set, and once it merges, later patch sets will start failing in ways that look unrelated to the patch actually under test, leaving you to spend a lot of time figuring out which earlier patch really caused the breakage.
In short, you need to test every single proposed patch to the system fully and consistently; otherwise there's simply no point in running any tests at all, as you will spend an inordinate amount of time tracking down what broke what.
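To make that "tracking down what broke what" cost concrete, here is a toy illustration (a sketch only, not any real CI's logic). With a complete per-patch history, the first failure points at exactly one patch; with sampled results, every skipped patch in the gap becomes a suspect:

def first_suspects(results):
    """results: per-patch outcomes in merge order.
    True = pass, False = fail, None = skipped by sampling.
    Returns indices of patches that could have introduced the failure."""
    last_pass = -1
    for i, r in enumerate(results):
        if r is True:
            last_pass = i
        elif r is False:
            # Every untested patch since the last known pass is a candidate,
            # plus the first failing patch itself.
            return list(range(last_pass + 1, i + 1))
    return []

print(first_suspects([True, True, False, False]))  # [2]: one clear culprit
print(first_suspects([True, None, None, False]))   # [1, 2, 3]: three suspects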
Best,
-jay
