> > On Jun 27, 2013, at 7:35 PM, David Nalley <da...@gnsa.us> wrote:
> >> 
> >> So the problem in my mind, is that we don't have a way of verifying
> >> that master isn't broken, and won't be broken by any given merge. I
> >> look at even the minimal level of automated testing that I see today,
> >> and ~20% of integration tests are failing[1]. The regression set of
> >> tests (which isn't running as often) is seeing 75% of tests
> >> failing[2]. Heaping on more change when we are demonstrably already
> >> failing in many places is not behaving responsibly IMO.
> >> The question I'd pose is this - running the various automated tests is
> >> pretty cheap - what's the output of that compared to the current test
> >> output on master? Better or worse? If it hasn't been done, why not?
> >> I desperately want these features, but not necessarily at the cost of
> >> further destabilizing what we have now in master - we can't continue
> >> accruing technical debt.
> >> 
> >> --David
> >> 
> >> [1] 
> >> http://jenkins.buildacloud.org/view/cloudstack-qa/job/test-smoke-matrix/lastCompletedBuild/testReport/
> >> [2] 
> >> http://jenkins.buildacloud.org/view/cloudstack-qa/job/test-regression-matrix/28/testReport/

*We* need to fix those tests. I wouldn't be thrown off by the numbers
there, since many tests are failing because of mismatched environments
and/or script failures. Getting the same pass rate across the vmsync
and master branches is a _decent_ indicator, but not the best one
because coverage is poor. Reviews are absolutely *essential*.
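
For the branch comparison, something like the rough script below could
pull the pass counts from both Jenkins jobs and print the rates side by
side. It's a sketch only: it assumes both jobs expose the standard JUnit
JSON API at <build>/testReport/api/json with passCount/failCount/skipCount
fields, and the vmsync job name is made up - substitute whatever the
actual job is called.

    #!/usr/bin/env python
    # Rough sketch: compare pass rates between two Jenkins test reports.
    # Assumes the standard JUnit JSON API; the vmsync job name is hypothetical.
    import json
    import urllib2

    BASE = "http://jenkins.buildacloud.org/view/cloudstack-qa/job"

    def pass_rate(job, build="lastCompletedBuild"):
        url = "%s/%s/%s/testReport/api/json" % (BASE, job, build)
        report = json.load(urllib2.urlopen(url))
        total = report["passCount"] + report["failCount"] + report["skipCount"]
        return 100.0 * report["passCount"] / total if total else 0.0

    if __name__ == "__main__":
        for job in ("test-smoke-matrix", "test-smoke-matrix-vmsync"):
            print "%s: %.1f%% passing" % (job, pass_rate(job))

Even a crude comparison like that would tell us whether the merge makes
the existing numbers better or worse.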

In fact, it's not easy to automate failure scenarios in a standard way
for vmsync-like functionality.
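
To make that concrete, here's the shape of what such a test has to do:
reach around the management server, kill the VM at the hypervisor, and
wait for the reported state to converge. This is a sketch only - the
host credentials, instance name, and the api client with its
listVirtualMachines call are stand-ins, and the virsh step is
KVM-specific, which is exactly why it doesn't standardize well across
hypervisors.

    # Illustrative sketch: out-of-band failure injection for a vmsync test.
    # Host, credentials, instance name, and the api client are hypothetical.
    import time
    import paramiko

    def kill_vm_out_of_band(host, user, password, vm_instance_name):
        # Destroy the VM behind CloudStack's back so vmsync has to notice.
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.connect(host, username=user, password=password)
        ssh.exec_command("virsh destroy %s" % vm_instance_name)
        ssh.close()

    def wait_for_sync(api, vm_id, expected="Stopped", timeout=600):
        # Poll the API until vmsync reports the real state, or give up.
        deadline = time.time() + timeout
        while time.time() < deadline:
            vm = api.listVirtualMachines(id=vm_id)[0]
            if vm.state == expected:
                return True
            time.sleep(10)
        return False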

-- 
Prasanna.,
