@David The types of concurrency changes introduced in this patch are
extremely difficult to test completely in an automated fashion.
Therefore, code review for correctness is critical to ensuring quality.
To be clear, I am not questioning the value of automated testing.  I
am just noting that it's next to impossible to achieve full coverage,
and code review is a critical supplement.
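
To illustrate with a toy example (purely hypothetical, not code from the
patch): the unsynchronized counter below races on its read-modify-write,
yet an automated test asserting the final count can still pass on many
runs simply because the unlucky interleaving never occurs.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class RacyCounterDemo {
        private int count = 0;  // shared state, no synchronization

        void increment() {
            count = count + 1;  // read-modify-write: not atomic
        }

        public static void main(String[] args) throws InterruptedException {
            final RacyCounterDemo demo = new RacyCounterDemo();
            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (int i = 0; i < 10000; i++) {
                pool.submit(new Runnable() {
                    public void run() {
                        demo.increment();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            // An assertion that count == 10000 passes on some runs and
            // fails on others; the buggy interleaving is not reliably hit.
            System.out.println("count = " + demo.count);
        }
    }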

@Ilya I plan to review this patch, but I will not be able to start until
next week.  I am also still reviewing object_store (a separate
procedural issue for another thread), and need to complete solidfire.
This backlog is precisely why we need to be reviewing iteratively
throughout the dev cycle.

Thanks,
-John

On Jun 27, 2013, at 7:35 PM, David Nalley <da...@gnsa.us> wrote:

> On Thu, Jun 27, 2013 at 5:51 PM, Hugo Trippaers <h...@trippaers.nl> wrote:
>>
>> I think what Ilya offers is great; my current stance is also to see how we 
>> can bring this forward.
>>
>> I've had the opportunity to meet with several people at the Citrix office in 
>> Santa Clara; I'm actually working from their office at this moment. I think 
>> it's also the responsibility of someone who put in a -1 to work with the 
>> original committer to get the situation resolved. So I'll invest the time to 
>> help with the review as well.
>>
>> It would be great if Alex or Kelven could take the time to explain how this 
>> feature has been tested. That would give the community some insight as well.
>>
>> My main technical problem with this merge is that stuff is moving all over 
>> the place without even the slightest indication of why. Now, having discussed 
>> this with Alex in person, I get the general idea of this merge, so I can 
>> actually try to review it.
>>
>> I think that John has nicely explained what we could do to prevent 
>> situations like this in advance. I fully understand that big features or 
>> rewrites don't happen overnight and might show up near the end of the 
>> release cycle. With the time based release cycle it's always a risk that 
>> some feature might not make it in on time. Getting more people involved and 
>> chunking the commits into master will greatly speed up the reviewing process.
>>
>> I'll get back to this after spending some time on reviewing the actual 
>> patch. In fact, I would like to ask more people to have a look at this patch 
>> and reply to this thread with comments or remarks.
>>
>> Cheers,
>>
>> Hugo
>
> So the problem, in my mind, is that we don't have a way of verifying
> that master isn't broken, and won't be broken by any given merge. I
> look at even the minimal level of automated testing that I see today,
> and ~20% of integration tests are failing[1]. The regression set of
> tests (which isn't running as often) is seeing 75% of tests
> failing[2]. Heaping on more change when we are demonstrably already
> failing in many places is not behaving responsibly IMO.
> The question I'd pose is this - running the various automated tests is
> pretty cheap - what's the output of that compared to the current test
> output on master? Better or worse? If it hasn't been done, why not?
> I desperately want these features, but not necessarily at the cost of
> further destabilizing what we have now in master - we can't continue
> accruing technical debt.
>
> --David
>
> [1] 
> http://jenkins.buildacloud.org/view/cloudstack-qa/job/test-smoke-matrix/lastCompletedBuild/testReport/
> [2] 
> http://jenkins.buildacloud.org/view/cloudstack-qa/job/test-regression-matrix/28/testReport/
