On 01/02/2014 06:43 AM, Day, Phil wrote:
> I don't really see why this thread seems to keep coming back to a position of
> improvements to the review process vs changes to automated testing - to my
> mind both are equally important and complementary parts of the solution:
>
> - Automated tests are strong for objective examination of particular points
>   of functionality. Each additional test adds one more piece of functionality
>   that's covered.
>
> - The review process is strong for a subjective examination of changes, and
>   can often spot more holistic issues. Changes / improvements to the review
>   process have the potential to address whole classes of issues.
100% agreed. I just think that we're tapped out on "humans do better" on the
review side, especially as there is constant pressure to add more people to
review teams, which on average means reviews get worse as those reviewers are
less experienced in the issues that can crop up. It's a constant onboarding
problem.

Which is why I don't find that a particularly fruitful conversation, and keep
trying to steer us away from it. :) And steer us on to "with 40 extra hours
someone could add validation approach X". Which would make the machines
better, and by removing something the humans had to think about, would
actually make the humans better as well.

Basically the answer, in my opinion, to "humans do better" is actually
"humans do less", especially when those humans are already working pretty
hard.

But I think we disagree on the mental load problem, and we just keep going
back and forth on that. So there's probably no more value in discussing it.
Let's just agree to disagree on that one.

	-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net