On 10/31/2013 03:24 PM, Jeremy Stanley wrote:
On 2013-10-31 13:30:32 +0000 (+0000), Mark McLoughlin wrote:
[...]
In cases like that, I'd be of a mind to go "+2 Awesome! Thanks for
catching this! It would be great to have a unit test for this, but
it's clear the current code is broken so I'm fine with merging the
fix without a test". You could say it's now the reviewers
responsibility to merge a test, but if that requirement then turns
off reviewers even reviewing such a patch, then that doesn't help
either.
[...]
I get the impression it was one of my +2s in a situation like this
which spawned the thread (or at least was the straw which broke the
camel's back), so apologies for stirring the beehive. Someone was
trying to set up their own copy of several of our infrastructure
projects, spotted a place in one where we had hard-coded something
specific to our environment, and wanted to parameterize it upstream
rather than paper over it on their end.
The bug-reporter-turned-patch-contributor didn't know how to write a
test which would confirm that parameter got passed through and we
weren't performing tests yet for any similar parameters already in
the daemon which I could point to as an example. I agree it's a
judgement call between risking discouraging a new contributor vs.
reducing test coverage further, but I was probably still a bit too
hasty to suggest that we could add tests for those in a separate
commit.
In my case I didn't have the available time to instruct the
contributor on how to write tests for this, but that also probably
meant that I didn't really have time to be reviewing the change
properly to begin with. I'm quite grateful to Robert for stepping in
and helping walk them through it! We got more tests, I think they
got a lot of benefit from the new experience as well, and I was
appropriately humbled for my lax attitude over the situation which
nearly allowed us to miss a great opportunity at educating another
developer on the merits of test coverage.
Working with a new patch submitter to show them how to write the tests is
probably the best approach. Another approach is to write a test yourself:
check for some precondition that shows the patch is applied, and if it is
not met, raise a Skip exception. That test will be skipped (and so pass)
until their patch comes in; once the patch lands, the test is actually
exercised. You can remove the skip code in a future commit.
I would like to see more of this kind of coding: tests that show a
feature is missing, and so on. We discussed it a bit in the Keystone/Client
discussions, where we wanted to run tests against a live server for a
client change but couldn't really check new tests into the server as
it was after code freeze. I made the decision to push for Tempest tests
there...and we broke some new ground in our workflow.
One tool that is really valuable is the test coverage tool. It can show
the lines of code that have no coverage, which helps confirm whether a
given patch is actually tested. It used to run on each commit, or so I
was told. I was running it periodically for the Keystone tests; here is an
old run: http://admiyo.fedorapeople.org/openstack/keystone/coverage/
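For anyone who hasn't used it, here is a rough sketch of driving coverage.py
from Python (assuming its documented Coverage/start/stop/html_report API; the
package and directory names are just placeholders):

    import unittest

    import coverage  # pip install coverage

    cov = coverage.Coverage(source=['keystone'])  # placeholder package name
    cov.start()

    # Run the test suite while measurement is active.
    suite = unittest.TestLoader().discover('keystone/tests')  # placeholder path
    unittest.TextTestRunner().run(suite)

    cov.stop()
    cov.save()
    # Writes a per-file, per-line HTML report you can publish, like the run above.
    cov.html_report(directory='coverage_html')
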