Hi Robert,

I have a few concerns regarding the metrics and the core team. To sum up, I think we need more than one metric, and core reviewers per project (not one big group). More details follow:

Measures:
--------
* The number of reviews shouldn't be the only indicator - you can end up in a situation where people sit at the computer and hand out +1s regardless of quality, just to boost their numbers.
* Delivery of solutions (code and other work) should be counted as well. A core member's responsibility is not just to review code but also to deliver it.
* The person's general activity on IRC, mailing lists, etc. is also very important.

With multiple metrics, we can really be sure that a person is a core member of that project. It can be delivering architectural solutions, delivering code, reviewing work, or discussing problems. Reviews alone are not a very strong metric, and we can run into problems.
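To make the idea concrete, here is a minimal sketch of a multi-metric check. All field names and thresholds below are hypothetical illustrations, not an existing TripleO policy:

```python
from dataclasses import dataclass

@dataclass
class Contributor:
    """90-day activity counts for one person (hypothetical fields)."""
    reviews: int      # Gerrit reviews posted
    commits: int      # patches merged
    discussions: int  # IRC / mailing-list threads participated in

def meets_core_bar(c: Contributor) -> bool:
    """Require sustained activity on more than one axis, so that
    review count alone (e.g. rubber-stamp +1s) is never sufficient."""
    axes_active = sum([
        c.reviews >= 90,       # roughly one review a day
        c.commits >= 10,
        c.discussions >= 20,
    ])
    return axes_active >= 2    # at least two kinds of contribution

# A prolific reviewer who never lands code or joins discussions fails:
print(meets_core_bar(Contributor(reviews=300, commits=0, discussions=0)))  # False
# Someone both reviewing and delivering code passes:
print(meets_core_bar(Contributor(reviews=120, commits=15, discussions=5)))  # True
```

The point of the two-axis rule is exactly the concern above: no single number, gamed in isolation, can get someone over the bar.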


Review Process:
-------------
* +1... People should give +1 to something that looks good (they might not have tested it, but they indicate that they are fine with it).
* +2... Should be given only if the person has tested the change and is sure the solution works (running the tests, exercising the functionality, etc.).
* Approved... The same goes for approvals - they are the final step, where a person says 'merge it'. There needs to be clear certainty that what I am merging works and will not break the app.

Code quality is very important. We shouldn't get into a state where core reviewers start giving +2 to code that merely looks OK. They need to be sure it works and solves the problem, and only core people on the particular project can assure this.


Core Reviewers:
-------------
* Tzu-Mainn pointed out that there are big differences between projects. I think that splitting core members by the projects they contribute to makes more sense.
* Example: It doesn't make sense that someone who became a core reviewer through image-builder work can give +2 on UI or CLI code, and vice versa.
* To me, having separate core members for each project makes more sense than one big group - then we can assure higher quality of the code.
* If there is no way to split core reviewers across projects and we have one big group for the whole of TripleO, then we need to make sure that all projects are represented appropriately.

I think the example speaks for itself. It is really crucial to consider all of TripleO's projects and try to assure their quality. That's what core members are here for, and that's why I see them as experts in a particular project.
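The per-project split can be sketched in a few lines. The project names and membership table here are purely illustrative, not actual TripleO data:

```python
# Hypothetical per-project core groups instead of one TripleO-wide list.
CORE_TEAMS = {
    "tripleo-image-builder": {"alice", "bob"},
    "tuskar-ui": {"carol"},
    "tripleo-cli": {"bob", "dave"},
}

def can_give_plus_two(person: str, project: str) -> bool:
    """A +2 (and an approval) only counts from a core member
    of the specific project the change belongs to."""
    return person in CORE_TEAMS.get(project, set())

# An image-builder core cannot approve UI code, and vice versa:
print(can_give_plus_two("alice", "tripleo-image-builder"))  # True
print(can_give_plus_two("alice", "tuskar-ui"))              # False
```

With one flat group, the lookup above degenerates to "is the person core anywhere", which is exactly the situation the example argues against.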

I believe that we all want TripleO to succeed, so let's find some solutions for how to achieve that.

Thanks
-- Jarda



On 2013/07/10 21:03, Robert Collins wrote:
Hi, like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up: it's not
a purely mechanical process :).

As we've just merged review teams with Tuskar devs, we need to allow
some time for everyone to get up to speed; so folk who are core as
a result of the merge will be retained as core, but by November I expect
the stats will have normalised somewhat and that special handling
won't be needed.

IMO these are the reviewers doing enough over 90 days to meet the
requirements for core:

|       lifeless **        |     349    8 140   2 199    57.6% |    2 (  1.0%)  |
|     clint-fewbar **      |     329    2  54   1 272    83.0% |    7 (  2.6%)  |
|         cmsj **          |     248    1  25   1 221    89.5% |   13 (  5.9%)  |
|        derekh **         |      88    0  28  23  37    68.2% |    6 ( 10.0%)  |

They are already core, so that's easy.

If you are core, and not on that list, that may be because you're
coming from tuskar, which doesn't have 90 days of history, or you need
to get stuck into some more reviews :).

Now, 30 day history - this is the heads up for folk:

| clint-fewbar **  |     179    2  27   0 150    83.8% |    6 (  4.0%)  |
|     cmsj **      |     179    1  15   0 163    91.1% |   11 (  6.7%)  |
|   lifeless **    |     129    3  39   2  85    67.4% |    2 (  2.3%)  |
|    derekh **     |      41    0  11   0  30    73.2% |    0 (  0.0%)  |
|      slagle      |      37    0  11  26   0    70.3% |    3 ( 11.5%)  |
|    ghe.rivero    |      28    0   4  24   0    85.7% |    2 (  8.3%)  |


I'm using the fairly simple metric of 'average at least one review a
day' as a proxy for 'sees enough of the code and enough discussion of
the code to be an effective reviewer'. James and Ghe, good stuff -
you're well on your way to core. If you're not in that list, please
treat this as a heads-up that you need to do more reviews to keep on
top of what's going on - whether to become core, or to stay core.

In next month's update I'll review whether to remove some folk that
aren't keeping on top of things, as it won't be a surprise :).

Cheers,
Rob







_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
