On 08/10/13 16:22, Clint Byrum wrote:
I don't mean to pick on you personally Jiří, but I have singled this
message out because I feel you have captured the objections to Robert's
initial email well.

Excerpts from Jiří Stránský's message of 2013-10-08 04:30:29 -0700:
On 8.10.2013 11:44, Martyn Taylor wrote:
Whilst I can see that deciding on who is Core is a difficult task, I do
feel that creating a competitive environment based on the number of
reviews will be detrimental to the project.

I do feel this is going to result in quantity over quality. Personally,
I'd like to see every commit properly reviewed and tested before it gets
a vote, and I don't think these stats are promoting that.
+1. I feel that such a metric favors shallow "I like this code" reviews
over deep "I verified that it actually does what it should" reviews.
E.g. I hit one such example just this morning on tuskarclient. If I had
just looked at the code as the other reviewer did, we'd have let in code
that doesn't do what it should. There's nothing wrong with making a
mistake, but I wouldn't like to foster an environment of quick, shallow
reviews by having such metrics for the core team.


I think you may not have worked long enough with Robert Collins to
understand what Robert is doing with the stats. While it may seem that
Robert has simply drawn a line in the sand and is going to sit back and
wait for everyone to cross it before nominating them, nothing could be
further from the truth.
Sure. I did read the original email as "drawing a line in the sand", so I obviously got the wrong end of that stick. Apologies.

But yeah, you make a fair point: we are very new to the team. As we get to know people a little better, we can put things into context and avoid such ramblings. :P

Cheers

As one gets involved and starts -1'ing and +1'ing, one can expect feedback
from all of us as core reviewers. It is part of the responsibility of
being a core reviewer to communicate not just with the submitter of
patches, but also with the other reviewers. If I see shallow +1's from
people consistently, I'm going to reach out to those people and ask them
to elaborate on their reviews, and I'm going to be especially critical
of their -1's.

I think it's also important who actually *writes* the code, not just who
does reviews. I find it odd that none of the people who contributed most
to any of the Tuskar projects in the last 3 months would make it onto
the core list [1], [2], [3].

I think having written a lot of code in a project is indeed a good way
to get familiar with the code. However, it is actually quite valuable
to have reviewers on a project who did not write _any_ of the code,
as their investment in the code itself is not as deep. They will look
at each change with fresh eyes and bring fewer assumptions.

Reviewing is a different skill from coding, and thus I think it is OK
to measure it differently.

This might also suggest that we should be looking at contributions to
the particular projects, not just the whole program in general. We're
such a big program that one's staleness towards some of the components
(or being short on the global review count) doesn't necessarily mean the
person is not an important contributor/reviewer on some of the other
projects, and I'd also argue this doesn't affect the quality of their
work (e.g. there's no relationship between tuskarclient and, say, t-i-e,
whatsoever).

Indeed, I don't think we would nominate or approve a reviewer if they
just did reviews, and never came in the IRC channel, participated in
mailing list discussions, or tried to write patches. It would be pretty
difficult to hold a dialog in reviews with somebody who is not involved
with the program as a whole.

So I'd say we should get on with having a greater base of core folks and
count on people using their own good judgement about where they will
exercise their +/-2 powers (I think it's been working very well so far),
or alternatively split tripleo-core into subteams.

If we see the review queue get backed up and response times rising, I
could see a push to grow the core review team early. But we're talking
about a 30-day sustained review contribution. That means for 30 days
you're +1'ing instead of +2'ing, and then maybe another 30 days while we
figure out who wants core powers and hold a vote.

If this is causing anyone stress, we should definitely address that and
make a change. However, I feel the opposite. Knowing what is expected
and being able to track where I sit on some of those expectations is
extremely comforting. Of course, easy to say up here with my +2/-2. ;)

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

