A feature branch seems reasonable for this work: multiple reviewable
patches that are meaningful only after all of them are in. I suspect most
of the concerns raised here about a branch were based primarily on the
initial reasoning that a reviewer could not be found.

Coming to RTC (review-then-commit) and CTR (commit-then-review), I haven't
seen any branch that used CTR. Maybe we should consider allowing CTR. But
if it is hard to get reviews done for the individual smaller patches, I
would think it would be equally hard to find three committers later to
review them JIRA by JIRA, or to review the combined patch.

On Wed, Mar 23, 2016 at 12:32 PM, Allen Wittenauer <a...@apache.org> wrote:

>
> > On Mar 23, 2016, at 10:25 AM, Chris Nauroth <cnaur...@hortonworks.com>
> wrote:
> >
> > 2. Apache feature branches: Sign-off may come from designated branch
> > committers in addition to full committers.  It's OK to break the branch
> > for work in progress, but it must be fixed before a merge.  It's still
> > review then commit though.
>
> ...
>
> > Allen, are there particular reasons that you favor #2 (but with
> > commit-then-review) instead of #3 for your work on HADOOP-12857 and
> > HADOOP-12930?  Maybe you want the patches on Apache infrastructure so you
> > can get the pre-commit sign-offs quickly?  Is it something else?
>
>         HADOOP-12930 is sort of a weird case. I’ve only written some test
> code, but... ignoring unit tests and some variable name cleanup, there
> isn’t much code being truly added or modified.  However, there is
> significant code _movement_ due to some refactoring that needs to take
> place.  As a result, this has implications for reviewers, the testing
> infrastructure, and how it is committed:
>
> * A combined patch file will be huge and overwhelming, since a lot of
> effort will need to be spent just finding the changes rather than seeing
> them as logically separated units of work.
> * As all of you know, precommit doesn't work properly if any two or more
> of hadoop-common, hadoop-hdfs, hadoop-mapreduce-project, and a handful of
> others are touched, due to the time the unit test runs take. Guess how
> many of these are being touched? :)
> * Committing piecemeal means that, feature-set-wise, it could/would be
> incomplete until the last block is pushed into place.  This definitely
> needs to land in trunk as a logical set (either as many smaller commits
> pushed at once or as a rebased/merged single commit).
> * Being in a branch means that reviewers can check out and test rather
> than having to apply a series of patches [despite how easy
> smart-apply-patch makes it. ;) ] A rough sketch of both workflows follows
> this list.
> * One of the big criticisms during the 9902 review was that it would have
> been better to do it as a feature branch due to the size. This is me trying
> to acquiesce to that request. (It’s worth pointing out that 9902
> couldn’t have been easily broken up, however, since hadoop-config.sh was
> touched. That was a major ‘break the world’ moment.)
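>
> [Purely for illustration, a minimal sketch of the two review workflows
> described above.  The branch name, remote, and patch file names are
> hypothetical, and the smart-apply-patch location can differ by release:
>
>     # reviewing from a feature branch: check out, build, and test directly
>     git clone https://github.com/apache/hadoop.git    # ASF GitHub mirror
>     cd hadoop
>     git checkout HADOOP-12930                    # hypothetical branch name
>     mvn -pl hadoop-common-project/hadoop-common test  # whatever tests matter
>
>     # reviewing a patch series: apply every patch, in order, first
>     dev-support/bin/smart-apply-patch HADOOP-12930.01.patch  # path may vary
>     dev-support/bin/smart-apply-patch HADOOP-12930.02.patch
> ]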
>
>         A big problem with the PMC’s ruleset around feature branches is
> that the rules are pretty much built around the idea that teams of people
> are going to be working on a feature. Realistically, I don’t see that being
> the case here (despite some mild interest from other committers in the
> feature), especially given that I can probably finish this feature off in a
> few days.  That’s why I’m asking if CTR into the branch followed by RTM
> into trunk is a ‘legal’ strategy here, since branch merges require reviews
> anyway.
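>
> [Again illustrative only, with a hypothetical branch name and commit
> message: the idea is that review happens once, at merge time, rather than
> per commit on the branch:
>
>     # CTR on the feature branch: commit and push directly, clean up later
>     git checkout HADOOP-12930
>     git commit -a -m "HADOOP-12930. move shell code into place"
>     git push origin HADOOP-12930
>
>     # review at merge time: the branch as a whole gets reviewed (and voted
>     # on) before it reaches trunk, either as a single merge commit or as a
>     # rebased, logically ordered set of commits
>     git checkout trunk
>     git merge --no-ff HADOOP-12930
> ]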
>
>         Yes, I could make a branch in my own repo on GitLab, but I doubt
> that qualifies as something that is actually reviewable under the ASF’s
> rules… which brings us back here anyway if I turn this into a giant patch
> file.
