On Mon, Oct 19, 2015 at 12:39 PM, Allen Wittenauer <a...@altiscale.com> wrote:

>
> On Oct 19, 2015, at 12:25 PM, Andrew Wang <andrew.w...@cloudera.com> wrote:
> >>
> > I do agree we should fix these usages of build/test/data, but I don't
> > follow the logic regarding running RAT after tests. The point of the
> > precommit RAT check is to avoid introducing things to the source tarball
> > that fail RAT, and test output is not part of the source tarball.
>
>         Yup. Which is why it’s bad that test data is getting written to
> the source area of the tree.
>
> > It's not
> > intended to detect issues if an RM incorrectly builds a release; the RM
> > and PMC always need to run RAT again when voting on a release. We don't
> > run precommit on releases, so this "RAT after tests" check wouldn't help
> > detect bad release tarballs.
> >
> > Point being, RAT is not supposed to be used to catch naughty tests that
> > operate outside target/, so let's not use it as such. These RAT errors,
> > although they helped us find some test issues, are unrelated to the
> > release process and thus spurious.
>
>         I just had a very timely discussion with a dev here who is working
> on their first patch.  They asked why sometimes (the trunk-version of)
> test-patch threw RAT errors and sometimes it didn’t.  After chatting, they
> were running it in dirty workspace mode after manually running unit tests.
>
>         Sometimes it isn’t about people with positions.


For this first-time dev, their issue sounds like it should be addressed
via better documentation or improved error messages. They could dirty
their workspace in any number of ways besides running tests and hit the
same problem. The real fix is explaining the what and why of RAT to
them, and that isn't achieved by running RAT last in precommit.
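To make the build/test/data point above concrete, here is a minimal
sketch of the kind of fix being discussed: resolving per-test data
directories under target/ (the Maven build directory) instead of the
source tree. It assumes the test.build.data system property convention;
the helper class and names are illustrative, not actual Hadoop code:

    import java.io.File;

    // Illustrative helper, not actual Hadoop code: resolve per-test data
    // directories under target/ so test output never lands in the source
    // tree (and therefore never trips a RAT run over the working copy).
    public final class TestDirHelper {
      private TestDirHelper() {}

      public static File getTestDir(String testName) {
        // Honor a test.build.data override if set, but default under
        // target/ rather than a build/test/data path in the source area.
        String base = System.getProperty("test.build.data",
            "target/test/data");
        File dir = new File(base, testName);
        if (!dir.exists() && !dir.mkdirs()) {
          throw new IllegalStateException("could not create " + dir);
        }
        return dir;
      }
    }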

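Relatedly, anyone wanting to run the license check locally the way
precommit effectively sees it can run it against a clean tree first
(e.g. mvn clean apache-rat:check, assuming the usual Apache RAT Maven
plugin wiring) rather than after the unit tests have written output
into the workspace.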