On Thu, Sep 14, 2017 at 1:43 PM, Ray Chiang <rchi...@apache.org> wrote:
> The other solution I've seen (from Oozie?) is to re-run just the subset
> of failing tests once more. That should help cut down the failures
> except for the most flaky of flakies.
Many of our unit tests generate random cases and report the seed needed to
reproduce a failure, and others are flaky because they collide with other
tests' artifacts. A pass on re-execution risks masking some important
cases. I'd rather err on the side of fixing/removing tests that are too
unreliable to serve their purpose.

I understand the counter-argument, but Hadoop has accumulated a ton of
tests without a recent round of pruning. We could stand to lose a few
cycles to highlight and trim the cases that cost us the most dev and CI
time.

-C
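P.S. For anyone who hasn't run into the pattern: below is a minimal sketch
of a seed-reporting randomized test (JUnit 4; the class name, seed choice,
and property under test are all hypothetical, not from the Hadoop tree).
The point is that a green re-run draws a fresh seed, so it tells you
nothing about the case that just failed:

import java.util.Random;

import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class TestSeedReporting {

  @Test
  public void testRandomizedCase() {
    // Draw a fresh seed each run, but log it so a failure can be
    // reproduced later by pinning the same value here.
    long seed = System.nanoTime();
    System.out.println("testRandomizedCase seed=" + seed);
    Random rand = new Random(seed);

    // Hypothetical property under test: generated values stay in range.
    int value = rand.nextInt(100);
    assertTrue("out of range for seed=" + seed, value >= 0 && value < 100);
  }
}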