On 04/12/2017 12:10 AM, mck wrote:
>
> On 10 March 2017 at 05:51, Jason Brown wrote:
>> A nice convention we've stumbled into wrt patches submitted via Jira is
>> to post the results of unit test and dtest runs to the ticket (to show the
>> patch doesn't break things).
>> [snip]
>> As an example, should contributors/committers run dtests …
Yes, failed test results need to be looked at by someone. But this is
already the case, and it won't change whether we run tests for each
patch and branch or just once a day for a single dev branch. Having to
figure out exactly which commit causes a regression would take some
additional effort, though.
>
> I think we'd be able to figure out which one of them is causing a
> regression the day after.
That sounds great in theory. In practice, that doesn't happen unless one
person steps up and makes themselves accountable for it.
For reference, take a look at: https://cassci.datastax.com/view/trunk/.
If I remember correctly, the requirement of providing test results along
with each patch came from tick-tock, where the goal was to have stable
release branches at all times. Without CI testing each individual commit
on all branches, this just won't work anymore. But would that really be t…
To Ariel's point, I don't think we can expect all contributors to run all
utests/dtests, especially when the patch spans multiple branches. On that
front I, like Ariel and many others, typically create my own branch of
the patch and run the tests. I think this is a reasonable system, if…
Hi,
Before this change I had already been queuing the jobs myself as a
reviewer. It also happens that many reviewers are committers. I
wouldn't ask contributors to run the dtests/utests for any purpose other
than so that they know the submission is done.
Even if they did, and they pass, it d…
No problem, I'll start a new thread.
On Thu, Mar 9, 2017 at 11:48 AM Jason Brown wrote:
> Jon and Brandon,
>
> I'd actually like to narrow the discussion and keep it focused on my
> original topic. Those are two excellent topics that should be addressed,
> and the solution(s) might be the same …
Jon and Brandon,
I'd actually like to narrow the discussion and keep it focused on my
original topic. Those are two excellent topics that should be addressed,
and the solution(s) might be the same or similar to the outcome of this
one. However, I feel they deserve their own message thread.
Thanks for …
Let me further broaden this discussion to include GitHub branches, which
are often linked on tickets and then later deleted. This forces a person
to search through git to actually see the patch, and that process can be a
little rough (especially since we all know if you're gonna make a typo,
it's…
If you don't mind, I'd like to broaden the discussion a little bit to also
discuss performance-related patches. For instance, CASSANDRA-13271 was a
performance/optimization-related patch that included *zero* information
on whether there was any perf improvement or a regression as a result of
the change.
Hey all,
A nice convention we've stumbled into wrt patches submitted via Jira is
to post the results of unit test and dtest runs to the ticket (to show the
patch doesn't break things). Many contributors have used the
DataStax-provided cassci system, but that's not the best long-term
solution. T…