With 7 binding and 2 non-binding +1 votes, the release of 3.0.12 has
passed. I'll get the artifacts uploaded first thing in the morning,
since it's a bit late tonight.
--
Warm regards,
Michael
On 03/07/2017 10:15 AM, Michael Shuler wrote:
> I propose the following artifacts for release as 3.0.12
To Ariel's point, I don't think we can expect all contributors to run all
utests/dtests, especially when the patch spans multiple branches. On that
front, I, like Ariel and many others, typically create my own branch of
the patch and run the tests. I think this is a reasonable system, if
Hi,
I should clarify. "Should" in the sense that it was fine for the process we
had at the time (ugh!), but it's not what we should do in the future.
Ariel
On Thu, Mar 9, 2017, at 04:55 PM, Ariel Weisberg wrote:
> Hi,
>
> I agree that patch could have used it and it was amenable to
> micro-benchmark
Hi,
I agree that patch could have used it and it was amenable to
micro-benchmarking. Just to be pedantic about process, which is something
I approve of to the chagrin of so many.
On a completely related note, that change also randomly boxes a boolean.
Ariel
On Thu, Mar 9, 2017, at 03:45 PM, Jonat
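(A quick aside on the boxing remark above: "boxing a boolean" means a
primitive boolean is silently wrapped as a java.lang.Boolean object,
typically when it is assigned to a reference type or stored in a generic
collection. A minimal, hypothetical illustration, not the actual patch code:

    import java.util.HashMap;
    import java.util.Map;

    class BoxingExample
    {
        static void example(boolean isMultiCell)
        {
            Object o = isMultiCell;              // autoboxed via Boolean.valueOf
            Map<String, Boolean> m = new HashMap<>();
            m.put("multiCell", isMultiCell);     // autoboxed again on insertion
            boolean back = m.get("multiCell");   // unboxed on the way out
        }
    }

Since boolean autoboxing returns the cached Boolean.TRUE/FALSE instances,
the cost is the extra indirection and unboxing rather than allocation.)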
I don't expect everyone to run a 500 node cluster off to the side to test
their patches, but at least some indication that the contributor started
Cassandra on their laptop would be a good sign. The JIRA I referenced was
an optimization around List, Set and Map serialization. Would it really
have
Hi,
Before this change I had already been queuing the jobs myself as a
reviewer. It also happens to be that many reviewers are committers. I
wouldn't ask contributors to run the dtests/utests for any purpose other
than so that they know the submission is done.
Even if they did and they pass it d
Hi,
I think there are issues around the availability of hardware sufficient
to demonstrate the performance concerns under test. It's an open source
project without centralized infrastructure. A lot of performance
contributions come from people running C* in production. They are
already running the
After looking at the patch, my thoughts (beware, it's getting very
technical):
Original code:
-t = new ListType(elements, isMultiCell);
-ListType t2 = internMap.putIfAbsent(elements, t);
-t = (t2 == null) ? t : t2;
Optimized code:
+t = internMap
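For context on the quoted diff (it is truncated above, and the generic type
parameters were likely stripped by the mail archive): the original lines are
the standard intern-map idiom on a ConcurrentMap, where a candidate instance
is always constructed and putIfAbsent decides which instance survives a race.
A simplified, generic sketch of that idiom and of the computeIfAbsent
alternative that only constructs on a miss; this is illustrative, not the
actual Cassandra code:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.function.Function;

    class Interner<K, V>
    {
        private final ConcurrentMap<K, V> internMap = new ConcurrentHashMap<>();

        // Original idiom: always build a candidate, then race via putIfAbsent.
        V internPutIfAbsent(K key, Function<K, V> factory)
        {
            V t = factory.apply(key);
            V t2 = internMap.putIfAbsent(key, t);
            return (t2 == null) ? t : t2;
        }

        // Alternative idiom: the factory runs only when the key is absent.
        V internComputeIfAbsent(K key, Function<K, V> factory)
        {
            return internMap.computeIfAbsent(key, factory);
        }
    }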
Agree. Anything that's meant to increase performance should demonstrate it
actually does that. We have microbench available in recent versions -
writing a new microbenchmark isn't all that onerous. Would be great if we
had perf tests included in the normal testall/dtest workflow for ALL
patches so
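For anyone who hasn't written one: a JMH benchmark of the kind that lives
under Cassandra's microbench tree really is only a handful of lines. A
generic, hypothetical skeleton (class and method names are made up, not
tied to the patch under discussion):

    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.TimeUnit;

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.State;

    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @State(Scope.Thread)
    public class ExampleSerializationBench
    {
        private final List<Integer> values = Arrays.asList(1, 2, 3, 4, 5);

        @Benchmark
        public int hotPath()
        {
            // Stand-in for the code path under test.
            int sum = 0;
            for (int v : values)
                sum += v;
            return sum;
        }
    }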
I'd like to discuss what I consider to be a pretty important matter -
patches which are written for the sole purpose of improving performance
without including a single performance benchmark in the JIRA.
My original email was in "Testing and Jira Tickets"; I'll copy it here
for posterity:
If you
No problem, I'll start a new thread.
On Thu, Mar 9, 2017 at 11:48 AM Jason Brown wrote:
> Jon and Brandon,
>
> I'd actually like to narrow the discussion, and keep it focused to my
> original topic. Those are two excellent topics that should be addressed,
> and the solution(s) might be the same
Jon and Brandon,
I'd actually like to narrow the discussion, and keep it focused to my
original topic. Those are two excellent topics that should be addressed,
and the solution(s) might be the same or similar as the outcome of this.
However, I feel they deserve their own message thread.
Thanks fo
Let me further broaden this discussion to include GitHub branches, which
are often linked on tickets, and then later deleted. This forces a person
to search through git to actually see the patch, and that process can be a
little rough (especially since we all know if you're gonna make a typo,
it's
If you don't mind, I'd like to broaden the discussion a little bit to also
discuss performance-related patches. For instance, CASSANDRA-13271 was a
performance/optimization-related patch that included *zero* information
on whether there was any perf improvement or a regression as a result of the
chan
Hey all,
A nice convention we've stumbled into wrt patches submitted via Jira is
to post the results of unit test and dtest runs to the ticket (to show the
patch doesn't break things). Many contributors have used the
DataStax-provided cassci system, but that's not the best long term
solution. T
I posted this Jira here already
https://issues.apache.org/jira/browse/CASSANDRA-13315 but I wanted to toss
it out there on the mailing list at the same time to get some broader
feedback.
I've been supporting users for a few years, and during that time I've had 2-5
conversations a week about what is the
+1 (non-binding)
On 2017-03-07 17:15, Michael Shuler wrote:
I propose the following artifacts for release as 3.0.12.
This release addresses a possible 2.1->3.0 upgrade issue[3], along with
a few fixes committed since 3.0.11.
sha1: 50560aaf0f2d395271ade59ba9b900a84cae70f1
Git:
http://git-wip-u