Tests fail for a variety of reasons. Some of them fail due to underlying
infrastructure issues. For example, getting a clean run of the Python DTests
typically involves rerunning them a couple of times. Is it possible to do that
at the test framework level, i.e. in Jenkins and/or CircleCI?
Dinesh
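[A minimal sketch of framework-level retries, assuming the dtests are driven
by pytest and that the pytest-rerunfailures plugin can be installed on the CI
image; the plugin, flags, and directory below are illustrative, not
necessarily what cassandra-dtest or the ASF CI actually uses:

    # assumes pytest-rerunfailures is available (an assumption, not verified)
    pip install pytest-rerunfailures
    # re-run each failing test up to twice, waiting 10s between attempts
    pytest --reruns 2 --reruns-delay 10 cqlsh_tests/

Retrying only the failing tests keeps a transient infrastructure hiccup from
failing the whole run without re-executing the entire suite.]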
>
That’s awesome that we have that set up. I was checking out b.a.o after my
email and noticed some recent runs. I don’t mean to prescribe any specific
way of surfacing results as long as they are easily accessible to all
contributors (well documented where to find them, etc.).
Progress on posting re…
> In my opinion/experience, this is all a direct consequence of lack of trust
> in CI caused by flakiness.
The state of this project's tests certainly feels like an insurmountable
challenge at times…
Having been battling away with Jenkins, because I do have ASF access and don't
have…
Looks like there are two Slack plugins for Jenkins. They trigger after
builds, and if my rusty Jenkins-fu is right, the trunk build can be scheduled
to run daily and the plugin can post to Slack when it's done. Not an
expert and can't poke at the Jenkins instance myself, so not sure what
limitations…
Can someone find a circleci or jenkins bot that posts to the #cassandra-dev
channel in ASF slack once a day?
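[A minimal sketch of such a daily post, assuming the ASF Jenkins has the Slack
Notification plugin installed and that a declarative pipeline job is
acceptable; the schedule, build step, and message below are placeholders, not
the actual trunk job:

    pipeline {
        agent any
        // run once a day; Jenkins picks the exact time for 'H'
        triggers { cron('H H * * *') }
        stages {
            stage('build-and-test') {
                // placeholder for the real trunk build/test steps
                steps { sh 'ant test' }
            }
        }
        post {
            always {
                // slackSend is provided by the Slack Notification plugin;
                // the workspace token would still need to be configured
                slackSend channel: '#cassandra-dev',
                          message: "trunk: ${currentBuild.currentResult} ${env.BUILD_URL}"
            }
        }
    }

The post { always { ... } } block fires whether the build passes or fails, so
the channel gets exactly one status message per day.]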
On Fri, Jan 24, 2020 at 11:23 AM Jordan West wrote:
> Keeping trunk green at all times is a great goal to strive for, I'd love to
> continue to work towards it, but in my experience it's…
Keeping trunk green at all times is a great goal to strive for, I'd love to
continue to work towards it, but in my experience it's not easy. Flaky
tests, for the reasons folks mentioned, are a real challenge. A standard we
could use while we work towards the more ambitious one, and we are pretty
close…
>
> I also don't think it leads to the right behaviour or incentives.
The gap between when a test is authored and the point at which it's
determined to be flaky, as well as the difficulty of assigning responsibility
(an "unrelated" change can in some cases make a previously stable test
become flaky), make…
I support changing the default GC settings. The ones we have now drive me
nuts.
We should raise the max heap size for CMS to 16G instead of 8 now. We
should still not go higher than half the available RAM.
Also, we should set the new gen size to between 40% and 50% of the heap size.
The 100MB per core rule…
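[As a concrete sketch of those numbers, assuming a 32G machine (so 16G is
exactly the half-of-RAM cap) and a new gen at roughly 45% of the heap, the
jvm.options equivalent would be along these lines:

    # heap capped at half the machine's RAM; a 32G box is assumed here
    -Xms16G
    -Xmx16G
    # new gen at roughly 40-50% of the 16G heap
    -Xmn7G
    # the sizing above assumes CMS
    -XX:+UseConcMarkSweepGC

The exact -Xmn value would still need to respect whatever per-core heuristic
the discussion settles on.]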
> due to oversight on a commit or a delta breaking some test the author thinks
> is unrelated to their diff but turns out to be a second-order consequence of
> their change that they didn't expect
In my opinion/experience, this is all a direct consequence of lack of trust in
CI caused by flakiness.
>
> gating PRs on clean runs won’t achieve anything other than dealing with
> folks who straight up ignore the spirit of the policy and knowingly commit
> code with test breakage
I think there's some nuance here. We have a lot of suites (novnode, cdc,
etc., etc.) where failures show up because people…
>
> an entry in the progress report?
That'd be slick. I've had some people pinging me on slack asking about the
easiest way to get involved with the project and ramp up, and I think
refactoring and cleaning up a dtest or two would be another vector for
people to get their feet wet. I like it!
On…
>
> I'm unable to create an epic in the project - not sure if that has to do
> with project permissions. Could someone create an epic and link these
> tickets as subtasks?
Just realized I can no longer create epics (or the "new" JIRA UI is just so
obtuse I can't figure it out). I give it…
As for GH for code review, I find that it works very well for nits. It's also
great for doc changes, given how GH allows you to suggest changes to files
in-place and automatically create PRs for those changes. That lowers the
barrier for those tiny contributions.
For anything relatively substantial…
The person introducing flakiness to a test will almost always have run it
locally and on CI first with success. It's usually only later that the test
first starts failing, and it's often tricky to attribute the failure to a
particular commit/person.
So long as we have these - and we've had flaky tests for as long as…
> I find it only useful for nits, or for coaching-level comments that I would
> never want propagated to Jira.
Actually, I'll go one step further. GitHub encourages comments that are too
trivial, poisoning the well for third parties trying to find useful
information. If the comment wouldn't be…
The common factor is flaky tests, not people. You get a clean run, you commit.
Turns out, a test was flaky. This reduces trust in CI, so people commit
without looking as closely at results. Gating on clean tests doesn't help, as
you just re-run until you're clean. Rinse and repeat. Breakages accumulate…