Here's a build that just ran:

https://travis-ci.org/apache/arrow/builds/498906102?utm_source=github_status&utm_medium=notification

2 failed jobs:

* ARROW-4684
* Seemingly a GLib Plasma OOM:
  https://travis-ci.org/apache/arrow/jobs/498906118#L3689

24 hours ago: 
https://travis-ci.org/apache/arrow/builds/498501983?utm_source=github_status&utm_medium=notification

* The same GLib Plasma OOM
* A Rust try_from bug that was just fixed

It looks like that GLib test has been failing more often than it's been
succeeding (it also failed in the last build, on Feb 22).

I think it might be worth setting up some more "annoying"
notifications when failing builds persist for a long time.
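
For example, a small cron job could poll the Travis API and complain
loudly once master has been red for several builds in a row. A minimal
sketch in Python, assuming the Travis v3 API (the endpoint, field names,
and auth requirements here are my best guess and may need adjusting):

    import json
    import urllib.request

    # URL-encoded "apache/arrow"
    REPO = "apache%2Farrow"
    URL = ("https://api.travis-ci.org/repo/" + REPO +
           "/builds?branch.name=master&limit=10")

    # Public repos may not need a token; add an Authorization
    # header here if this account does.
    req = urllib.request.Request(URL, headers={"Travis-API-Version": "3"})
    with urllib.request.urlopen(req) as resp:
        builds = json.load(resp)["builds"]

    states = [b["state"] for b in builds]
    if states and all(s in ("failed", "errored") for s in states):
        # Wire this up to email/Slack instead of printing.
        print("WARNING: the last %d master builds all failed: %s"
              % (len(states), states))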

On Tue, Feb 26, 2019 at 3:37 PM Michael Sarahan <msara...@gmail.com> wrote:
>
> Yes, please let us know.  We definitely see 500s from anaconda.org, though
> I'd expect fewer of them from CDN-enabled channels.
>
> On Tue, Feb 26, 2019 at 3:18 PM Uwe L. Korn <m...@uwekorn.com> wrote:
>
> > Hello Wes,
> >
> > If there are 500 errors, it might be useful to report them to Anaconda
> > somehow. They recently migrated conda-forge to a CDN-enabled account, and
> > this could be one of the results of that. They probably still need to
> > iron out some things.
> >
> > Uwe
> >
> > On Tue, Feb 26, 2019, at 8:40 PM, Wes McKinney wrote:
> > > hi folks,
> > >
> > > We haven't had a green build on master for about 5 days now (the last
> > > one was February 21). Has anyone else been paying attention to this?
> > > It seems we should start cataloging which tests and build environments
> > > are the most flaky and see if there's anything we can do to reduce the
> > > flakiness. Since we are dependent on anaconda.org for build toolchain
> > > packages, it's hard to control for the 500 timeouts that occur there,
> > > but I'm seeing other kinds of routine flakiness.
> > >
> > > - Wes
> > >
> >
