Hi,
On Wed, 2018-07-25 at 09:27 +1000, Gav wrote:
> Disk space issues, yes, not on most of the Hadoop and related projects
> nodes - H0-H12 do not have disk space issues. As a Hadoop related
> project HBase should really be concentrating its builds there.
A suggestion from the sidelines. We
On Wed, Jul 25, 2018 at 2:36 AM Robert Munteanu wrote:
> Hi,
>
> On Wed, 2018-07-25 at 09:27 +1000, Gav wrote:
> > Disk space issues, yes, not on most of the Hadoop and related projects
> > nodes - H0-H12 do not have disk space issues. As a Hadoop related
> > project HBase should really
How does a targeted hardware donation work? I was under the impression that
targeted donations are not accepted by the ASF. Maybe it is different for
infrastructure, but this is the first time I've heard of it. Who makes the
donation for those projects? DataStax for Cassandra? Who for CouchDB? Google
Hi,
On Wed, Jul 25, 2018 at 6:22 PM Andrew Purtell wrote:
> ...How does a targeted hardware donation work? I was under the impression that
> targeted donations are not accepted by the ASF
This has changed, last year IIRC - there's a bit of information at
https://www.apache.org/foundation/con
I'll speak to CouchDB - the donation is directly in the form of a Jenkins
build agent with our tag; no money changed hands. The donor received
a letter from fundraising@a.o allowing a tax deduction on the equivalent
amount that leasing the machine would have cost the ASF for a year's
donation.
Thanks Joan and Bertrand.
> The number of failed builds in our stream that are directly related to
> this "tragedy of the commons" far exceeds the number of successful builds
> at this point, and unfortunately Travis CI is having parallel capacity
> issues that prevent us from moving to them wholesale a
> On Jul 25, 2018, at 10:34 AM, Andrew Purtell wrote:
>
> public clouds instead. I'm not sure if the ASF is set up to manage on
> demand billing for test resources but this could be advantageous. It would
> track actual usage not fixed costs. To avoid budget overrun there would be
> caps and l
> On Jul 25, 2018, at 10:48 AM, Chris Lambertus wrote:
>
> On-demand resources are certainly being considered (and we had these in the
> past), but I will point out that ephemeral (“on-demand”) cloud builds are in
> direct opposition to some of the points brought up by Allen in the other
> j
On Sun, Jul 22, 2018 at 00:42, Joan Touzet wrote:
> Yes - you can do this in your own Jenkinsfile or job description.
>
> In a Pipeline build (declarative or procedural), use deleteDir() :
>
> https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#-deletedir- (recursively delete t
After writing such a long text (sorry), I completely forgot to say that
this configuration can hopefully be made in one place, so that it does not
have to be repeated in each build job. And yes, I know that fetching
artifacts and other stuff takes time and costs resources, but there are
very effective ways to
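For reference, a minimal sketch of the deleteDir() cleanup Joan describes, in a declarative Jenkinsfile (the agent label and build command here are illustrative assumptions, not taken from any actual ASF job):

```groovy
// Hypothetical Jenkinsfile sketch - labels and commands are placeholders.
pipeline {
    agent { label 'ubuntu' }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean verify'   // illustrative build command
            }
        }
    }
    post {
        // The cleanup condition runs last, regardless of build result;
        // deleteDir() recursively removes the workspace so it does not
        // accumulate on shared agents.
        cleanup {
            deleteDir()
        }
    }
}
```

Putting deleteDir() in a post/cleanup block means the workspace is wiped even when the build fails or aborts, which is the case that tends to leave disk behind.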
Hey!
I'm setting up a PR job for directory-scimple:
https://builds.apache.org/view/D/view/Directory/job/dir-scimple-pull-requests/
Based on the blog post:
https://blogs.apache.org/infra/entry/github_pull_request_builds_now, and
other jobs I've looked at, it looks like it _should_ be set up correctly
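For anyone setting up a similar job: a hedged sketch of the multibranch-style Jenkinsfile that kind of PR build typically uses (the stage name, agent label, and build command are assumptions for illustration, not taken from the actual dir-scimple job):

```groovy
// Hypothetical multibranch Jenkinsfile sketch; with the GitHub Branch
// Source plugin, pull requests are surfaced to the pipeline as
// "change requests", which the when directive can match on.
pipeline {
    agent { label 'ubuntu' }
    stages {
        stage('PR validation') {
            // Run this stage only when the build is for a pull request.
            when { changeRequest() }
            steps {
                sh 'mvn -B verify'   // illustrative build command
            }
        }
    }
}
```

In a multibranch setup the PR discovery itself is configured on the job, not in the Jenkinsfile, so the pipeline only needs to decide which stages apply to change-request builds.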
On Wed, Jul 25, 2018 at 22:18, Brian Demers wrote:
>
> Hey!
>
> I'm setting up a PR job for directory-scimple:
> https://builds.apache.org/view/D/view/Directory/job/dir-scimple-pull-requests/
>
> Based on the blog post:
> https://blogs.apache.org/infra/entry/github_pull_request_builds_now, and
> ot