So it looks like part of the problem is that HBase builds have been
hanging and causing slaves to fall over due to orphaned Java processes
eating up resources. They're looking into this over at
https://issues.apache.org/jira/browse/INFRA-10150.
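For what it's worth, here is a minimal sketch of how a slave could be checked
for that kind of leak. It assumes a Linux host with Python 3 and the
third-party psutil package, and the "reparented to PID 1 and older than an
hour" heuristic is only an illustration, not what INFRA actually uses:

#!/usr/bin/env python3
# Sketch: list java processes that look orphaned (reparented to PID 1)
# and have been alive longer than a threshold. Requires psutil.
import time
import psutil

MAX_AGE_SECONDS = 60 * 60  # illustrative threshold: one hour


def find_orphaned_java(max_age=MAX_AGE_SECONDS):
    now = time.time()
    orphans = []
    for proc in psutil.process_iter():
        try:
            # A JVM whose parent build process died is reparented to PID 1.
            if (proc.name() == 'java' and proc.ppid() == 1
                    and now - proc.create_time() > max_age):
                orphans.append((proc.pid,
                                int((now - proc.create_time()) / 60),
                                ' '.join(proc.cmdline())))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return orphans


if __name__ == '__main__':
    for pid, age_min, cmd in find_orphaned_java():
        print('PID %d, up %d min: %s' % (pid, age_min, cmd))

Something like that could run from cron on each slave and feed a cleanup or
alerting step, but the real fix is whatever comes out of INFRA-10150.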
On Thu, Aug 20, 2015 at 11:44 AM, Daan Hoogland wrote:
>
On Thu, Aug 20, 2015 at 5:02 PM, David Nalley wrote:
> On Thu, Aug 20, 2015 at 2:55 AM, Daan Hoogland wrote:
> > cloudstack...!
>
> I thought all of the ACS PR builds were happening on Travis?
>
only smoke tests, no code analysis.
> --David
>
--
Daan
Yeah, I don't think we need physical nodes - leasing hosts somewhere would
be perfectly fine. If we did go with physical nodes, I'd guess that five
would probably be fine for the next year, split into multiple
VMs/containers.
I'm working on getting stats/graphs - should hopefully have that ready i
On Thu, Aug 20, 2015 at 2:55 AM, Daan Hoogland wrote:
> cloudstack...! We started making more intensive use of pull-builders. The
> old pull-request build job has been replaced by a RAT job and an analysis
> job. Maybe others have done so as well, but I am sure we (ACS) are a
> culprit in this.
>
I thought all of the ACS PR builds were happening on Travis?
On Thu, Aug 20, 2015 at 6:32 AM, Gavin McDonald wrote:
>
>> On 20 Aug 2015, at 2:18 am, David Nalley wrote:
>>
>>
>
>> ...how many additional
>> build nodes/executors will satiate our current demand for capacity?
>
> I know this was not aimed at me, but I’ll give my opinion.
>
> From what I’ve seen of the current growth over the last year, and to allow
> for future
> On 20 Aug 2015, at 2:18 am, David Nalley wrote:
>
>
> ...how many additional
> build nodes/executors will satiate our current demand for capacity?
I know this was not aimed at me, but I’ll give my opinion.
From what I’ve seen of the current growth over the last year, and to allow
for future
> On 20 Aug 2015, at 3:28 am, David Nalley wrote:
>
> So, just spot checking some of the dynamic build slaves - historically
> those have spun up for a few hours at a time, to give us additional
> capacity when our queue spiked - but it looks like the slaves are
> staying online pretty much all of the time.
> On 20 Aug 2015, at 7:55 am, Daan Hoogland wrote:
>
> cloudstack...! We started making more intensive use of pull-builders. The
> old pull-request build job has been replaced by a RAT job and an analysis
> job. Maybe others have done so as well, but I am sure we (ACS) are a
> culprit in this.
From what I c
cloudstack...! We started making more intensive use of pull-builders. The
old pull-request build job has been replaced by a RAT job and an analysis
job. Maybe others have done so as well, but I am sure we (ACS) are a
culprit in this.
On Thu, Aug 20, 2015 at 4:28 AM, David Nalley wrote:
> So, just spot checking some of the dynamic build slaves ...
So, just spot checking some of the dynamic build slaves - historically
those have spun up for a few hours at a time, to give us additional
capacity when our queue spiked - but it looks like the slaves are
staying online pretty much all of the time. Right now we are
configured to have 12 slaves. Assuming
Andrew:
I know Jenkins tracks how big the queue is; is that something that we
can graph over time? It'd be interesting to know how that's changed
and how it will change; otherwise we'll constantly be fighting fires here.
I'd like to have the following items graphed:
# of dynamic slaves in operation
# o
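Not sure yet what we'd graph them with, but as a starting point, here is a
rough sketch of sampling those numbers from the standard Jenkins JSON API
(/queue/api/json and /computer/api/json). It assumes anonymous read access
to the master and makes no attempt to tell dynamic slaves apart from static
ones - that would need a name or label filter on top of this:

#!/usr/bin/env python3
# Sketch: sample the Jenkins JSON API so queue depth and executor/slave
# usage can be graphed over time. Assumes anonymous read access.
import json
import time
from urllib.request import urlopen

JENKINS_URL = 'https://builds.apache.org'  # assumption: the builds.a.o master


def fetch(path):
    """Return the parsed JSON document at JENKINS_URL + path."""
    with urlopen(JENKINS_URL + path) as resp:
        return json.loads(resp.read().decode('utf-8'))


def sample():
    queue = fetch('/queue/api/json')
    computers = fetch('/computer/api/json')
    # Count slaves that are online, skipping the master node itself.
    slaves_online = sum(1 for c in computers['computer']
                        if c['displayName'] != 'master' and not c['offline'])
    return {
        'timestamp': int(time.time()),
        'queued_builds': len(queue['items']),
        'slaves_online': slaves_online,
        'busy_executors': computers['busyExecutors'],
        'total_executors': computers['totalExecutors'],
    }


if __name__ == '__main__':
    s = sample()
    # One line per run; cron this and append to a file for graphing.
    print('{timestamp},{queued_builds},{slaves_online},'
          '{busy_executors},{total_executors}'.format(**s))

Cron that every few minutes and the resulting file answers the "how has the
queue changed over time" question; whether it then goes into RRDtool,
Graphite, or just a spreadsheet is a separate decision.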
Hey all -
So as you may have noticed, we've seen an increase in build utilization on
builds.a.o in the last few months - which is great! The problem is that
we're seeing more demand than we have resources at this point, and that's
only going to increase. We're working on getting more slaves lined