Thanks Mark!
On Fri, Aug 21, 2015 at 1:13 PM, Mark Bretl wrote:
> I will re-enable the nightly schedule.
>
> --Mark
>
> On Fri, Aug 21, 2015 at 1:11 PM, Kirk Lund wrote:
>
> > Hi,
> >
> > We need to have the nightly build reenabled for Apache Geode incubating.
> > Who do we need to speak with to make this happen?
I will re-enable the nightly schedule.
--Mark
On Fri, Aug 21, 2015 at 1:11 PM, Kirk Lund wrote:
> Hi,
>
> We need to have the nightly build reenabled for Apache Geode incubating.
> Who do we need to speak with to make this happen?
>
> Thanks,
> Kirk
>
> > On Fri, Aug 21, 2015 at 11:06 AM, Kirk Lund wrote:
Hi,
We need to have the nightly build reenabled for Apache Geode incubating.
Who do we need to speak with to make this happen?
Thanks,
Kirk
On Fri, Aug 21, 2015 at 11:06 AM, Kirk Lund wrote:
> Hi Andrew,
>
> Please reenable our nightly build. It's about a 6 hour run and we don't
> want that tr
[ https://issues.apache.org/jira/browse/BUILDS-85?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707295#comment-14707295 ]
Martin Grigorov commented on BUILDS-85:
---
Everything is green again:
https://ci.apach
On 08/21/2015 06:50 PM, Andrew Bayer wrote:
> Ok, I think the restart seems to have cleared up the polling on jobs -
> sorry, I'll figure out a better way to handle this sort of thing in the
> future.
Thanks Andrew.
I changed all Directory jobs to poll @daily only, should be sufficient
for our needs.
FWIW, https://issues.apache.org/jira/browse/INFRA-10171 will help us catch
this earlier, and https://issues.apache.org/jira/browse/INFRA-10172 will
help keep it from happening in the first place. I'm also looking to see
what jobs are using the most disk space to try to trim their usage down.
On Fr
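[Editor's note: Andrew mentions looking at which jobs use the most disk space so their usage can be trimmed. A minimal sketch of that kind of per-job usage report, assuming a Jenkins-style `jobs/<name>/` directory layout; the paths are illustrative, not the actual builds.a.o setup.]

```python
# Sketch: rank Jenkins job directories by total disk usage.
# The jobs/<name>/ layout is an assumption, not the builds.a.o config.
import os


def dir_size(path: str) -> int:
    """Total size in bytes of all regular files under `path`."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp):
                total += os.path.getsize(fp)
    return total


def rank_jobs_by_usage(jobs_root: str):
    """Return (job_name, bytes) pairs, largest consumers first."""
    jobs = [
        (name, dir_size(os.path.join(jobs_root, name)))
        for name in os.listdir(jobs_root)
        if os.path.isdir(os.path.join(jobs_root, name))
    ]
    return sorted(jobs, key=lambda pair: pair[1], reverse=True)
```

Running this over a Jenkins `jobs/` directory and printing the top entries is roughly what `du -sh jobs/* | sort -rh | head` would show.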
Got it, it looks much better now - thank you!
Jarcec
> On Aug 21, 2015, at 9:33 AM, Andrew Bayer wrote:
>
> It's not the slaves, it's the master itself.
>
> A.
>
> On Fri, Aug 21, 2015 at 12:32 PM, Jarek Jarcec Cecho
> wrote:
>
>> Thanks Andrew!
>>
>> Just to give you heads up - it just se
Everything's lively again now - I'll make sure we've got disk space
monitoring in place again.
A.
On Fri, Aug 21, 2015 at 12:33 PM, Andrew Bayer
wrote:
> It's not the slaves, it's the master itself.
>
> A.
>
> On Fri, Aug 21, 2015 at 12:32 PM, Jarek Jarcec Cecho
> wrote:
>
>> Thanks Andrew!
>>
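[Editor's note: the disk-space monitoring Andrew says he will put back in place can be as small as a threshold check on the filesystem. A sketch using Python's standard `shutil.disk_usage`; the 10% default threshold and the path are assumptions for illustration, not what Infra actually ran.]

```python
# Sketch: alert when free disk space on a filesystem drops below a threshold.
# Threshold and path defaults are illustrative assumptions.
import shutil


def disk_free_fraction(path: str = "/") -> float:
    """Fraction of the filesystem holding `path` that is still free."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total


def needs_alert(path: str = "/", min_free: float = 0.10) -> bool:
    """True when free space falls below `min_free` (default 10%)."""
    return disk_free_fraction(path) < min_free
```

Wired into a cron job or a monitoring agent, a check like this would have flagged the master well before jobs started failing with "No space left on device".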
Ok, I think the restart seems to have cleared up the polling on jobs -
sorry, I'll figure out a better way to handle this sort of thing in the
future.
A.
On Fri, Aug 21, 2015 at 12:31 PM, Stefan Seelmann
wrote:
> Polling would be fine for us, even @daily.
>
> But it seems the polling doesn't wo
Hmm, I'll take a look - I'm restarting builds.a.o right now to deal with
the out of space issue.
A.
On Fri, Aug 21, 2015 at 12:31 PM, Stefan Seelmann
wrote:
> Polling would be fine for us, even @daily.
>
> But it seems the polling doesn't work, see [1]. There is only the icon
> but the "Subversion Polling Log" label is missing.
It's not the slaves, it's the master itself.
A.
On Fri, Aug 21, 2015 at 12:32 PM, Jarek Jarcec Cecho
wrote:
> Thanks Andrew!
>
> Just to give you a heads up - it seems that the build slaves ran out
> of disk space, as I see this exception:
>
> java.io.IOException: No space left on device
>
>
Thanks Andrew!
Just to give you a heads up - it seems that the build slaves ran out of
disk space, as I see this exception:
java.io.IOException: No space left on device
(probably some rogue job or something)
Jarcec
> On Aug 21, 2015, at 9:18 AM, Andrew Bayer wrote:
>
> I'll take a look in 30 minutes or so.
Polling would be fine for us, even @daily.
But it seems the polling doesn't work, see [1]. There is only the icon
but the "Subversion Polling Log" label is missing. When clicking on the
icon it shows an error.
For another job [2] I went into "Configure" and just saved without any
further modifica
We ran out of disk space on builds.a.o - I'm compressing the 487GB (!!!) of
logs and will work with the rest of Infra to make sure that's done
automatically.
A.
On Fri, Aug 21, 2015 at 12:18 PM, Andrew Bayer
wrote:
> I'll take a look in 30 minutes or so.
> On Aug 21, 2015 12:05, "Jarek Jarcec Cecho" wrote:
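[Editor's note: the automatic log compression Andrew mentions could be sketched along these lines. The `jobs/<name>/builds/<n>/log` layout and the 7-day cutoff are assumptions about a typical Jenkins home, not the actual builds.a.o cleanup.]

```python
# Sketch: gzip-compress build logs older than `max_age_days`, keeping only
# the compressed copy. Layout and cutoff are illustrative assumptions.
import gzip
import os
import shutil
import time


def compress_old_logs(root: str, max_age_days: int = 7) -> int:
    """Compress plain 'log' files older than the cutoff; return the count."""
    cutoff = time.time() - max_age_days * 86400
    compressed = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name != "log":
                continue
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) >= cutoff:
                continue  # recent log, leave it readable in place
            with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
                shutil.copyfileobj(src, dst)
            os.remove(path)  # keep only the .gz copy
            compressed += 1
    return compressed
```

The same effect is often achieved with `find ... -mtime +7 -exec gzip {} +` from cron; the point is that old console logs compress extremely well, which is why 487GB of them is recoverable space.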
No worries, I completely get why we made this change across the board. I
actually feel bad that I didn’t realize the consequences yesterday when I read
the email and had to spend a good half an hour figuring it out :)
Jarcec
> On Aug 21, 2015, at 9:17 AM, Andrew Bayer wrote:
>
> Oops, sorry about that!
I'll take a look in 30 minutes or so.
On Aug 21, 2015 12:05, "Jarek Jarcec Cecho" wrote:
> Looking at the Jenkins queue - there are 132 queued jobs and only 4 working
> executors :) All the others are marked as “DEAD”.
>
> I’m wondering what is the process of reporting that? Should I create a
> BUILDS or INFRA JIRA, or just send an email to this group or something
> completely else?
Oops, sorry about that!
On Aug 21, 2015 12:02, "Jarek Jarcec Cecho" wrote:
> I’ve noticed that this change has pretty much disabled the Hadoop-world-specific
> precommit [1] hook, as this job runs every 10 minutes and fetches updates
> from JIRA rather than following any repository. I’ve put back the “run
> every 10 minutes”.
>
Looking at the Jenkins queue - there are 132 queued jobs and only 4 working
executors :) All the others are marked as “DEAD”.
I’m wondering what is the process of reporting that? Should I create a BUILDS or
INFRA JIRA, or just send an email to this group or something completely else?
Jarcec
I’ve noticed that this change has pretty much disabled the Hadoop-world-specific
precommit [1] hook, as this job runs every 10 minutes and fetches updates from
JIRA rather than following any repository. I’ve put back the “run every
10 minutes”. This job is usually done in less than 30 seconds,
[ https://issues.apache.org/jira/browse/BUILDS-85?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706681#comment-14706681 ]
Martin Grigorov commented on BUILDS-85:
---
I've tried but it seems BuildBot has problem