The job just built correctly on ubuntu-4. The jenkins-test-* images have
been updated to use the correct settings.xml file. You can file a ticket
on JIRA if this is a recurring issue.
Cheers,
-Pono
On Wed, Jun 8, 2016 at 7:51 PM, Keith W wrote:
> Hello builds,
>
> Would anyone be able to take
The newest build has succeeded at revision 1750756:
https://ci.apache.org/builders/openjpa-2.2.x-docs/builds/333
Does this resolve the problem you saw?
-Pono
On Thu, Jun 23, 2016 at 2:59 PM, Jody Grassel wrote:
> Good morning, I had noticed that OpenJPA's doc builds were failing for
> quite som
H10 was rebooted for a kernel upgrade, sorry for the inconvenience. If you
have persistent issues with any of the nodes, please create a JIRA issue.
Cheers,
-Pono
On Tue, Aug 9, 2016 at 8:25 AM, Andy Seaborne wrote:
> It looks like podling Taverna is also affected - I found a bunch of jobs
> on
Hey Maxim,
https://issues.apache.org/jira/browse/INFRA-12417 was filed and resolved!
Cheers,
-Pono
On Fri, Aug 12, 2016 at 9:18 AM, Maxim Solodovnik
wrote:
> I'm currently getting:
>
> Cannot run program "svn" (in directory
> "/home/jenkins/jenkins-slave/workspace/Openmeetings 3.1.x"): error=2,
ubuntu3 got disconnected, thanks for letting us know. We've
reconnected it, but I've also gone ahead and moved your job to the
more generic 'ubuntu' label, and the job completed successfully!
https://builds.apache.org/view/All/job/PDFBox%202.0.x/188/consoleFull
Now you won't have to worry about whe
This node had some hung jobs, so I've rebooted it and hopefully it can
be speedier now.
Cheers,
-Pono from Infra
On Wed, Oct 19, 2016 at 3:49 AM, Michael Dürig wrote:
>
> Hi,
>
> Our builds regularly time out when assigned to the ubuntu-2 slave. Is there
> anything wrong with that particular sla
I've grown the disk, thanks for letting us know about the problem.
Cheers,
-Pono
On Fri, Oct 21, 2016 at 8:25 AM, Javen O'Neal wrote:
> FYI, this job failed due to running out of disk space on ubuntu-eu2.
> https://builds.apache.org/job/POI/1599/
> Caused by: org.tmatesoft.svn.core.SVNException:
You can join the #asftest IRC channel on the Freenode network to send
commands to the bots to rebuild jobs. You can also set up triggers on
your jobs to rebuild whenever something changes in the repository:
https://docs.buildbot.net/latest/manual/cfg-schedulers.html#scheduler-types
Feel free to join
A link to a failing job would be great.
Cheers
-Pono
On Fri, Nov 4, 2016 at 1:11 PM, Flavio Junqueira wrote:
> Has there been any changes to windbags recently? Recently being the last day
> or two.
>
> Thanks,
> -Flavio
pomona needs a reboot please.
Cheers,
-Pono
Hervé,
The jenkins-test slaves are provisioned when demand is too high on the
cluster; they are not exactly like the puppet-managed nodes, and it
looks like some packages need updating. I'll do that now, thanks for
letting us know.
Cheers,
-Pono
On Sun, Nov 27, 2016 at 5:45 PM, Hervé BOUTEMY wrote:
> Hi,
>
> S
Could you reboot asf916 when you get a chance? I tried to reboot it and
it seems hung.
Thanks!
Hey Rajiv,
Looks like asf916 didn't come back up from a reboot. Would you mind nudging it?
Thanks a bunch,
-Pono
Both are back online, thanks for letting us know.
On Wed, Feb 22, 2017 at 2:49 AM, sebb wrote:
> As the subject says: would it be possible to restart them please?
Which job is it that's having problems?
On Tue, Apr 18, 2017 at 12:33 PM, Martin wrote:
> Hi,
>
> we get OutOfMemoryErrors permanently on the node qnode1.
> We use the generic 'ubuntu' label for our jobs.
> Is there any information about the available resources on the build
> slaves? The Wiki doe
As you've probably noticed, we've had some difficulties with the new
wildcard certificate. I've pushed out changes to puppet so all our nodes
will now have the intermediate SSL.com cert. Puppet is catching up, and
the buildbot/jenkins nodes should all have the correct certs installed.
Sorry about the inconvenienc
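If anyone wants to double-check that a node sees the full chain, a quick sanity check from a build node could look like the following (plain OpenSSL/curl commands, offered as a rough sketch rather than an Infra-blessed procedure):

    # show whether the new wildcard cert now verifies against the node's trust store
    echo | openssl s_client -connect builds.apache.org:443 -servername builds.apache.org 2>/dev/null | grep 'Verify return code'
    # a plain HTTPS fetch should also succeed with no certificate errors
    curl -fsS -o /dev/null https://builds.apache.org/ && echo 'TLS OK'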
Docker filled up /var/ so I cleared out the old images. Going to work
on making sure docker isn't a disk hog in the future.
Sent an e-mail to Y! about getting the H nodes kicked, thanks for
letting us know.
On Thu, Jun 22, 2017 at 8:56 PM, Allen Wittenauer
wrote:
>
> Hi all.
>
> Just no
On Jun 22, 2017, at 11:12 PM, Daniel Pono Takamori wrote:
>>>
>>> Docker filled up /var/ so I cleared out the old images. Going to work
>>> on making sure docker isn't a disk hog in the future.
>>
>>
>> ha. Kind of ironic that my Yetus job fa
lares doesn't respond to ping or ssh, and orcus seems to have sshd
running but won't let me in and hangs when I try to connect.
Could we get a power cycle when you have a chance, please?
Thanks a million!
-Pono
Thanks Rajiv, got our nodes connected again and building :D
On Fri, Jul 14, 2017 at 6:42 PM, Rajiv Chittajallu wrote:
> Both the nodes are up now.
>
>
>
> On Friday, July 14, 2017, 3:41:54 PM PDT, Daniel Pono Takamori
> wrote:
>
>
> la
asf90{1,5,7} are offline; are there some buttons you could press to get them back?
Thanks a bunch!
I've cleaned the disk on that node. The H nodes recently had a bug
with docker which meant they weren't purging old images; that is fixed
now.
Cheers.
On Wed, Sep 20, 2017 at 1:07 PM, Tilman Hausherr wrote:
> https://builds.apache.org/job/PDFBox-2.0.x/696/console
>
> Building remotely on H18
Allen, do you have a link to a job that failed in the way you
describe? I bounced the docker service to be safe, so hopefully the
issue was transient.
On Thu, Sep 21, 2017 at 5:39 PM, Allen Wittenauer
wrote:
>
>> On Sep 20, 2017, at 2:17 PM, Daniel Pono Takamori wrote:
>>
>> I
We've encountered this bug with jobs not scheduling before, but I've
bounced Jenkins and jobs are building again. Hopefully we can uncover
whether this is an upstream bug or something in our own setup soon.
Thanks for letting us know :)
On Wed, Sep 27, 2017 at 8:03 AM, Lukasz Lenart wrote:
> Som
Great to hear, Lukasz. We are still trying to map out the pain points
of moving to Gitbox, so hopefully we are getting closer to full
coverage now.
On Wed, Sep 27, 2017 at 1:09 AM, Lukasz Lenart wrote:
> Looks like setting Git SCM back to git://github.com/apache/struts.git
> resolved the problem :)
>
As Allen mentions, Docker is the big offender. I've added a cleanup,
`docker system prune -a -f`, to run hourly. The problem nodes are the
qnodes, which have much less space for docker than the other nodes.
I'm disabling them for the time being until I can either get a bigger
disk or guarantee they
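For illustration only, the hourly cleanup mentioned above could be wired up with a system cron entry along these lines (the schedule and log path are assumptions here; the real job is managed through puppet):

    # /etc/cron.d/docker-prune (illustrative): prune unused images and containers hourly
    0 * * * * root /usr/bin/docker system prune -a -f >> /var/log/docker-prune.log 2>&1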
Hello builds@ enthusiasts.
As I'm sure you are aware, we're encountering a rather nasty bug
affecting a small subset of jobs, but our fix is in turn affecting the
entire Jenkins stack. I've created
https://issues.apache.org/jira/browse/INFRA-15424 to track what we
know and will be compiling a bug
A ticket [0] recently came up about upgrading the Gradle version on
our nodes. It appears some nodes have gradle-2.14, and 3.1 is
installed in most places but not all. As a measure of consolidation,
we're going to provide legacy support for 3.1 (as in we won't remove
it but won't be installing i
While standardizing the jenkins uid/gid we encountered a bug that
prevented puppet from properly changing the gid, which I've now
fixed. All nodes should have maven-3.5.2 now.
Cheers
On Wed, Nov 15, 2017 at 2:33 PM, Martin Stockhammer wrote:
> Hi,
>
> our last jobs failed because mvn was not fo
Sorry about missing that in the move to a single uid. I've nuked most
of the /tmp content that should be affected. It might be a good idea to
move to a ./tmp directory inside the workspace instead; then you can
clean that up yourself and not worry about /tmp being on a different
disk and sometimes filling up.
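As a rough sketch of that suggestion, assuming the standard Jenkins $WORKSPACE variable (the Maven step is only an illustrative example of pointing a build at the new directory):

    # keep temporary files inside the job's own workspace
    mkdir -p "$WORKSPACE/tmp"
    export TMPDIR="$WORKSPACE/tmp"
    # e.g. make the JVM use it too (illustrative Maven invocation)
    mvn -Djava.io.tmpdir="$WORKSPACE/tmp" clean verify
    # clean up at the end of the build so nothing accumulates
    rm -rf "$WORKSPACE/tmp"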
Cheers.
On Tue, Nov 21,
There were some root-owned files there, so I've chowned them to
jenkins. Let me know if you see anything like that again.
Cheers
On Sat, Nov 25, 2017 at 12:11 PM, Stefan Seelmann
wrote:
> Hi,
>
> Jenkins node H30 seems to have a filesystem issue:
>
> [WARNING] Failed to create parent directories for trac
Hey Rajiv,
Looks like orcus and lares fell over, any chance you can poke them and
try to bring them back from the dead?
Thanks a bunch!
On Sat, Jul 15, 2017 at 12:59 PM, Daniel Pono Takamori wrote:
> Thanks Rajiv, got our nodes connected again and building :D
>
> On Fri, Jul 14, 2017 a
Pono Takamori wrote:
> Hey Rajiv,
> Looks like orcus and lares fell over, any chance you can poke them and
> try to bring them back from the dead?
>
> Thanks a bunch!
>
> On Sat, Jul 15, 2017 at 12:59 PM, Daniel Pono Takamori
> wrote:
>> Thanks Rajiv, got our nodes conne
https://github.com/apache/infrastructure-puppet/blob/deployment/modules/gitwcsub/files/config/gitwcsub.cfg#L34
and
https://github.com/apache/infrastructure-puppet/blob/deployment/modules/gitwcsub/files/config/gitwcsub.cfg#L83
are the docs and the (currently only) config line, respectively.
In regards to:
>Is it po
There were problems with the disk filling up, but I've fixed that:
https://issues.apache.org/jira/browse/INFRA-16021
On Mon, Feb 12, 2018 at 11:00 PM, P. Ottlinger wrote:
> Hi,
>
> H22 seems to have problems cloning repos ...
> multiple TAMAYA-builds failed and went through on other builds.
>
> As I'm still unable
After inspecting the node, I believe this to be a transient failure.
If it occurs again then I'm wrong and will dig deeper.
Cheers
On Mon, Feb 12, 2018 at 1:54 PM, Felix Schumacher
wrote:
> Hi all,
>
> as can be seen at https://builds.apache.org/job/JMeter-trunk/6670/console,
> we (JMeter) have a prob
We support a wide variety of JDKs; here is the reference
page: https://cwiki.apache.org/confluence/display/INFRA/JDK+Installation+Matrix
You can change the version of the JDK in the 'Job Config' page for
your specific build.
Also I've cc'ed builds@apache.org as that is the build testing/ C
Ambari Devs,
It appears your PR job in Jenkins has had some 20-odd jobs in the queue
due to pinning your job to the 'hadoop' label. I'm unsure why you've
tied your job to that label, but I've changed it to the more generic
'ubuntu'. The Hadoop nodes are dedicated to the main Hadoop projects and
since we'v
Hey there Appveyor Folks!
It was brought to our attention [0] that the Apache Software
Foundation appveyor builds seem stuck in the queue, and I can't find the
cause since we don't appear to have any jobs running. If you have any
insight or can point me to something to unblock them, that'd be fantastic.
Thanks so m
Unfortunately we had another crash today, so Jenkins has just been
restarted as of 23:15 UTC May 18th. Sorry for the inconvenience.
-Pono
On Tue, May 15, 2018 at 7:26 PM, Chris Lambertus wrote:
> Jenkins crashed around 0200 UTC 16 May. We are restarting it with some
> additional performance adj
Hey Rajiv,
Looks like 900, 908, and 930 are down. 900 and 908 have been down for a
while (they might be dead like 907, so should we remove them from
Jenkins?) and I just tried to reboot 930 since it was wedged in a weird
state, but it hasn't come back up. If you get a chance to look or verify they are
dead that'd
Jenkins hung around 07:40 UTC this morning, so I needed to restart
it, and now it's back. Hopefully any lost jobs can be restarted
without too much fuss; sorry about the disruption in service.
-Pono
Our hosting provider had some routing issues which led to Jenkins
being down for about an hour this morning:
https://www.hetzner-status.de/en.html#9674
Service is back and things are building.
Happy Saturday,
-Pono
Hey folks,
I'm doing disk cleanups on our Jenkins nodes [0] and found about 40
jpulsar_precommit* jobs on a single node taking up 2-5 GB each. We've
moved to purging 15-day-old files [1] on Jenkins nodes. I'm wondering
if we can be more aggressive about purging these old precommit jobs, or
if there is s
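As a rough sketch of the kind of age-based purge described above (the workspace root and name pattern are illustrative assumptions, not the exact Infra script):

    # delete precommit workspaces that have not been touched in 15+ days
    find /home/jenkins/jenkins-slave/workspace -maxdepth 1 -type d \
         -name '*precommit*' -mtime +15 -exec rm -rf {} +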
Hey Kylin folks,
When clearing disk space on H26 I found that your job was pinned to it
(I've since moved the job to the generic 'ubuntu' label). The job
failed about 20 times today because of a corrupted .git folder which
appeared to just keep filling and ended up huge: `97G
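For anyone hitting something similar, a hedged sketch of the cleanup (the paths and job directory name are placeholders, not the exact commands run on H26):

    # find which workspaces are eating the disk on the node
    du -sh /home/jenkins/jenkins-slave/workspace/* 2>/dev/null | sort -rh | head
    # if a job's .git directory is corrupted and bloated, wipe the workspace;
    # the next build will do a fresh clone (job name below is a placeholder)
    rm -rf "/home/jenkins/jenkins-slave/workspace/<kylin-job>"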