Re: How can we choose the right maven version for our build?
You can activate the tool injection plugin to give you a reliable MAVEN_HOME. In the job config, it's the checkbox labeled "tool environment"; then find the Maven version you want and select it. On Thu, Oct 15, 2015 at 7:32 PM, Jarek Jarcec Cecho wrote: > Thanks for the idea David. I’ll try that to see if it helps at least for now. > > My concern is that it’s not final solution - if the node won’t have proper > maven version we will again fail in unpredictable way. Would it be possible > to add selector for Maven version similarly as we have for Java? > > Jarcec > >> On Oct 15, 2015, at 3:03 PM, David Robson >> wrote: >> >> Hey Jarcec, >> >> Have you tried reversing the path export for example: >> >> export MAVEN_HOME=/home/jenkins/tools/maven/latest >> export PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$PATH >> >> To ensure your "mvn" command is picked up first even if it's on the existing >> PATH. >> >> David >> >> -Original Message- >> From: Jarek Jarcec Cecho [mailto:jar...@gmail.com] On Behalf Of Jarek Jarcec >> Cecho >> Sent: Friday, 16 October 2015 6:21 AM >> To: builds@apache.org >> Cc: d...@sqoop.apache.org >> Subject: Re: How can we choose the right maven version for our build? >> >> Any ideas how to pick up the proper version of maven? >> >> Jarcec >> >>> On Aug 24, 2015, at 1:20 PM, Jarek Jarcec Cecho wrote: >>> >>> I have a job that is quite regularly failing with: >>> >>> Error resolving version for 'org.codehaus.mojo:findbugs-maven-plugin': >>> Plugin requires Maven version 3.0.1 >>> >>> Looking into the job’s configuration, I don’t see any way to specify maven >>> version (similarly as we do for let say java). 
It seems that we were trying >>> to deal with this in the past as the we’re having following “Detection” in >>> the command we’re running: >>> >>> export MAVEN_HOME=/home/jenkins/tools/maven/latest >>> export PATH=$PATH:$JAVA_HOME/bin:$MAVEN_HOME/bin >>> >>> But that sees quite flaky, so I’m wondering what is the right way to get >>> the right maven version for the job? :) >>> >>> Jarcec >>> >>> Links: >>> 1: >>> https://builds.apache.org/job/PreCommit-SQOOP-Build/1622/artifact/patch-process/test_unit.txt >> > -- Sean
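For anyone following this thread, the prepend-vs-append fix David suggests is easy to demonstrate with a stand-in; the temp directory and stub mvn script below are illustrative only, not the real tool layout on the build nodes:

```shell
# Build a throwaway "maven home" with a stub mvn so the demo is
# self-contained (a real node would use /home/jenkins/tools/maven/...).
MAVEN_HOME="$(mktemp -d)"
mkdir -p "$MAVEN_HOME/bin"
printf '#!/bin/sh\necho "Apache Maven (stand-in from MAVEN_HOME)"\n' > "$MAVEN_HOME/bin/mvn"
chmod +x "$MAVEN_HOME/bin/mvn"

# Prepend rather than append, so this mvn shadows any copy that is
# already on PATH -- the ordering fix discussed above.
export PATH="$MAVEN_HOME/bin:$PATH"

command -v mvn
mvn -version
```

With the append ordering from the original job config, any `mvn` already on PATH would win instead, which matches the flaky behavior Jarcec describes.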
Please add YETUS tracker to the PreCommit-Admin jira filter
Hi folks! According to the docs I could find[1], if I want the precommit job for Apache Yetus to start being fed issues, I need to have the project added to the jira filter the PreCommit-Admin job searches. The docs say to email builds@a.o to get added. The tracker is YETUS and I've already set up and manually tested the PreCommit-YETUS-Build jenkins job. Anything else I need to do? [1]: http://wiki.apache.org/general/PreCommitBuilds -- Sean
Re: Please add YETUS tracker to the PreCommit-Admin jira filter
Thanks a ton! I saw a new patch get feedback this morning, so things are working splendidly. -- Sean On Oct 21, 2015 11:57 PM, "Jake Farrell" wrote: > done > > -Jake > > On Wed, Oct 21, 2015 at 11:43 PM, Sean Busbey wrote: > > > Hi folks! > > > > According to the docs I could find[1], if I want the precommit job for > > Apache Yetus to start being fed issues, I need to have the project > > added to the jira filter the PreCommit-Admin job searches. > > > > The docs say to email builds@a.o to get added. The tracker is YETUS > > and I've already set up and manually tested the PreCommit-YETUS-Build > > jenkins job. > > > > Anything else I need to do? > > > > [1]: http://wiki.apache.org/general/PreCommitBuilds > > > > -- > > Sean > > >
Re: Puppetising Jenkins Nodes
I am having trouble finding a maven home related env variable that works. On Wed, Jul 20, 2016 at 10:37 AM, Allen Wittenauer wrote: > >> On Jul 19, 2016, at 6:35 PM, Gav wrote: >> >> >> A real PITA previously was maintaining of the 'latest' and 'latest[1-3]' >> links. We are making >> good progress on improving these and will continue to do so. Please shout >> up if something is a amiss. > > Would this be why MAVEN_3_LATEST__HOME on some of the H nodes are > empty as of a few days ago? e.g., > https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/100/console > > Thanks. > -- busbey
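A quick way to see what maven-related environment a node actually exports (variable names such as MAVEN_3_LATEST__HOME vary by node setup and, as noted above, may be empty):

```shell
# List any environment variables mentioning maven, case-insensitively;
# the fallback message keeps the step from failing the build when
# nothing matches.
env | grep -i maven || echo "no maven-related variables set"
```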
Re: Jenkins Node Labelling Documentation
> Why? yahoo-not-h2 is really not required since H2 is the same as all the > other H* nodes. The yahoo-not-h2 label exists because the H2 node was misconfigured for a long time and would fail builds as a result. What label will jobs that are currently configured to avoid H2 be migrated to? Will they be migrated automatically? > The 'docker' label references installed software and should be dropped. We > have and will continue to install docker wherever it is required. How do we determine where it's required? If I have a job that relies on docker being installed, do I just get to have it run unlabeled? On Thu, Aug 4, 2016 at 4:18 AM, Gav wrote: > Hi All, > > Following on from my earlier mails regarding Java, Maven and Ant > consolidations, I thought > you might like a page detailing the Jenkins Labels and which nodes they > belong to. > > I've put it up here :- > > https://cwiki.apache.org/confluence/display/INFRA/Jenkins+node+labels > > I hope you find it useful. > > In addition I propose to remove a couple of redundant labels to make > choosing a label > easier. > > Proposal is to remove labels yahoo-not-h2, ubuntu and docker. Why? > yahoo-not-h2 is really not required since H2 is the same as all the other > H* nodes. ubuntu is a copy of Ubuntu and both are identical. > The 'docker' label references installed software and should be dropped. We > have and will continue to install docker wherever it is required. > > If no objections I'll remove these labels in ~2 weeks time on 19th August > > HTH > > Gav... (ASF Infrastructure Team) -- busbey
Re: Jenkins Node Labelling Documentation
On Thu, Aug 4, 2016 at 4:16 PM, Gav wrote: > > > On Fri, Aug 5, 2016 at 3:14 AM, Sean Busbey wrote: >> >> > Why? yahoo-not-h2 is really not required since H2 is the same as all the >> > other H* nodes. >> >> The yahoo-not-h2 label exists because the H2 node was misconfigured >> for a long time and would fail builds as a result. > > > Yes I know, but now its not, so is no longer needed. > >> >> What label will >> jobs taht are currently configured to avoid H2 be migrated to? Will >> they be migrated automatically? > > > Currently I'm asking that projects make the move themselves. Most jobs would > be fine as they have > multiple labels, so just need to drop the yahoo-not-h2 label to give them > access to H2. If, when I drop the label I > see jobs with it in use, I'll remove it. > I don't see a label I can move to that covers the same machines as the current yahoo-not-h2 nodes and H2. It looks like a union of "hadoop" and "docker" would do it, but "docker" is going away. Also I have to have a single label for use in multi-configuration builds or jenkins will treat the two labels as an axis for test selection rather than as just a restriction for where the jobs can run. I could try to go back to using an expression, but IIRC that gave us things like *s in the path used for tests, which was not great. Can we maybe expand the Hadoop label? (or would "beefy" cover the set?) If the H* nodes are all the same, why do we need the labels HDFS, MapReduce, Pig, Falcon, Tez, and ZooKeeper in addition to the Hadoop label? -- busbey
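For reference, the label-expression fallback mentioned above goes in the job's "Restrict where this project can be run" field. The label names here are the ones from this thread; whether they cover the right set of nodes is exactly the open question:

```
(hadoop||docker)&&!H2
```

A single plain label avoids this entirely, which is why the thread asks for an expanded Hadoop label: for multi-configuration jobs an expression has historically leaked characters like * into workspace paths.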
Re: Jenkins Node Labelling Documentation
I'm trying to transition jobs off of the yahoo-not-h2 label, but again I don't see a single label I can use that covers an appropriate set of nodes. Can we expand Hadoop to include H10 and H11? Can we come up with a label that covers both the H* and the physical ubuntu hosts that have been puppetized? On Thu, Aug 4, 2016 at 5:05 PM, Gav wrote: > > > On Fri, Aug 5, 2016 at 7:52 AM, Sean Busbey wrote: >> >> On Thu, Aug 4, 2016 at 4:16 PM, Gav wrote: >> > >> > >> > On Fri, Aug 5, 2016 at 3:14 AM, Sean Busbey wrote: >> >> >> >> > Why? yahoo-not-h2 is really not required since H2 is the same as all >> >> > the >> >> > other H* nodes. >> >> >> >> The yahoo-not-h2 label exists because the H2 node was misconfigured >> >> for a long time and would fail builds as a result. >> > >> > >> > Yes I know, but now its not, so is no longer needed. >> > >> >> >> >> What label will >> >> jobs taht are currently configured to avoid H2 be migrated to? Will >> >> they be migrated automatically? >> > >> > >> > Currently I'm asking that projects make the move themselves. Most jobs >> > would >> > be fine as they have >> > multiple labels, so just need to drop the yahoo-not-h2 label to give >> > them >> > access to H2. If, when I drop the label I >> > see jobs with it in use, I'll remove it. >> > >> >> I don't see a label I can move to that covers the same machines as the >> current yahoo-not-h2 nodes and H2. It looks like a union of "hadoop" >> and "docker" would do it, but "docker" is going away. Also I have to >> have a single label for use in multi-configuration builds or jenkins >> will treat the two labels as an axis for test selection rather than as >> just a restriction for where the jobs can run. I could try to go back >> to using an expression, but IIRC that gave us things like *s in the >> path used for tests, which was not great. >> >> Can we maybe expand the Hadoop label? (or would "beefy" cover the set?) 
>> >> If the H* nodes are all the same, why do we need the labels HDFS, >> MapReduce, Pig, Falcon, Tez, and ZooKeeper in addition to the Hadoop >> label? > > > I was thinking the same yep, those could all go too imho but wanted to > discuss > that one seperately. > > Gav... > >> >> >> >> -- >> busbey > > -- busbey
Re: Jenkins JDK Matrix - and consolidating of versions.
Hi Gav! I updated all of the HBase related builds to use the 'JDK 1.x (latest)' labels for JDK selection, but now we have several multiple-configuration builds that fail. One of our community members tracked it down to the addition of spaces (and perhaps parens?) used in the JDK names, since these end up in the path of the working directory during multi-configuration builds where JDK version is one of the test axes. Any chance we could consolidate on JDK labels that don't have characters that are problematic, like spaces and parens? -busbey On 2016-08-04 20:03 (-0500), Gav wrote: > Hi all, > > Please note that today is the day 7 days have past since the 7 days notice > that I said I was removing some > jenkins JDK drop down options. > > Unfortunately a fair few projects have failed to move their builds to > another option. > > Therefore I have extended by another 3 days only. > > I have informed all PMCs just in case the rare scenario where a PMC has no > subscribers here. > > Below you will find a list of all Jenkins Jobs still using the deprecated > drop down options. > > Carefully check to see if your jobs are on the list and if so please take > action to change it. > > Any jobs still on the old options after this time I WILL MIGRATE THEM > MYSELF !!! > > HTH > > Gav... 
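The failure mode is reproducible outside Jenkins. A hedged sketch of why an axis value like "JDK 1.8 (latest)" in a workspace path trips up tooling (the directory name below is illustrative):

```shell
# Matrix builds embed the axis value in the workspace path,
# e.g. .../jdk/JDK 1.8 (latest)/... -- simulate that here.
dir="$(mktemp -d)/JDK 1.8 (latest)"
mkdir -p "$dir"
cd "$dir" && pwd    # careful quoting works fine...

# ...but any script that expands the path unquoted splits it into
# three words, none of which exist:
ls $dir 2>/dev/null || echo "unquoted use of the path fails"
```

Every script in the build tool chain would need to quote perfectly, which is why axis names without spaces or parens are the safer fix.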
> > Project Jobs still using :- > > 'latest1.8' > === > > Accumulo-master-IT > ACE-trunk > ActiveMQ-Artemis-Deploy > ActiveMQ-Artemis-Master > ActiveMQ-Artemis-Nightly-Regression-Test > ActiveMQ-Artemis-PR-Build > Airavata > Ant_BuildFromPOMs > Ant_Nightly > Aries-rsa > Aries-rsa > Aries-Tx-Control-Deploy > Aries-Tx-Control-Trunk-JDK8 > Calcite-Avatica-Master-JDK-1.8 > Calcite-Master-JDK-1.8 > Camel.trunk.fulltest.java8 > Camel.trunk.itest.karaf > Camel.trunk.itest.osgi > Camel.trunk.notest > cayenne-31 > Chemistry > cloudstack-marvin > cloudstack-pr-analysis > ctakes-trunk-compiletest > ctakes-trunk-package > CXF-trunk-deploy > CXF-Trunk-JDK18 > CXF-Trunk-PR > DeltaSpike-PR-Builder > DeltaSpike_Wildfly_10.1 > DeltaSpike_Wildfly_10 > Derby-10.11-suites.All > Derby-10.12-suites.All > Derby-JaCoCo > Derby-trunk > Derby-trunk-JaCoCo > Derby-trunk-suites.All > Geode-nightly > Geode-nightly-copy > Geode-release > Geode-spark-connector > Groovy > hadoop-qbt-osx-java8 > Hadoop-trunk-Commit > hadoop-trunk-osx-java8 > hadoop-trunk-win-java8 > HBase-1.1-JDK8 > HBase-1.2 > HBase-1.2-IT > HBase-1.3 > HBase-1.3-IT > HBase-1.4 > HBase-Trunk-IT > HBase-Trunk_matrix > incubator-eagle-develop > incubator-eagle-test > incubator-rya-develop > Jena_Development_Deploy > Jena_Development_Test > Jena_Development_Test_Windows > johnzon-multi > joshua_master > karaf-pr > Lucene-Artifacts-6.x > Lucene-Artifacts-master > Lucene-Solr-Clover-6.x > Lucene-Solr-Clover-master > Lucene-Solr-Maven-6.x > Lucene-Solr-Maven-master > Lucene-Solr-NightlyTests-6.x > Lucene-Solr-NightlyTests-master > Lucene-Solr-SmokeRelease-6.0 > Lucene-Solr-SmokeRelease-6.1 > Lucene-Solr-SmokeRelease-6.x > Lucene-Solr-SmokeRelease-master > Lucene-Solr-Tests-5.5-Java8 > Lucene-Solr-Tests-6.x > Lucene-Solr-Tests-master > Lucene-Tests-MMAP-master > maven-plugins-ITs-m3.1.x-with-maven-plugin-jdk-1.8_windows > MINA-trunk-jdk1.8-ubuntu > MINA-trunk-jdk1.8-windows > olingo-odata4-all-profiles > olingo-odata4-cobertura > 
olingo-odata4 > Precommit-HADOOP-OSX > PreCommit-TAJO-Build > River-dev-jdk8 > river-JoinManagerTests > river-JRMPactivationTests > river-LeaseTests > river-LookupServiceTests > river-PolicySecurityLoaderUrlTests > river-ReliabilityThreadTests > river-ServiceDiscoveryManagerTests > river-StartConfigIoIdExport > river-TransactionTests > ServiceMix-6.x > ServiceMix-master > ServiceMix-pr > Solr-Artifacts-6.x > Solr-Artifacts-master > Struts-JDK8-master > Struts-JDK9-master > Tajo-master-nightly > tinkerpop-master > Tobago > UIMAJ-SDK_java8 > ZooKeeper_branch35_jdk8 > > 'latest1.7' > === > > Ambari-branch-1.7.0 > Ambari-branch-2.0.0 > Ambari-branch-2.1 > Ambari-branch-2.2 > Ambari-trunk-Commit > Ambari-trunk-Commit-debug > Ambari-trunk-test-patch > Ambari-view > brooklyn-master-windows > Camel.2.15.x.fulltest > Curator-3.0 > CXF-3.1.x > DeltaSpike > Empire-db > Empire-db > Felix-Connect > Felix-FileInstall > flex-falcon-w2012-test > Geode-trunk-test-patch > Geronimo > Giraph-trunk-Commit > Groovy > Groovy > HADOOP2_Release_Artifacts_Builder > HBase-0.98-matrix > HBase-1.2 > HBase-1.2-IT > HBase-1.3 > HBase-1.3-IT > HBase-1.4 > HBase-Trunk-IT > HBase-Trunk_matrix > incubator-eagle-main > JMeter-trunk > JMeter-Windows > johnzon-multi > Lucene-Solr-Tests-5.5-Java7 > maven-plugins > maven-plugins-ITs-m3.0.4 > olingo-odata2 > PreCommit-SQOOP-Build > PreCommit-ZOOKEEPER-Build > Qpid-Java-Cpp-Test > Qpid-JMS-Deploy > Qpid-proton-c > Reef-pull-request-ubuntu > samza-freestyle-build > Sqoop2-cobertura > Sqoop2 > Sqoop-hadoop100 > Sqoop-hadoop200 > Sqoop-hadoop20 > Sqoop-hadoop23 > Struts-archetypes-JDK7-master > Struts-JDK7-master > Tamaya-Javadoc-Master > Tama
RE: JDK, Maven, Ant versions have been consolidated.
Isn't the wikipedia english corpus licensed either CC-BY-SA or GFDL? I thought those licenses weren't okay to have in an ASF source repo? -- Sean Busbey I set this up and configured Jenkins to use the test-data checked out from Subversion. Uwe - Uwe Schindler uschind...@apache.org ASF Member, Apache Lucene PMC / Committer Bremen, Germany http://lucene.apache.org/ > From: Steve Rowe [mailto:sar...@gmail.com] > Sent: Monday, August 15, 2016 8:40 PM > To: d...@lucene.apache.org > Subject: Re: JDK, Maven, Ant versions have been consolidated. > > Thanks Uwe for moving this process along. > > +1 to make a new SVN dir at https://svn.apache.org/repos/asf/lucene/test- > data/ and put the enwiki data file there. > > -- > Steve > www.lucidworks.com > > > On Aug 15, 2016, at 1:47 PM, Uwe Schindler > wrote: > > > > Hi, > > > > I thought about the wikipedia test files used by Jenkins: I think I would > commit them to SVN inside the Lucene/Solr project’s folder. The Nightly-Test > Jenkins jobs that use them could simply check them out into a separate dir of > the workspace. Byt that it is easier for us to update them. If you also think > this is fine, we can leave it like that. > > > > The ~jenkins/lucene.build.properties file is (as said before) > machine/hardware specific. I’d keep them in Jenkins’ home dir. We may > create a puppet module out of it, but this would prevent us from optimizing > or changing them quickly per project requirements. > > > > Finally the “ant ivy-bootstrap” task was for now separated into a Jenkins > job, that can be manually run (https://builds.apache.org/job/Lucene-Ivy- > Bootstrap/). In the future, we can add the “ivy-bootstrap” target as > dependency to the “ant jenkins” and other targets which are solely triggered > by Jenkins, so separately bootstrapping is no longer required (please keep in > mind that bootstrapping places the ivy.jar file in ~/.ant/lib, so outside > workspace – this is why it’s done separately!). 
I just have to fix the Jenkins- > specific targets in our build.xml in Git repo to only do the actual > bootstrapping if Ivy is not installed locally on Developer’s/Jenkins’ machine. > Then it’s a no-op on most builds. For now the separate Jenkins job is enough > to quickly do the bootstrap as workaround. > > > > If nobody stops me from committing the following file into SVN: > > uschindler@lucene1-us-west:/x1/jenkins/lucene-data$ ls -lh > > total 2.9G > > -r--r--r-- 1 jenkins jenkins 2.9G May 9 2015 enwiki.random.lines.txt > > > > I will do this later this evening and change the Jenkins builds to do an extra > checkout into the workspace next to Git. > > I’d suggest to place the folder contents of ~/lucene-data here: > https://svn.apache.org/repos/asf/lucene/test-data > > > > Uwe > > > > - > > Uwe Schindler > > uschind...@apache.org > > ASF Member, Apache Lucene PMC / Committer > > Bremen, Germany > > http://lucene.apache.org/ > > > > From: Uwe Schindler [mailto:uschind...@apache.org] > > Sent: Monday, August 15, 2016 12:47 PM > > To: d...@lucene.apache.org; gmcdon...@apache.org; 'Uwe Schindler' > > > Cc: 'builds' > > Subject: RE: JDK, Maven, Ant versions have been consolidated. > > > > Hi, > > > > the test files could be a puppet module. > > > > The lucene.build.properties in the Jenkins home dir is a “hardware specific” > file, as it allows to configure the number of parallel runners and where the > special test files are – one would not be able to use it on any other jenkins > slave. If it is missing, defaults are used, which are not good for optimal CPU > use. We could add them also to the puppet module, but this would make it > harder for use to change the file easily. > > > > All other files are unmodified. 
> > > > Uwe > > > > - > > Uwe Schindler > > uschind...@apache.org > > ASF Member, Apache Lucene PMC / Committer > > Bremen, Germany > > http://lucene.apache.org/ > > > > From: Gav [mailto:gmcdon...@apache.org] > > Sent: Monday, August 15, 2016 5:36 AM > > To: Uwe Schindler > > Cc: builds ; d...@lucene.apache.org > > Subject: Re: JDK, Maven, Ant versions have been consolidated. > > > > I'll make a copy of the jenkins home dir and get started. > > It might be an idea to add those test files to puppet wdyt? > > (I see nothing private about them) > > > > Gav... > > > > On Sat, Aug 13, 2016 at 3:13 AM, Uwe Schindler > wrote: > >> Hi Gav, > >> > >> I disabled all auto-running Jenkins Jobs of Luc
Re: Please update your jenkins job configs ASAP
Without the configuration matrix item for where to run, we run on arbitrary nodes rather than the H* nodes. That's fine if that's what we intend. Historically the arbitrary nodes have not had enough resources to run our test suite. (Also the image did not come through) On Tue, Nov 15, 2016 at 11:49 AM, Jonathan Hsieh wrote: > Ok, I've fixed the matrix jobs for 0.98, 1.4, and trunk builds are fixed > and running against appropriate JDKs now. > > The fix was to removed the "Slave" Configuration matrix item (that was > configured in the advanced options), and only have the JDK configuration > matrix items. I've set them up so the look like this now (image is from > trunk [1]) > > Jon. > > [1] [image: Inline image 1] > > On Tue, Nov 15, 2016 at 9:13 AM, Jonathan Hsieh wrote: > >> Looks like the matrix jobs -- trunk and 1.4 aren't actually running >> anything as well as the updated 0.98 jobs. This may have something to do >> with the JDK labels that were also updated recently[1]. Maybe this is >> related to the "space in a matrix label name" problem that caused us to >> change 1.2, and 1.3 into separate jobs? (we have one good run in our >> history of 0.98 using the now removed latest1.6, latest1.7 labels. [2] >> >> Jon >> >> [1] https://cwiki.apache.org/confluence/display/INFRA/JDK+Instal >> lation+Matrix >> [2] https://builds.apache.org/view/H-L/view/HBase/job/HBase- >> 0.98-matrix/369/ >> >> On Tue, Nov 15, 2016 at 12:48 AM, Michael Dürig >> wrote: >> >>> >>> Hi, >>> >>> On 15.11.16 5:05 , Ted Yu wrote: >>> Looking at a reportedly successful build: https://builds.apache.org/job/HBase-1.4/533/console I don't see the JDK 1.7 / 1.8 builds. The build only took 2 min 44 sec ? >>> >>> This is pretty much the same we are seeing with our jobs [1]. The UI >>> doesn't show any configured nodes/labels, which I guess is why nothing >>> runs. However it doesn't offer the option to assign nodes/labels neither. 
>>> >>> Michael >>> >>> >>> [1] http://markmail.org/message/zteyrxolwl46jcfh >>> >> >> >> >> -- >> // Jonathan Hsieh (shay) >> // HBase Tech Lead, Software Engineer, Cloudera >> // j...@cloudera.com // @jmhsieh >> >> > > > > -- > // Jonathan Hsieh (shay) > // HBase Tech Lead, Software Engineer, Cloudera > // j...@cloudera.com // @jmhsieh > >
Re: Precommit-Admin no longer running on Jenkins
What are the associated node labels for it running? That's the most common cause of no runs I know of. On Thu, Aug 3, 2017 at 11:52 AM, Allen Wittenauer wrote: > (BCC: d...@yetus.apache.org, since many Apache Yetus users are dependent upon > precommit-admin too) > > > I’m… very confused. > > The precommit-admin (which kicks off patch testing for a large, large > number of projects) is no longer automatically running. The job code and the > job configuration hasn’t changed in a very long time. However it no longer > appears to be getting fired off automatically by Jenkins. If one does a > build now, it works just fine. > > Could something have changed elsewhere to cause it stop getting > scheduled to run? > > I haven’t looked really hard yet, but I’m thinking other scheduled > jobs aren’t running either. But thought I’d ask here before I started > digging into those since this job is *definitely* broken. > > Thanks. -- Sean
Re: Precommit-Admin no longer running on Jenkins
I switched the job to just use the 'ubuntu' label. The job is super small and quick, only needs python AFAICT. Looks like the ubuntu label always has an executor handy. Let's see if that supports the "no H" theory. On 2017-08-04 08:19, Allen Wittenauer wrote: > > > On Aug 3, 2017, at 3:36 PM, Gavin McDonald wrote: > > > > Note that just means the Hadoop nodes, seeing as there is no "Ubuntu" > > label any more, its "ubuntu" > > > Oh, actually, I typo'd that. It's using lowercase ubuntu. :) > > But it's definitely more than just precommit-admin that is not > getting scheduled. hadoop-trunk-win and > hadoop-qbt-trunk-java8-linux-x86 didn't run, amongst others. It's > starting to look like any job that uses H isn't getting scheduled.
Re: Precommit-Admin no longer running on Jenkins
Oh wait, you meant H for hashing in the crontab. lol. /facepalm I'll go change that too. :) On 2017-08-04 09:06, "Sean Busbey" wrote: > I switched the job to just use the 'ubuntu' label. The job is super small and > quick, only needs python AFAICT. Looks like the ubuntu label always has an > executor handy. > > let's see if that supports the "no H" theory. > > On 2017-08-04 08:19, Allen Wittenauer wrote: > > > > > On Aug 3, 2017, at 3:36 PM, Gavin McDonald wrote: > > > > > > Note that just means the Hadoop nodes, seeing as there is no > > > "Ubuntu" label any more, its "ubuntu" > > > > > > Oh, actually, I typo'd that. It's using lowercase ubuntu. :) > > > > But it's definitely more than just precommit-admin that is not > > getting scheduled. hadoop-trunk-win and > > hadoop-qbt-trunk-java8-linux-x86 didn't run, amongst others. > > It's starting to look like any job that uses H isn't getting > > scheduled. >
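For anyone following along, the `H` being discussed is the hash token in Jenkins cron syntax (job config, "Build periodically" trigger), not a node name. A typical spec:

```
# "H" is replaced by a per-job hash, so periodic jobs spread out
# over the hour instead of all firing at minute 0:
H H/4 * * *
```

This one means roughly every four hours, at a job-specific minute chosen by Jenkins.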
Re: Automate Maven publish
Which project? Would you be publishing SNAPSHOTs or release artifacts? Who's the intended audience of the stuff that gets pushed? On 2018/05/23 12:14:35, Naveen Swamy wrote: > Hello all, > > I am looking to automate our package publish process to Nexus and Maven. > Our Project builds native code(C++) and Scala using Maven on different > platforms(osx/linux-cpu/linux-gpu), currently its been very painful to > perform this manually. We also want to move to continuous deployment and > automating would certainly help. > > I am sure there are other projects that might have already done this and > wanted to borrow and start from existing work. if your project has > automated publishing Could you please point me to it? > > Thanks, Naveen >
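Not project-specific, but one common shape for multi-platform publishing is to build on each platform and attach the native artifact under a per-platform classifier, with CI running one deploy per platform. A hedged sketch; the classifier names below are illustrative, derived from the platforms named in the question, and the actual mvn flags would depend on the project's pom:

```shell
# Illustrative only: show the per-platform deploy invocation a CI job
# might run; the echo keeps this a dry run.
for platform in osx-x86_64 linux-x86_64-cpu linux-x86_64-gpu; do
  echo "would run: mvn -B deploy -Dclassifier=$platform"
done
```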
Bookkeeper Jenkins job exhausting machine resources and failing
Hi Bookkeeper folks! I just killed this bookkeeper job: https://builds.apache.org/job/bookkeeper_postcommit_master_java9/141/ My apologies for not giving prior notice, but it was causing failures on the sibling executor for the node (H28). Looking at the console, it seems to have exhausted all resources on the node. As it's been slowly failing, most other jobs that get scheduled next to it fail with oddball errors (like forking failures during git fetching, shell launching, etc.). Could you take a look and make sure this isn't a problem that's going to keep coming up? (cc to builds@a.o for heads up in case other folks had their jobs scheduled on H28) -busbey
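For jobs prone to this, one hedged mitigation is to cap the build step's resources so a runaway test suite fails alone instead of starving the node's other executor. The specific values below are examples only, not a recommendation:

```shell
# Cap virtual memory and process count for this shell and everything
# it forks; guarded so the step degrades gracefully where limits
# cannot be applied.
ulimit -v 4194304 2>/dev/null || echo "vmem cap not supported here"
ulimit -u 2048 2>/dev/null || echo "process cap not supported here"
echo "vmem limit: $(ulimit -v) KiB, process limit: $(ulimit -u)"
```

Forking failures like the ones described above then show up inside the offending job rather than in its neighbors.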
Can we get the Corretto JDK on jenkins nodes?
What's involved in getting the Amazon Corretto JDK 8 preview up on the Jenkins workers as a JDK option? Or should projects that want to test with it just rely on docker?
Re: Using latest JDK8 for builds
Related, I'd love to know the mapping of the other jdk6 and jdk7 options present. While setting things up for matrix builds of 0.98 on 6 and 7 I noticed there were several options. On Wed, Apr 22, 2015 at 10:43 AM, Nick Dimiduk wrote: > Heya, > > I'd like to specify using the latest JDK8 install for a jenkins job. The > "JDK" drop down has two entries that look like they may be relevant: > "jdk-1.8.0" and "latest1.8". Neither match the pattern we were using for > our JDK7 builds, "JDK 1.7 (latest)". > > Which selection is appropriate? > > Thanks, > Nick >
[jira] [Commented] (BUILDS-77) install shellcheck on jenkins slaves
[ https://issues.apache.org/jira/browse/BUILDS-77?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14535169#comment-14535169 ] Sean Busbey commented on BUILDS-77: --- excellent! thanks Andrew. > install shellcheck on jenkins slaves > > > Key: BUILDS-77 > URL: https://issues.apache.org/jira/browse/BUILDS-77 > Project: Infra Build Platform > Issue Type: New Feature > Components: Jenkins >Reporter: Allen Wittenauer >Assignee: Andrew Bayer > > Would it be possible to get shellcheck installed on the Hadoop jenkins boxes > so that we can use it as part of patch testing? > Thanks! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (BUILDS-82) Remove MyOpenID as federated login option
Sean Busbey created BUILDS-82: - Summary: Remove MyOpenID as federated login option Key: BUILDS-82 URL: https://issues.apache.org/jira/browse/BUILDS-82 Project: Infra Build Platform Issue Type: Task Components: Jenkins Reporter: Sean Busbey OpenID login ([ref|https://builds.apache.org/federatedLoginService/openid/login?from=%2F]) still lists MyOpenId, even though the service is defunct ([ref|http://www.theregister.co.uk/2013/09/05/myopenid_closes_for_good_2014/]).
[jira] [Commented] (BUILDS-49) Surefire runner JVMs are being killed for HBase 0.98 Jenkins jobs
[ https://issues.apache.org/jira/browse/BUILDS-49?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14535156#comment-14585156 ] Sean Busbey commented on BUILDS-49: --- if these builds were happening in parallel, could it be the post-build zombie finder matching processes from other jobs? > Surefire runner JVMs are being killed for HBase 0.98 Jenkins jobs > - > > Key: BUILDS-49 > URL: https://issues.apache.org/jira/browse/BUILDS-49 > Project: Infra Build Platform > Issue Type: Bug > Components: Jenkins >Reporter: Andrew Purtell >Assignee: Andrew Bayer > > Occasionally the JVMs executing forked runners from Surefire are being > killed, failing HBase-0.98 Jenkins jobs. > For example, see https://builds.apache.org/job/HBase-0.98/794: > {noformat} > Running org.apache.hadoop.hbase.security.access.TestCellACLs > Running org.apache.hadoop.hbase.security.access.TestAccessController > Killed > Killed > {noformat} > or https://builds.apache.org/job/HBase-0.98/797/ > {noformat} > Running org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient > Running org.apache.hadoop.hbase.snapshot.TestRestoreFlushSnapshotFromClient > Killed > Killed > {noformat} > Is there something we can do to avoid this?
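The parallel-build theory is easy to sanity-check: a cleanup step that matches on command name alone cannot tell two jobs' processes apart. A self-contained illustration, with `sleep` standing in for a forked surefire JVM:

```shell
# Start two identical processes, pretending one belongs to this job
# and one to a sibling executor on the same node.
sleep 300 & this_job=$!
sleep 300 & other_job=$!

# A zombie finder matching only on the command name sees both, so it
# could kill the neighbor's forked runners too.
matched=$(pgrep -x sleep | wc -l)
echo "processes matching 'sleep': $matched"

kill "$this_job" "$other_job"
```

Per-job workspace paths in the match pattern, or the container isolation discussed later in this issue, would avoid the collision.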
[jira] [Commented] (BUILDS-49) Surefire runner JVMs are being killed for HBase 0.98 Jenkins jobs
[ https://issues.apache.org/jira/browse/BUILDS-49?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14602333#comment-14602333 ] Sean Busbey commented on BUILDS-49: --- bq. Well that would be embarrassing. Sounds like an argument to experiment with isolation via docker/LXC. The first release of the patch-tester project will have docker isolation for pre-commit tests. We could see if getting a minimal harness in place to do the same for nightly builds is doable? It's essentially the same kind of action just on the set of commits since last run instead of the set of commits from a jira, right? > Surefire runner JVMs are being killed for HBase 0.98 Jenkins jobs > - > > Key: BUILDS-49 > URL: https://issues.apache.org/jira/browse/BUILDS-49 > Project: Infra Build Platform > Issue Type: Bug > Components: Jenkins >Reporter: Andrew Purtell >Assignee: Andrew Bayer > > Occasionally the JVMs executing forked runners from Surefire are being > killed, failing HBase-0.98 Jenkins jobs. > For example, see https://builds.apache.org/job/HBase-0.98/794: > {noformat} > Running org.apache.hadoop.hbase.security.access.TestCellACLs > Running org.apache.hadoop.hbase.security.access.TestAccessController > Killed > Killed > {noformat} > or https://builds.apache.org/job/HBase-0.98/797/ > {noformat} > Running org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient > Running org.apache.hadoop.hbase.snapshot.TestRestoreFlushSnapshotFromClient > Killed > Killed > {noformat} > Is there something we can do to avoid this?