> On 28 Sep 2015, at 10:05, Vinayakumar B <vinayakum...@apache.org> wrote:
>
> Setting the version to a unique value sounds reasonable.
>
> Is there any way in mvn to clean up such artifacts installed, as part of
> cleanup in the same build instead of nightly cleanup?
>
Well, there's a dependency:purge-local-repository maven thing that could
maybe be set up to delete the stuff, but unless you can restrict it to only
the local build number, it's going to stamp on other builds.

http://maven.apache.org/plugins/maven-dependency-plugin/purge-local-repository-mojo.html

There's an explicit jenkins plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Maven+Repo+Cleaner+Plugin

Andrew Bayer is about to move -> cloudbees, so maybe he can review this; I'll
ask him if I see him at the apachecon coffee break.

Otherwise, well, bash and a complex enough "find ~/.m2/repository" path could
possibly do it.

> -Vinay
>
> On Sep 28, 2015 1:22 PM, "Steve Loughran" <ste...@hortonworks.com> wrote:
>
>> the jenkins machines are shared across multiple projects; cut the
>> executors to 1/node and then everyone's performance drops, including the
>> time to complete of all jenkins patches, which is one of the goals.
>>
>> https://builds.apache.org/computer/
>>
>> Like I said before: I don't think we need one mvn repo per build. All we
>> need is a unique artifact version tag on generated files. Ivy builds do
>> that for you; maven requires the build version in all the POMs to have a
>> -SNAPSHOT tag, which tells it to poll the remote repos for updates every
>> day.
>>
>> We can build local hadoop releases with whatever version number we
>> desire, simply by using "mvn versions:set" to update the version before
>> the build. Do that and you can share the same repo, with different
>> artifacts generated and referenced on every build. We don't need to play
>> with >1 repo, which can be pretty expensive. A du -h ~/.m2 tells me I
>> have an 11GB local cache.
>>
>>> On 26 Sep 2015, at 06:43, Vinayakumar B <vinayakum...@apache.org> wrote:
>>>
>>> Thanks Andrew,
>>>
>>> Maybe we can try making it 1 exec and trying that for some time. I think
>>> we also need to check what other jobs (hadoop ecosystem jobs) run on the
>>> Hadoop nodes.
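A minimal sketch of the shared-repo approach described above: stamp each
build with a unique version via the Versions Maven Plugin, then use find to
remove only that version's directories afterwards. The version string and
project coordinates here are illustrative, and the cleanup is demonstrated
against a throwaway directory tree rather than the real ~/.m2, so the sketch
is safe to run anywhere:

```shell
# Hypothetical build number; on Jenkins this would come from $BUILD_NUMBER.
BUILD_NUMBER="${BUILD_NUMBER:-1234}"
UNIQUE_VERSION="3.0.0-build${BUILD_NUMBER}-SNAPSHOT"

# Rewrite the version in every POM of the reactor (Versions Maven Plugin).
# Skipped when mvn is not on the PATH, e.g. when trying the cleanup alone.
if command -v mvn >/dev/null 2>&1 && [ -f pom.xml ]; then
  mvn versions:set -DnewVersion="${UNIQUE_VERSION}" -DgenerateBackupPoms=false
fi

# Stand-in for a local repo: one dir for this build, one for another version.
REPO="$(mktemp -d)/repository"
mkdir -p "${REPO}/org/apache/hadoop/hadoop-common/${UNIQUE_VERSION}"
mkdir -p "${REPO}/org/apache/hadoop/hadoop-common/2.7.1"

# Remove only this build's version directories; other versions survive.
find "${REPO}" -type d -name "${UNIQUE_VERSION}" -prune -exec rm -rf {} +

ls "${REPO}/org/apache/hadoop/hadoop-common"
```

Because the version directory name is unique to the build, the find is safe
to run against a repo shared by concurrent executors.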
>>> As HADOOP-11984 and HDFS-9139 are on the way to reducing build time
>>> dramatically by enabling parallel tests, HDFS and COMMON precommit
>>> builds will not block other builds for much time.
>>>
>>> To check this: I don't have access to the jenkins configuration. If I
>>> can get access, I can reduce it myself and verify.
>>>
>>> -Vinay
>>>
>>> On Sat, Sep 26, 2015 at 7:49 AM, Andrew Wang <andrew.w...@cloudera.com>
>>> wrote:
>>>
>>>> Thanks for checking, Vinay. As a temporary workaround, could we reduce
>>>> the # of execs per node to 1? Our build queues are pretty short right
>>>> now, so I don't think it would be too bad.
>>>>
>>>> Best,
>>>> Andrew
>>>>
>>>> On Wed, Sep 23, 2015 at 12:18 PM, Vinayakumar B
>>>> <vinayakum...@apache.org> wrote:
>>>>
>>>>> In case we are going to have a separate repo for each executor:
>>>>>
>>>>> I have checked; each jenkins node is allocated 2 executors, so we only
>>>>> need to create one more replica.
>>>>>
>>>>> Regards,
>>>>> Vinay
>>>>>
>>>>> On Wed, Sep 23, 2015 at 7:33 PM, Steve Loughran
>>>>> <ste...@hortonworks.com> wrote:
>>>>>
>>>>>>> On 22 Sep 2015, at 16:39, Colin P. McCabe <cmcc...@apache.org>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> ANNOUNCEMENT: new patches which contain hard-coded ports in test
>>>>>>>> runs will henceforth be reverted. Jenkins matters more than the 30s
>>>>>>>> of your time it takes to use the free port finder methods. Same for
>>>>>>>> any hard-coded paths in filesystems.
>>>>>>>
>>>>>>> +1. Can you add this to HowToContribute on the wiki? Or should we
>>>>>>> vote on it first?
>>>>>>
>>>>>> I don't think we need to vote on it: hard-coded ports should be
>>>>>> something we veto on patches anyway.
>>>>>>
>>>>>> In https://issues.apache.org/jira/browse/HADOOP-12143 I propose
>>>>>> having a better style guide in the docs.
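The "free port finder" approach referenced in the announcement above relies
on a standard trick: bind to port 0 and let the kernel assign an unused
port. Hadoop tests should use the project's own Java helpers for this; the
shell sketch below, which shells out to a small python3 one-liner only
because POSIX shell has no socket primitives, just illustrates the mechanism
and is not Hadoop's actual utility:

```shell
# Ask the kernel for a free TCP port: binding to port 0 means "any free port".
free_port=$(python3 -c 'import socket
s = socket.socket()
s.bind(("", 0))            # port 0: kernel picks an unused ephemeral port
print(s.getsockname()[1])
s.close()')
echo "free port: ${free_port}"
```

A test that does this at startup works on any Jenkins executor, whereas a
hard-coded port fails whenever another build on the same node happens to be
using it.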