Outstanding, thanks. I believe the job cleanup runs when the next build runs. You could manually trigger a build to test, or we can check the next time the build runs automatically (presuming it runs nightly).
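
For the multibranch jobs, my understanding from the CloudBees article linked below is that the discard policy has to be declared in the Jenkinsfile itself rather than only via the UI checkbox. A rough sketch of what that might look like is below; the job stage and the retention numbers are placeholders, not a recommendation:

    pipeline {
        agent any
        options {
            // Keep only recent builds and artifacts per branch. For multibranch
            // pipelines this appears to need to live in the Jenkinsfile, since
            // the UI "Discard old builds" setting does not seem to take effect.
            buildDiscarder(logRotator(numToKeepStr: '10', artifactNumToKeepStr: '5'))
        }
        stages {
            stage('Build') {
                steps {
                    echo 'existing build steps go here'
                }
            }
        }
    }
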
-Chris

> On Jun 10, 2019, at 11:10 AM, Matteo Merli <mme...@apache.org> wrote:
>
> For pulsar-website-build and pulsar-master, the "discard old builds"
> wasn't set unfortunately. I just enabled it now. Not sure if there's a
> way to quickly trigger a manual cleanup.
>
> Regarding "pulsar-pull-request": this was an old Jenkins job no longer
> used (since we switched to multiple smaller PR validation jobs a while
> ago). I have removed the Jenkins job. Hopefully that should take care
> of cleaning all the files.
>
>
> Thanks,
> Matteo
>
> --
> Matteo Merli
> <mme...@apache.org>
>
> On Mon, Jun 10, 2019 at 10:57 AM Chris Lambertus <c...@apache.org> wrote:
>>
>> Hello,
>>
>> The jenkins master is nearly full.
>>
>> The workspaces listed below need significant size reduction within 24 hours
>> or Infra will need to perform some manual pruning of old builds to keep the
>> jenkins system running. The Mesos “Packaging” job also needs to be corrected
>> to include the project name (mesos-packaging) please.
>>
>> It appears that the typical ‘Discard Old Builds’ checkbox in the job
>> configuration may not be working for multibranch pipeline jobs. Please refer
>> to these articles for information on discarding builds in multibranch jobs:
>>
>> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
>> https://issues.jenkins-ci.org/browse/JENKINS-35642
>> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
>>
>>
>> NB: I have not fully vetted the above information, I just notice that many
>> of these jobs have ‘Discard old builds’ checked, but it is clearly not
>> working.
>>
>>
>> If you are unable to reduce your disk usage beyond what is listed, please
>> let me know what the reasons are and we’ll see if we can find a solution. If
>> you believe you’ve configured your job properly and the space usage is more
>> than you expect, please comment here and we’ll take a look at what might be
>> going on.
>>
>> I cut this list off arbitrarily at 40GB workspaces and larger. There are
>> many which are between 20 and 30GB which also need to be addressed, but
>> these are the current top contributors to the disk space situation.
>>
>>
>> 594G  Packaging
>> 425G  pulsar-website-build
>> 274G  pulsar-master
>> 195G  hadoop-multibranch
>> 173G  HBase Nightly
>> 138G  HBase-Flaky-Tests
>> 119G  netbeans-release
>> 108G  Any23-trunk
>> 101G  netbeans-linux-experiment
>>  96G  Jackrabbit-Oak-Windows
>>  94G  HBase-Find-Flaky-Tests
>>  88G  PreCommit-ZOOKEEPER-github-pr-build
>>  74G  netbeans-windows
>>  71G  stanbol-0.12
>>  68G  Sling
>>  63G  Atlas-master-NoTests
>>  48G  FlexJS Framework (maven)
>>  45G  HBase-PreCommit-GitHub-PR
>>  42G  pulsar-pull-request
>>  40G  Atlas-1.0-NoTests
>>
>>
>>
>> Thanks,
>> Chris
>> ASF Infra