Yeah, that's fine - the only viable way to do this was across the board, so I'm sorry for messing with your jobs. =)

On Aug 20, 2015 17:41, "Uwe Schindler" <uschind...@apache.org> wrote:
> Hi Andrew,
>
> see my mail thread about the Lucene builds this morning with Gav:
>
> > From: Uwe Schindler [mailto:u...@thetaphi.de]
> > Sent: Thursday, August 20, 2015 11:28 AM
> > To: builds@apache.org
> > Subject: Re: Number of build processors on Lucene node
> >
> > Hi,
> >
> > The backlog of Lucene is generally large. That's intentional: we don't
> > trigger builds based on commits; we trigger them time-based, so that there
> > is always one build of everything in the queue. Because of our
> > pseudo-randomized test infrastructure, the build slave should never be out
> > of work, because every run may trigger a bug in Lucene or the Java VM. And
> > because Jenkins never queues the same job multiple times, it's perfectly
> > fine to always have at least one of each job waiting in the queue.
> >
> > If you don't want your statistics broken, you may exclude the lucene slave
> > from counting towards queue size. It's completely on its own and only
> > accepts explicitly assigned jobs.
> >
> > You can always remove triggered builds from the queue (I sometimes do this
> > for testing purposes or to trigger a special build manually). If deleted,
> > they get queued again a while later anyway.
> >
> > Uwe
> >
> > Am 20. August 2015 11:15:17 MESZ, schrieb Gavin McDonald
> > <gmcdon...@apache.org>:
> > >
> > >> On 20 Aug 2015, at 9:17 am, Uwe Schindler <uschind...@apache.org>
> > >> wrote:
> > >>
> > >> Hi,
> > >>
> > >> could someone please change the number of "build processors" for the
> > >> "lucene" Jenkins node back to 1? Currently it always executes 2 jobs in
> > >> parallel. The underlying server only has 4 CPU cores, and the Lucene
> > >> job configuration is set up to use all available CPU cores, so running
> > >> 2 builds in parallel on it is in most cases not a good idea. There are
> > >> in fact some jobs that don't require much CPU or are not multithreaded
> > >> (like artifact builds), but those are generally quick. The main tasks,
> > >> which take several hours to execute, are very CPU intensive - that is
> > >> the reason why we have our own slave.
> > >>
> > >> Any information why this was changed? Unfortunately I don't have the
> > >> permissions to configure this correctly myself.
> > >
> > > I changed it short-term to help with the backlog.
> > > When I did this (2 days ago) there were 166 builds in the queue, over
> > > 30 of those waiting on the Lucene node.
> > >
> > > Will change it back now.
> > >
> > > HTH
> > >
> > > Gav…
> > >
> > >> Uwe
>
> As said before, the Lucene builds only affect the Lucene slave, and that
> slave should be used as much as possible, so I reverted to the timer
> triggers. I am glad that you only changed builds which used timers;
> otherwise the whole cross-project dependency tree in Lucene would have been
> broken. So it was only a few jobs that had to start on a timer (per-commit
> triggering did not make sense for "nightly" jobs).
>
> Uwe
>
> -----
> Uwe Schindler
> uschind...@apache.org
> ASF Member, Apache Lucene PMC / Committer
> Bremen, Germany
> http://lucene.apache.org/
>
> > -----Original Message-----
> > From: Andrew Bayer [mailto:andrew.ba...@gmail.com]
> > Sent: Thursday, August 20, 2015 7:33 PM
> > To: builds@apache.org
> > Subject: FYI - disabled timer triggers across the board on
> > builds.apache.org
> >
> > I noticed a lot of jobs that were running every day regardless of whether
> > anything changed, eating up a lot of executors, so I bulk-removed all of
> > them and changed those jobs to poll for changes hourly instead. If your
> > project has one or two jobs that you need to run daily whether there are
> > code changes or not, you can re-enable the timer, but please do not do
> > that for more than a couple of jobs, and please do not do it for jobs
> > that take longer than half an hour.
> >
> > A.
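For readers following along, the two administrative changes discussed in this thread (setting a node back to one build processor, and swapping a daily timer trigger for hourly SCM polling) could be sketched in the Jenkins script console roughly like this. This is a hypothetical illustration, not what was actually run: the job name `Lucene-Solr-Tests-trunk` is invented for the example, the APIs are standard Jenkins core but should be checked against your Jenkins version, and `setNumExecutors` is only available when the node is a `Slave`.

```groovy
// Hypothetical sketch for the Jenkins script console ($JENKINS_URL/script).
import hudson.model.FreeStyleProject
import hudson.triggers.SCMTrigger
import hudson.triggers.TimerTrigger
import jenkins.model.Jenkins

def jenkins = Jenkins.instance

// 1) Set the "lucene" node back to a single build processor (executor):
//    the 4-core box should run one CPU-bound build at a time.
def node = jenkins.getNode('lucene')
if (node != null) {
    node.setNumExecutors(1)   // works when the node is a hudson.model.Slave
    jenkins.save()
}

// 2) Replace a timer trigger with hourly SCM polling on one job, which is
//    the bulk change described above. "H" spreads the polling minute
//    pseudo-randomly per job instead of firing every job at :00.
def job = jenkins.getItemByFullName('Lucene-Solr-Tests-trunk', FreeStyleProject)
if (job != null) {
    job.removeTrigger(jenkins.getDescriptor(TimerTrigger))
    job.addTrigger(new SCMTrigger('H * * * *'))
    job.save()
}
```

Re-enabling a nightly timer on a single job, as Andrew allows for, would be the reverse of step 2: remove the `SCMTrigger` and add `new TimerTrigger('H H * * *')`.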