[JIRA] (JENKINS-15560) Lazy loading causes ...
I'm not sure of an easy way to reproduce this; restarting Jenkins seems to be a temporary workaround.
The way I would try to reproduce it is to load Jenkins with lots of jobs and lots of builds for each job. My guess is that the problem can occur when the memory used by the job/build objects is very high relative to the available memory (sorry, I don't know how to define "high"). Having the jobs limit the number of archived builds they keep, and then running many builds beyond that limit so that old builds are continually deleted, might help trigger the problem; I suspect this because a restart temporarily fixes it for us. Although we only keep 20 to 50 builds archived per job, our turnover is high enough that within two weeks every build has been replaced, and the problem seems to show up the longer we run. A rough sketch of that setup is below.
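In case it helps, here is a rough Script Console (Groovy) sketch of the kind of setup I mean, assuming a reasonably recent Jenkins core. The job names, the job count of 200, and the 20-build retention limit are made up for illustration; the idea is just to create many jobs whose old builds keep getting rotated out while the instance stays up for a long time.

import jenkins.model.Jenkins
import hudson.model.FreeStyleProject
import hudson.tasks.LogRotator

def jenkins = Jenkins.instance

// Create many jobs, each keeping only the last 20 builds
// (illustrative numbers, not what we actually run).
(1..200).each { i ->
    def job = jenkins.createProject(FreeStyleProject, "lazy-load-repro-${i}")
    // LogRotator args: daysToKeep, numToKeep, artifactDaysToKeep, artifactNumToKeep
    job.buildDiscarder = new LogRotator(-1, 20, -1, -1)
    job.save()
}

// Schedule a build for every job; run this part repeatedly, well past the
// 20-build limit, so old builds are continually discarded while Jenkins
// keeps running.
jenkins.getAllItems(FreeStyleProject).each { it.scheduleBuild2(0) }

The builds themselves don't need to do anything useful; the point is simply a lot of job/build objects being loaded and rotated over days or weeks, which seems to match when we start seeing the failures.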
Sorry for the vague and imprecise descriptions. When the problem occurs, we usually go into panic mode because Jenkins begins to fail in a variety of ways and we don't have a lot of time before we get flooded with calls from managers/engineers complaining about false build/test failure notifications, busted dashboards, etc.