Unfortunately the configuration changes to fork the compilation of JSPs and the increased 'modificationTestInterval' value made no real improvement, so I am continuing to work towards replacing the language text request scope variables with property file references (roughly along the lines sketched below).
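The sort of thing I have in mind is the standard JSTL resource-bundle approach, something like this sketch (the bundle basename and message keys are made up for illustration, and the locale lookup will depend on how we end up detecting a visitor's language):

  <%@ taglib prefix="fmt" uri="http://java.sun.com/jsp/jstl/fmt" %>

  <%-- Instead of ~1000 request-scoped <c:set var="..."/> text variables per page... --%>

  <%-- ...pick the visitor's locale and bundle once (basename is hypothetical)... --%>
  <fmt:setLocale value="${pageContext.request.locale}"/>
  <fmt:setBundle basename="com.sportplan.text.Messages" var="msgs"/>

  <%-- ...and look each string up from Messages_en.properties, Messages_fr.properties, etc. --%>
  <h1><fmt:message key="drills.page.title" bundle="${msgs}"/></h1>
  <p><fmt:message key="drills.page.intro" bundle="${msgs}"/></p>

That should keep the per-request footprint down to the handful of strings a page actually renders rather than the whole language table.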
However, if this is a problem, it has made me concerned that request
scope variables in general aren't being released, not just the ones we
use for language text!

Ian

On 25 July 2011 10:45, Ian Marsh <i...@sportplan.com> wrote:
> Good morning and thanks for the replies!
>
> Unfortunately, as far as I am aware, our bank determines who we must
> use as the PCI Auditor, so our hands are tied with that one, but it
> does seem like they're just covering their backs by choosing the
> latest available version rather than actually determining a reason as
> to why it's needed.
>
> However, thanks for looking into my problem... the first thing that
> rang a bell was the mention that it could be to do with the JSPs. I
> have checked for development mode settings and neither has it
> enabled. The bell it rang, though, was that, for some slightly silly
> reasons that I am planning on finally clearing up today to see if they
> are the cause, we have, in certain pages of the site, roughly 1000
> <c:set> tags of text for different languages set in request scope. I
> know these should be implemented through properties files instead, but
> the excuses are for me to deal with, not you guys!
>
> Could this be linked to the cause though? Could it somehow be that the
> text variables are being held or referenced for longer than the
> request scope period and so quickly eating up memory?
>
> I also remembered some changes we made to the Tomcat settings in
> web.xml to fork the compilation of JSPs to a separate JVM, and we had
> set the 'modificationTestInterval' to 40 seconds. These may have had
> an impact as well, so I'll change these in the later versions of Tomcat
> to see if they have any effect.
>
> Thanks again,
>
> Ian
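For reference, the web.xml settings mentioned above are just init-params on Jasper's 'jsp' servlet declaration in Tomcat's conf/web.xml; from memory ours looks roughly like this, so treat the exact values as approximate:

  <servlet>
      <servlet-name>jsp</servlet-name>
      <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
      <!-- fork JSP page compilation into a separate JVM -->
      <init-param>
          <param-name>fork</param-name>
          <param-value>true</param-value>
      </init-param>
      <!-- only re-check a JSP (and its dependencies) for changes every 40 seconds -->
      <init-param>
          <param-name>modificationTestInterval</param-name>
          <param-value>40</param-value>
      </init-param>
      <load-on-startup>3</load-on-startup>
  </servlet>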
>
> On 22 July 2011 22:42, Pid <p...@pidster.com> wrote:
>> On 22/07/2011 20:17, Mark Thomas wrote:
>>> On 22/07/2011 17:26, Ian Marsh wrote:
>>>> Hi,
>>>>
>>>> I am in charge of running an Apache-2, Tomcat-7, Ubuntu-10.04 set up
>>>> for which we have to be PCI Compliant. We recently upgraded to
>>>> Apache-2.2.17 and Tomcat-7.0.8 (from Apache-2.0.x and Tomcat 5.0.28)
>>>> in order to comply with the requirements of the PCI Compliance checks
>>>> and ironed out any issues to get us back to a satisfactory running
>>>> state.
>>>
>>> Hmm. I think you need some better PCI auditors. If your app was running
>>> on Tomcat 5.0.x and you trust the app (which seems reasonable given it
>>> is doing something that requires PCI compliance) then an upgrade to
>>> 7.0.12 should be sufficient if you are using the HTTP BIO connector.
>>
>> Indeed.
>>
>> In my experience, I'd expect a QSA/PCI Auditor to be far, far more
>> conservative than to promote Tomcat 7.0.x as a 'safe' version compared
>> to 6.0.recent.
>>
>>
>> p
>>
>>
>>> Since Tomcat appears to be behind httpd, there is a strong chance you
>>> are using AJP (BIO or APR), in which case 7.0.2 should be sufficient.
>>>
>>> It appears your current auditors are blindly (and wrongly) assuming any
>>> vulnerability in Tomcat will impact your installation. Expect a demand
>>> to upgrade to 7.0.19 when they get around to reading the Tomcat security
>>> pages again.
>>>
>>> <snip/>
>>>
>>>> It seems that the character arrays [C, java.lang.String and
>>>> javax.servlet.jsp.tagext.TagAttributeInfo entries are considerably
>>>> higher in Tomcat-7.0.10 than in Tomcat-7.0.8 and I am wondering if
>>>> this could lead to an explanation for the difference.
>>>
>>> Maybe. What you really want to look at is the GC roots for those
>>> objects. That will tell you what is holding on to the references. Based
>>> on that data I'd start looking at the arrays of TagAttributeInfo, but
>>> that might be completely the wrong place to look.
>>>
>>> I've just triggered a heap dump on the ASF Jira instance (running
>>> 7.0.19) to see what that looks like. I'll report back what I find (once
>>> the 4GB heap has finished downloading - it may be some time).
>>>
>>>> Would anyone know of any changes between the two versions, possibly
>>>> linked to those memory entries, that could lead to such behaviour?
>>>
>>> Nothing jumped out at me from the changelog.
>>>
>>>> Any help or suggestions are greatly appreciated! I'm sorry for the long
>>>> post, but hopefully it's got the information needed to help with diagnosis.
>>>
>>> To be honest, there isn't enough info here to diagnose the root cause,
>>> but there is enough to demonstrate that there is probably a problem and
>>> maybe where to start looking. That might not seem like much, but it is a
>>> heck of a lot better than most of the reports we get here. Thanks for
>>> providing such a useful problem report.
>>>
>>> Mark
>>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org