To monitor Java memory at crash time, you can add these directives to
JAVA_OPTS:

  -XX:+HeapDumpOnOutOfMemoryError
  -XX:HeapDumpPath=/your/tomcat/folder/memorydump.hprof

That way, if Tomcat runs out of memory, you get an image of the heap
(memorydump.hprof) that you can analyze with an external application like
MemoryAnalyzer [ http://www.eclipse.org/mat/ ].
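For example, here is a minimal sketch of how the flags could be wired in,
assuming a standard Tomcat 6 layout where catalina.sh sources
$CATALINA_HOME/bin/setenv.sh (the dump path is a placeholder; point it at a
directory the Tomcat user can write to):

  # $CATALINA_HOME/bin/setenv.sh -- picked up automatically by catalina.sh
  # Keep the existing heap/PermGen settings and append the dump directives.
  JAVA_OPTS="$JAVA_OPTS -Xms1024m -Xmx1024m -XX:PermSize=368m -XX:MaxPermSize=368m"
  JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError"
  JAVA_OPTS="$JAVA_OPTS -XX:HeapDumpPath=/your/tomcat/folder/memorydump.hprof"
  export JAVA_OPTS

One caveat: the dump is only written when the JVM itself throws an
OutOfMemoryError. If the process is killed from outside (for example by the
operating system), no .hprof file will appear.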
2010/1/13 Carl <c...@etrak-plus.com>

> From the original posting:
>
> This is a new server, a Dell T110 with a Xeon 3440 processor and 4GB
> memory. I have turned off both the turbo mode and hyperthreading.
>
> The environment:
>
> 64 bit Slackware Linux
>
> java version "1.6.0_17"
> Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
> Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)
>
> Tomcat: apache-tomcat-6.0.20
>
> These are the current JAVA_OPTS="-Xms1024m -Xmx1024m -XX:PermSize=368m
> -XX:MaxPermSize=368m"
>
> In the previous posting, I noted that I have observed the memory usage
> and general performance with Java VisualVM and have seen nothing
> strange. GC seems to be performing well, and the memory rarely gets
> anywhere near the max. New information: I thought I was seeing GC as
> memory usage was going up and down, but in fact it was mostly people
> coming onto and leaving the system. After several hours, the memory
> settles to a baseline of about 375MB. Forced GC never takes it below
> that value, and the ups and downs from people coming onto and leaving
> the system also return it to pretty much that value. The maximum memory
> used was never above 700MB for the entire day.
>
> The server runs well, idling along at 2-5% load except for a quick
> spike during GC, serving JSPs, etc. at a reasonable speed. Without
> warning, and with no tracks in any log (Tomcat or system) or on the
> console, the JVM will just go away, disappear. New information: the JVM
> does not just go away; somehow Tomcat shuts down, as the ports used by
> Tomcat are closed (pointed out by Konstantin). Sometimes the system
> will run for a week, sometimes for only several hours. Initially, I
> thought the problem was the turbo mode or hyperthreading but, no, the
> problem persists.
>
> When Tomcat shuts down, the memory that it held is still being held (as
> seen from top), but it is nowhere near the machine's physical memory.
>
> The application has been running on an older server (Dell 600SC, 32 bit
> Slackware, 2GB memory) for several years and, while the application
> will throw exceptions now and then, it never crashed. This led me to
> believe the problem had something to do with the 64 bit JVM but,
> without seeing errors anywhere, I can't be certain and don't know what
> I can do about it except go back to 32 bit.
>
> New information:
>
> Last evening, I observed the heap and permGen memory usage with Java
> VisualVM. It was running around 600MB before I forced a GC and 375MB
> afterward. Speed was good. Memory usage from top was 2.4GB. Five
> minutes later, Tomcat stopped, leaving no tracks that I could find. The
> memory usage from top was around 2.4GB. Java VisualVM was still showing
> 400MB+ although the Tomcat process was gone. I restarted Tomcat (did
> not reboot), so Tomcat had been shut down gracefully enough to close
> the ports (8080, 8443, 443). Tomcat stayed up for less than an hour
> (under light load) and stopped again. The memory used according to top
> was less than 3GB, but I didn't get the exact number. I restarted it
> again (no server reboot), and it ran for the rest of the night (light
> load); top was showing 3.3GB for memory this morning.
>
> I brought up a new server last night and have switched to that server
> for production (same Linux, JDK, server.xml, JAVA_OPTS, etc.). It would
> seem that if the problem is with my application or the JVM, it will
> follow me to the new server.
>
> Does anyone have any ideas how I might track this problem down?
>
> Thanks,
>
> Carl
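One more thing that may be worth checking, given that top reports 2.4-3.3GB
while the Java heap never passes 700MB: compare the process's resident size
with an on-demand heap dump, and look at the kernel log when the process
disappears. A rough sketch, assuming the JDK 6 jmap tool is on the PATH
(the pgrep pattern and file name are placeholders for your setup):

  # Find the Tomcat JVM's pid.
  PID=$(pgrep -f org.apache.catalina.startup.Bootstrap)

  # Resident and virtual size as the OS sees them, for comparison with
  # what Java VisualVM reports for the heap.
  ps -o pid,rss,vsz -p "$PID"

  # On-demand heap dump, loadable in Eclipse Memory Analyzer.
  jmap -dump:format=b,file=/tmp/tomcat-heap.hprof "$PID"

  # After the process vanishes, check whether the kernel killed it.
  dmesg | grep -i -E 'out of memory|oom|killed process'

If the kernel's out-of-memory killer is terminating the JVM, it leaves no
trace in the Tomcat logs; the only evidence shows up in dmesg or
/var/log/messages, which would match the symptoms described.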