It's not so much the trace part in this case as the exact error. Sometimes the exact error will help you know what you need to look into:
java.lang.OutOfMemoryError: Java heap space vs. java.lang.OutOfMemoryError: PermGen space

Here are a couple of things that make app server memory issues more difficult:

1) You sometimes don't know whether it's the app that is leaking memory or the app server itself.

2) Modern app servers restart and redeploy applications without restarting the app server, so the memory leak might be from a previous application instance or a previous deployment. I think someone reported a possible Cayenne issue for that recently.

I generally try to give an app server (or Eclipse, for that matter) as much memory as possible. Most of the time, app servers run on dedicated machines, so if you're not allowing the app server to use the memory, it's going to waste. Obviously, that's not as easy if you have to share memory among multiple JVM instances.

The only ways I know of to determine how much memory is needed are to actually measure it or to calculate it somehow. Most of the time, it's far faster to just bump up the memory. At some point, if there's a real problem, that still doesn't help and you'll have to measure or calculate usage anyway.

I think there's an option to the Sun JVM to log periodic memory statistics, but I might be thinking of something else.

On Tue, Mar 13, 2012 at 3:41 PM, Joe Baldwin <jfbald...@earthlink.net> wrote:
> Mike,
>
> Thanks for your ideas. I am still a bit mystified by the rules concerning
> garbage collection. I worked on a project some time back that dealt with this and became
> very adept at managing memory. However, when I have advanced software like
> Tomcat & Cayenne in between my software and the JVM, I get a tad
> confused about what is going on.
>
> What I have been reading today tells me that the "rule of thumb" is to simply
> increase memory. However, I have two conflicting thoughts about this:
> 1. If I do, in fact, have an error in my code, then increasing memory
> will simply hide the error for a while.
> 2. If you look at it like an operating system: if you try to run OS X
> on a computer with 128MB of RAM, then you are insane - and no amount of
> correct programming will allow your app to run without running out of memory.
>
> As to your questions: the stack trace was not terribly helpful, since it
> indicated an out-of-memory error when a query was launched (on a table that
> had all of 7 items in it).
>
> Your idea of trying to set up a similar config on my dev server may be a good
> one. I probably need to take a look at the general health of the memory
> management. Of course, I could just be running with way too little memory.
>
> I wonder what a good base memory setting should be: 64, 128, 512, 1 gig? The
> comments on the web seem to indicate all of them.
>
> Thanks
> Joe
>
>
>
> On Mar 13, 2012, at 3:19 PM, Mike Kienenberger wrote:
>
>> If you have almost no activity, why not set up a duplicate environment
>> running the same version of Tomcat and hit your application using
>> JMeter or some other testing tool?
>>
>> Or perhaps you could get sent the application's http access.log file and
>> duplicate the exact series of requests that generated the problem in
>> your dev environment.
>>
>> That said, Tomcat did often seem to have intrinsic memory issues,
>> which is another reason I stopped using it a few years back.
>>
>> Also, you might want to ask what the exact stack trace is. We've had
>> situations where it was a Tomcat permgen memory issue.
>> See this article for details -- there are more details in the comments by
>> others:
>>
>> http://www.mkyong.com/tomcat/tomcat-javalangoutofmemoryerror-permgen-space/
>>
>> Again, a disclaimer: I haven't used Tomcat personally in a while,
>> although some of my colleagues continue to use it for development. And
>> we don't run app servers on it.
>>
>>
>> On Tue, Mar 13, 2012 at 2:59 PM, Joe Baldwin <jfbald...@earthlink.net> wrote:
>>> OK, I think I may have run into this before. The ultimate "solution"
>>> was to increase memory - however, I am concerned that it may have been a
>>> quick fix and not a long-term fix.
>>>
>>> The problem is out-of-memory errors associated with the Tomcat heap.
>>>
>>> I have a webapp (powered primarily by Cayenne). The database has *very*
>>> little in it. I am essentially serving data (via Cayenne & Tomcat) and
>>> images (via Tomcat).
>>>
>>> I have a private Tomcat instance running on a webhost in a "shared"
>>> environment, which means that I *absolutely* cannot attach a
>>> profiler.
>>>
>>> I am being told by the webhost IT people (who are not always objective)
>>> that my app is leaking memory (badly), and that this is what
>>> caused Tomcat to crash.
>>>
>>> My intuition tells me that, with almost no activity on the website (because
>>> it is not live yet) and Cayenne's memory management, I should be able to
>>> manage memory well, but that is not the case.
>>>
>>> So, if my goal is to determine what the problem is, and I simply
>>> increase the heap size, won't I just be masking a potential problem? I.e., if
>>> the app runs fine for a while and then mysteriously causes Tomcat to run out
>>> of heap space, couldn't there be a memory leak?
>>>
>>> If there is a memory leak, and I don't see it on my development server, and
>>> I *can't* use a profiler on my webhost, then how do I get visibility into
>>> the memory usage?
>>>
>>> Thanks
>>> Joe
>>>
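
P.S. The "option to log periodic memory statistics" I was half-remembering is probably just the Sun JVM's GC logging flags. As a sketch (on a standard Tomcat layout this would go in bin/setenv.sh, and the sizes below are placeholders rather than recommendations -- adjust them to whatever your host actually allows), something like:

    # Example heap and PermGen ceilings -- placeholder values only
    CATALINA_OPTS="$CATALINA_OPTS -Xmx512m -XX:MaxPermSize=256m"

    # Log GC activity with timestamps so heap usage can be watched over time
    CATALINA_OPTS="$CATALINA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:$CATALINA_BASE/logs/gc.log"

    # Dump the heap on OutOfMemoryError so it can be analyzed later (e.g. with jhat or Eclipse MAT)
    CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=$CATALINA_BASE/logs"

The gc.log shows heap occupancy before and after each collection; if the "after GC" number keeps climbing while traffic stays flat, that points at a leak rather than an undersized heap.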
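
And since you can't attach a profiler on the shared host, one low-tech alternative is to have the webapp log its own memory numbers. This is only a sketch under assumptions -- the class name is made up, the 60-second interval is arbitrary, and you'd still need to register the listener in web.xml (or annotate it with @WebListener on Servlet 3.0) -- but it's enough to watch the trend in catalina.out:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    /** Hypothetical listener that logs heap usage once a minute. */
    public class MemoryStatsListener implements ServletContextListener {

        private ScheduledExecutorService scheduler;

        public void contextInitialized(ServletContextEvent sce) {
            scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    Runtime rt = Runtime.getRuntime();
                    long usedMb  = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
                    long totalMb = rt.totalMemory() / (1024 * 1024);
                    long maxMb   = rt.maxMemory() / (1024 * 1024);
                    // Ends up in catalina.out on a stock Tomcat install.
                    System.out.println("heap used=" + usedMb + "MB total=" + totalMb
                            + "MB max=" + maxMb + "MB");
                }
            }, 0, 60, TimeUnit.SECONDS);
        }

        public void contextDestroyed(ServletContextEvent sce) {
            if (scheduler != null) {
                scheduler.shutdownNow();
            }
        }
    }

It won't tell you *what* is leaking the way a profiler or a heap dump would, but a used-heap figure that keeps ratcheting up on an idle site is usually enough evidence to take back to the webhost.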