> 1. I had always used top to see memory used until I saw the system monitor
> tools in Slackware. Had not compared the two. At this moment, the system
> monitor is reporting .96GB of memory used while top and vmstat are reporting
> 3.6GB... quite a difference. From now on, top/vmstat it is. Further, the
> fact that this machine is running that close to the 4GB physical memory
> would seem to make it a candidate for failure with a fair amount of
> activity. Today could be interesting and revealing.
When you say top is reporting 3.6 GB used, do you mean system-wide, or for your Tomcat Java process? It is perfectly normal for top to report that the global "used" memory is almost equal to the total RAM in the system, and for the global "free" memory to be close to 0. The kernel uses the rest of the RAM for disk cache.

For example, from one of my systems with 8 GB of RAM, which is doing nothing of interest:

Mem:   8190912k total,  8138948k used,    51964k free,   298692k buffers
Swap:  1020116k total,    46692k used,   973424k free,  6423912k cached

top reports that only 51,964k is free, but 6,423,912k of the "used" memory is nothing more than disk cache, which the kernel will drop as soon as a process requests more RAM. When you look at the numbers from top, you need to add the "cached" figure to the "free" figure to find out how much is really available for the system to allocate to processes on demand.

I don't believe the OOM killer would do anything until it has reduced the cache down to near 0. And if you can't find a log entry indicating that the OOM killer ran, then I doubt it ran, unless you have run into two bugs at once in completely different tools.

Sorry... this message is no help in resolving your issue.

Dan

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
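The "add cached to free" arithmetic described above can be sketched directly from /proc/meminfo on Linux. This is a rough check, assuming the conventional field names (MemFree, Buffers, Cached); note that newer kernels also expose a MemAvailable field, which is a better single estimate of allocatable memory.

```shell
# Sum MemFree + Buffers + Cached from /proc/meminfo to estimate how much
# memory the kernel could actually hand out to processes on demand.
# Values in /proc/meminfo are reported in kB.
awk '/^MemFree:/ {free=$2}
     /^Buffers:/ {buf=$2}
     /^Cached:/  {cached=$2}
     END {printf "effectively free: %d kB\n", free + buf + cached}' /proc/meminfo
```

On the 8 GB system quoted above, this would report roughly the 51,964k "free" plus the 6,423,912k "cached" together, rather than the misleadingly small "free" figure alone.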