Just wanted to comment that I'm having very similar issues with an
application whose production version I upgraded from 5.1 to 5.2. The
5.1 app ran fine indefinitely with no issues.
After the upgrade it took about 1.5 days for the app to crash from
running out of PermGen space (at that point PermGen was not being tuned
at all via JVM properties). After restarting, it was stable for a few
days; then, after another restart, it crashed in one day. This behavior
was never observed on a demo machine running the same app with a very
long uptime (months). The only differences between the apps are the
database connected to and the number of users on the system over time.
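Since the app above was running with PermGen "not being tuned at all," here is a sketch of how PermGen is sized on a pre-Java-8 HotSpot JVM; the values and the dump path are placeholders for illustration, not a recommendation for this particular app:

```shell
# Sketch only: raise the PermGen ceiling and capture diagnostics when it
# is exhausted. These are HotSpot flags; Java 8+ removed PermGen in
# favor of Metaspace (-XX:MaxMetaspaceSize instead).
JAVA_OPTS="-Xmx768m \
  -XX:PermSize=128m \
  -XX:MaxPermSize=256m \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/var/log/app"
```

With -XX:+HeapDumpOnOutOfMemoryError set, the resulting dump can be opened in a heap analyzer to see which classloaders are pinning classes, which is usually the fastest way to tell a Tapestry issue from an application leak.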
No real resolution to offer yet, nor am I even sure whether this is
inherently a Tapestry issue or a problem in our private code, but I'll
be watching this thread and will comment if I find anything related.
-Rich
On 06/28/2011 12:16 PM, Norman Franke wrote:
On Jun 24, 2011, at 7:28 PM, Kalle Korhonen wrote:
On Fri, Jun 24, 2011 at 3:33 PM, Norman Franke <nor...@myasd.com> wrote:
After I had everything working pretty well, I put it onto the
production server, where it ran for a few hours and then died with a
PermGen exception. Previously, my app would run for months with the
64 MB of PermGen allocated to servers by default. Now, after upgrading
to 5.2.5, it wouldn't run for more than a few hours at 128 MB. I ended
up setting it to 512 MB, but that just seems outrageous. Is it normal
to require 8x the PermGen? I haven't made any other changes to my app,
just those required to upgrade.
Good post, thanks for the insights. As for the permgen usage, perhaps
it's not normal, but it is expected and even documented. At 250 pages,
your web application is likely bigger than a typical Tapestry app, and
permgen consumption correlates with the size of the web app. However,
in return for higher permgen usage you'll have lower heap consumption,
so you'll get better scalability. What's the max you are allocating to
the JVM (the -Xmx), and have you tried finding a lower setting that
would still work? Permgen usage is fixed for the whole JVM, and no more
will be required even if traffic increases, unlike in the T5.1 case
with the page pool. Since you have an internal app with a fairly
predictable traffic pattern and scalability requirements, it may not
matter for you, but for the common case it's a win with only minor
disadvantages (memory is cheap).
I'm allocating 768 MB for the heap. I haven't tried lower, but jconsole
indicates I'm using no more than 80 MB. So I'm a bit puzzled at how it
exceeded 128 MB. Perhaps when it threw an exception while I was
getting some of the conversion issues resolved? Does throwing an
exception leak PermGen?
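As an alternative to watching jconsole, the same numbers can be logged from inside the app via the standard java.lang.management API. This is a minimal sketch, assuming nothing about the app itself; the pool names it prints vary by collector and JVM version ("PS Perm Gen", "CMS Perm Gen", or "Metaspace" on Java 8+):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class NonHeapUsage {
    public static void main(String[] args) {
        // Iterate every memory pool the JVM exposes and report the
        // non-heap ones; PermGen (or Metaspace) is among them.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.NON_HEAP) {
                long usedKb = pool.getUsage().getUsed() / 1024;
                long maxKb = pool.getUsage().getMax() / 1024;  // -1/1024 if unbounded
                System.out.printf("%s: %d KB used (max %d KB)%n",
                        pool.getName(), usedKb, maxKb);
            }
        }
    }
}
```

Dropping something like this into a periodic logging task would show whether the non-heap pool grows steadily (a classloader leak) or jumps during specific operations, such as the exceptions mentioned above.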
-Norman
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tapestry.apache.org
For additional commands, e-mail: users-h...@tapestry.apache.org
---------------------------------------------------------------------