> From: Alexey Solofnenko [mailto:[EMAIL PROTECTED] 
> Subject: Re: Tomcat with 8 GB memory
> 
> No, each of two 4GB processes will have only a half of the 
> objects under the same load.

There's a significant number of objects created by the container and the
webapps that are essentially permanent; with two JVM instances, you now
have two copies of those, not just one.  The number of short-lived
objects in existence during request processing depends on the number of
concurrent requests, but not directly on the request rate.

> And I heard that GC does not scale linear with heap size.

This was true in the days of primitive GC algorithms, but the current
ones are little affected by gross heap size.  Having labored under the
old belief myself, I was somewhat surprised when measurements showed
only very minor variations across different heap sizes.  (What variation
there is probably has more to do with lower L1/L2/L3 cache hit rates on
larger heaps than with anything inherent in the GC algorithms themselves.)
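
If you want to check that on your own hardware, the per-collector
counters exposed through java.lang.management make the comparison easy.
A minimal sketch (again, the class name is just illustrative); run the
same load test at different -Xmx settings and compare the cumulative
collection time reported:

  import java.lang.management.GarbageCollectorMXBean;
  import java.lang.management.ManagementFactory;

  public class GcStats {
      public static void main(String[] args) {
          // Cumulative collection counts and times since JVM start, per collector.
          for (GarbageCollectorMXBean gc
                  : ManagementFactory.getGarbageCollectorMXBeans()) {
              System.out.printf("%s: %d collections, %d ms total%n",
                      gc.getName(), gc.getCollectionCount(),
                      gc.getCollectionTime());
          }
      }
  }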

> And this is without multi-threading performance considerations.

Note that modern GC operations are parallelized, so reducing the CPU
resources available to each instance by running multiple JVMs makes a
given GC cycle take longer.  There is little interaction among the
parallel GC threads, so lock conflicts and cache invalidations don't
impact GC much.  (Which says nothing about whether or not a given app
can benefit from having more CPUs available.)
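
If you do end up running more than one JVM on the box, one knob worth
knowing about is -XX:ParallelGCThreads, which caps the GC worker threads
per instance.  A minimal sketch of the arithmetic (the jvms value here is
just a hypothetical two-instance setup):

  public class GcCpuBudget {
      public static void main(String[] args) {
          int cpus = Runtime.getRuntime().availableProcessors();
          int jvms = 2;  // hypothetical: two Tomcat instances on one machine
          // Splitting the CPUs keeps the combined GC worker threads of all
          // instances from exceeding the physical CPU count, e.g. start each
          // JVM with -XX:ParallelGCThreads=<cpus / jvms>.
          System.out.println("CPUs visible to this JVM: " + cpus);
          System.out.println("GC threads per instance: "
                  + Math.max(1, cpus / jvms));
      }
  }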

> As usual, your mileage may vary and only tests can tell for sure.

Most definitely.

 - Chuck

