On 3/9/06, Caldarale, Charles R <[EMAIL PROTECTED]> wrote:
> > From: Leon Rosenberg [mailto:[EMAIL PROTECTED]
> > Subject: Re: Performance tricks with multiple tomcat instances
> >
> > So with 4 Tomcats I should reduce the number of GC threads to 2 per VM,
> > and with 2 Tomcats to 4 per VM, to have the same number of threads as
> > with a single Tomcat, and hope that shorter GC runs will increase
> > overall performance?
>
> You'll have to try it - this is all very application dependent.
>
> > What do you think: is it advisable to have multiple Tomcat/JVM
> > instances, or is the problem just that the CPUs don't scale?
>
> The bottlenecks are usually not in the app server itself, but tend to be
> in the actual app or auxiliary resources such as a database.  If you do
> have external scaling issues (e.g., database locking), multiple Tomcat
> instances won't help.  Adding Tomcat instances may increase overhead if
> the now-distributed app has to coordinate with its peers.  But you
> really can't predict without detailed knowledge of the app and its
> resource usage patterns.

Well, in our case the bottleneck really is the app server; the only
limited resource is the CPU. Memory consumption is minimal, db
accesses are cached away in multiple cascading cache hierarchies, and the
backend seems to scale very well. Distribution isn't an issue,
since the app already runs on ~20 servers (not counting the backend).
Synchronization between the containers is almost zero; the
only 'synchronizing' communication is performed over cascading
channels with a repeater in between.
If we didn't use JSPs, we would probably consume no resources
at all, but, darn, somehow we must transport the information to the
user :-)
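
For what it's worth, the GC-thread split discussed above can be pinned
explicitly when starting each instance. A minimal sketch, assuming a
HotSpot JVM with the parallel collector and 4 Tomcat instances sharing
an 8-CPU box (the flag values are illustrative starting points for
benchmarking, not tested recommendations):

```shell
# setenv.sh (or catalina.sh environment) for ONE of 4 Tomcat instances
# on an 8-CPU machine: cap parallel GC threads at 2 per JVM so the four
# collectors together don't oversubscribe the CPUs during concurrent GCs.
CATALINA_OPTS="-server -XX:+UseParallelGC -XX:ParallelGCThreads=2"
export CATALINA_OPTS
```

With 2 instances, the same reasoning would give -XX:ParallelGCThreads=4
per JVM; as Chuck says, only measurement will tell whether the shorter
pauses actually pay off.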

Anyway, thank you for the help. As soon as we've completed the tests,
I'll give you a status update, if you're interested.

>
>  - Chuck
>

Leon
