Hi JR,
Based on your description of the problem, and as you have looked at
everything else, MaxThreads is the only option you have left us with.
Further below, however, you let slip that mod_jk is also involved.
Why? That is a really effective way to kill performance: you now have
not only Tomcat's thread scheduling to contend with, but the same
problem in Apache as well.
Two other things to play with are keep-alives, and perhaps trying the
Tomcat NIO connector (this was suggested by two other posters). You
will really need to disable keep-alives for busy sites, or you will
need to configure a LOT of threads.
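As a minimal sketch, assuming a Tomcat 6-style server.xml, both
suggestions might look like this (the port and the thread values are
illustrative, not recommendations):

```xml
<!-- HTTP connector switched from the default blocking protocol to
     NIO; maxKeepAliveRequests="1" effectively disables keep-alives. -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="200"
           acceptCount="100"
           maxKeepAliveRequests="1"
           connectionTimeout="20000" />
```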
On 16/02/2007, at 3:49 PM, j r wrote:
> I am gleaning from your comments that MaxThreads is the only thing to
> tweak. Yes, I really do have a connection issue: I have millions and
> millions of connection requests on a very small pool of servers. The
> app has been tuned constantly over the years. I am either bound to
> buy more servers or to tweak Tomcat to get more throughput. In
> reality, I probably need to do both.
Why are you running out of connections?
How many requests per second are you getting?
How long does it take to deal with one request?
Do you have keep-alives disabled? (They should be if you have that
much traffic.)
To be honest, the 750 threads are only really going to help you deal
with 'spikes' in traffic - unless you have 750 cores to handle your
load. At some stage (after you have saturated your backend queues)
your application will need to ramp this number down...
For example, if your machine can only deal with 100 requests per
second and you are receiving 200 requests per second, the backlog
will, after about 8 seconds, grow past your thread limit. This is of
course a simplified model, but the same is true if the application
sends requests to a backend which can only deal with x requests per
second.
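That back-of-envelope calculation can be sketched in shell; the rates
below are the hypothetical ones from the example, and max_threads
matches the 750 discussed in this thread:

```shell
# Requests arrive faster than the machine can service them, so the
# backlog grows by the difference every second until the connector's
# thread pool is exhausted.
arrival=200      # requests per second coming in (hypothetical)
capacity=100     # requests per second the machine can handle (hypothetical)
max_threads=750  # Tomcat maxThreads
surplus=$((arrival - capacity))
awk -v t="$max_threads" -v s="$surplus" \
    'BEGIN { printf "thread pool exhausted after %.1f seconds\n", t / s }'
```

With a surplus of 100 requests per second, 750 threads are consumed in
750/100 = 7.5 seconds, which is where "about 8 seconds" comes from.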
I would run 'ps auxH | grep -c java' every minute (or perhaps even
every 10 seconds) to see what is going on - I would suggest that you
probably have a cyclical number of connections. I would do the same
with Apache as well, as you will probably have the same problem there.
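A sketch of that sampling loop; the one-hour run length and the
'[j]ava' pattern (which stops grep from counting its own process
entry) are my additions:

```shell
# Print a timestamped Java thread count once a minute for an hour.
# 'ps auxH' lists one line per thread, so 'grep -c' counts threads.
for i in $(seq 1 60); do
    printf '%s %s\n' "$(date '+%H:%M:%S')" "$(ps auxH | grep -c '[j]ava')"
    sleep 60
done
```

Redirect the output to a file and plot it to see whether the thread
count really does cycle with your traffic.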
> On a large pageview day, we will overflow the 750 MaxThreads. We
> notice this via the error message that the MaxThreads limit has been
> exceeded. We have tweaked all pieces of the Tomcat config. I was
> hoping a post here could get more explanation of the parameters. We
> have our experience to fall back on, but I was hoping for more.
I would seriously suggest monitoring the number of threads you are using
- as 750 connector threads does not really sound healthy
- and I hope you are running this on Linux with so many threads...
> If MaxThreads is the main thing to tweak, we will continue doing so.
> There is a limit to this though. You should create a funnel for
> customer requests (webserver limits, mod_jk limits, and Tomcat
> limits). Exploding MaxThreads to a large number just for the sake of
> it does not seem to fit with having an acceptCount value, or with the
> funnel that should be created.
I honestly do not believe that 'tweaking' MaxThreads will do anything
other than help you get around 'spikes' in your traffic. One of the
sites I am working for also has millions of requests, and our thread
count varies from 200 threads at any one time (behind a complicated
backend) to 90 threads on a simple Tomcat logging application.
I would also recommend separating your static/image traffic from your
dynamic content - using separate URLs.
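As a sketch, with Apache in front: forward only the dynamic
application to Tomcat via mod_jk and let Apache serve the static files
itself (the paths and the worker name are hypothetical):

```apache
# httpd.conf fragment: only /app/* goes through mod_jk to Tomcat...
JkMount /app/* worker1
# ...while images are served straight from the filesystem by Apache.
Alias /images /var/www/static/images
```

That way a flood of image requests never touches the Tomcat thread pool
at all.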
Regards
Andrew
---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]