Peter Lin wrote:
I've done testing on SLES 9/64 with JDK 5 and the current APR release from Apache (apr-1.1.1). Performance is equal, or APR is slightly faster, but what's more important is the scalability for keep-alive connections. Now you can have hundreds of keep-alive connections without going over the thread limit.
This wouldn't really happen, as keepalive gets a lot more aggressive (i.e., the timeout is shorter) and keepalive is disabled when the number of busy threads gets too high.
However, this does put some strain on the scheduler (it should be ok with a modern setup) and on resource usage (allocating a thread and all the resources it needs for request processing isn't insignificant, so dedicating most of these threads to blocking on a read isn't that great).
The point is to get the benefits of keepalive (much better page loading times and network usage) without the cost (besides the cost of keeping a socket open, which obviously cannot be avoided). This should be good for one-machine web servers, and should make Tomcat more appealing for that usage.
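To make the poller idea above concrete, here is a minimal sketch, assuming a hypothetical KeepAlivePoller class; this is not the AprEndpoint code (the real connector drives APR's poll() through JNI), and java.nio is only used here to keep the example self-contained:

import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class KeepAlivePoller implements Runnable {

    private final Selector selector;
    private final Queue<SocketChannel> pending = new ConcurrentLinkedQueue<SocketChannel>();
    private final ExecutorService workers = Executors.newFixedThreadPool(150); // like maxThreads

    public KeepAlivePoller() throws IOException {
        selector = Selector.open();
    }

    // A worker calls this after finishing a request on a connection that stays open,
    // instead of blocking on the next read itself.
    public void add(SocketChannel socket) {
        pending.add(socket);
        selector.wakeup();
    }

    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                // Register the sockets queued by worker threads.
                SocketChannel socket;
                while ((socket = pending.poll()) != null) {
                    socket.configureBlocking(false);
                    socket.register(selector, SelectionKey.OP_READ);
                }
                // Roughly the equivalent of pollTime (100000 us = 100 ms in the JMX view further down).
                selector.select(100);
                for (Iterator<SelectionKey> it = selector.selectedKeys().iterator(); it.hasNext();) {
                    SelectionKey key = it.next();
                    it.remove();
                    key.cancel(); // the socket leaves the poller...
                    final SocketChannel ready = (SocketChannel) key.channel();
                    workers.execute(new Runnable() {
                        public void run() {
                            // ...and a worker thread processes the next request on 'ready'.
                        }
                    });
                }
            } catch (IOException e) {
                break; // simplification: a real poller would log and keep going
            }
        }
    }
}

The point of the pattern shows up in add(): once a request has been processed, the worker thread is released and only the socket stays around, parked in the poller until the next request arrives.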
On the other end of the scalability scale, JK has issues if there are too many frontend servers, and actually any time the number of processors in Tomcat is not equal to the number of processors on the front servers. The only workaround for this is to have processors time out, but the current implementation will be inefficient with that kind of setup.
One other thing: use some Unix for Tomcat, or you will need to patch APR for Windows. The reason is that APR uses the standard Windows FD_SETSIZE, which is 64. I recompiled APR with FD_SETSIZE set to 16384 before including winsock2.h, so we don't have that limit.
I did that because I thought Unixes had an unlimited FD_SETSIZE, but it seems the common value is 1024, so that is probably our limit for now. I think we'll need multiple Poller threads if a higher number is required. Anyhow, don't test with more than 1024 concurrent users at the moment, or 64 if using vanilla APR on Windows.
That's the only problem I can see. Could this default value be changed, unless there's a really good reason for it? (From what I can see, higher values work well on my Windows box.)
More about testing: right now the code waits for 50 ms (configurable, or it will be) after processing a request before the socket goes to the poller.
All the configuration should be ok now. This is the JMX view of the APR HTTP connector.
Name: Catalina:type=ThreadPool,name=http-8080
modelerType: org.apache.tomcat.util.net.AprEndpoint
serverSocketPool: 53067816
running: true
firstReadTimeout: 100
soTimeout: 20000
threadPriority: 5
port: 8080
currentThreadsBusy: 2
soLinger: -1
maxSpareThreads: 0
maxThreads: 150
pollerSize: 512
pollTime: 100000
keepAliveCount: 0
tcpNoDelay: true
minSpareThreads: 0
daemon: true
paused: false
backlog: 100
currentThreadCount: 2
name: http-8080
- firstReadTimeout (in ms): timeout before a socket goes to the poller
- currentThreadsBusy: Number of workers doing some processing; it's always +1 compared to what is expected, as the acceptor grabs a new worker before calling accept on the server socket (so there's no chance that an accepted socket has to wait for a worker to become available)
- pollerSize: Maximum number of sockets that can be placed in the poller, which means the number of connections that can be kept alive (previously, each of these would have used a thread)
- keepAliveCount: Number of sockets currently in the poller
- pollTime (in microseconds): timeout for the poll call
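To watch these numbers during a load test, a small remote JMX client is enough. A minimal sketch, assuming the Tomcat JVM is started with the standard com.sun.management.jmxremote options (the port number below is made up for the example); it only uses the standard javax.management API and the attribute names from the view above:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class AprEndpointStats {
    public static void main(String[] args) throws Exception {
        // Assumed setup: the Tomcat JVM was started with
        //   -Dcom.sun.management.jmxremote.port=9004
        //   -Dcom.sun.management.jmxremote.authenticate=false
        //   -Dcom.sun.management.jmxremote.ssl=false
        // The port is an arbitrary choice for this example.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9004/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName endpoint = new ObjectName("Catalina:type=ThreadPool,name=http-8080");
            // Attribute names as they appear in the JMX view above.
            String[] attrs = { "currentThreadsBusy", "currentThreadCount", "keepAliveCount",
                               "pollerSize", "pollTime", "firstReadTimeout", "maxThreads" };
            for (String attr : attrs) {
                System.out.println(attr + " = " + mbsc.getAttribute(endpoint, attr));
            }
        } finally {
            connector.close();
        }
    }
}

Polling keepAliveCount against currentThreadsBusy is the easiest way to see the effect described above: the kept-alive connections sit in the poller instead of each occupying a worker thread.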
Rémy