On 12/09/17 10:00, Yasser Zamani wrote:
> 
> 
> On 9/12/2017 1:17 AM, Mark Thomas wrote:
>> On 07/09/17 23:30, Yasser Zamani wrote:
>>> Thanks for your attention.
>>>
>>> Now I downloaded a fresh apache-tomcat-7.0.81-windows-x64 and changed
>>> its connector in the same way (BIO,20,20,10). I get the same result, fortunately :)
>>>
>>> OUTPUT:
>>>
>>> Using CATALINA_BASE:
>>> "C:\Users\user\.IntelliJIdea2016.3\system\tomcat\Unnamed_Async-Servlet-Example_2"
>>> Using CATALINA_HOME:
>>> "C:\Users\user\Downloads\apache-tomcat-7.0.81-windows-x64-IJ\apache-tomcat-7.0.81"
>>> Using CATALINA_TMPDIR:
>>> "C:\Users\user\Downloads\apache-tomcat-7.0.81-windows-x64-IJ\apache-tomcat-7.0.81\temp"
>>> Using JRE_HOME:        "E:\jdk1.7.0_79"
>>> INFO: Server version:        Apache Tomcat/7.0.81
>>> INFO: Server built:          Aug 11 2017 10:21:27 UTC
>>> INFO: Server number:         7.0.81.0
>>> INFO: OS Name:               Windows 8.1
>>> INFO: OS Version:            6.3
>>> INFO: Architecture:          amd64
>>> INFO: Java Home:             E:\jdk1.7.0_79\jre
>>> INFO: JVM Version:           1.7.0_79-b15
>>> INFO: JVM Vendor:            Oracle Corporation
>>> INFO: CATALINA_BASE:
>>> C:\Users\user\.IntelliJIdea2016.3\system\tomcat\Unnamed_Async-Servlet-Example_2
>>>
>>> Container MAX used threads: 10
>>
>> I see similar results.
>>
>> There seem to be things going on, either in JMeter or at the network
>> level, that I don't understand. I had to resort to drawing it out to get my
>> head around what is happening.
>>
> 
> Sorry for bothering you,
> 
> To examine whether something is going on either in JMeter or at the
> network level, I tested the same config (BIO,20,20,10) on Jetty. All 70 requests
> returned successfully and the response time was ~20 seconds for all, as I expected.

I'm fairly sure Jetty uses NIO, not BIO, which would explain the
differences you are observing.

> Then, to make sure, I tested the same servlet but a sync version (removed
> my own thread pool and the startAsync call etc.) on Jetty. The average response
> time increased to 95s, as I expected.
> 
>> The first 20 requests (10 seconds) are accepted and processed by Tomcat.
>> The 21st request is accepted but then the acceptor blocks waiting for
>> the connection count to reduce below 20 before proceeding.
>>
> 
> You have forgotten.

No, I haven't.

> My configuration is maxThreads=maxConnections=20 (not
> 10) and acceptCount=10. Since it prints "Container MAX used threads: 10",
> it never reaches maxThreads. So why does the acceptor block?!

After the first 20 requests you have 20 connections, so you hit the
maxConnections limit. That is why the acceptor blocks and subsequent
requests go into the accept queue.
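
For reference, I am assuming a BIO connector configured roughly along
these lines in server.xml (the port and timeout are placeholders; the
three limits are the ones you quoted):

<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11Protocol"
           connectionTimeout="20000"
           maxThreads="20"
           maxConnections="20"
           acceptCount="10" />

maxConnections caps how many connections the connector will keep open at
once, acceptCount is the backlog queue the acceptor falls back on once
that cap is hit, and neither limit cares whether a request is currently
on a container thread or parked in async mode.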

> I think that although
> I have an async servlet and my own thread pool, and although the time-consuming
> work (Thread.sleep) runs inside my own thread pool, Tomcat's
> container thread wrongly does not return to the thread pool

Incorrect. The container thread does return to the container thread
pool. It does so almost immediately. Given that there are 0.5 seconds
between requests, and that the time taken to process an incoming request,
dispatch it to your thread pool and return the container thread
to the container thread pool is almost certainly less than 0.5 seconds, it is
very likely that there is never more than one container thread active at
any one point.
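
To make that concrete, here is a minimal sketch of the kind of async
servlet I assume you are running (class name, URL pattern and pool size
are guesses, not your actual code):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/async", asyncSupported = true)
public class SleepingAsyncServlet extends HttpServlet {

    // The application's own pool; the size here is a guess.
    private final ExecutorService ownPool = Executors.newFixedThreadPool(70);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        final AsyncContext ctx = req.startAsync();
        ownPool.submit(new Runnable() {
            @Override
            public void run() {
                try {
                    Thread.sleep(20000);                    // the 20 second "work"
                    ctx.getResponse().getWriter().println("done");
                } catch (Exception ignored) {
                    // sketch only: ignore errors
                } finally {
                    ctx.complete();                         // releases the connection
                }
            }
        });
        // doGet() returns here; the container thread is back in the pool
        // well within the 0.5 seconds between your requests.
    }
}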

> and fails to satisfy Servlet 3's async API!

Also incorrect.

>> Requests 22 to 31 are placed in the accept queue. We are now 15.5s into
>> the test and the first request accepted won't finish processing for
>> another 4.5 seconds.
>>
>> Requests 32 to 40 are dropped since the accept queue is full. We are
>> now 20s into the test and the first request is about to complete
>> processing. Oddly, JMeter doesn't report these as failed until some 35
>> seconds later.
>>
>> Request 1 completes. This allows request 21 to proceed. The acceptor
>> takes a connection from the accept queue (this appears to be FIFO).
>> Request 41 enters the accept queue.
>>
>> This continues until request 10 completes, 30 starts processing and 50
>> enters the accept queue.
>>
>> Next 11 completes, 41 starts processing and 51 enters the accept queue.
>> This continues until 20 completes, 50 starts processing and 60 enters
>> the accept queue.
>>
>> At this point there are 20 requests being processed, 10 in the accept queue
>> and none due to complete for another 10s.
>>
>> I'd expected requests 61 to 70 to be rejected. However, 65 to 70 are
>> processed. It looks like there is some sort of timeout for acceptance or
>> rejection in the accept queue.
>>
>> That explains the rejected requests.
>>
> 
> I'm not smart enough to understand your analysis :) I just understand that when
> my servlet is completely async, I should not have any rejected
> requests (especially under such a low load), as in the Jetty result.

You are failing to take account of the maxConnections configuration.
With a new request being made every 0.5 seconds and a wait time of 20
seconds, Tomcat needs to be able to handle at least 40 concurrent
connections to prevent dropped connections. You have configured Tomcat
to allow 30 (20 from maxConnections and 10 in the accept queue). Hence
you are going to see dropped connections.
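
If you want to see that arithmetic outside of JMeter, a trivial load
generator along these lines reproduces the pattern you described (the
URL, counts and timings are assumptions based on your description of the
test):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SimpleLoadTest {

    // 70 GET requests, one every 500 ms, each held open ~20 s by the server.
    // Steady state needs roughly 20 / 0.5 = 40 concurrent connections, while
    // maxConnections=20 plus acceptCount=10 only allows ~30, so some requests
    // must be refused.
    public static void main(String[] args) throws Exception {
        final URL url = new URL("http://localhost:8080/async"); // placeholder
        ExecutorService pool = Executors.newCachedThreadPool();
        for (int i = 1; i <= 70; i++) {
            final int id = i;
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        HttpURLConnection con = (HttpURLConnection) url.openConnection();
                        con.setConnectTimeout(5000);
                        con.setReadTimeout(120000);
                        InputStream in = con.getInputStream(); // blocks ~20 s
                        while (in.read() != -1) { /* drain the response */ }
                        in.close();
                        System.out.println("request " + id + " OK");
                    } catch (Exception e) {
                        System.out.println("request " + id + " FAILED: " + e);
                    }
                }
            });
            Thread.sleep(500); // one new request every 0.5 seconds
        }
        pool.shutdown();
    }
}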


> As I said, the maximum number of
> Tomcat container threads used by my app is 10 (half of the initial size of
> 20), so why do I see rejected requests while the thread pool has 10 free threads
> to accept new requests?!
> 
>> The other question is why the maximum used threads value is reported as it is.
>>
>> The answer is that the thread pool never grows beyond its initial size
>> of 10. A request comes in, it is processed by a container thread,
>> dispatched to an async thread and then the container thread is returned
>> to the pool to await the next request. Tomcat is able to do this because
>> the container doesn't perform any I/O on the connection once it enters
>> async mode until it is dispatched back to the container. The default
>> thread pool implementation cycles through the threads so the max value
>> you see is 10 which is the initial size.
>>
> 
> My config was maxThreads=20 (not 10).

Irrelevant. The container threads aren't doing very much. As I explained
above, it is unlikely that there is ever more than one container thread
doing work at any one time. Tomcat's thread pool has an INITIAL size of
10 and only grows if more than 10 concurrent threads are required.
Given that your test case only requires one concurrent container thread
(NOT 20, because you are using async, so the container thread is freed as
soon as the async dispatch completes), the thread pool never grows. What
happens is:

The thread pool is a FIFO pool.
At the start:
Pool: 1 2 3 4 5 6 7 8 9 10
Thread 1 processes request 1 and returns to the pool.
Pool: 2 3 4 5 6 7 8 9 10 1
Thread 2 processes request 2 and returns to the pool.
Pool: 3 4 5 6 7 8 9 10 1 2
...
Pool: 10 1 2 3 4 5 6 7 8 9
Thread 10 processes request 10 and returns to the pool.
Pool: 1 2 3 4 5 6 7 8 9 10
Thread 1 processes request 11 and returns to the pool.
...

Hence the highest container thread number you see in your test is 10.

You are making the incorrect assumption that the highest thread number
you see always represents the highest concurrency. That assumption is
not true when maximum concurrency is less than the initial size of the
thread pool.
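
If your "Container MAX used threads" figure is derived from the container
thread's name (something like http-bio-8080-exec-7), and I assume it is,
then it records the highest exec number that ever handled a request, not
how many were busy at the same time. Measuring real concurrency needs
something like this inside the servlet (a sketch, added to the servlet
outline earlier, not your code):

import java.util.concurrent.atomic.AtomicInteger;

// Count threads that are *simultaneously* inside doGet() and remember the
// high-water mark. With async dispatch this should stay at 1 or 2 even
// though the thread names climb to ...exec-10.
private static final AtomicInteger active = new AtomicInteger();
private static final AtomicInteger highWater = new AtomicInteger();

@Override
protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
    int now = active.incrementAndGet();
    int seen = highWater.get();
    while (now > seen && !highWater.compareAndSet(seen, now)) {
        seen = highWater.get();
    }
    try {
        // startAsync() and hand off to the private pool, as before
    } finally {
        active.decrementAndGet();
        System.out.println("max concurrent container threads so far: " + highWater.get());
    }
}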

Mark

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
