On 03/09/17 09:01, Yasser Zamani wrote:
> Hi there,
> 
> At [1] we read:
> 
>>     Web containers in application servers normally use a server thread
>>     per client request. Under heavy load conditions, containers need a
>>     large amount of threads to serve all the client requests.
>>     Scalability limitations include running out of memory or
>>     *exhausting the pool of container threads*. To create scalable web
>>     applications, you must ensure that no threads associated with a
>>     request are sitting idle, so *the container can use them to
>>     process new requests*. Asynchronous processing refers to
>>     *assigning these blocking operations to a new thread and returning
>>     the thread associated with the request immediately to the container*.
>>
> I could not achieve this scalability in Tomcat by calling 
> `javax.servlet.AsyncContext.start(Runnable)`! I investigated the cause 
> and found it at [2]:
> 
>     public synchronized void asyncRun(Runnable runnable) {
>         ...
>         processor.execute(runnable);
> 
> I mean `processor.execute(runnable)` uses the same thread pool whose 
> duty is also to process new requests! Such usage makes things worse: 
> not only does it not free the thread pool to process new requests, it 
> also adds overhead via thread switching!
> 
> I think Tomcat must use another thread pool for such blocking operations 
> and keep the current thread pool free for new requests; that is the 
> philosophy of Servlet 3.0's asynchronous support according to Oracle's 
> documentation. wdyt?

I think this is a good question that highlights a lot of
misunderstanding in this area. The quote above is misleading at best.

There is no way that moving a blocking operation from the container
thread pool to some other thread will increase scalability any more than
simply increasing the size of the container thread pool (see the sketch
after the list below).

Consider the following:

- If the system is not at capacity then scalability can be increased by
  increasing the size of the container thread pool

- If the system is at capacity, the container thread pool will need to
  be reduced to create capacity for these 'other' blocking threads.

- If too many resources are allocated to these 'other' blocking threads
  then scalability will be reduced because there will be idle 'other'
  blocking threads that could be doing useful work elsewhere such as
  processing incoming requests.

- If too few resources are allocated to these 'other' blocking threads
  then scalability will be reduced because a bottleneck will have been
  introduced.

- The 'right' level of resources to allocate to these 'other' blocking
  threads will vary over time.

- Rather than try and solve the complex problem of balancing resources
  across multiple thread pools, it is far simpler to use a single thread
  pool, as Tomcat does.
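
To make the pattern under discussion concrete, here is a rough sketch of
what dispatching blocking work via AsyncContext.start(Runnable) looks
like. The servlet, the URL pattern and someBlockingCall() are made-up
placeholders; the point is that Tomcat runs the Runnable on the same
connector pool that serves new requests, so the blocking call still
occupies a container thread for its full duration.

import java.io.IOException;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/report", asyncSupported = true)
public class BlockingAsyncServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        // Put the request into asynchronous mode so the container thread
        // can return as soon as doGet() exits.
        final AsyncContext ctx = req.startAsync();

        // AsyncContext.start(Runnable) asks the container to run the task.
        // Tomcat executes it on the same pool that serves new requests, so
        // the blocking call below still ties up a container thread.
        ctx.start(() -> {
            try {
                String result = someBlockingCall();
                ctx.getResponse().getWriter().write(result);
            } catch (IOException e) {
                // real code would log and handle this
            } finally {
                ctx.complete();
            }
        });
    }

    // Stand-in for a blocking operation such as a slow JDBC query.
    private String someBlockingCall() {
        return "done";
    }
}

Net result: one container thread is swapped for another, plus the cost of
the thread switch.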


Servlet 3 async can only increase scalability where the Servlet needs to
perform a genuinely non-blocking operation. Prior to the availability of
the async API, the Servlet thread had to block until that operation
completed. That is inefficient. That does limit scalability. The async
API allows the thread to be released while the non-blocking operation
completes. That does increase scalability because, rather than having a
bunch of threads waiting for these non-blocking operations to complete,
those threads can do useful work.
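
As a rough sketch of the sort of case where async does help: the servlet
below starts async processing, returns the container thread immediately,
and completes the response from the callback of a genuinely non-blocking
operation. fetchPriceAsync() is a hypothetical stand-in for a client that
performs real non-blocking I/O and completes a CompletableFuture from its
own I/O machinery.

import java.io.IOException;
import java.util.concurrent.CompletableFuture;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/price", asyncSupported = true)
public class NonBlockingAsyncServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        final AsyncContext ctx = req.startAsync();

        // No thread, container or otherwise, waits while the operation is
        // in flight; the callback finishes the response when it completes.
        fetchPriceAsync(req.getParameter("symbol")).whenComplete((price, error) -> {
            try {
                if (error != null) {
                    ((HttpServletResponse) ctx.getResponse())
                            .sendError(HttpServletResponse.SC_BAD_GATEWAY);
                } else {
                    ctx.getResponse().getWriter().write(price);
                }
            } catch (IOException e) {
                // real code would log this
            } finally {
                ctx.complete();
            }
        });
        // doGet() returns here and the container thread goes straight back
        // to serving new requests.
    }

    // Hypothetical non-blocking call; a real implementation would use an
    // NIO-based client or async driver rather than a completed future.
    private CompletableFuture<String> fetchPriceAsync(String symbol) {
        return CompletableFuture.completedFuture("42.00");
    }
}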

Mark


> 
> [1] https://docs.oracle.com/javaee/7/tutorial/servlets012.htm
> [2] https://github.com/apache/tomcat/blob/trunk/java/org/apache/coyote/AsyncStateMachine.java#L451
> 

