Hi there,

At [1] we read:

>     Web containers in application servers normally use a server thread
>     per client request. Under heavy load conditions, containers need a
>     large amount of threads to serve all the client requests.
>     Scalability limitations include running out of memory or
>     *exhausting the pool of container threads*. To create scalable web
>     applications, you must ensure that no threads associated with a
>     request are sitting idle, so *the container can use them to
>     process new requests*. Asynchronous processing refers to
>     *assigning these blocking operations to a new thread and returning
>     the thread associated with the request immediately to the container*.
>
I could not achieve this scalability in Tomcat by calling 
`javax.servlet.AsyncContext.start(Runnable)`. I investigated the cause 
and found it at [2]:

    public synchronized void asyncRun(Runnable runnable) {
        ...
        processor.execute(runnable);

That is, `processor.execute(runnable)` hands the runnable back to the 
same thread pool whose duty is also to process new requests! Such usage 
makes things worse: not only does it fail to free the pool for new 
requests, it also adds context-switching overhead.

I think Tomcat should use a separate thread pool for such blocking 
operations and keep the current pool free for new requests; that is the 
philosophy of Servlet 3.0's asynchronous support according to Oracle's 
documentation. wdyt?
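In the meantime, the hand-off can be done from application code by 
supplying one's own executor instead of calling `AsyncContext.start`. 
Below is a minimal, self-contained sketch of the idea (the class and 
pool names are illustrative, not Tomcat API); the servlet-specific part 
is shown only in comments since it needs a container to run:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class AsyncHandoffSketch {

        // Hypothetical dedicated pool for blocking work, deliberately
        // separate from the container's request-processing pool.
        static final ExecutorService blockingPool =
                Executors.newFixedThreadPool(4, r -> {
                    Thread t = new Thread(r, "blocking-worker");
                    t.setDaemon(true);
                    return t;
                });

        // Inside a servlet this would look like:
        //   AsyncContext ctx = request.startAsync();
        //   blockingPool.execute(() -> {
        //       doBlockingWork();   // long-running / blocking operation
        //       ctx.complete();
        //   });
        // so the container thread returns immediately.

        // Stand-in for the hand-off: runs a task on the dedicated pool
        // and reports which thread executed it.
        static String runBlocking() throws Exception {
            return blockingPool
                    .submit(() -> Thread.currentThread().getName())
                    .get();
        }

        public static void main(String[] args) throws Exception {
            System.out.println("blocking work ran on: " + runBlocking());
        }
    }

The point is that the request-processing pool never executes the 
blocking task, so it stays free to accept new connections.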

[1] https://docs.oracle.com/javaee/7/tutorial/servlets012.htm
[2] 
https://github.com/apache/tomcat/blob/trunk/java/org/apache/coyote/AsyncStateMachine.java#L451
