From some of the test cases I can safely say that Tomcat is hitting some
limit. I ran the same test with different payload sizes and without any
query params. The servlet is an empty servlet that just returns after
receiving the request, without doing any business-side logic.
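
The servlet is roughly along these lines (a simplified sketch, not the exact
code; the class name just mirrors the URL below and the rest is illustrative):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/Http2Servlet")
public class Http2Servlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // No business logic at all: just acknowledge the POST.
        // (In this sketch the request body is not explicitly read.)
        resp.setStatus(HttpServletResponse.SC_OK);
    }
}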

h2load -n100 -c1 -m1 --header="Content-Type:application/json" \
  -d /home/local/santhosh/A-Test/nghttp2/agentRequest.txt \
  https://localhost:9191/HTTP_2_TEST_APP/Http2Servlet
starting benchmark...
spawning thread #0: 1 total client(s). 100 total requests
TLS Protocol: TLSv1.3
Cipher: TLS_AES_256_GCM_SHA384
Server Temp Key: X25519 253 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done

finished in 5.16s, 10.48 req/s, 552B/s
requests: 100 total, 55 started, 54 done, 54 succeeded, 46 failed, 46 errored, 0 timeout
status codes: 55 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 2.78KB (2846) total, 1.77KB (1815) headers (space savings 43.10%), 0B (0) data
                     min         max         mean         sd        +/- sd
time for request:     1.57ms      9.43ms      2.24ms      1.17ms    94.44%
time for connect:     4.69ms      4.69ms      4.69ms         0us   100.00%
time to 1st byte:        0us         0us         0us         0us     0.00%
req/s           :      10.48       10.48       10.48        0.00   100.00%

This configuration always ends with 54 succeeded; the payload size is 1200 B
(1200 x 54 = 64,800 bytes).
------------------------------------------------------------------------------------------------------------------------------
Now reducing the payload and trying the same test:

h2load -n100 -c1 -m1 --header="Content-Type:application/json" \
  -d /home/local/santhosh/A-Test/nghttp2/agentRequest2.txt \
  https://localhost:9191/HTTP_2_TEST_APP/Http2Servlet
starting benchmark...
spawning thread #0: 1 total client(s). 100 total requests
TLS Protocol: TLSv1.3
Cipher: TLS_AES_256_GCM_SHA384
Server Temp Key: X25519 253 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done

finished in 5.21s, 16.11 req/s, 839B/s
requests: 100 total, 85 started, 84 done, 84 succeeded, 16 failed, 16 errored, 0 timeout
status codes: 85 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 4.27KB (4376) total, 2.74KB (2805) headers (space savings 43.10%), 0B (0) data
                     min         max         mean         sd        +/- sd
time for request:     1.43ms      5.80ms      2.04ms       760us    89.29%
time for connect:     5.02ms      5.02ms      5.02ms         0us   100.00%
time to 1st byte:        0us         0us         0us         0us     0.00%
req/s           :      16.11       16.11       16.11        0.00   100.00%

This configuration always ends with 84 succeeded; the payload size is 775 B
(775 x 84 = 65,100 bytes).
------------------------------------------------------------------------------------------------------------------------------
Reducing the payload even further:

h2load -n200 -c1 -m1 --header="Content-Type:application/json" \
  -d /home/local/santhosh/A-Test/nghttp2/agentRequest3.txt \
  https://localhost:9191/HTTP_2_TEST_APP/Http2Servlet
starting benchmark...
spawning thread #0: 1 total client(s). 200 total requests
TLS Protocol: TLSv1.3
Cipher: TLS_AES_256_GCM_SHA384
Server Temp Key: X25519 253 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done

finished in 5.41s, 34.40 req/s, 1.73KB/s
requests: 200 total, 187 started, 186 done, 186 succeeded, 14 failed, 14 errored, 0 timeout
status codes: 187 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 9.35KB (9578) total, 6.03KB (6171) headers (space savings 43.10%), 0B (0) data
                     min         max         mean         sd        +/- sd
time for request:     1.18ms     13.49ms      1.91ms      1.13ms    95.16%
time for connect:     5.93ms      5.93ms      5.93ms         0us   100.00%
time to 1st byte:        0us         0us         0us         0us     0.00%
req/s           :      34.41       34.41       34.41        0.00   100.00%

This configuration always ends with 186 succeeded; the payload size is 356 B
(356 x 186 = 66,216 bytes). In all three runs the total bytes carried by the
succeeded requests land right around 64 KB (64,800 / 65,100 / 66,216), i.e.
close to 65,535 bytes, the HTTP/2 default initial flow-control window, so
that looks like the limit being hit.
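
One more thing I plan to check (just a guess on my part): since HTTP/2
flow-control window credit is normally only handed back to the client as the
request body is actually consumed, I will re-run the same tests with the
servlet explicitly draining the body (same servlet class as above, method
only, rough sketch below) to see whether the ~64 KB ceiling moves:

// Variant for the next run: drain the POST body before returning.
// If the ceiling moves, that points at flow control as the limit being hit.
@Override
protected void doPost(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException {
    byte[] buf = new byte[8192];
    try (java.io.InputStream in = req.getInputStream()) {
        while (in.read(buf) != -1) {
            // discard the payload, no business logic
        }
    }
    resp.setStatus(HttpServletResponse.SC_OK);
}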

On Wed, Mar 6, 2019 at 9:15 PM John Dale <jcdw...@gmail.com> wrote:

> When you run your test(s), does it fail after a certain period of
> time, or just keep on going under a certain number of requests?
>
> Also, to confirm: you're sending 1000 Byte + query strings?
>
> Are you doing anything in the server side component to verify that
> your parameters have been received successfully?
>
> It seems very possible that there is increased overhead parsing the
> request (POST) body.  That's why I was wondering about the dynamics of
> your test case.  If you can achieve a steady load state, either some
> optimization of the POST request parser could be done, or you could
> accept that overhead if it is comparable to other solutions.
>
> On 3/6/19, Santhosh Kumar <santhosh...@gmail.com> wrote:
> > I hope so; I used up-to-date packages/components at the time of development.
> > A few may be outdated, like Tomcat Native: I was using 1.2.18 while
> > developing, but 1.2.21 was released recently.
> >
> > On Wed, Mar 6, 2019 at 6:18 PM John Dale <jcdw...@gmail.com> wrote:
> >
> >> Have you upgraded to the most recent release of your major version?
> >>
> >> If so, and if this issue still persists, it is something that the core
> >> development team might want to look at assuming they can replicate the
> >> issue.
> >>
> >> On 3/5/19, Santhosh Kumar <santhosh...@gmail.com> wrote:
> >> > Sometimes more than 10x
> >> >
> >> > On Tue, Mar 5, 2019 at 10:00 PM John Dale <jcdw...@gmail.com> wrote:
> >> >
> >> >> How many orders of magnitude slower are the post requests?
> >> >>
> >> >> On 3/5/19, Santhosh Kumar <santhosh...@gmail.com> wrote:
> >> >> > I was testing on localhost
> >> >> >
> >> >> > On Tue, Mar 5, 2019 at 9:32 PM John Dale <jcdw...@gmail.com>
> wrote:
> >> >> >
> >> >> >> Are you running your test client (h2load) on the same machine, same
> >> >> >> network, or is it over the net (so, like 20ms latency on each
> >> >> >> request)?  The reason I ask is that if you are local (especially), it
> >> >> >> may queue up too many requests for tomcat to handle in the testing
> >> >> >> period with its thread pool.  Will let you know if I have any other
> >> >> >> ideas.
> >> >> >>
> >> >> >> On 3/5/19, Santhosh Kumar <santhosh...@gmail.com> wrote:
> >> >> >> > Bytes
> >> >> >> >
> >> >> >> > On Tue, Mar 5, 2019 at 9:28 PM John Dale <jcdw...@gmail.com>
> >> wrote:
> >> >> >> >
> >> >> >> >> 1000-1500 MB or KB?
> >> >> >> >>
> >> >> >> >> On 3/4/19, Santhosh Kumar <santhosh...@gmail.com> wrote:
> >> >> >> >> > As per the documentation,
> >> >> >> >> > https://tomcat.apache.org/tomcat-9.0-doc/config/http.html#SSL_Support_-_SSLHostConfig
> >> >> >> >> > this connector supports maxPostSize, by default the limit is set to 2MB
> >> >> >> >> >
> >> >> >> >> >> On Tue, Mar 5, 2019 at 5:09 AM John Dale <jcdw...@gmail.com> wrote:
> >> >> >> >> >>
> >> >> >> >> >> Does anyone know if this connector supports maxPostSize parameter?
> >> >> >> >> >>
> >> >> >> >> >> On 3/4/19, Santhosh Kumar <santhosh...@gmail.com> wrote:
> >> >> >> >> >> > Hi,
> >> >> >> >> >> >
> >> >> >> >> >> > We have a Tomcat instance which is HTTP/2 enabled and it needs to serve a
> >> >> >> >> >> > large number of requests using multiplexing, so we have configured our
> >> >> >> >> >> > instance as follows:
> >> >> >> >> >> >
> >> >> >> >> >> > <Connector port="9191" URIEncoding="UTF-8"
> >> >> >> >> >> >            sslImplementationName="org.apache.tomcat.util.net.openssl.OpenSSLImplementation"
> >> >> >> >> >> >            protocol="org.apache.coyote.http11.Http11Nio2Protocol"
> >> >> >> >> >> >            maxThreads="50000" SSLEnabled="true"
> >> >> >> >> >> >            compressibleMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json,application/xml"
> >> >> >> >> >> >            compression="on" minSpareThreads="25"
> >> >> >> >> >> >            noCompressionUserAgents="gozilla, traviata" scheme="https" secure="true"
> >> >> >> >> >> >            keystoreFile="conf/myfile.keystore" keystorePass="password"
> >> >> >> >> >> >            socket.appReadBufSize="81920" socket.appWriteBufSize="81920"
> >> >> >> >> >> >            socket.rxBufSize="251880" socket.txBufSize="438000">
> >> >> >> >> >> >     <UpgradeProtocol compression="on"
> >> >> >> >> >> >                      maxConcurrentStreamExecution="200" maxConcurrentStreams="200"
> >> >> >> >> >> >                      className="org.apache.coyote.http2.Http2Protocol"/>
> >> >> >> >> >> > </Connector>
> >> >> >> >> >> >
> >> >> >> >> >> > This instance mainly serves concurrent POST requests which will have a
> >> >> >> >> >> > payload of approx. 1000-1500 bytes, which can be verified in the Tomcat
> >> >> >> >> >> > logs:
> >> >> >> >> >> >
> >> >> >> >> >> > org.apache.coyote.http2.Http2Parser.validateFrame Connection [0], Stream
> >> >> >> >> >> > [19], Frame type [DATA], Flags [1], Payload size [*1195*]
> >> >> >> >> >> >
> >> >> >> >> >> > We tested our server with the help of h2load as follows:
> >> >> >> >> >> >
> >> >> >> >> >> > h2load -n100 -c1 -m100 https://localhost:9191/ -d '/agentRequest.txt'
> >> >> >> >> >> >
> >> >> >> >> >> > We are getting this error as follows:
> >> >> >> >> >> >
> >> >> >> >> >> > org.apache.coyote.http2.Http2UpgradeHandler.upgradeDispatch Connection [0]
> >> >> >> >> >> >  java.io.IOException: Unable to unwrap data, invalid status [BUFFER_OVERFLOW]
> >> >> >> >> >> >         at org.apache.tomcat.util.net.SecureNio2Channel$2.completed(SecureNio2Channel.java:1041)
> >> >> >> >> >> >         at org.apache.tomcat.util.net.SecureNio2Channel$2.completed(SecureNio2Channel.java:1000)
> >> >> >> >> >> >         at java.base/sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:127)
> >> >> >> >> >> >         at java.base/sun.nio.ch.Invoker.invokeDirect(Invoker.java:158)
> >> >> >> >> >> >         at java.base/sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:552)
> >> >> >> >> >> >         at java.base/sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:276)
> >> >> >> >> >> >         at java.base/sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:297)
> >> >> >> >> >> >         at org.apache.tomcat.util.net.SecureNio2Channel$2.completed(SecureNio2Channel.java:1027)
> >> >> >> >> >> >         at org.apache.tomcat.util.net.SecureNio2Channel$2.completed(SecureNio2Channel.java:1000)
> >> >> >> >> >> >         at org.apache.tomcat.util.net.SecureNio2Channel.read(SecureNio2Channel.java:1067)
> >> >> >> >> >> >         at org.apache.tomcat.util.net.Nio2Endpoint$Nio2SocketWrapper$VectoredIOCompletionHandler.completed(Nio2Endpoint.java:1153)
> >> >> >> >> >> >         at org.apache.tomcat.util.net.Nio2Endpoint$Nio2SocketWrapper.read(Nio2Endpoint.java:1026)
> >> >> >> >> >> >         at org.apache.tomcat.util.net.SocketWrapperBase.read(SocketWrapperBase.java:1012)
> >> >> >> >> >> >         at org.apache.coyote.http2.Http2AsyncParser.readFrame(Http2AsyncParser.java:61)
> >> >> >> >> >> >         at org.apache.coyote.http2.Http2Parser.readFrame(Http2Parser.java:69)
> >> >> >> >> >> >         at org.apache.coyote.http2.Http2UpgradeHandler.upgradeDispatch(Http2UpgradeHandler.java:322)
> >> >> >> >> >> >         at org.apache.coyote.http2.Http2AsyncUpgradeHandler.upgradeDispatch(Http2AsyncUpgradeHandler.java:37)
> >> >> >> >> >> >         at org.apache.coyote.http11.upgrade.UpgradeProcessorInternal.dispatch(UpgradeProcessorInternal.java:54)
> >> >> >> >> >> >         at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:53)
> >> >> >> >> >> >         at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:834)
> >> >> >> >> >> >         at org.apache.tomcat.util.net.Nio2Endpoint$SocketProcessor.doRun(Nio2Endpoint.java:1769)
> >> >> >> >> >> >         at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
> >> >> >> >> >> >         at org.apache.tomcat.util.net.AbstractEndpoint.processSocket(AbstractEndpoint.java:1048)
> >> >> >> >> >> >         at org.apache.tomcat.util.net.SecureNio2Channel$HandshakeWriteCompletionHandler.completed(SecureNio2Channel.java:116)
> >> >> >> >> >> >         at org.apache.tomcat.util.net.SecureNio2Channel$HandshakeWriteCompletionHandler.completed(SecureNio2Channel.java:109)
> >> >> >> >> >> >         at java.base/sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:127)
> >> >> >> >> >> >         at java.base/sun.nio.ch.Invoker.invokeDirect(Invoker.java:158)
> >> >> >> >> >> >
> >> >> >> >> >> > Why is this error thrown? How can I configure Tomcat to handle
> >> >> >> >> >> > concurrent POST requests which have a decent payload?
> >> >> >> >> >> >
> >> >> >> >> >> >
> >> >> >> >> >> > We have tried various Java clients like http-client-5-beta, Jetty and
> >> >> >> >> >> > okhttp3 to spam requests to our Tomcat using HTTP/2 multiplexing, and we
> >> >> >> >> >> > found that the time taken to process a request increases (sometimes even
> >> >> >> >> >> > 10x) when the load is increased. We have tweaked all common configuration
> >> >> >> >> >> > related to HTTP/2 on both the client and server side with no luck.
> >> >> >> >> >> >
> >> >> >> >> >> > But the same Tomcat configuration can handle tens of thousands of GET
> >> >> >> >> >> > requests concurrently without a problem; it only creates problems with
> >> >> >> >> >> > POST requests.
> >> >> >> >> >> >
> >> >> >> >> >> > What is wrong with our configuration?
> >> >> >> >> >> >
> >> >> >> >> >> > Could someone kindly shed some light?
> >> >> >> >> >> >
> >> >> >> >> >> > Tomcat - 9.0.16
> >> >> >> >> >> > APR-1.2.18
> >> >> >> >> >> > OpenSSL-1.1.1a
> >> >> >> >> >> > JDK-10.0.2
> >> >> >> >> >> > OS - Ubuntu/Centos
> >> >> >> >> >> > HeapSize - 4GB
> >> >> >> >> >> > RAM -16GB
> >> >> >> >> >> >
> >> >> >> >> >> >
> >> >> >> >> >> > Kindly help
> >> >> >> >> >> >

-- 
*With Regards,*
*Santhosh Kumar J*
